column       type     min        max
patent_num   int64    3.93M      10.2M
claim_num1   int64    1          519
claim_num2   int64    2          520
sentence1    string   40 chars   15.9k chars
sentence2    string   88 chars   20k chars
label        float64  0.5        1
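A minimal sketch of loading a table with this schema and checking it against the column statistics above. The file name claim_pairs.csv, the CSV format, and the use of pandas are assumptions, since the section does not say how the data is stored.

```python
import pandas as pd

# Hypothetical file name and format; the section does not specify either.
df = pd.read_csv("claim_pairs.csv")

expected = ["patent_num", "claim_num1", "claim_num2", "sentence1", "sentence2", "label"]
assert list(df.columns) == expected

# Sanity checks against the reported column statistics.
print(df.dtypes)
print(df["label"].min(), df["label"].max())           # expected within [0.5, 1]
print(df["sentence1"].str.len().agg(["min", "max"]))  # expected roughly 40 to 15.9k
print(df["sentence2"].str.len().agg(["min", "max"]))  # expected roughly 88 to 20k

# The sample rows show patent numbers with thousands separators ("8,094,122"),
# so patent_num may load as a string and need cleaning before numeric use.
df["patent_num"] = df["patent_num"].astype(str).str.replace(",", "").astype("int64")
```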
patent_num: 8,094,122 | claim_num1: 1 | claim_num2: 6
1. A non-transitory computer readable medium having computer readable program code embodied therein for causing a computer to control the position of a visual pointer using an eye tracking apparatus by: receiving input from the eye tracking apparatus; moving a visual pointer from a first location to a second location that corresponds to a user's gaze position based on the input received from the eye tracking apparatus; providing a visual indicator between the first location and the second location; automatically changing the visual indicator to a reading guide in response to the eye tracking apparatus recognizing a user's gaze position pattern as a read mode, where the reading guide is located in a margin at the beginning of a line of text that is read; repositioning the reading guide in response to the eye tracking apparatus determining that the user approaches the end of a line of text; and in response to the eye tracking apparatus determining that the user's gaze positions are one of slowing down or stopping on a link in the text, exiting the read mode and changing the visual indicator to a pointer for a pointing device to enable the user to click on the link.
1. A non-transitory computer readable medium having computer readable program code embodied therein for causing a computer to control the position of a visual pointer using an eye tracking apparatus by: receiving input from the eye tracking apparatus; moving a visual pointer from a first location to a second location that corresponds to a user's gaze position based on the input received from the eye tracking apparatus; providing a visual indicator between the first location and the second location; automatically changing the visual indicator to a reading guide in response to the eye tracking apparatus recognizing a user's gaze position pattern as a read mode, where the reading guide is located in a margin at the beginning of a line of text that is read; repositioning the reading guide in response to the eye tracking apparatus determining that the user approaches the end of a line of text; and in response to the eye tracking apparatus determining that the user's gaze positions are one of slowing down or stopping on a link in the text, exiting the read mode and changing the visual indicator to a pointer for a pointing device to enable the user to click on the link. 6. A computer readable medium as in claim 1 , wherein the visual indicator provides a spatial relationship between the first location of the visual pointer and the second location of the visual pointer.
label: 0.688272
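In the sample rows, sentence2 appears to repeat the full text of sentence1 (the claim identified by claim_num1) and then append the text of the claim identified by claim_num2, so the second claim can be recovered by stripping the shared prefix. A minimal sketch under that assumption; the row dictionary is hypothetical and only mirrors the record above, with the claim texts truncated.

```python
def split_pair(row: dict) -> tuple[str, str]:
    """Return (claim1_text, claim2_text), assuming sentence2 = sentence1 + claim2."""
    s1, s2 = row["sentence1"], row["sentence2"]
    if s2.startswith(s1):
        return s1, s2[len(s1):].strip()
    # Fall back to returning sentence2 unchanged if the prefix assumption fails.
    return s1, s2

# Hypothetical row mirroring the record above (texts truncated for brevity).
row = {
    "patent_num": "8,094,122",
    "claim_num1": 1,
    "claim_num2": 6,
    "sentence1": "1. A non-transitory computer readable medium ...",
    "sentence2": "1. A non-transitory computer readable medium ... 6. A computer readable medium as in claim 1 , wherein ...",
    "label": 0.688272,
}
claim1, claim2 = split_pair(row)
print(claim2)  # "6. A computer readable medium as in claim 1 , wherein ..."
```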
patent_num: 4,433,601 | claim_num1: 37 | claim_num2: 38
37. A process, as claimed in claims 23, 24, 25 or 27 and further comprising the step of generating clock pulses defining a time duration of a musical bar in which the accompaniment notes occur and dividing the bar into a predetermined number of musical beats.
37. A process, as claimed in claims 23, 24, 25 or 27 and further comprising the step of generating clock pulses defining a time duration of a musical bar in which the accompaniment notes occur and dividing the bar into a predetermined number of musical beats. 38. A process, as claimed in claim 37, wherein the step of generating clock pulses comprises the steps of generating first beat tempo clock pulses during a first beat of the bar and second beat tempo clock pulses during a second beat of the bar, dividing the parameter signals into a first group corresponding to the first beat and a second group corresponding to the second beat, generating the first group during the first beat tempo clock pulses irrespective of the harmony selected, generating the second group during the second beat tempo clock pulses in the event there is no change in the selected harmony between the first and second beats, and for generating the first group during the second beat tempo clock pulses in the event there is a change in the selected harmony between the first and second beats.
label: 0.858921
patent_num: 9,110,706 | claim_num1: 1 | claim_num2: 3
1. A system, comprising: a processor readable storage hardware device; a user interface; and a processor coupled to the processor readable hardware storage device and to the user interface, wherein the processor readable hardware storage device includes instructions that cause the processor to: execute a sequential application program comprising a data parallel portion that includes an expression, wherein the application is written in a high-level language and comprises both imperative operations and declarative operations; access the expression from a portion of the sequential application program that comprises a declarative operation; based on the expression, automatically generate an execution plan graph, the execution plan graph including a directed graph having vertices that represent processes and edges between the vertices that represent data channels, the execution plan graph for executing the expression in parallel at nodes of a compute cluster, including causing the processor to break the expression into a plurality of sub-expressions, each of the sub-expressions is a vertex in the directed graph; automatically generate vertex code for the vertices of the execution plan graph; automatically generate serialization code that allows data to be passed in the data channels between the vertices; provide the execution plan graph, the serialization code, and the vertex code to an execution engine of the compute cluster that manages parallel execution of the expression in the compute cluster based on the execution plan graph, the serialization code, and the vertex code; receive results of executing the execution plan graph in the compute cluster; and execute a portion of the sequential application program that comprises an imperative operation to present the results in the user interface.
1. A system, comprising: a processor readable storage hardware device; a user interface; and a processor coupled to the processor readable hardware storage device and to the user interface, wherein the processor readable hardware storage device includes instructions that cause the processor to: execute a sequential application program comprising a data parallel portion that includes an expression, wherein the application is written in a high-level language and comprises both imperative operations and declarative operations; access the expression from a portion of the sequential application program that comprises a declarative operation; based on the expression, automatically generate an execution plan graph, the execution plan graph including a directed graph having vertices that represent processes and edges between the vertices that represent data channels, the execution plan graph for executing the expression in parallel at nodes of a compute cluster, including causing the processor to break the expression into a plurality of sub-expressions, each of the sub-expressions is a vertex in the directed graph; automatically generate vertex code for the vertices of the execution plan graph; automatically generate serialization code that allows data to be passed in the data channels between the vertices; provide the execution plan graph, the serialization code, and the vertex code to an execution engine of the compute cluster that manages parallel execution of the expression in the compute cluster based on the execution plan graph, the serialization code, and the vertex code; receive results of executing the execution plan graph in the compute cluster; and execute a portion of the sequential application program that comprises an imperative operation to present the results in the user interface. 3. The system of claim 1 , wherein: the processor further generates callback code for optimizing execution in the compute cluster and provides the callback code to the execution engine of the compute cluster.
label: 0.789474
patent_num: 9,116,978 | claim_num1: 1 | claim_num2: 3
1. A computer-implemented system for facilitating cross-subsystem queries of a plurality of building automation subsystems, comprising: an ontology database storing an ontological model for a building automation system (BAS), wherein the ontological model defines multiple different BAS object types, relationships between the BAS object types, and attributes of the BAS object types; a fact database storing instance values for the plurality of building automation subsystems and a logical type for each of the stored instance values, wherein the logical type identifies a particular attribute of the ontological model described by the stored instance value and represents, in a flat format, a portion of the ontological model that provides semantic type information for the stored instance value; and a query engine configured to decompose a cross-subsystem query received from an application into a plurality of subsystem queries using information of the fact database, wherein the query engine parses the logical types in the fact database to obtain the semantic type information for the stored instance values and uses the obtained semantic type information to identify one or more of the stored instance values relevant to the cross-subsystem query without requiring access to another database.
1. A computer-implemented system for facilitating cross-subsystem queries of a plurality of building automation subsystems, comprising: an ontology database storing an ontological model for a building automation system (BAS), wherein the ontological model defines multiple different BAS object types, relationships between the BAS object types, and attributes of the BAS object types; a fact database storing instance values for the plurality of building automation subsystems and a logical type for each of the stored instance values, wherein the logical type identifies a particular attribute of the ontological model described by the stored instance value and represents, in a flat format, a portion of the ontological model that provides semantic type information for the stored instance value; and a query engine configured to decompose a cross-subsystem query received from an application into a plurality of subsystem queries using information of the fact database, wherein the query engine parses the logical types in the fact database to obtain the semantic type information for the stored instance values and uses the obtained semantic type information to identify one or more of the stored instance values relevant to the cross-subsystem query without requiring access to another database. 3. The computer-implemented system of claim 1 , wherein the query engine utilizes the fact database to determine ordering for the subsystem queries.
label: 0.863469
patent_num: 9,607,047 | claim_num1: 3 | claim_num2: 4
3. The method of claim 1 , wherein generating a search result comprises: identifying topics that are commonly associated with the keyword; and including the identified topics in the search result.
3. The method of claim 1 , wherein generating a search result comprises: identifying topics that are commonly associated with the keyword; and including the identified topics in the search result. 4. The method of claim 3 , wherein the search result comprises a plurality of topic categories, each category associated with an identified topic that is commonly associated with the keyword.
label: 0.935298
patent_num: 7,966,321 | claim_num1: 1 | claim_num2: 8
1. A computer-implemented method, comprising: receiving, at a computer server system from a remote device, a search query; in response to receiving the search query, generating at least two sets of results that are responsive to the search query, including: generating a local result set that is responsive to the search query and that includes a plurality of search results that correspond to a geographic location to which the search query is determined to be directed, generating one or more non-local result sets that each include a plurality of non-local search results and are responsive to the search query, wherein the non-local search results do not correspond to a geographic location to which the search query is determined to be directed; determining for the particular received search query a local relevance indicium that indicates a likelihood that the search query is directed to location-specific search results for the geographic location to which the search query is determined to be directed, wherein the local relevance indicium is generated by a machine learning system that has been trained on prior search queries; determining a first display location for the local result set, the first display location being a location relative to the one or more non-local result sets and determined based on the local relevance indicium; and transmitting, from the computer server system to the remote device, code that when executed, generates a display on the remote device that shows the local result set displayed at a location relative to the one or more non-local result sets according to the first display location determined based on the local relevance indicium.
1. A computer-implemented method, comprising: receiving, at a computer server system from a remote device, a search query; in response to receiving the search query, generating at least two sets of results that are responsive to the search query, including: generating a local result set that is responsive to the search query and that includes a plurality of search results that correspond to a geographic location to which the search query is determined to be directed, generating one or more non-local result sets that each include a plurality of non-local search results and are responsive to the search query, wherein the non-local search results do not correspond to a geographic location to which the search query is determined to be directed; determining for the particular received search query a local relevance indicium that indicates a likelihood that the search query is directed to location-specific search results for the geographic location to which the search query is determined to be directed, wherein the local relevance indicium is generated by a machine learning system that has been trained on prior search queries; determining a first display location for the local result set, the first display location being a location relative to the one or more non-local result sets and determined based on the local relevance indicium; and transmitting, from the computer server system to the remote device, code that when executed, generates a display on the remote device that shows the local result set displayed at a location relative to the one or more non-local result sets according to the first display location determined based on the local relevance indicium. 8. The method of claim 1 , wherein the local relevance indicium is determined based on a set of rules that are applied to the search query.
label: 0.807479
patent_num: 10,074,097 | claim_num1: 1 | claim_num2: 5
1. A computer-implemented method comprising: receiving a plurality of business categories, wherein each of the business categories is associated with (i) at least one category profile and (ii) a set of electronic messages, wherein the set of electronic messages are maintained in a storage device; receiving business information for an unclassified business, wherein the business information comprises at least information describing power consumption of the unclassified business and a zoning restriction classification associated with a location at which the unclassified business operates; comparing the business information to one or more of the category profiles to determine if the unclassified business corresponds with at least one of the plurality of business categories based at least in part on the power consumption of the unclassified business, wherein for a first business category of the plurality of business categories, the comparing comprises: (i) determining a degree of similarity value describing a degree to which the business information matches information contained within the one or more of the category profiles associated with the first business category; and (ii) determining that the unclassified business corresponds with the first business category when the degree of similarity value exceeds a predetermined threshold; in response to determining that the unclassified business corresponds with the first business category, associating the unclassified business with the first business category, wherein a first subset of the set of electronic messages are maintained in the storage device in association with the first business category; and controlling transmission of the set of electronic messages based on associations between businesses and the business categories, comprising: (i) selecting the first subset of the set of electronic messages from the storage device for transmission to remote devices associated with the unclassified business based on the unclassified business being associated with the first business category; and (ii) sending the first subset of the set of electronic messages to the remote devices associated with the unclassified business.
1. A computer-implemented method comprising: receiving a plurality of business categories, wherein each of the business categories is associated with (i) at least one category profile and (ii) a set of electronic messages, wherein the set of electronic messages are maintained in a storage device; receiving business information for an unclassified business, wherein the business information comprises at least information describing power consumption of the unclassified business and a zoning restriction classification associated with a location at which the unclassified business operates; comparing the business information to one or more of the category profiles to determine if the unclassified business corresponds with at least one of the plurality of business categories based at least in part on the power consumption of the unclassified business, wherein for a first business category of the plurality of business categories, the comparing comprises: (i) determining a degree of similarity value describing a degree to which the business information matches information contained within the one or more of the category profiles associated with the first business category; and (ii) determining that the unclassified business corresponds with the first business category when the degree of similarity value exceeds a predetermined threshold; in response to determining that the unclassified business corresponds with the first business category, associating the unclassified business with the first business category, wherein a first subset of the set of electronic messages are maintained in the storage device in association with the first business category; and controlling transmission of the set of electronic messages based on associations between businesses and the business categories, comprising: (i) selecting the first subset of the set of electronic messages from the storage device for transmission to remote devices associated with the unclassified business based on the unclassified business being associated with the first business category; and (ii) sending the first subset of the set of electronic messages to the remote devices associated with the unclassified business. 5. The computer-implemented method of claim 1 , wherein the business information further comprises a square footage associated with a building in which the unclassified business operates.
label: 0.872268
patent_num: 9,092,420 | claim_num1: 1 | claim_num2: 8
1. An apparatus for automatically generating grammar for use in the processing of natural language, the apparatus comprising: a setting processor configured to set one domain as a target domain to be processed by an intention analysis system; a first extractor configured to extract a corpus relevant to the target domain from a collection of corpora and divide the corpus into sentences and tag the sentences; a classification unit configured to classify the extracted corpus into a domain action among one or more domain actions that correspond to the target domain; a class converter configured to convert one or more words included in the extracted corpus into classes; and a generator configured to generate a grammar based on the converted classes of the extracted corpus wherein one or more ungrammatical words or sentences are removed from the corpus.
1. An apparatus for automatically generating grammar for use in the processing of natural language, the apparatus comprising: a setting processor configured to set one domain as a target domain to be processed by an intention analysis system; a first extractor configured to extract a corpus relevant to the target domain from a collection of corpora and divide the corpus into sentences and tag the sentences; a classification unit configured to classify the extracted corpus into a domain action among one or more domain actions that correspond to the target domain; a class converter configured to convert one or more words included in the extracted corpus into classes; and a generator configured to generate a grammar based on the converted classes of the extracted corpus wherein one or more ungrammatical words or sentences are removed from the corpus. 8. The apparatus of claim 1 , wherein the generated grammar is configured to be used by the apparatus for recognizing an intended command by a user for operating the apparatus.
label: 0.738872
patent_num: 7,783,614 | claim_num1: 14 | claim_num2: 15
14. A method of linking elements in a computer-generated document to corresponding data in a database, comprising: attaching a schema file associated with at least one intended use of the document to a document defining rules associated with a markup language to be applied to the document, wherein the markup language is XML and wherein the rules associated with the markup language to be applied to the document comprise names of elements of the markup language and data types associated with the names of the elements of the markup language applying the elements of the markup language to the document; determining if a table associated with the document exists within a document library; if no table is associated with the document, creating a table containing user-defined elements associated with the document; linking at least one markup language element in the document to corresponding data in the database, wherein linking the data fields in the database to the document comprises selecting the created table within a document library, the document library being maintained in the database where the table is associated with the document; writing a unique document identifier number to the document for linking the at least one markup language element in the document to corresponding data in the database; entering data into the database associated with a given markup language element in the document; in response to entering data into the database associated with the given markup language element in the document, automatically writing the data to the document in a location in the document associated with the given markup language element; entering data into the document associated with a given markup language element; in response to entering data into the document associated with the given markup language element, automatically writing the data entered into the document to a data field in the database linked to the given markup language element; providing at least one suggested document element according to the schema file associated with the at least one intended use of the document, wherein the at least one suggested document element comprises an element structure linked to at least one corresponding data field in the database; and enforcing at least one element constraint according to the schema file associated with the document type, wherein the element constraint comprises at least one piece of required data for at least one document element.
14. A method of linking elements in a computer-generated document to corresponding data in a database, comprising: attaching a schema file associated with at least one intended use of the document to a document defining rules associated with a markup language to be applied to the document, wherein the markup language is XML and wherein the rules associated with the markup language to be applied to the document comprise names of elements of the markup language and data types associated with the names of the elements of the markup language applying the elements of the markup language to the document; determining if a table associated with the document exists within a document library; if no table is associated with the document, creating a table containing user-defined elements associated with the document; linking at least one markup language element in the document to corresponding data in the database, wherein linking the data fields in the database to the document comprises selecting the created table within a document library, the document library being maintained in the database where the table is associated with the document; writing a unique document identifier number to the document for linking the at least one markup language element in the document to corresponding data in the database; entering data into the database associated with a given markup language element in the document; in response to entering data into the database associated with the given markup language element in the document, automatically writing the data to the document in a location in the document associated with the given markup language element; entering data into the document associated with a given markup language element; in response to entering data into the document associated with the given markup language element, automatically writing the data entered into the document to a data field in the database linked to the given markup language element; providing at least one suggested document element according to the schema file associated with the at least one intended use of the document, wherein the at least one suggested document element comprises an element structure linked to at least one corresponding data field in the database; and enforcing at least one element constraint according to the schema file associated with the document type, wherein the element constraint comprises at least one piece of required data for at least one document element. 15. The method of claim 14 , further comprising establishing data fields within the database for linking to corresponding markup language elements in the document.
label: 0.894567
patent_num: 8,495,591 | claim_num1: 7 | claim_num2: 9
7. A method of parsing a plurality of preprocessor conditional branches of a preprocessor conditional directive statement comprising: using a processing unit, receiving an input from a caller, the input comprising the preprocessor conditional directive statement; using the processing unit, in order to have information available in each parsing path induced by mutually exclusive branches returned to the caller, serializing the input into a stream of tokens produced by following each parsing path induced by mutually exclusive branches of the preprocessor conditional directive statement interrupting a declaration by: labeling tokens belonging to a first parsing path with a first parsing path indicator; labeling tokens belonging to a second parsing path with a second parsing path indicator; fetching the tokens that belong to the first parsing path in a first pass and returning the tokens that belong to the first parsing path to the caller; and fetching the tokens that belong to the second parsing path in a second pass and returning the tokens that belong to the second parsing path to the caller, wherein parsing paths induced by mutually exclusive branches of the preprocessor conditional directive statement are detected by matching preprocessor conditional directives of the preprocessor conditional directive statement.
7. A method of parsing a plurality of preprocessor conditional branches of a preprocessor conditional directive statement comprising: using a processing unit, receiving an input from a caller, the input comprising the preprocessor conditional directive statement; using the processing unit, in order to have information available in each parsing path induced by mutually exclusive branches returned to the caller, serializing the input into a stream of tokens produced by following each parsing path induced by mutually exclusive branches of the preprocessor conditional directive statement interrupting a declaration by: labeling tokens belonging to a first parsing path with a first parsing path indicator; labeling tokens belonging to a second parsing path with a second parsing path indicator; fetching the tokens that belong to the first parsing path in a first pass and returning the tokens that belong to the first parsing path to the caller; and fetching the tokens that belong to the second parsing path in a second pass and returning the tokens that belong to the second parsing path to the caller, wherein parsing paths induced by mutually exclusive branches of the preprocessor conditional directive statement are detected by matching preprocessor conditional directives of the preprocessor conditional directive statement. 9. The method of claim 7 , further comprising generating a token buffer, the token buffer comprising a plurality of tokens of the declaration, wherein each token of the plurality of tokens of the declaration is labeled with at least one parsing path indicator corresponding to a parsing path to which each token belongs.
label: 0.614458
patent_num: 6,151,571 | claim_num1: 7 | claim_num2: 8
7. A method as recited in claim 1, wherein the voice signal is received from an emergency response system.
7. A method as recited in claim 1, wherein the voice signal is received from an emergency response system. 8. A method as recited in claim 7, wherein the third party is a member of an emergency response team.
label: 0.973167
patent_num: 9,377,951 | claim_num1: 10 | claim_num2: 11
10. The electronic device of claim 9 , the processor executing the instructions to perform the further step of highlighting the linguistic element of a current key selection with a third highlighting to distinguish the current key selection from the highlighted keys corresponding to the default language object.
10. The electronic device of claim 9 , the processor executing the instructions to perform the further step of highlighting the linguistic element of a current key selection with a third highlighting to distinguish the current key selection from the highlighted keys corresponding to the default language object. 11. The electronic device of claim 10 , wherein the predictive linguistic element is positioned in the language object at a location adjacent and subsequent to a current linguistic element corresponding to the current key selection.
label: 0.861244
patent_num: 8,566,092 | claim_num1: 9 | claim_num2: 15
9. An apparatus for extracting a prosodic feature of a speech signal, comprising: a hardware processor including: a framing unit adapted to divide the speech signal into speech frames; a transformation unit adapted to transform the speech frames from time domain to frequency domain; a prosodic feature calculation unit adapted to calculate respective prosodic features for different frequency ranges; a first extracting unit adapted to extract a traditional acoustics feature for each speech frame; a calculating unit adapted to calculate, for each said prosodic feature, a feature associated with a current frame, a difference between the feature associated with the current frame and a feature associated with a previous frame, and a difference between the feature associated with the current frame and an average of respective features in a speech segment of the current frame; a second extracting unit adapted to extract a fundamental frequency of the current frame, a difference between the fundamental frequency of the current frame and a fundamental frequency of the previous frame, and a difference between the fundamental frequency of the current frame and an average of respective fundamental frequencies in the speech segment of the current frame; and a recognizing unit adapted to recognize speech associated with the speech signal based on the calculating of said calculating unit and the extracting of said second extracting unit, wherein the prosodic feature calculation unit includes one or more of the following units: a thickness feature calculation unit adapted to calculate a thickness feature of the speech signal for a first frequency range, wherein the thickness feature is based on frequency domain energy of the first frequency range; a strength feature calculation unit adapted to calculate a strength feature of the speech signal for a second frequency range, wherein the strength feature is based on time domain energy of the second frequency range; and a contour feature calculation unit adapted to calculate a contour feature of the speech signal for a third frequency range, wherein the contour feature is based on a time domain envelope of the third frequency range.
9. An apparatus for extracting a prosodic feature of a speech signal, comprising: a hardware processor including: a framing unit adapted to divide the speech signal into speech frames; a transformation unit adapted to transform the speech frames from time domain to frequency domain; a prosodic feature calculation unit adapted to calculate respective prosodic features for different frequency ranges; a first extracting unit adapted to extract a traditional acoustics feature for each speech frame; a calculating unit adapted to calculate, for each said prosodic feature, a feature associated with a current frame, a difference between the feature associated with the current frame and a feature associated with a previous frame, and a difference between the feature associated with the current frame and an average of respective features in a speech segment of the current frame; a second extracting unit adapted to extract a fundamental frequency of the current frame, a difference between the fundamental frequency of the current frame and a fundamental frequency of the previous frame, and a difference between the fundamental frequency of the current frame and an average of respective fundamental frequencies in the speech segment of the current frame; and a recognizing unit adapted to recognize speech associated with the speech signal based on the calculating of said calculating unit and the extracting of said second extracting unit, wherein the prosodic feature calculation unit includes one or more of the following units: a thickness feature calculation unit adapted to calculate a thickness feature of the speech signal for a first frequency range, wherein the thickness feature is based on frequency domain energy of the first frequency range; a strength feature calculation unit adapted to calculate a strength feature of the speech signal for a second frequency range, wherein the strength feature is based on time domain energy of the second frequency range; and a contour feature calculation unit adapted to calculate a contour feature of the speech signal for a third frequency range, wherein the contour feature is based on a time domain envelope of the third frequency range. 15. The apparatus according to claim 9 , wherein the prosodic feature calculation unit calculates the prosodic features based on each frame.
label: 0.819121
patent_num: 8,924,197 | claim_num1: 29 | claim_num2: 30
29. The system of claim 17 , wherein the extensible engine has a main processing algorithm that engages portions of the tools and stores all intermediate results inside a plurality of data-structures.
29. The system of claim 17 , wherein the extensible engine has a main processing algorithm that engages portions of the tools and stores all intermediate results inside a plurality of data-structures. 30. The system of claim 29 , wherein the main processing algorithm engages portions of the tools in no specific order.
label: 0.960901
patent_num: 9,031,979 | claim_num1: 12 | claim_num2: 14
12. The system of claim 10 , wherein the at least one processor is further configured to execute the computer-executable instructions to cause the system to perform the method further comprising: at least partially forming one or more search tables corresponding to the one or more search values; and at least partially forming one or more base tables corresponding to the one or more fields of the plurality of records of the linked hierarchical database; and wherein the merging, based at least in part on determining the aggregate weights, comprises combining at least a portion of the one or more search tables and the one or more base tables to form the merged table.
12. The system of claim 10 , wherein the at least one processor is further configured to execute the computer-executable instructions to cause the system to perform the method further comprising: at least partially forming one or more search tables corresponding to the one or more search values; and at least partially forming one or more base tables corresponding to the one or more fields of the plurality of records of the linked hierarchical database; and wherein the merging, based at least in part on determining the aggregate weights, comprises combining at least a portion of the one or more search tables and the one or more base tables to form the merged table. 14. The system of claim 12 , wherein the one or more search tables comprise zero or more common fields, and wherein the one or more base tables comprise zero or more common fields, and wherein the one or more base tables comprise record identifiers for each entity in the hierarchy.
label: 0.940607
patent_num: 9,990,924 | claim_num1: 8 | claim_num2: 14
8. A speech interaction apparatus, comprising: a processor and a memory storing program; the processor is configured to execute the program stored in the memory, and perform operations comprising: acquiring speech data of a user; presetting a user attribute, wherein the user attribute comprises at least a gender attribute and an age attribute; and presetting multiple vocabularies corresponding to the gender attribute and multiple vocabularies corresponding to the age attribute; performing user attribute recognition on the speech data to obtain a first user attribute recognition result, wherein the user attribute is used to represent a user identity; performing content recognition on the speech data to obtain a content recognition result of the speech data; and performing a corresponding operation according to at least the first user attribute recognition result and the content recognition result, so as to respond to the speech data, wherein the performing a corresponding operation according to at least the first user attribute recognition result and the content recognition result comprises: determining vocabulary content corresponding to the first user attribute recognition result by searching, in a preset correspondence between the gender attribute and a vocabulary and a preset correspondence between the age attribute and a vocabulary, for a vocabulary corresponding to the first user attribute recognition result, and using a found vocabulary as the vocabulary content corresponding to the first user attribute recognition result.
8. A speech interaction apparatus, comprising: a processor and a memory storing program; the processor is configured to execute the program stored in the memory, and perform operations comprising: acquiring speech data of a user; presetting a user attribute, wherein the user attribute comprises at least a gender attribute and an age attribute; and presetting multiple vocabularies corresponding to the gender attribute and multiple vocabularies corresponding to the age attribute; performing user attribute recognition on the speech data to obtain a first user attribute recognition result, wherein the user attribute is used to represent a user identity; performing content recognition on the speech data to obtain a content recognition result of the speech data; and performing a corresponding operation according to at least the first user attribute recognition result and the content recognition result, so as to respond to the speech data, wherein the performing a corresponding operation according to at least the first user attribute recognition result and the content recognition result comprises: determining vocabulary content corresponding to the first user attribute recognition result by searching, in a preset correspondence between the gender attribute and a vocabulary and a preset correspondence between the age attribute and a vocabulary, for a vocabulary corresponding to the first user attribute recognition result, and using a found vocabulary as the vocabulary content corresponding to the first user attribute recognition result. 14. The apparatus according to claim 8 , wherein the operations further comprises: presetting a correspondence between a spectrum signature and a user attribute recognition result, wherein the user attribute recognition result comprises at least a gender attribute recognition result and an age attribute recognition result; and performing frequency domain transformation on the speech data to obtain a spectrum signature of the speech data; and searching in a preset correspondence between each spectrum signature and each user attribute recognition result, for a user attribute recognition result corresponding to the spectrum signature of the speech data, and using a found user attribute recognition result as the first user attribute recognition result of the speech data.
label: 0.654973
patent_num: 8,396,582 | claim_num1: 1 | claim_num2: 6
1. An autonomous biologically based learning system, comprising: a manufacturing tool that produces an asset; a drift component that modifies a manufacturing recipe processed by the manufacturing tool at least in part using a set of driving variables and a predetermined probability distribution function to generate one or more adjusted manufacturing recipes for the asset, wherein the set of driving variables determine a particular sequence to modify a set of recipe parameters associated with the manufacturing recipe; an objective autonomous learning engine that infers one or more functions for the manufacturing tool based on the modified manufacturing recipe processed by the manufacturing tool, wherein the one or more functions predict asset output metrics for the produced asset; and an autonomous optimization engine that extracts a set of updated recipe parameters from a set of input measurements and the one or more inferred functions to generate an adjusted recipe within a predefined tolerance of a target value for the asset output metrics.
1. An autonomous biologically based learning system, comprising: a manufacturing tool that produces an asset; a drift component that modifies a manufacturing recipe processed by the manufacturing tool at least in part using a set of driving variables and a predetermined probability distribution function to generate one or more adjusted manufacturing recipes for the asset, wherein the set of driving variables determine a particular sequence to modify a set of recipe parameters associated with the manufacturing recipe; an objective autonomous learning engine that infers one or more functions for the manufacturing tool based on the modified manufacturing recipe processed by the manufacturing tool, wherein the one or more functions predict asset output metrics for the produced asset; and an autonomous optimization engine that extracts a set of updated recipe parameters from a set of input measurements and the one or more inferred functions to generate an adjusted recipe within a predefined tolerance of a target value for the asset output metrics. 6. The system of claim 1 , wherein, to infer the one or more functions, the objective autonomous learning engine relaxes one or more constraints for the asset output metrics.
label: 0.852041
patent_num: 7,823,139 | claim_num1: 15 | claim_num2: 17
15. A machine storage medium having instructions stored thereon that when executed by a processor cause a system to: compile a source file in a first programming language into a parsed representation by a first compiler, the first compiler transforming the source file into first language tokens, and parsing the first language tokens into the parsed representation; receiving, by a transformation component, the parsed representation from a first semantic analysis, generate, by the transformation component, a token stream from the parsed representation produced by the first compiler and provide the token stream to the second compiler by the transformation component, wherein a plurality of compilation phases of the first compiler are skipped, wherein the token stream comprises second language tokens of the second programming language; receiving, by a second syntactic analysis phase of the second compiler the token stream from the transformation component and compiling the token stream into an object code in a second programming language by the second compiler, wherein a plurality of compilation phases of the second compiler are skipped; wherein the first compiler comprises a first lexical analysis, a first syntactic analysis, the first semantic analysis, a first optimization, and a first code generation; wherein the second compiler comprises a second lexical analysis, a second syntactic analysis, the second semantic analysis, a second optimization, and a second code generation; wherein the plurality of compilation phases of the first compiler that are skipped comprise the first optimization, the first code generation, and writing the object code as an output file; and wherein the plurality of compilation phases of the second compiler that are skipped comprise the second lexical analysis and accepting the object code as an input file.
15. A machine storage medium having instructions stored thereon that when executed by a processor cause a system to: compile a source file in a first programming language into a parsed representation by a first compiler, the first compiler transforming the source file into first language tokens, and parsing the first language tokens into the parsed representation; receiving, by a transformation component, the parsed representation from a first semantic analysis, generate, by the transformation component, a token stream from the parsed representation produced by the first compiler and provide the token stream to the second compiler by the transformation component, wherein a plurality of compilation phases of the first compiler are skipped, wherein the token stream comprises second language tokens of the second programming language; receiving, by a second syntactic analysis phase of the second compiler the token stream from the transformation component and compiling the token stream into an object code in a second programming language by the second compiler, wherein a plurality of compilation phases of the second compiler are skipped; wherein the first compiler comprises a first lexical analysis, a first syntactic analysis, the first semantic analysis, a first optimization, and a first code generation; wherein the second compiler comprises a second lexical analysis, a second syntactic analysis, the second semantic analysis, a second optimization, and a second code generation; wherein the plurality of compilation phases of the first compiler that are skipped comprise the first optimization, the first code generation, and writing the object code as an output file; and wherein the plurality of compilation phases of the second compiler that are skipped comprise the second lexical analysis and accepting the object code as an input file. 17. The machine storage medium of claim 15 , further comprising instructions that when executed cause the system to: perform the following compilation phase via the first compiler: accepting the source file in the first programming language as its input file.
label: 0.65
patent_num: 7,617,184 | claim_num1: 13 | claim_num2: 20
13. A computer-readable storage medium storing computer-executable instructions that, when executed by a computer, cause the computer to perform a method for presenting materials corresponding to a navigation state, the method comprising: receiving a user selection of an expression of attribute-value pairs; producing a plurality of refinement options and a plurality of ancestors by processing, in each server of a plurality of servers, the expression of attribute-value pairs to produce at least one refinement option and at least one ancestor; combining the plurality of refinement options and plurality of ancestors to form combined refinement options, the combined refinement options including at least one refinement navigation state; determining the navigation state associated with the expression of attribute-value pairs; providing materials associated with the navigation state; and providing the combined refinement options, wherein the combining comprises: taking a union of the plurality of refinement options, determining a set of ancestors for each refinement option of the plurality of refinement options, from the plurality of ancestors produced in the plurality of servers, to form sets of ancestors, computing an intersection of all of the sets of ancestors, and computing the combined refinement options based on terms in the intersection of all sets of ancestors, including identifying at least two related terms among the plurality of refinement options, and computing, for the at least two related terms, a least common ancestor of the related terms, and wherein a first server of the plurality of servers acts as a master server and some of the plurality of servers act as slave servers, the method further comprising the master server distributing a request for a navigation state to a plurality of slave servers, the slave servers computing navigation states for requests and returning results to the master server, and the master server combining the results from the slave servers to obtain a navigation state corresponding to the request, wherein the combining of the results is based on the combining of the plurality of refinement options and plurality of ancestors.
13. A computer-readable storage medium storing computer-executable instructions that, when executed by a computer, cause the computer to perform a method for presenting materials corresponding to a navigation state, the method comprising: receiving a user selection of an expression of attribute-value pairs; producing a plurality of refinement options and a plurality of ancestors by processing, in each server of a plurality of servers, the expression of attribute-value pairs to produce at least one refinement option and at least one ancestor; combining the plurality of refinement options and plurality of ancestors to form combined refinement options, the combined refinement options including at least one refinement navigation state; determining the navigation state associated with the expression of attribute-value pairs; providing materials associated with the navigation state; and providing the combined refinement options, wherein the combining comprises: taking a union of the plurality of refinement options, determining a set of ancestors for each refinement option of the plurality of refinement options, from the plurality of ancestors produced in the plurality of servers, to form sets of ancestors, computing an intersection of all of the sets of ancestors, and computing the combined refinement options based on terms in the intersection of all sets of ancestors, including identifying at least two related terms among the plurality of refinement options, and computing, for the at least two related terms, a least common ancestor of the related terms, and wherein a first server of the plurality of servers acts as a master server and some of the plurality of servers act as slave servers, the method further comprising the master server distributing a request for a navigation state to a plurality of slave servers, the slave servers computing navigation states for requests and returning results to the master server, and the master server combining the results from the slave servers to obtain a navigation state corresponding to the request, wherein the combining of the results is based on the combining of the plurality of refinement options and plurality of ancestors. 20. The computer-readable storage medium of claim 13 , wherein the expression of attribute-value pairs is processed on a different partition of a collection of materials for different ones of the plurality of servers.
label: 0.80694
patent_num: 8,793,261 | claim_num1: 10 | claim_num2: 11
10. A method as claimed in claim 9 , wherein each novelty score is determined to be zero if p ij<q ij .
10. A method as claimed in claim 9 , wherein each novelty score is determined to be zero if p ij<q ij . 11. A method as claimed in claim 10 , wherein each novelty score is determined on the basis of the following: Ψ ij = { ( S j * ⁢ p ij * ⁢ ln ⁢ ⁢ p ij ) + ( ( S - S j ) * ⁢ q ij * ⁢ ) - ( S * ⁢ t ij * ⁢ ln ⁢ ⁢ t ij ) , p ij ≥ q ij } P ij ≥ q ij ⁢ 0 , P ij < q ij where Ψ ij is the novelty score of the n-gram i in the document j.
label: 0.943487
patent_num: 10,089,742 | claim_num1: 17 | claim_num2: 18
17. The computing system of claim 14 , the actions further comprising: steps for updating the segmentation map based on a trained recurrent neural network (RNN); and steps for employing the trained RNN to store an encoding of the segmentation map for a subsequent updating of the segmentation map.
17. The computing system of claim 14 , the actions further comprising: steps for updating the segmentation map based on a trained recurrent neural network (RNN); and steps for employing the trained RNN to store an encoding of the segmentation map for a subsequent updating of the segmentation map. 18. The computing system of claim 17 , wherein the trained RNN is a convolutional multimodal recurrent neural network (mRNN).
label: 0.957827
patent_num: 8,879,805 | claim_num1: 1 | claim_num2: 4
1. A method for constructing a template image for recognition, comprising the steps of: obtaining a plurality of digitized images that belong to n categories, each image including location and gray level information of pixels of the image and category information of the image; for all categories, abstracting common features Ai for images belonging to a category Di (0<i≦n); comparing common features Ai of said category Di with common features Aj of a predetermined number m of categories other than category Di (0<j≦m≦n−1), to obtain discriminating features Σ(Aj T Ai) for category Di; and including features Σ(Aj T Ai) into said common features Ai to obtain template image Ai* for category Di, wherein the template image Ai* is obtained from the following formula in one single step: min A i , E i ⁢  A i  * + λ ⁢  E i  1 + η ⁢ ∑ j ≠ i ⁢  A j T ⁢ A i  F 2 s . t . ⁢ D i = A i + E i , wherein η and λ are constants.
1. A method for constructing a template image for recognition, comprising the steps of: obtaining a plurality of digitized images that belong to n categories, each image including location and gray level information of pixels of the image and category information of the image; for all categories, abstracting common features Ai for images belonging to a category Di (0<i≦n); comparing common features Ai of said category Di with common features Aj of a predetermined number m of categories other than category Di (0<j≦m≦n−1), to obtain discriminating features Σ(Aj T Ai) for category Di; and including features Σ(Aj T Ai) into said common features Ai to obtain template image Ai* for category Di, wherein the template image Ai* is obtained from the following formula in one single step: min A i , E i ⁢  A i  * + λ ⁢  E i  1 + η ⁢ ∑ j ≠ i ⁢  A j T ⁢ A i  F 2 s . t . ⁢ D i = A i + E i , wherein η and λ are constants. 4. The method according to claim 1 , wherein the plurality of digitized images includes face images belonging to n persons.
label: 0.523256
patent_num: 9,170,994 | claim_num1: 10 | claim_num2: 14
10. A non-transitory computer readable medium including computer executable instructions, wherein the instructions, when executed by a processor, cause the processor to perform a method comprising: performing speech recognition of speech in a source language to obtain a recognition result text as a result of the speech recognition; dividing the recognition result text into a plurality of parts to obtain a plurality of source language strings for translating from the source language into a target language; translating the plurality of source language strings into a plurality of target language strings in a chronological order, the plurality of target language strings including a first target language string and one or more second target language strings which chronologically precede the first target language string; detecting ambiguity in interpretation of the speech corresponding to the first target language string, based on a relationship between the first target language string and the second target language strings; and adding an additional phrase to the first target language string if ambiguity is detected, the additional phrase being one of words and phrases to interpret uniquely a modification relationship between the first target language string and the second target language strings.
10. A non-transitory computer readable medium including computer executable instructions, wherein the instructions, when executed by a processor, cause the processor to perform a method comprising: performing speech recognition of speech in a source language to obtain a recognition result text as a result of the speech recognition; dividing the recognition result text into a plurality of parts to obtain a plurality of source language strings for translating from the source language into a target language; translating the plurality of source language strings into a plurality of target language strings in a chronological order, the plurality of target language strings including a first target language string and one or more second target language strings which chronologically precede the first target language string; detecting ambiguity in interpretation of the speech corresponding to the first target language string, based on a relationship between the first target language string and the second target language strings; and adding an additional phrase to the first target language string if ambiguity is detected, the additional phrase being one of words and phrases to interpret uniquely a modification relationship between the first target language string and the second target language strings. 14. The medium according to claim 10 , wherein the detecting of the ambiguity analyzes the modification relationship of words and phrases between the first target language string and the second target language strings, and detects the ambiguity if at least one of the first target language string and the second target language strings has two or more modification relationships.
0.587912
10,055,457
8
14
8. A system comprising: a computer-readable medium having instructions stored thereon, which, when executed by a processor, cause the system to: obtain one or more query terms in a first query; and for each of the one or more query terms: search a standardized entity taxonomy to locate a standardized entity that most closely matches the query term, the standardized entity taxonomy comprising an entity identification for each of a plurality of different standardized entities; calculate a confidence score for a query term-standardized entity pair for the standardized entity that most closely matches the query term; in response to a determination that the confidence score transgresses a threshold, associate the query term with the entity identification corresponding to the standardized entity that most closely matches the query term; retrieve one or more query rewriting rules corresponding to an entity type of the standardized entity having the entity identification; and execute the one or more query rewriting rules to rewrite the first query such that the rewritten query is more restrictive than the first query.
8. A system comprising: a computer-readable medium having instructions stored thereon, which, when executed by a processor, cause the system to: obtain one or more query terms in a first query; and for each of the one or more query terms: search a standardized entity taxonomy to locate a standardized entity that most closely matches the query term, the standardized entity taxonomy comprising an entity identification for each of a plurality of different standardized entities; calculate a confidence score for a query term-standardized entity pair for the standardized entity that most closely matches the query term; in response to a determination that the confidence score transgresses a threshold, associate the query term with the entity identification corresponding to the standardized entity that most closely matches the query term; retrieve one or more query rewriting rules corresponding to an entity type of the standardized entity having the entity identification; and execute the one or more query rewriting rules to rewrite the first query such that the rewritten query is more restrictive than the first query. 14. The system of claim 8 , wherein the confidence score indicates a statistical likelihood that a user specifying the query term in a search query would have, under ideal circumstances, also entered the corresponding standardized entity in the search query, based on a confidence score model trained via a machine learning algorithm based on member profiles and member activities in a social networking service.
0.563559
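The query-rewriting flow in claim 8 of U.S. Pat. No. 10,055,457 above can be pictured with a small sketch. Everything here is an assumption made for illustration: the taxonomy is a flat dict mapping entity names to (id, type), the confidence score is a plain string-similarity ratio, and each rewrite rule is a caller-supplied callable that tightens the query.

```python
from difflib import SequenceMatcher

def rewrite_query(terms, taxonomy, rules_by_type, threshold=0.8):
    """Hypothetical sketch: map query terms to standardized entities, then apply rewrite rules."""
    rewritten = list(terms)
    for term in terms:
        # Find the standardized entity that most closely matches the term.
        best, score = None, 0.0
        for name, (entity_id, entity_type) in taxonomy.items():
            s = SequenceMatcher(None, term.lower(), name.lower()).ratio()  # stand-in confidence
            if s > score:
                best, score = (entity_id, entity_type), s
        if best and score >= threshold:
            entity_id, entity_type = best
            for rule in rules_by_type.get(entity_type, []):
                rewritten = rule(rewritten, term, entity_id)  # each rule narrows the query
    return rewritten
```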
8,316,348
2
3
2. The framework of claim 1 , further comprising a JavaScript library that is accessed by the data conglomeration engine.
2. The framework of claim 1 , further comprising a JavaScript library that is accessed by the data conglomeration engine. 3. The framework of claim 2 , the JavaScript library containing a set of JavaScript objects that represents JavaScript data, and a set of JavaScript functions that is used to format the set of JavaScript objects.
0.940181
8,977,576
15
16
15. The method of claim 11 wherein minimizing an objective comprises generating features in a learning problem.
15. The method of claim 11 wherein minimizing an objective comprises generating features in a learning problem. 16. The method of claim 15 wherein the learning problem is selected from the group consisting of: pattern recognition, training an artificial neural network, and software verification and validation.
0.95402
7,809,551
1
3
1. A system for retrieving documents related to a concept from a text corpus comprising: a computer comprising non-transitory storage media which stores: a set of at least four semantic classes, each including at least five keywords, which classes are combinable in different combinations thereof according to predefined syntactic rules to express the concept, a set of user-selected keywords for each of the semantic classes to be used in searching documents in the text corpus, each of a plurality of the sets of user-selected keywords including a plurality of user-selected keywords, at least some of the semantic classes including keywords which are used in relevant expressions in retrieved text when the constituent notion is being conveyed and including keywords having different meanings from other keywords of the same semantic class and which are not synonymous with the other keywords of the same semantic class, and a plurality of the syntactic rules to be applied to identified text portions which include one or more of the user-selected keywords, each of the syntactic rules identifying a pair of semantic classes comprising a respective first of the semantic classes and a respective second of the semantic classes, whereby different rules identify different pairs of semantic classes, the rule being satisfied when any first keyword from the first of the pair of semantic classes and any second keyword from the second of the pair of semantic classes are in any one of a plurality of syntactic relationships; and a concept matching module, which accesses the memory, and which identifies text portions within the text corpus which include one or more of the keywords and which applies each of the syntactic rules to the text portions and identifies those text portions which each satisfy at least one of the syntactic rules, and retrieves documents which include at least one of the identified text portions.
1. A system for retrieving documents related to a concept from a text corpus comprising: a computer comprising non-transitory storage media which stores: a set of at least four semantic classes, each including at least five keywords, which classes are combinable in different combinations thereof according to predefined syntactic rules to express the concept, a set of user-selected keywords for each of the semantic classes to be used in searching documents in the text corpus, each of a plurality of the sets of user-selected keywords including a plurality of user-selected keywords, at least some of the semantic classes including keywords which are used in relevant expressions in retrieved text when the constituent notion is being conveyed and including keywords having different meanings from other keywords of the same semantic class and which are not synonymous with the other keywords of the same semantic class, and a plurality of the syntactic rules to be applied to identified text portions which include one or more of the user-selected keywords, each of the syntactic rules identifying a pair of semantic classes comprising a respective first of the semantic classes and a respective second of the semantic classes, whereby different rules identify different pairs of semantic classes, the rule being satisfied when any first keyword from the first of the pair of semantic classes and any second keyword from the second of the pair of semantic classes are in any one of a plurality of syntactic relationships; and a concept matching module, which accesses the memory, and which identifies text portions within the text corpus which include one or more of the keywords and which applies each of the syntactic rules to the text portions and identifies those text portions which each satisfy at least one of the syntactic rules, and retrieves documents which include at least one of the identified text portions. 3. The system of claim 1 , wherein the plurality of syntactic relationships comprises at least four syntactic relationships.
0.891037
7,650,566
2
3
2. The method of claim 1 , further comprising determining whether the list is a picture bulleted list.
2. The method of claim 1 , further comprising determining whether the list is a picture bulleted list. 3. The method of claim 2 , wherein a specified element and attribute are included to store the picture bullet image information and picture bullet identifier when the list is a picture bullet list.
0.936697
8,700,577
1
8
1. A computer-implemented method comprising: generating a set of candidate conditional functional dependencies based on a set of candidate seeds by using an ontology of a data set, said data set comprising records comprising a plurality of attributes and a plurality of values for said attributes, said plurality of attributes comprising attributes having multiple and different values, wherein said ontology comprises links that indicate which of said attributes are related, said candidate seeds comprising instances of related attributes; applying said candidate conditional functional dependencies individually to said data set to obtain a set of corresponding result values for said candidate conditional functional dependencies; refining said candidate conditional functional dependencies individually, said refining comprising, for each of said conditional functional dependencies: incrementing a first count of records in a first subset of said plurality of records that are consistent with a conditional functional dependency, wherein all values in a pattern tuple of said conditional functional dependency match respective values in a record that is consistent with said conditional functional dependency; incrementing a second count of records in said first subset of said plurality of records that are inconsistent with said conditional functional dependency, wherein all values in a pattern tuple of the antecedent of said conditional functional dependency match respective values, but values in said pattern tuple of the consequent of said conditional functional dependency do not match respective values, in a record that is inconsistent with said conditional functional dependency; incrementing a third count of records in said first subset of said plurality of records that are not consistent with said conditional functional dependency and are not inconsistent with said conditional functional dependency; determining whether a first measure based on said first and third counts satisfies a first threshold value, wherein if said first measure fails to satisfy said first threshold value then a condition is removed from said antecedent of said conditional functional dependency and said refining then continues for a second subset of said plurality of records; and determining whether a second measure based on said second and third counts satisfies a second threshold value, wherein if said second measure fails to satisfy said second threshold value then said first measure is reduced and said refining then continues for said second subset of said plurality of records; terminating said applying and said refining when said candidate conditional functional dependencies individually reach a quiescent state; and selecting a relevant set of said candidate conditional functional dependencies to be used as data quality rules for said data set.
1. A computer-implemented method comprising: generating a set of candidate conditional functional dependencies based on a set of candidate seeds by using an ontology of a data set, said data set comprising records comprising a plurality of attributes and a plurality of values for said attributes, said plurality of attributes comprising attributes having multiple and different values, wherein said ontology comprises links that indicate which of said attributes are related, said candidate seeds comprising instances of related attributes; applying said candidate conditional functional dependencies individually to said data set to obtain a set of corresponding result values for said candidate conditional functional dependencies; refining said candidate conditional functional dependencies individually, said refining comprising, for each of said conditional functional dependencies: incrementing a first count of records in a first subset of said plurality of records that are consistent with a conditional functional dependency, wherein all values in a pattern tuple of said conditional functional dependency match respective values in a record that is consistent with said conditional functional dependency; incrementing a second count of records in said first subset of said plurality of records that are inconsistent with said conditional functional dependency, wherein all values in a pattern tuple of the antecedent of said conditional functional dependency match respective values, but values in said pattern tuple of the consequent of said conditional functional dependency do not match respective values, in a record that is inconsistent with said conditional functional dependency; incrementing a third count of records in said first subset of said plurality of records that are not consistent with said conditional functional dependency and are not inconsistent with said conditional functional dependency; determining whether a first measure based on said first and third counts satisfies a first threshold value, wherein if said first measure fails to satisfy said first threshold value then a condition is removed from said antecedent of said conditional functional dependency and said refining then continues for a second subset of said plurality of records; and determining whether a second measure based on said second and third counts satisfies a second threshold value, wherein if said second measure fails to satisfy said second threshold value then said first measure is reduced and said refining then continues for said second subset of said plurality of records; terminating said applying and said refining when said candidate conditional functional dependencies individually reach a quiescent state; and selecting a relevant set of said candidate conditional functional dependencies to be used as data quality rules for said data set. 8. The computer-implemented method from claim 1 , wherein said quiescent state is achieved for a specific one of said candidate conditional functional dependencies when said specific one of said candidate conditional functional dependencies has been applied individually to a series of said data segments without said refining altering said specific candidate conditional functional dependencies, wherein said series of said data segments contain an amount of data points equal in size to a predetermined window period and contain stable data.
0.698333
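Claim 1 of U.S. Pat. No. 8,700,577 above distinguishes records that are consistent with a conditional functional dependency (all antecedent and consequent values match), inconsistent (antecedent matches, consequent does not), and neither. The sketch below only tallies those three counts for one candidate CFD over a batch of records; the dictionary representation of a pattern tuple and the ratio shown as a stand-in for the "first measure" are assumptions, not the patent's exact definitions.

```python
def cfd_counts(records, antecedent, consequent):
    """Count (consistent, inconsistent, neither) records for one candidate CFD.

    antecedent/consequent are pattern tuples given as {attribute: required_value}.
    """
    first = second = third = 0
    for rec in records:
        lhs_ok = all(rec.get(a) == v for a, v in antecedent.items())
        rhs_ok = all(rec.get(a) == v for a, v in consequent.items())
        if lhs_ok and rhs_ok:
            first += 1          # consistent with the CFD
        elif lhs_ok and not rhs_ok:
            second += 1         # inconsistent: antecedent matches, consequent does not
        else:
            third += 1          # neither consistent nor inconsistent
    return first, second, third

# Illustrative measure only; the claim just requires a measure "based on" the counts.
def support_like(first, third):
    return first / (first + third) if (first + third) else 0.0
```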
7,724,985
11
12
11. A computer usable storage medium having stored thereon instructions that when executed cause a computer system to perform a method for formatting an image, said method comprising: in response to receiving an image request from a device management tool, retrieving a vector image from a device that is a target of said image request, said vector image illustrating at least a portion of said device; locating at said device a set of style attributes for said vector image, said set of style attributes being stored apart from said vector image; and linking said set of style attributes with a first image style identifier embedded in said vector image.
11. A computer usable storage medium having stored thereon instructions that when executed cause a computer system to perform a method for formatting an image, said method comprising: in response to receiving an image request from a device management tool, retrieving a vector image from a device that is a target of said image request, said vector image illustrating at least a portion of said device; locating at said device a set of style attributes for said vector image, said set of style attributes being stored apart from said vector image; and linking said set of style attributes with a first image style identifier embedded in said vector image. 12. The computer usable storage medium of claim 11 , wherein the linking of said set of style attributes with said first image style identifier embedded in said vector image further comprises: locating a second image style identifier embedded in said vector image; and replacing said second image style identifier with said first image style identifier.
0.773718
8,234,285
1
2
1. A method performed by a data processing apparatus, the method comprising: selecting, by the data processing apparatus, object representations from a dataset storing a plurality of object representations, each object representation being an association of: an object identifier that identifies an object instance in a dataset and corresponds to an object; a context value that identifies a context of the object; and a set of feature values that identify features of the object; wherein each object identifier is unique in the dataset, and each context value is associated with one or more object identifiers; for each feature value: determining an inter-context score that is proportional to the number of different context values in the dataset that are associated with the feature value, wherein the inter-context score is based on a probability that a pair of object representations that each include a particular feature value are each associated with different context values; and determining an intra-context score that proportional to the number of times the feature value is associated with each context value, wherein the intra-context score is based on a probability that a pair of object representations that each include the particular feature value are each associated with the same context value; and for a selected pair of object representations, determining a similarity score based on a respective inter-context score and a respective intra-context score of each matching feature value in the set of features for the pair of object representations, the similarity score being a measure of the similarity of the object representations in the pair of object representations, the determining comprising, for each feature value, determining a feature weight for the feature value, the feature weight being based on a ratio of the inter-context score of the feature value to the intra-context score of the feature value.
1. A method performed by a data processing apparatus, the method comprising: selecting, by the data processing apparatus, object representations from a dataset storing a plurality of object representations, each object representation being an association of: an object identifier that identifies an object instance in a dataset and corresponds to an object; a context value that identifies a context of the object; and a set of feature values that identify features of the object; wherein each object identifier is unique in the dataset, and each context value is associated with one or more object identifiers; for each feature value: determining an inter-context score that is proportional to the number of different context values in the dataset that are associated with the feature value, wherein the inter-context score is based on a probability that a pair of object representations that each include a particular feature value are each associated with different context values; and determining an intra-context score that proportional to the number of times the feature value is associated with each context value, wherein the intra-context score is based on a probability that a pair of object representations that each include the particular feature value are each associated with the same context value; and for a selected pair of object representations, determining a similarity score based on a respective inter-context score and a respective intra-context score of each matching feature value in the set of features for the pair of object representations, the similarity score being a measure of the similarity of the object representations in the pair of object representations, the determining comprising, for each feature value, determining a feature weight for the feature value, the feature weight being based on a ratio of the inter-context score of the feature value to the intra-context score of the feature value. 2. The method of claim 1 , wherein: determining a similarity score comprises: generating a weighted vector for each set of features of the selected pair of object representations, each weighted vector having, for each feature value, a corresponding feature weight determined for the feature value; and determining a cosine similarity value from the weighted vectors.
0.786713
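Claim 1 of U.S. Pat. No. 8,234,285 above weights each matching feature by the ratio of its inter-context score to its intra-context score, and claim 2 folds those weights into a cosine similarity. The following sketch assumes the two scores are already computed per feature value and simply builds the weighted vectors and their cosine; the data layout is an assumption.

```python
import math

def weighted_vector(features, inter, intra):
    """Weight each feature value by inter-context score / intra-context score."""
    return {f: inter.get(f, 0.0) / intra[f] for f in features if intra.get(f, 0) > 0}

def cosine_similarity(features_a, features_b, inter, intra):
    """Similarity score for a pair of object representations given as sets of feature values."""
    va = weighted_vector(features_a, inter, intra)
    vb = weighted_vector(features_b, inter, intra)
    dot = sum(w * vb[f] for f, w in va.items() if f in vb)   # only matching feature values contribute
    na = math.sqrt(sum(w * w for w in va.values()))
    nb = math.sqrt(sum(w * w for w in vb.values()))
    return dot / (na * nb) if na and nb else 0.0
```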
9,934,397
1
2
1. A method for controlling privacy in an application having face recognition features, the method comprising: receiving an input including a face recognition query and a digital image of a face; identifying a target user associated with a facial signature in a database based at least in part on a statistical correlation between a detected facial signature in the digital image and one or more facial signatures in the database; identifying a current context of the target user; extracting a profile of the target user from a profile database, wherein the extracted profile comprises one or more privacy preferences and a profile context, wherein the extracting is based on the current context matching the profile context by a matching threshold; and generating a customized profile of the target user, wherein the customized profile omits one or more elements of the profile of the target user based on the one or more privacy preferences and based on the current context of the target user.
1. A method for controlling privacy in an application having face recognition features, the method comprising: receiving an input including a face recognition query and a digital image of a face; identifying a target user associated with a facial signature in a database based at least in part on a statistical correlation between a detected facial signature in the digital image and one or more facial signatures in the database; identifying a current context of the target user; extracting a profile of the target user from a profile database, wherein the extracted profile comprises one or more privacy preferences and a profile context, wherein the extracting is based on the current context matching the profile context by a matching threshold; and generating a customized profile of the target user, wherein the customized profile omits one or more elements of the profile of the target user based on the one or more privacy preferences and based on the current context of the target user. 2. The method of claim 1 , wherein the received input is received from a sensor device, wherein the sensor device captures the digital image.
0.921141
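Claim 1 of U.S. Pat. No. 9,934,397 above gates profile extraction on a context match and then omits profile elements according to privacy preferences. Below is a minimal sketch of that filtering step only, assuming a profile is a dict, privacy preferences are a per-field map of contexts in which the field may be shown, and the context-match decision is made elsewhere; all of these representations are assumptions.

```python
def customized_profile(profile, privacy_prefs, current_context):
    """Omit profile elements the target user has not allowed in the current context.

    privacy_prefs maps field name -> set of contexts in which that field may be shown
    (an assumed representation; the claim only requires omission based on preferences
    and the current context).
    """
    allowed = {}
    for field, value in profile.items():
        permitted_contexts = privacy_prefs.get(field)
        if permitted_contexts is None or current_context in permitted_contexts:
            allowed[field] = value   # keep fields with no restriction or an explicit allow
    return allowed
```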
8,583,438
11
13
11. A system comprising: a computer; a text analyzer implemented at least in part by the computer and configured for building, based on text, a lattice comprising speech units, wherein each speech unit in the lattice is obtained from a database comprising a plurality of candidate speech units; a search mechanism implemented at least in part by the computer and configured for finding, in the lattice, a sequence of speech units that conforms to the text; a pruning mechanism implemented at least in part by the computer and configured for pruning, from the sequence of speech units, any of the speech units in the sequence that, based on likelihood ratios and a prosody model that was trained using actual speech, are detected to have unnatural prosody, where the prosody model exhibits a bias toward detecting unnatural prosody; a detection mechanism implemented at least in part by the computer and configured for iterating the finding and the pruning until completion that is based on a condition selected from a group of conditions comprising: 1) every speech unit in the sequence corresponding to natural prosody, and 2) iterating a maximum number of iterations.
11. A system comprising: a computer; a text analyzer implemented at least in part by the computer and configured for building, based on text, a lattice comprising speech units, wherein each speech unit in the lattice is obtained from a database comprising a plurality of candidate speech units; a search mechanism implemented at least in part by the computer and configured for finding, in the lattice, a sequence of speech units that conforms to the text; a pruning mechanism implemented at least in part by the computer and configured for pruning, from the sequence of speech units, any of the speech units in the sequence that, based on likelihood ratios and a prosody model that was trained using actual speech, are detected to have unnatural prosody, where the prosody model exhibits a bias toward detecting unnatural prosody; a detection mechanism implemented at least in part by the computer and configured for iterating the finding and the pruning until completion that is based on a condition selected from a group of conditions comprising: 1) every speech unit in the sequence corresponding to natural prosody, and 2) iterating a maximum number of iterations. 13. The system of claim 11 wherein the pruning further comprises replacing the speech unit in the lattice with one of the candidate speech units.
0.718992
10,114,906
10
11
10. The computer-implemented method of claim 1 , wherein applying the physics model to the representation of the semi-structured document to automatically extract the set of data from the representation comprises: using the physics model to identify a location of a data element in the representation; using the location of the data element and the set of relationships to identify additional locations of one or more additional data elements in the representation; and using the location and the additional locations to extract the set of data from the representation.
10. The computer-implemented method of claim 1 , wherein applying the physics model to the representation of the semi-structured document to automatically extract the set of data from the representation comprises: using the physics model to identify a location of a data element in the representation; using the location of the data element and the set of relationships to identify additional locations of one or more additional data elements in the representation; and using the location and the additional locations to extract the set of data from the representation. 11. The computer-implemented method of claim 10 , wherein using the location of the data element and the set of relationships to identify the additional locations of the one or more additional data elements in the representation comprises: using the location of the data element and one or more parameters of a relationship between the data element and another data element to define a region in the representation in which the other data element is located.
0.941745
8,515,758
8
9
8. A computer-implemented method comprising: training a generic set of hidden Markov models using training data and at least a first vector mapping function, the first vector mapping function removing at least a portion of information not relevant to recognizing speech from feature vectors derived from the training data; training the first vector mapping function based at least in part on the training data while training the set of hidden Markov models; and training a second vector mapping function having a differing degree of freedom from the first vector mapping function based at least in part on the generic set of hidden Markov models.
8. A computer-implemented method comprising: training a generic set of hidden Markov models using training data and at least a first vector mapping function, the first vector mapping function removing at least a portion of information not relevant to recognizing speech from feature vectors derived from the training data; training the first vector mapping function based at least in part on the training data while training the set of hidden Markov models; and training a second vector mapping function having a differing degree of freedom from the first vector mapping function based at least in part on the generic set of hidden Markov models. 9. The method according to claim 8 , further comprising providing the trained generic set of hidden Markov models, the trained first vector mapping function, and the second vector mapping function to a speech recognition engine to use in recognizing speech from an unknown utterance.
0.877912
8,862,456
1
2
1. A method to translate displayed user-interface text of a computer application, comprising: intercepting, by a processor coupled to a memory and a screen, a command to display user-interface text on the screen in a first language, the command comprising the user-interface text to display in the first language; extracting user-interface text to translate from the command; querying a translation mechanism by use of the extracted user-interface text; receiving translated user-interface text in a second language from the translation mechanism; and displaying the translated user-interface text in the second language.
1. A method to translate displayed user-interface text of a computer application, comprising: intercepting, by a processor coupled to a memory and a screen, a command to display user-interface text on the screen in a first language, the command comprising the user-interface text to display in the first language; extracting user-interface text to translate from the command; querying a translation mechanism by use of the extracted user-interface text; receiving translated user-interface text in a second language from the translation mechanism; and displaying the translated user-interface text in the second language. 2. The method of claim 1 , wherein the translation mechanism is adapted to the computer application.
0.912281
9,257,052
3
5
3. The method of claim 1 wherein providing, by the question answering module, an answer to the question further comprises calculating a certainty score for each of the one or more of the answers.
3. The method of claim 1 wherein providing, by the question answering module, an answer to the question further comprises calculating a certainty score for each of the one or more of the answers. 5. The method of claim 3 wherein providing, by the question answering module, an answer to the question further comprises providing each answer and the certainty score for each answer.
0.954769
9,213,946
2
3
2. The method of claim 1 , wherein: the information about relationships among the one or more variables of the first generative model comprises information about relationships among first hidden variables and first observable variables; and the information about relationships among the one or more variables of the second generative model comprises information about relationships among second hidden variables and second observable variables.
2. The method of claim 1 , wherein: the information about relationships among the one or more variables of the first generative model comprises information about relationships among first hidden variables and first observable variables; and the information about relationships among the one or more variables of the second generative model comprises information about relationships among second hidden variables and second observable variables. 3. The method of claim 2 , wherein: the first generative model includes a first set of clusters, each cluster of the first set including one or more of the first observable variables and one or more of the first hidden variables, the first observable variables being represented as terminal nodes and the first hidden variables being represented as cluster nodes; and the second generative model includes a second set of clusters, each cluster of the second set including one or more of the second observable variables and one or more of the second hidden variables, the second observable variables being represented as terminal nodes and the second hidden variables being represented as cluster nodes, wherein the terminal nodes and cluster nodes are coupled together by weighted links, so that when an incoming link from a node is activated, a cluster node may be caused to activate with a probability proportional to the weight of the incoming link, and wherein an outgoing link from the cluster node to another node causes the other node to activate with a probability proportionate to the weight of the outgoing link, otherwise the other node is not activated.
0.60048
7,545,981
8
13
8. A method of rearranging a display of text within an image containing text, said method comprising: acquiring an image containing text by a solid-state image sensor; using a computer processor to: identify distinct regions of text within said image; generate sub-images from said image according to said distinct regions of text; and order said sub-images according to a predetermined order; and displaying said sub-images in said predetermined order on a graphical display device, wherein if said computer identifies side-by-side columns, said computer changes a presentation of said columns such that said columns are displayed on said graphical display device above and below each other instead of side-by-side.
8. A method of rearranging a display of text within an image containing text, said method comprising: acquiring an image containing text by a solid-state image sensor; using a computer processor to: identify distinct regions of text within said image; generate sub-images from said image according to said distinct regions of text; and order said sub-images according to a predetermined order; and displaying said sub-images in said predetermined order on a graphical display device, wherein if said computer identifies side-by-side columns, said computer changes a presentation of said columns such that said columns are displayed on said graphical display device above and below each other instead of side-by-side. 13. The method according to claim 8 , wherein said computer identifies said distinct regions by comparing said image to a document model.
0.89075
9,280,562
18
19
18. The method according to claim 17 , wherein f_i, i∈[1,N] denotes a visual feature vector of images in a training database, where N is the size of the database, w_j, j∈[1,M] denotes the distinct textual words in a training annotation word set, where M is the size of annotation vocabulary in the training database, the visual features of images in the database, f_i = [f_i^1, f_i^2, . . . , f_i^L], i∈[1,N], are known i.i.d. samples from an unknown distribution, having a visual feature dimension L, the specific visual feature annotation word pairs (f_i, w_j), i∈[1,N], j∈[1,M] are known i.i.d. samples from an unknown distribution, associated with an unobserved semantic concept variable z∈Z={z_1, . . . , z_K}, in which each observation of one visual feature f∈F={f_1, f_2, . . . , f_N} belongs to one or more concept classes z_k and each observation of one word w∈V={w_1, w_2, . . . , w_M} in one image f_i belongs to one concept class, in which the observation pairs (f_i, w_j) are assumed to be generated independently, and the pairs of random variables (f_i, w_j) are assumed to be conditionally independent given the respective hidden concept z_k, such that P(f_i, w_j | z_k) = P_ℑ(f_i | z_k) P_V(w_j | z_k); the visual feature and word distribution is treated as a randomized data generation process, wherein a probability of a concept is represented as P_z(z_k); a visual feature is selected f_i∈F with probability P_ℑ(f_i | z_k); and a textual word is selected w_j∈V with probability P_V(w_j | z_k), from which an observed pair (f_i, w_j) is obtained, such that a joint probability model is expressed as follows: P(f_i, w_j) = P(w_j) P(f_i | w_j) = P(w_j) Σ_{k=1}^{K} P_ℑ(f_i | z_k) P(z_k | w_j) = Σ_{k=1}^{K} P_z(z_k) P_ℑ(f_i | z_k) P_V(w_j | z_k), and the visual features are generated from K Gaussian distributions, each one corresponding to a z_k, such that for a specific semantic concept variable z_k, the conditional probability density function of visual feature f_i is expressed as: P_ℑ(f_i | z_k) = (1 / ((2π)^{L/2} |Σ_k|^{1/2})) exp(−(1/2)(f_i − μ_k)^T Σ_k^{−1}(f_i − μ_k)), where Σ_k and μ_k are the covariance matrix and mean of visual features belonging to z_k, respectively; and word concept conditional probabilities P_V(·|Z), i.e., P_V(w_j | z_k) for k∈[1,K], are estimated through fitting the probabilistic model to the training set.
18. The method according to claim 17 , wherein f_i, i∈[1,N] denotes a visual feature vector of images in a training database, where N is the size of the database, w_j, j∈[1,M] denotes the distinct textual words in a training annotation word set, where M is the size of annotation vocabulary in the training database, the visual features of images in the database, f_i = [f_i^1, f_i^2, . . . , f_i^L], i∈[1,N], are known i.i.d. samples from an unknown distribution, having a visual feature dimension L, the specific visual feature annotation word pairs (f_i, w_j), i∈[1,N], j∈[1,M] are known i.i.d. samples from an unknown distribution, associated with an unobserved semantic concept variable z∈Z={z_1, . . . , z_K}, in which each observation of one visual feature f∈F={f_1, f_2, . . . , f_N} belongs to one or more concept classes z_k and each observation of one word w∈V={w_1, w_2, . . . , w_M} in one image f_i belongs to one concept class, in which the observation pairs (f_i, w_j) are assumed to be generated independently, and the pairs of random variables (f_i, w_j) are assumed to be conditionally independent given the respective hidden concept z_k, such that P(f_i, w_j | z_k) = P_ℑ(f_i | z_k) P_V(w_j | z_k); the visual feature and word distribution is treated as a randomized data generation process, wherein a probability of a concept is represented as P_z(z_k); a visual feature is selected f_i∈F with probability P_ℑ(f_i | z_k); and a textual word is selected w_j∈V with probability P_V(w_j | z_k), from which an observed pair (f_i, w_j) is obtained, such that a joint probability model is expressed as follows: P(f_i, w_j) = P(w_j) P(f_i | w_j) = P(w_j) Σ_{k=1}^{K} P_ℑ(f_i | z_k) P(z_k | w_j) = Σ_{k=1}^{K} P_z(z_k) P_ℑ(f_i | z_k) P_V(w_j | z_k), and the visual features are generated from K Gaussian distributions, each one corresponding to a z_k, such that for a specific semantic concept variable z_k, the conditional probability density function of visual feature f_i is expressed as: P_ℑ(f_i | z_k) = (1 / ((2π)^{L/2} |Σ_k|^{1/2})) exp(−(1/2)(f_i − μ_k)^T Σ_k^{−1}(f_i − μ_k)), where Σ_k and μ_k are the covariance matrix and mean of visual features belonging to z_k, respectively; and word concept conditional probabilities P_V(·|Z), i.e., P_V(w_j | z_k) for k∈[1,K], are estimated through fitting the probabilistic model to the training set. 19. The method according to claim 18 , in which P_ℑ(f_i | z_k) is determined by maximization of the log-likelihood function: log Π_{i=1}^{N} P_ℑ(f_i | Z)^{u_i} = Σ_{i=1}^{N} u_i log(Σ_{k=1}^{K} P_z(z_k) P_ℑ(f_i | z_k)), where u_i is the number of annotation words for image f_i, and P_z(z_k) and P_V(w_j | z_k) are determined by maximization of the log-likelihood function: log P(F, V) = Σ_{i=1}^{N} Σ_{j=1}^{M} n(w_i^j) log P(f_i, w_j), where n(w_i^j) denotes the weight of annotation word w_j, i.e., occurrence frequency, for image f_i; and the model is resolved by applying the expectation-maximization (EM) technique, comprising: (i) an expectation (E) step where the posterior probabilities are computed for the hidden variable z_k based on the current estimates of the parameters; and (ii) a maximization (M) step, where parameters are updated to maximize the expectation of the complete-data likelihood log P(F, V, Z) given the posterior probabilities computed in the preceding E-step, whereby the probabilities can be iteratively determined by fitting the model to the training image database and the associated annotations.
0.677288
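Claims 18 and 19 of U.S. Pat. No. 9,280,562 above describe a mixture over hidden concepts z_k with Gaussian visual-feature components and word probabilities per concept. The sketch below only evaluates the Gaussian density P_ℑ(f | z_k) and the joint P(f, w_j) from the stated formulas; the parameter containers are assumptions, and the EM fitting loop of claim 19 is not shown.

```python
import numpy as np

def gaussian_density(f, mu_k, sigma_k):
    """P(f | z_k) for an L-dimensional Gaussian with mean mu_k and covariance sigma_k."""
    L = f.shape[0]
    diff = f - mu_k
    norm = (2 * np.pi) ** (L / 2) * np.sqrt(np.linalg.det(sigma_k))
    return np.exp(-0.5 * diff @ np.linalg.solve(sigma_k, diff)) / norm

def joint_probability(f, j, p_z, means, covs, p_w_given_z):
    """P(f, w_j) = sum_k P_z(z_k) * P(f | z_k) * P_V(w_j | z_k)."""
    return sum(p_z[k] * gaussian_density(f, means[k], covs[k]) * p_w_given_z[k][j]
               for k in range(len(p_z)))
```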
9,223,836
31
32
31. The system of claim 27 wherein the at least one negation rule comprises a prefix negation rule comprising that a prefix negation term appear before the key term in a same sentence as the key term within the selected proximity.
31. The system of claim 27 wherein the at least one negation rule comprises a prefix negation rule comprising that a prefix negation term appear before the key term in a same sentence as the key term within the selected proximity. 32. The system of claim 31 wherein the at least one negation rule comprises a suffix negation rule comprising that a suffix negation term appear after the key term in the same sentence without the prefix negating term within the selected proximity.
0.959675
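Claims 31 and 32 of U.S. Pat. No. 9,223,836 above describe prefix and suffix negation rules applied within a selected proximity of a key term in the same sentence. Here is a small sketch of such a check over one tokenized sentence; treating "proximity" as a token window and the negation vocabularies as caller-supplied sets are assumptions.

```python
def negated(tokens, key_term, prefix_terms, suffix_terms, proximity=5):
    """Return True if the key term is negated within `proximity` tokens of this sentence.

    tokens is one already-split sentence; prefix_terms and suffix_terms are illustrative
    stand-ins for the patent's negation term lists.
    """
    positions = [i for i, t in enumerate(tokens) if t == key_term]
    for i in positions:
        before = tokens[max(0, i - proximity):i]
        after = tokens[i + 1:i + 1 + proximity]
        has_prefix = any(t in prefix_terms for t in before)
        if has_prefix:                                   # prefix rule: negation term before key term
            return True
        if any(t in suffix_terms for t in after) and not has_prefix:
            return True                                  # suffix rule: negation term after, no prefix term
    return False
```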
9,317,594
1
33
1. A computer-implemented method for identifying data files that have a common characteristic, the method comprising: receiving a plurality of data files including one or more data files having the common characteristic; generating a list of key terms from the plurality of data files; classifying each data file of the plurality of data files within a hierarchical structure, the hierarchical structure including upper nodes and lower nodes configured to group data files having similar characteristics, wherein a data file is classified within a lower node of the hierarchical structure based on a psychological characteristic of the classified data file, wherein the psychological characteristic indicates a psychological state of the creator of the classified data file; identifying data files from the plurality of data files having an association with a social community, the social community being a homogenous sub-group of a larger population defined by one or more features, wherein the identified data files having the association with the social community are classified within a particular node of the hierarchical structure that is defined by the one or more features; updating the list of key terms based on an analysis of the identified data files; and using the updated list of key terms to identify other data files that have the common characteristic.
1. A computer-implemented method for identifying data files that have a common characteristic, the method comprising: receiving a plurality of data files including one or more data files having the common characteristic; generating a list of key terms from the plurality of data files; classifying each data file of the plurality of data files within a hierarchical structure, the hierarchical structure including upper nodes and lower nodes configured to group data files having similar characteristics, wherein a data file is classified within a lower node of the hierarchical structure based on a psychological characteristic of the classified data file, wherein the psychological characteristic indicates a psychological state of the creator of the classified data file; identifying data files from the plurality of data files having an association with a social community, the social community being a homogenous sub-group of a larger population defined by one or more features, wherein the identified data files having the association with the social community are classified within a particular node of the hierarchical structure that is defined by the one or more features; updating the list of key terms based on an analysis of the identified data files; and using the updated list of key terms to identify other data files that have the common characteristic. 33. The method of claim 1 , wherein the list is a topic definition, and wherein the key terms of the topic definition are associated with the common characteristic.
0.886111
4,677,659
3
4
3. The method as in claim 2, wherein: (a) said key pad is a standard touch-tone key pad and said electrical signals are produced by a TOUCH-TONE generator, each signal being in the form of one of twelve DTMF tones corresponding to the twelve keys of a TOUCH-TONE key pad.
3. The method as in claim 2, wherein: (a) said key pad is a standard touch-tone key pad and said electrical signals are produced by a TOUCH-TONE generator, each signal being in the form of one of twelve DTMF tones corresponding to the twelve keys of a TOUCH-TONE key pad. 4. The method as in claim 3, wherein: (a) said data entries in said data base each correspond to one of a plurality of names and associated addresses listed in a telephone directory, and; (b) said information associated with each of said data entries is a telephone number.
0.951458
10,108,722
15
16
15. A computer program product, the computer program product comprising: a non-transitory computer readable storage medium having computer readable program code embodied therewith, the computer readable program code comprising: computer readable program code configured to receive a first query term from a user; computer readable program code configured to submit, to a search engine, a first query comprising the first query term; computer readable program code configured to receive, in response to submitting the first query, a first list comprising first search results having respective first rankings; computer readable program code configured to derive multiple keywords from the first query term; computer readable program code configured to submit, for each given derived keyword, a respective second query to the search engine, the respective second query comprising the first query term and the given derived keyword; computer readable program code configured to receive, in response to submitting each of the respective second queries, a respective second list comprising respective second search results having respective second rankings; computer readable program code configured to compute, for each given first search result appearing in one or more of the second lists, a stability score that computes a stability of the given first search result across the second queries by measuring a difference between the first ranking of the given first search result from the first query in the first list and the second rankings of the given first search result in the second lists; computer readable program code configured to re-rank the first search results appearing in one or more of the second list based on their respective first rankings and stability scores in a manner preferring first search results having stability scores indicative that their ranking in the second lists is greater than or equal their ranking in the first list; and computer readable program code configured to present the re-ranked first search results to the user.
15. A computer program product, the computer program product comprising: a non-transitory computer readable storage medium having computer readable program code embodied therewith, the computer readable program code comprising: computer readable program code configured to receive a first query term from a user; computer readable program code configured to submit, to a search engine, a first query comprising the first query term; computer readable program code configured to receive, in response to submitting the first query, a first list comprising first search results having respective first rankings; computer readable program code configured to derive multiple keywords from the first query term; computer readable program code configured to submit, for each given derived keyword, a respective second query to the search engine, the respective second query comprising the first query term and the given derived keyword; computer readable program code configured to receive, in response to submitting each of the respective second queries, a respective second list comprising respective second search results having respective second rankings; computer readable program code configured to compute, for each given first search result appearing in one or more of the second lists, a stability score that computes a stability of the given first search result across the second queries by measuring a difference between the first ranking of the given first search result from the first query in the first list and the second rankings of the given first search result in the second lists; computer readable program code configured to re-rank the first search results appearing in one or more of the second list based on their respective first rankings and stability scores in a manner preferring first search results having stability scores indicative that their ranking in the second lists is greater than or equal their ranking in the first list; and computer readable program code configured to present the re-ranked first search results to the user. 16. The computer program product according to claim 15 , wherein the computer readable program code is configured to identify, in each of the second lists, a number of the second search results that are stable.
0.801136
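Claim 15 of U.S. Pat. No. 10,108,722 above scores how stable each first-query result stays when derived keywords are added, then prefers results that rank at least as well in the expanded queries. The sketch below uses the average rank difference as the stability score and a simple sort key for re-ranking; both choices are assumptions, since the claim only requires a score based on the rank differences and a preference for stable results.

```python
def stability_scores(first_ranking, second_rankings):
    """first_ranking: {result: rank}; second_rankings: list of {result: rank}, one per second query."""
    scores = {}
    for result, rank1 in first_ranking.items():
        diffs = [rank1 - ranks[result] for ranks in second_rankings if result in ranks]
        if diffs:                                   # only results appearing in a second list are scored
            scores[result] = sum(diffs) / len(diffs)  # >= 0 means it ranked at least as high there
    return scores

def rerank(first_ranking, scores):
    """Prefer stable results (score >= 0), then fall back to the original first-query order."""
    def key(result):
        stable = scores.get(result, float("-inf")) >= 0
        return (0 if stable else 1, first_ranking[result])
    return sorted(first_ranking, key=key)
```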
8,554,599
12
14
12. A computer-readable storage medium containing computer-executable instructions that, as a result of being executed by a computer, control the computer to perform a process of regulating user action affecting one or more work items of a work item tracking system for a software development environment, the process comprising acts of: providing a software development environment including software development work item rules that regulate user action affecting work items during software development using the software development environment, each work item comprising a software element that can be accessed or modified by a user based on one or more of the software development work item rules, the software element being an element under development in the software development environment, the software development work item rules selectively authorizing user access and modification of specified work items; in response to a first user action, by a first user, affecting a first work item of the work item tracking system, determining, in a client computing device, one or more software development work item rules corresponding to the first work item; interpreting, in the client computing device, the one or more determined software development work item rules; responding, in the client computing device, to the first user action by authorizing user access or modification of the first work item based on the interpretation of the one or more determined software development work item rules; receiving from a user a modification to at least one software development work item rule of the work item rules; modifying the at least one software development work item rule according to the user modification; and responding to the first user action based on the modified software development work item rule, wherein the work item tracking system is distributed across at least a first network element and a second network element connected to the first network element by one or more communication media, the first network element comprising a first module, and the second network element comprising a second module, the process further comprising an act of: the first module receiving input from a user specifying a user action affecting the first work item, wherein the acts of determining, interpreting and responding are performed by the first module, and wherein the process further comprises: the second module interpreting the one or more determined work item rules.
12. A computer-readable storage medium containing computer-executable instructions that, as a result of being executed by a computer, control the computer to perform a process of regulating user action affecting one or more work items of a work item tracking system for a software development environment, the process comprising acts of: providing a software development environment including software development work item rules that regulate user action affecting work items during software development using the software development environment, each work item comprising a software element that can be accessed or modified by a user based on one or more of the software development work item rules, the software element being an element under development in the software development environment, the software development work item rules selectively authorizing user access and modification of specified work items; in response to a first user action, by a first user, affecting a first work item of the work item tracking system, determining, in a client computing device, one or more software development work item rules corresponding to the first work item; interpreting, in the client computing device, the one or more determined software development work item rules; responding, in the client computing device, to the first user action by authorizing user access or modification of the first work item based on the interpretation of the one or more determined software development work item rules; receiving from a user a modification to at least one software development work item rule of the work item rules; modifying the at least one software development work item rule according to the user modification; and responding to the first user action based on the modified software development work item rule, wherein the work item tracking system is distributed across at least a first network element and a second network element connected to the first network element by one or more communication media, the first network element comprising a first module, and the second network element comprising a second module, the process further comprising an act of: the first module receiving input from a user specifying a user action affecting the first work item, wherein the acts of determining, interpreting and responding are performed by the first module, and wherein the process further comprises: the second module interpreting the one or more determined work item rules. 14. The computer-readable storage medium of claim 12 , wherein the work item tracking system comprises a plurality of work items organized in a logical hierarchy, the plurality of work items including the one or more work items, wherein a first work item corresponds to a first level of the hierarchy, and a second work item corresponds to a second level of the hierarchy having precedence over the first level, wherein determining further comprises determining a first work item rule corresponding to the first work item, and determining a second work item rule corresponding to the second work item, and wherein interpreting comprises interpreting the first and second work item rules, and overriding the interpretation of the first work item rule with the interpretation of the second work item rule based, at least in part, on the second level of the hierarchy having precedence over the first level.
0.500552
8,645,107
1
5
1. A computer-implemented method of automatically adding constraints between entities in a subject computer-aided design (CAD) model of a real-world object, the method comprising: storing information regarding CAD model entities and related constraints in a computer database, wherein the CAD model entities belong to one or more components of at least one of the subject CAD model and other CAD models; for one CAD model entity of a given component that is one of to be added to and in the subject CAD model, accessing the computer database to determine constraints that have been previously used for at least the one CAD model entity of the given component, the determined constraints having been previously used in at least one of the subject CAD model and the other CAD models; and automatically adding to the subject CAD model a new constraint between at least the one CAD model entity of the given component and another entity in the subject CAD model based on the previously used constraints.
1. A computer-implemented method of automatically adding constraints between entities in a subject computer-aided design (CAD) model of a real-world object, the method comprising: storing information regarding CAD model entities and related constraints in a computer database, wherein the CAD model entities belong to one or more components of at least one of the subject CAD model and other CAD models; for one CAD model entity of a given component that is one of to be added to and in the subject CAD model, accessing the computer database to determine constraints that have been previously used for at least the one CAD model entity of the given component, the determined constraints having been previously used in at least one of the subject CAD model and the other CAD models; and automatically adding to the subject CAD model a new constraint between at least the one CAD model entity of the given component and another entity in the subject CAD model based on the previously used constraints. 5. The computer-implemented method of claim 1 further comprising indexing constraints that have been previously used for components stored in the computer database and wherein accessing the computer database to determine constraints that have been previously used includes accessing an index for the given component.
0.606965
7,536,408
10
11
10. A method of indexing documents in a document collection, each document having an associated identifier, the method comprising: providing a list of valid phrases, wherein each phrase on the list appears a minimum number of times in the document collection, and predicts at least one other phrase, wherein for each phrase g j , g k predicts g j where an information gain I of g k with respect to g j exceeds a predetermined threshold, the information gain I being a function of A(j,k) and E(j,k), where A(j,k) is a measure of an actual co-occurrence rate of g j and g k , and E(j,k) is an expected co-occurrence rate g j and g k ; accessing a plurality of documents in the document collection; for each accessed document, identifying, by operation of a processor adapted to manipulate data within a computer system, phrases from the list of valid phrases that are present in the document; and for each identified phrase in the document, indexing, by operation of a processor adapted to manipulate data within a computer system, the document by storing the identifier of the document in a posting list of the phrase.
10. A method of indexing documents in a document collection, each document having an associated identifier, the method comprising: providing a list of valid phrases, wherein each phrase on the list appears a minimum number of times in the document collection, and predicts at least one other phrase, wherein for each phrase g j , g k predicts g j where an information gain I of g k with respect to g j exceeds a predetermined threshold, the information gain I being a function of A(j,k) and E(j,k), where A(j,k) is a measure of an actual co-occurrence rate of g j and g k , and E(j,k) is an expected co-occurrence rate g j and g k ; accessing a plurality of documents in the document collection; for each accessed document, identifying, by operation of a processor adapted to manipulate data within a computer system, phrases from the list of valid phrases that are present in the document; and for each identified phrase in the document, indexing, by operation of a processor adapted to manipulate data within a computer system, the document by storing the identifier of the document in a posting list of the phrase. 11. The method of claim 10 , wherein the predetermined threshold is about 1.5.
0.926966
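The claim pair above for patent 7,536,408 turns on whether one phrase predicts another via an information gain I computed from the actual co-occurrence rate A(j,k) and the expected co-occurrence rate E(j,k). A minimal sketch follows, assuming I(j,k) is simply the ratio A(j,k)/E(j,k) with E taken under independence; the function name `predicts` and the document counts are hypothetical, and the threshold of about 1.5 comes from claim 11.

```python
# Hypothetical sketch of the phrase-prediction test described in the claim
# pair for patent 7,536,408. Assumption: the information gain I(j,k) is the
# ratio of the actual co-occurrence rate A(j,k) to the expected rate E(j,k).

def predicts(docs_with_j, docs_with_k, docs_with_both, total_docs, threshold=1.5):
    """Return True if phrase g_k 'predicts' phrase g_j under the assumed I = A/E."""
    if total_docs == 0 or docs_with_j == 0 or docs_with_k == 0:
        return False
    actual = docs_with_both / total_docs                                # A(j,k): observed co-occurrence rate
    expected = (docs_with_j / total_docs) * (docs_with_k / total_docs)  # E(j,k): rate if independent
    if expected == 0:
        return False
    information_gain = actual / expected                                # I(j,k)
    return information_gain > threshold                                 # claim 11: threshold of about 1.5

# Example: g_j and g_k co-occur far more often than independence would predict.
print(predicts(docs_with_j=200, docs_with_k=150, docs_with_both=60, total_docs=10_000))
```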
9,111,003
1
3
1. A method for efficiently identifying dynamic content of a webpage, the method comprising: (a) accessing, by a virtual browser of a plurality of virtual browsers executing on a device intermediary to a plurality of clients and a plurality of servers a first stored data file representing a first version of a web page and a first abstract syntax tree corresponding to the first stored data file, the abstract syntax tree comprising at least one static node, the static node including stored content; (b) identifying, by the virtual browser of the plurality of virtual browsers, non-matching dynamic content between the first stored data file and a second data file representing a second version of the web page without using a second abstract syntax tree corresponding to the second data file; and (c) replacing, by the virtual browser, the at least one static node corresponding to the non-matching dynamic content in the first abstract syntax tree with a token that identifies the portion of the abstract syntax tree containing the non-matching dynamic content.
1. A method for efficiently identifying dynamic content of a webpage, the method comprising: (a) accessing, by a virtual browser of a plurality of virtual browsers executing on a device intermediary to a plurality of clients and a plurality of servers a first stored data file representing a first version of a web page and a first abstract syntax tree corresponding to the first stored data file, the abstract syntax tree comprising at least one static node, the static node including stored content; (b) identifying, by the virtual browser of the plurality of virtual browsers, non-matching dynamic content between the first stored data file and a second data file representing a second version of the web page without using a second abstract syntax tree corresponding to the second data file; and (c) replacing, by the virtual browser, the at least one static node corresponding to the non-matching dynamic content in the first abstract syntax tree with a token that identifies the portion of the abstract syntax tree containing the non-matching dynamic content. 3. The method of claim 1 , wherein step (b) further comprises determining, by the virtual browser, which portions of the second version of the web page are dynamic.
0.761628
9,426,521
10
13
10. A method to detect garbled closed captioning data, comprising: receiving an encoded video data stream containing closed captioning data; decoding the encoded video data stream, and to reorder frames in the encoded video stream into display order detecting closed captioning data in the decoded, reordered video data stream; extracting individual data elements from the closed captioning data; storing a count of the total number of data elements in the closed captioning data in a memory as a total data element count; storing a count of the total number of data elements in the closed captioning data having a particular characteristic in the memory as a total data element characteristic count; determining a metric as a function of the total data element count and the total data element characteristic count; and providing an alert in accordance with the determined metric.
10. A method to detect garbled closed captioning data, comprising: receiving an encoded video data stream containing closed captioning data; decoding the encoded video data stream, and to reorder frames in the encoded video stream into display order detecting closed captioning data in the decoded, reordered video data stream; extracting individual data elements from the closed captioning data; storing a count of the total number of data elements in the closed captioning data in a memory as a total data element count; storing a count of the total number of data elements in the closed captioning data having a particular characteristic in the memory as a total data element characteristic count; determining a metric as a function of the total data element count and the total data element characteristic count; and providing an alert in accordance with the determined metric. 13. The method recited in claim 10 , wherein the particular characteristic is one of a word length that is at least 10 characters and a range of word lengths, wherein the lower bound of the range of word lengths is at least 10 characters.
0.839406
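The claim pair above for patent 9,426,521 reduces to two counts over extracted data elements and a metric computed from them. A minimal sketch, assuming the data elements are whitespace-separated words, the particular characteristic is a word length of at least 10 characters (claim 13), and the metric is the plain ratio of the two counts; the alert threshold and function name are made up for illustration.

```python
# Hypothetical sketch of the garbled-caption metric in the claim pair for
# patent 9,426,521. Assumptions: data elements are whitespace-separated words,
# the "particular characteristic" is word length >= 10 (per claim 13), and the
# metric is the simple ratio of characteristic elements to all elements.

def caption_garble_alert(caption_text, min_word_len=10, alert_ratio=0.5):
    words = caption_text.split()                                  # extract individual data elements
    total = len(words)                                            # total data element count
    long_words = sum(1 for w in words if len(w) >= min_word_len)  # total data element characteristic count
    if total == 0:
        return False, 0.0
    metric = long_words / total                                   # metric as a function of both counts
    return metric > alert_ratio, metric                           # alert in accordance with the metric

print(caption_garble_alert("TH1SC4PTI0NLOOKSGARBLED andsoisthisone but this part is fine"))
```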
7,496,593
1
21
1. A computer-implemented system for creating one or more multi-relational ontologies having a predetermined structure, the system comprising: at least one processor and a memory having instructions causing the processor to: (i) create an upper ontology that includes: a set of predetermined concept types, a set of predetermined relationship types, a set of concept type pairs, and for each concept type pair, a set of relationships permitted to be used to connect the concept types of the concept type pair; (ii) receive raw data and arrange the raw data into a plurality of individual assertions according to the upper ontology, each assertion comprising a first concept, a second concept, and a relationship between the first and second concept, wherein the first and second concept of each assertion have a concept type from the set of predetermined concept types, wherein the relationship of each assertion has a relationship type from the set of predetermined relationship types, wherein the first and second concept of each assertion belong to a concept type pair of the set of concept type pairs and are connected by a relationship from the set of possible relationships permitted for the concept type pair, wherein at least one concept within the plurality of assertions is part of more than one assertion, wherein one or more relationships in the plurality of individual assertions comprise relationships unconstrained by any hierarchical arrangement of concepts, wherein each concept within each assertion of the plurality of individual assertions is associated with a label, a concept type, and at least one property, and wherein the at least one property includes at least a version of a data source from which the concept was derived; and (iii) store on at least one data storage device: the plurality of individual assertions as one or more multi-relational ontologies, and one or more pieces of evidence supporting information contained in each assertion of the plurality of assertions, wherein each of the one or more pieces of evidence are linked to their corresponding assertion such that each of the one or more pieces of evidence are able to be accessed along with their corresponding assertion, and wherein the one or more pieces of evidence are each associated with at least a data source from which the evidence is derived.
1. A computer-implemented system for creating one or more multi-relational ontologies having a predetermined structure, the system comprising: at least one processor and a memory having instructions causing the processor to: (i) create an upper ontology that includes: a set of predetermined concept types, a set of predetermined relationship types, a set of concept type pairs, and for each concept type pair, a set of relationships permitted to be used to connect the concept types of the concept type pair; (ii) receive raw data and arrange the raw data into a plurality of individual assertions according to the upper ontology, each assertion comprising a first concept, a second concept, and a relationship between the first and second concept, wherein the first and second concept of each assertion have a concept type from the set of predetermined concept types, wherein the relationship of each assertion has a relationship type from the set of predetermined relationship types, wherein the first and second concept of each assertion belong to a concept type pair of the set of concept type pairs and are connected by a relationship from the set of possible relationships permitted for the concept type pair, wherein at least one concept within the plurality of assertions is part of more than one assertion, wherein one or more relationships in the plurality of individual assertions comprise relationships unconstrained by any hierarchical arrangement of concepts, wherein each concept within each assertion of the plurality of individual assertions is associated with a label, a concept type, and at least one property, and wherein the at least one property includes at least a version of a data source from which the concept was derived; and (iii) store on at least one data storage device: the plurality of individual assertions as one or more multi-relational ontologies, and one or more pieces of evidence supporting information contained in each assertion of the plurality of assertions, wherein each of the one or more pieces of evidence are linked to their corresponding assertion such that each of the one or more pieces of evidence are able to be accessed along with their corresponding assertion, and wherein the one or more pieces of evidence are each associated with at least a data source from which the evidence is derived. 21. The system of claim 1 , wherein the one or more pieces of evidence are each further associated with one or more tags indicating whether users of specific access levels may interface with assertions derived from the data source and evidence derived from the data source.
0.644531
9,720,983
13
17
13. A system comprising: one or more computers and one or more storage devices storing instructions that are operable, when executed by the one or more computers, to cause the one or more computers to perform operations comprising: obtaining information associated with a mobile application of interest; determining a plurality of similar applications to the mobile application of interest along with a respective similarity score by querying a database of structured data; determining a plurality of keywords from the similar applications having a threshold level of statistical information relating to a performance of a respective keyword with each respective similar application; and extracting a new keyword for the mobile application of interest to use for identifying content items for presentation on the mobile application of interest by: calculating a projected statistical value for the new keyword using a weighted least regression of the statistical information relating to the performance of the plurality of keywords weighted with the respective similarity score of an associated similar application, and determining the projected statistical value for the new keyword exceeds a threshold value.
13. A system comprising: one or more computers and one or more storage devices storing instructions that are operable, when executed by the one or more computers, to cause the one or more computers to perform operations comprising: obtaining information associated with a mobile application of interest; determining a plurality of similar applications to the mobile application of interest along with a respective similarity score by querying a database of structured data; determining a plurality of keywords from the similar applications having a threshold level of statistical information relating to a performance of a respective keyword with each respective similar application; and extracting a new keyword for the mobile application of interest to use for identifying content items for presentation on the mobile application of interest by: calculating a projected statistical value for the new keyword using a weighted least regression of the statistical information relating to the performance of the plurality of keywords weighted with the respective similarity score of an associated similar application, and determining the projected statistical value for the new keyword exceeds a threshold value. 17. The system of claim 13 , wherein extracting the new keyword includes calculating projected statistical information for the application of interest for one or more non-overlapping keywords from the plurality of similar applications.
0.839918
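The claim pair above for patent 9,720,983 projects a statistical value for a new keyword from similar applications' keyword statistics, weighted by similarity scores. A minimal sketch, reading the claim's "weighted least regression" as a weighted least-squares fit of an intercept-only model (whose solution is the weighted mean); all numbers and names are hypothetical.

```python
# Hypothetical sketch of the keyword-projection step in the claim pair for
# patent 9,720,983. Assumption: "weighted least regression" is read as a
# weighted least-squares fit in which each similar application's observed
# keyword performance is weighted by that application's similarity score.
# Fitting an intercept-only model makes the WLS solution a weighted mean.

def project_keyword_value(performances, similarity_scores):
    """WLS estimate of a new keyword's performance from similar apps' statistics."""
    if len(performances) != len(similarity_scores) or not performances:
        raise ValueError("need one similarity weight per observed performance value")
    weighted_sum = sum(w * y for w, y in zip(similarity_scores, performances))
    weight_total = sum(similarity_scores)
    return weighted_sum / weight_total           # argmin over c of sum_i w_i * (y_i - c)^2

projected = project_keyword_value([0.12, 0.30, 0.05], [0.9, 0.6, 0.2])
print(projected, projected > 0.1)                # compare against a threshold value
```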
6,167,409
56
59
56. A process for making a digital information product comprising computer data signals defining a digital form of a digital document, wherein the document can be one of several different types and with varying content, the process comprising: sending a request for at least part of a document; accessing a definition of additional content for a type of the document; generating an additional content component according to the definition of additional content for the type of the document; receiving a selected portion of the content of the document, the portion having been selected in accordance with the request; combining the additional content component with the content of the portion of the document to obtain a digital form of the document, and encoding the digital form in a computer data signal.
56. A process for making a digital information product comprising computer data signals defining a digital form of a digital document, wherein the document can be one of several different types and with varying content, the process comprising: sending a request for at least part of a document; accessing a definition of additional content for a type of the document; generating an additional content component according to the definition of additional content for the type of the document; receiving a selected portion of the content of the document, the portion having been selected in accordance with the request; combining the additional content component with the content of the portion of the document to obtain a digital form of the document, and encoding the digital form in a computer data signal. 59. The process of claim 56, wherein the step of transforming includes the step of applying a declarative specification for the document type of the document to the elements in the selected portion of the document.
0.880313
8,774,519
3
4
3. The non-transitory program storage device of claim 2 , wherein the instructions to cause the processor to generate a candidate vector for each of a second plurality of pixels, comprise instructions to cause the processor to: select one of the plurality of evaluation regions; and generate a candidate vector for each of a second plurality of pixels from the selected evaluation region.
3. The non-transitory program storage device of claim 2 , wherein the instructions to cause the processor to generate a candidate vector for each of a second plurality of pixels, comprise instructions to cause the processor to: select one of the plurality of evaluation regions; and generate a candidate vector for each of a second plurality of pixels from the selected evaluation region. 4. The non-transitory program storage device of claim 3 , wherein the instructions to cause the processor to select one of the plurality of evaluation regions, comprise instructions to cause the processor to select an evaluation region in which a landmark is expected.
0.883681
9,646,606
1
8
1. A method of performing speech recognition that is performed by one or more computers of an automated speech recognizer, the method comprising: receiving, by the one or more computers, data that indicates multiple candidate transcriptions for an utterance, wherein the one or more computers are in communication with (i) a first search system that provides a search service of a first domain, and (ii) a second search system that provides a search service for a second domain, the second domain being different from the first domain; for each particular candidate transcription of the candidate transcriptions: receiving, by the one or more computers, data from the first search system that provides the search service for the first domain, the data from the first search system indicating first search results that the search service for the first domain identifies as relevant to the particular candidate transcription; determining, by the one or more computers, a first score based on the first search results that the search service for the first domain identifies as relevant to the particular candidate transcription; receiving, by the one or more computers, data from the second search system that provides the search service for the second domain, the data from the second search system indicating second search results that the search service for the second domain identifies as relevant to the particular candidate transcription; determining, by the one or more computers, a second score based on the second search results that the search service for the second domain identifies as relevant to the particular candidate transcription; providing, by the one or more computers, (i) the first score that is determined based on the first search results and (ii) the second score that is determined based on the second search results as input to a classifier, wherein the classifier has been trained, using scores that represent characteristics of different search results from different domains, to indicate a likelihood that a transcription is correct based on scores for multiple different domains; and receiving, by the one or more computers and from the trained classifier, a classifier output in response to at least the first score and the second score, the classifier output indicating a likelihood that the particular candidate transcription is correct; selecting, by the one or more computers, a transcription for the utterance, from among the multiple candidate transcriptions, based on the classifier outputs; and providing, by the one or more computers, the transcription as output of the automated speech recognizer.
1. A method of performing speech recognition that is performed by one or more computers of an automated speech recognizer, the method comprising: receiving, by the one or more computers, data that indicates multiple candidate transcriptions for an utterance, wherein the one or more computers are in communication with (i) a first search system that provides a search service of a first domain, and (ii) a second search system that provides a search service for a second domain, the second domain being different from the first domain; for each particular candidate transcription of the candidate transcriptions: receiving, by the one or more computers, data from the first search system that provides the search service for the first domain, the data from the first search system indicating first search results that the search service for the first domain identifies as relevant to the particular candidate transcription; determining, by the one or more computers, a first score based on the first search results that the search service for the first domain identifies as relevant to the particular candidate transcription; receiving, by the one or more computers, data from the second search system that provides the search service for the second domain, the data from the second search system indicating second search results that the search service for the second domain identifies as relevant to the particular candidate transcription; determining, by the one or more computers, a second score based on the second search results that the search service for the second domain identifies as relevant to the particular candidate transcription; providing, by the one or more computers, (i) the first score that is determined based on the first search results and (ii) the second score that is determined based on the second search results as input to a classifier, wherein the classifier has been trained, using scores that represent characteristics of different search results from different domains, to indicate a likelihood that a transcription is correct based on scores for multiple different domains; and receiving, by the one or more computers and from the trained classifier, a classifier output in response to at least the first score and the second score, the classifier output indicating a likelihood that the particular candidate transcription is correct; selecting, by the one or more computers, a transcription for the utterance, from among the multiple candidate transcriptions, based on the classifier outputs; and providing, by the one or more computers, the transcription as output of the automated speech recognizer. 8. The method of claim 1 , wherein selecting, by the one or more computers, the transcription utterance, from among the multiple candidate transcriptions comprises: ranking, by the one or more computers, the multiple candidate transcriptions based on the classifier outputs; and selecting, by the one or more computers, the highest-ranked candidate transcription.
0.905518
8,886,624
16
17
16. The system of claim 15 , wherein the processor further comprises: an associated score calculation module configured to numerically express, as an associated score, the association between the associated keyword or the extended keyword and the other keywords based on the association indicator; and a ranking score calculation module configured to calculate a ranking score for each purpose of usage based on the associated score and the independent indicator, wherein the search module is configured to provide, based on the ranking score, the associated keyword or the extended keyword with respect to the search word, the association indicator comprises at least one of a purchase association indicator, an advertising association indicator, a service data association indicator, an exposure association indicator, a subject context association indicator, a knowledge shopping association indicator, and a duplication indicator of each association indicator, and the independent indicator comprises a plurality of indicators including at least one of a common indicator, a cost per click (CPC) indicator, and a cost per mille (CPM) indicator.
16. The system of claim 15 , wherein the processor further comprises: an associated score calculation module configured to numerically express, as an associated score, the association between the associated keyword or the extended keyword and the other keywords based on the association indicator; and a ranking score calculation module configured to calculate a ranking score for each purpose of usage based on the associated score and the independent indicator, wherein the search module is configured to provide, based on the ranking score, the associated keyword or the extended keyword with respect to the search word, the association indicator comprises at least one of a purchase association indicator, an advertising association indicator, a service data association indicator, an exposure association indicator, a subject context association indicator, a knowledge shopping association indicator, and a duplication indicator of each association indicator, and the independent indicator comprises a plurality of indicators including at least one of a common indicator, a cost per click (CPC) indicator, and a cost per mille (CPM) indicator. 17. The system of claim 16 , wherein the associated score calculation module is configured to calculate a single keyword associated score by applying an individual weight to the association indicator.
0.914384
8,244,709
10
13
10. A system comprising: a comparing engine to compare, using one or more processors, a first search result with a second search result to automatically identify from the first search result at least one data item that is new or modified as compared to the second search result, the first search result comprising a first set of data items satisfying a first set of user-specific search criteria and the second search result comprising a second set of data items satisfying a second set of user-specific search criteria; and a notification engine to send a notification of a result of the comparing to a user device.
10. A system comprising: a comparing engine to compare, using one or more processors, a first search result with a second search result to automatically identify from the first search result at least one data item that is new or modified as compared to the second search result, the first search result comprising a first set of data items satisfying a first set of user-specific search criteria and the second search result comprising a second set of data items satisfying a second set of user-specific search criteria; and a notification engine to send a notification of a result of the comparing to a user device. 13. The system of claim 10 , wherein a frequency of the comparing is determined by a user input.
0.87027
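The claim pair above for patent 8,244,709 compares two search results to pick out items that are new or modified in the first relative to the second, then notifies the user. A minimal sketch, assuming each data item is an (identifier, content) pair and representing the notification step with a print statement; the example listings are invented.

```python
# Hypothetical sketch of the comparing step in the claim pair for patent
# 8,244,709: identify items in a first search result that are new or modified
# relative to a second search result. Items are assumed to be (id, content)
# pairs; the notification engine is represented by a simple print.

def new_or_modified(first_result, second_result):
    previous = dict(second_result)               # id -> content from the earlier result
    changed = []
    for item_id, content in first_result:
        if item_id not in previous:
            changed.append((item_id, "new"))
        elif previous[item_id] != content:
            changed.append((item_id, "modified"))
    return changed

first = [("a", "blue bike $100"), ("b", "red bike $80"), ("c", "green bike $90")]
second = [("a", "blue bike $120"), ("b", "red bike $80")]
for item_id, status in new_or_modified(first, second):
    print(f"notify user: item {item_id} is {status}")   # send a notification of the result
```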
7,756,930
57
62
57. An apparatus for determining a reputation of a message sender, comprising: one or more processors; means for obtaining two or more lists from two or more list providers; means for determining which lists of the two or more lists indicate the message sender; means for extracting from each list of the two or more lists indicating the message sender an individual score for the message sender, representing an individual probability that the message sender sent an unsolicited message; and means for computing a reputation score for the message sender from the individual scores for the message sender from each list of the two or more lists indicating the message sender; wherein the step of computing the reputation score comprises determining an aggregate score based on the individual score for each list of the two or more lists by performing at least one of a Chi Squared calculation, a Robinson calculation and a Bayes calculation on the individual scores for the message sender.
57. An apparatus for determining a reputation of a message sender, comprising: one or more processors; means for obtaining two or more lists from two or more list providers; means for determining which lists of the two or more lists indicate the message sender; means for extracting from each list of the two or more lists indicating the message sender an individual score for the message sender, representing an individual probability that the message sender sent an unsolicited message; and means for computing a reputation score for the message sender from the individual scores for the message sender from each list of the two or more lists indicating the message sender; wherein the step of computing the reputation score comprises determining an aggregate score based on the individual score for each list of the two or more lists by performing at least one of a Chi Squared calculation, a Robinson calculation and a Bayes calculation on the individual scores for the message sender. 62. The apparatus of claim 57 , further comprising means for receiving a request for the reputation of the message sender.
0.944946
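The claim pair above for patent 7,756,930 aggregates per-list probabilities into a single reputation score using a Chi Squared, Robinson, or Bayes calculation. A minimal sketch of just a Bayes-style combination; the individual list scores are invented for illustration.

```python
# Hypothetical sketch of the score-aggregation step in the claim pair for
# patent 7,756,930. The claim names Chi Squared, Robinson and Bayes
# calculations; this sketch shows only a simple Bayes-style combination of the
# per-list probabilities that a sender sent unsolicited mail.

from math import prod

def bayes_reputation(per_list_scores):
    """Combine individual spam probabilities from each list into one aggregate score."""
    if not per_list_scores:
        raise ValueError("the sender must appear on at least one list")
    p_spam = prod(per_list_scores)
    p_ham = prod(1.0 - p for p in per_list_scores)
    return p_spam / (p_spam + p_ham)

# Sender appears on three lists with individual scores 0.9, 0.7 and 0.4.
print(round(bayes_reputation([0.9, 0.7, 0.4]), 3))
```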
8,738,705
16
20
16. A computer program product having a nontransitory computer-readable storage medium storing computer-executable code, the code comprising: an object classifier module configured to: receive information identifying a set of malicious groups associated with a social networking system, wherein a group is an entity represented in the social networking system that users can join, the malicious groups predetermined to be associated with a type of malicious activity, determine a measure of interactions of the user with the malicious group, select users associated with the malicious groups, wherein each user is selected based on the determined measure of interactions of the user with the malicious groups, and select a set of potentially malicious groups associated with the selected users; a keyword search module configured to: receive keywords associated with the type of malicious activity, and search for occurrences of the keywords in content received from users of the potentially malicious groups; the object classifier module, further configured to: determine a level of association of each potentially malicious group with the type of malicious activity based on the occurrences; and a group store configured to: store information describing the level of association of each potentially malicious group with the type of malicious activity.
16. A computer program product having a nontransitory computer-readable storage medium storing computer-executable code, the code comprising: an object classifier module configured to: receive information identifying a set of malicious groups associated with a social networking system, wherein a group is an entity represented in the social networking system that users can join, the malicious groups predetermined to be associated with a type of malicious activity, determine a measure of interactions of the user with the malicious group, select users associated with the malicious groups, wherein each user is selected based on the determined measure of interactions of the user with the malicious groups, and select a set of potentially malicious groups associated with the selected users; a keyword search module configured to: receive keywords associated with the type of malicious activity, and search for occurrences of the keywords in content received from users of the potentially malicious groups; the object classifier module, further configured to: determine a level of association of each potentially malicious group with the type of malicious activity based on the occurrences; and a group store configured to: store information describing the level of association of each potentially malicious group with the type of malicious activity. 20. The computer program product of claim 16 , wherein the object classifier module is further configured to: add a potentially malicious group to the set of malicious groups responsive to determining that the potentially malicious group is associated with the type of malicious activity.
0.669725
8,639,517
1
3
1. A method comprising: generating, via a processor, a set of features characterizing an association between a user input and a conversation context using prior user inputs; determining, by normalizing a length of the user input to a previous input in the prior user inputs and using a data-driven machine learning approach, whether the user input is associated with an existing topic related to a previous conversation context; and when the user input is associated with the existing topic, generating a response to the user input using information associated with the user input and content associated with any previous user input on the existing topic.
1. A method comprising: generating, via a processor, a set of features characterizing an association between a user input and a conversation context using prior user inputs; determining, by normalizing a length of the user input to a previous input in the prior user inputs and using a data-driven machine learning approach, whether the user input is associated with an existing topic related to a previous conversation context; and when the user input is associated with the existing topic, generating a response to the user input using information associated with the user input and content associated with any previous user input on the existing topic. 3. The method of claim 1 , wherein the data-driven machine learning approach is applied using one of a decision tree, Adaboost, Support Vector Machines, and Maxent.
0.801932
8,566,090
14
17
14. The system of claim 11 , wherein a weighted finite-state automaton represents partial orderings of word pairs in the domain-specific training data.
14. The system of claim 11 , wherein a weighted finite-state automaton represents partial orderings of word pairs in the domain-specific training data. 17. The system of claim 14 , wherein the weighted finite-state automaton is generated by computing frequencies for how often a first word precedes a second word in any training data sentence, how often the first word depends on and precedes the second word, and how often the first word depends on and follows the second word.
0.810244
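The claim pair above for patent 8,566,090 builds a weighted finite-state automaton from frequencies of word-pair orderings in domain-specific training data. A minimal sketch of only the first count named in claim 17, how often a first word precedes a second word in any training sentence; the dependency-based counts would require a parser and are omitted, and the tiny training set is invented.

```python
# Hypothetical sketch of one of the frequency counts behind the weighted
# automaton in the claim pair for patent 8,566,090: how often a first word
# precedes a second word in any training-data sentence.

from collections import Counter
from itertools import combinations

def precedence_counts(sentences):
    counts = Counter()
    for sentence in sentences:
        words = sentence.lower().split()
        for earlier, later in combinations(words, 2):   # ordered pairs, left to right
            counts[(earlier, later)] += 1
    return counts

counts = precedence_counts(["the cat sat", "the dog sat down"])
print(counts[("the", "sat")])    # 2: "the" precedes "sat" in both sentences
```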
8,811,775
2
21
2. The method of claim 1 , further comprising causing a descriptive label associated with the at least one image to be displayed with each image cluster.
2. The method of claim 1 , further comprising causing a descriptive label associated with the at least one image to be displayed with each image cluster. 21. The method of claim 2 , wherein the descriptive label includes an event name.
0.96087
9,448,999
1
2
1. A method that uses a processor for detecting similar documents, comprising: extracting entities from a first web document and a second web document from among web documents; determining importance contribution elements corresponding to the entities in the first web document and the second web document; calculating, by the processor, weights for the entities based on the determined importance contribution elements; calculating, by the processor, characteristic indexes for each of the first web document and the second web document, by extracting hash values for the entities, and subsequently calculating the characteristic indexes by applying the weights to the hash values for the entities; determining whether the first web document and the second web document are similar to each other based on the calculated characteristic indexes; deleting one of the first web document and the second web document if the first web document and the second web document are determined to be similar to each other, and wherein each of the importance contribution elements includes a frequency of which each of the entities is duplicated within the web documents.
1. A method that uses a processor for detecting similar documents, comprising: extracting entities from a first web document and a second web document from among web documents; determining importance contribution elements corresponding to the entities in the first web document and the second web document; calculating, by the processor, weights for the entities based on the determined importance contribution elements; calculating, by the processor, characteristic indexes for each of the first web document and the second web document, by extracting hash values for the entities, and subsequently calculating the characteristic indexes by applying the weights to the hash values for the entities; determining whether the first web document and the second web document are similar to each other based on the calculated characteristic indexes; deleting one of the first web document and the second web document if the first web document and the second web document are determined to be similar to each other, and wherein each of the importance contribution elements includes a frequency of which each of the entities is duplicated within the web documents. 2. The method of claim 1 , further comprising: clustering the web documents into one cluster.
0.89261
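The claim pair above for patent 9,448,999 combines entity hash values with duplication-frequency weights into a characteristic index used to judge document similarity. A minimal sketch, reading this as a simhash-style fingerprint, which is an assumption rather than the patent's stated construction; entities are approximated as words and the example documents are invented.

```python
# Hypothetical sketch of the characteristic-index step in the claim pair for
# patent 9,448,999, read here as a simhash-style fingerprint: each entity
# (approximated as a word) gets a hash value, weights reflect how often the
# entity is duplicated in the document, and the weighted hashes are folded
# into one index that can be compared across documents.

import hashlib
from collections import Counter

def characteristic_index(text, bits=64):
    weights = Counter(text.lower().split())      # importance contribution: duplication frequency
    votes = [0] * bits
    for entity, weight in weights.items():
        h = int(hashlib.md5(entity.encode()).hexdigest(), 16)   # hash value for the entity
        for i in range(bits):
            votes[i] += weight if (h >> i) & 1 else -weight     # apply the weight to each bit
    return sum(1 << i for i in range(bits) if votes[i] > 0)

def hamming(a, b):
    return bin(a ^ b).count("1")

doc1 = "red bike for sale red bike barely used"
doc2 = "red bike for sale red bike hardly used"
print(hamming(characteristic_index(doc1), characteristic_index(doc2)))  # small distance suggests similarity
```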
5,561,446
1
7
1. A system for wireless remote information retrieval and pen-based data entry, comprising: a) a central computer system having memory means containing a plurality of digitally stored forms and for storing data; b) a wireless network coupled to said central computer system, said wireless network including at least one transceiver; c) at least one portable pen-based computer having a position sensitive display, a stylus for writing on said position sensitive display, and a wireless communication means for communicating with said wireless network through said at least one transceiver; d) interface means defining areas on said position sensitive display responsive to said stylus for selecting one of said plurality of digitally stored forms for display on said position sensitive display; wherein upon selection of one of said plurality of digitally stored forms, said one of said plurality of digitally stored forms is transmitted by said central computer system to said pen-based computer through said wireless network and is displayed on said position sensitive display, handwriting written on said position sensitive display with said stylus is transmitted to said central computer system through said wireless network as electronic ink, and said central computer system associates said electronic ink with said one of said plurality of digitally stored forms by creating an association reference code and stores said electronic ink with said association reference code in said memory means for storing data without duplicating said one of said plurality of digitally stored forms so that said electronic ink and said association reference code are stored together with each other, but separate from said one of said plurality of digitally stored forms.
1. A system for wireless remote information retrieval and pen-based data entry, comprising: a) a central computer system having memory means containing a plurality of digitally stored forms and for storing data; b) a wireless network coupled to said central computer system, said wireless network including at least one transceiver; c) at least one portable pen-based computer having a position sensitive display, a stylus for writing on said position sensitive display, and a wireless communication means for communicating with said wireless network through said at least one transceiver; d) interface means defining areas on said position sensitive display responsive to said stylus for selecting one of said plurality of digitally stored forms for display on said position sensitive display; wherein upon selection of one of said plurality of digitally stored forms, said one of said plurality of digitally stored forms is transmitted by said central computer system to said pen-based computer through said wireless network and is displayed on said position sensitive display, handwriting written on said position sensitive display with said stylus is transmitted to said central computer system through said wireless network as electronic ink, and said central computer system associates said electronic ink with said one of said plurality of digitally stored forms by creating an association reference code and stores said electronic ink with said association reference code in said memory means for storing data without duplicating said one of said plurality of digitally stored forms so that said electronic ink and said association reference code are stored together with each other, but separate from said one of said plurality of digitally stored forms. 7. A system according to claim 1, wherein: said plurality of digitally stored forms includes at least one form listing multiple choice items and when said at least one form is displayed on said position sensitive display, one of said multiple choice items is selected by touching said position sensitive display with said stylus at a location corresponding to said one of said multiple choice items.
0.533879
8,949,371
19
20
19. The non-transitory computer readable storage medium of claim 18 , further including instructions that, when executed by the processor, cause the processor to: serialize the first Bloom filter, the second Bloom filter and the list of token type patterns into an index file.
19. The non-transitory computer readable storage medium of claim 18 , further including instructions that, when executed by the processor, cause the processor to: serialize the first Bloom filter, the second Bloom filter and the list of token type patterns into an index file. 20. The non-transitory computer readable storage medium of claim 19 , further including instructions that, when executed by the processor, cause the processor to: periodically distribute an updated serialized file over a secure network.
0.899059
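The claim pair above for patent 8,949,371 serializes a first Bloom filter, a second Bloom filter, and a list of token type patterns into an index file. A minimal sketch using a deliberately tiny Bloom filter and pickle as the serialization format; the filter design, patterns, file name, and example tokens are all hypothetical.

```python
# Hypothetical sketch of the serialization step in the claim pair for patent
# 8,949,371: two Bloom filters and a list of token type patterns are written
# into a single index file. The Bloom filter here is a deliberately tiny
# stand-in, and pickle is just one possible serialization format.

import hashlib
import pickle

class TinyBloom:
    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes, self.bits = size, hashes, 0
    def _positions(self, item):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size
    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos
    def __contains__(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))

first, second = TinyBloom(), TinyBloom()
first.add("alice@example.com")
second.add("4111111111111111")
token_type_patterns = [r"\b\d{16}\b", r"\b[\w.]+@[\w.]+\b"]

with open("index.bin", "wb") as fh:                       # serialize into an index file
    pickle.dump((first, second, token_type_patterns), fh)
with open("index.bin", "rb") as fh:
    f1, f2, patterns = pickle.load(fh)
print("alice@example.com" in f1, "4111111111111111" in f2)
```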
8,566,092
1
2
1. A method for extracting a prosodic feature of a speech signal, comprising: dividing the speech signal into speech frames; transforming the speech frames from time domain to frequency domain; calculating respective prosodic features for different frequency ranges; extracting a traditional acoustics feature for each speech frame; calculating, for each said prosodic feature, a feature associated with a current frame, a difference between the feature associated with the current frame and a feature associated with a previous frame, and a difference between the feature associated with the current frame and an average of respective features in a speech segment of the current frame; extracting a fundamental frequency of the current frame, a difference between the fundamental frequency of the current frame and a fundamental frequency of the previous frame, and a difference between the fundamental frequency of the current frame and an average of respective fundamental frequencies in the speech segment of the current frame; and recognizing speech associated with the speech signal based on said calculating for each said prosodic feature and said extracting the fundamental frequency, wherein said calculating the respective prosodic features for different frequency ranges includes one or more of the following: calculating a thickness feature of the speech signal for a first frequency range, wherein the thickness feature is based on frequency domain energy of the first frequency range; calculating a strength feature of the speech signal for a second frequency range, wherein the strength feature is based on time domain energy of the second frequency range; and calculating a contour feature of the speech signal for a third frequency range, wherein the contour feature is based on a time domain envelope of the third frequency range.
1. A method for extracting a prosodic feature of a speech signal, comprising: dividing the speech signal into speech frames; transforming the speech frames from time domain to frequency domain; calculating respective prosodic features for different frequency ranges; extracting a traditional acoustics feature for each speech frame; calculating, for each said prosodic feature, a feature associated with a current frame, a difference between the feature associated with the current frame and a feature associated with a previous frame, and a difference between the feature associated with the current frame and an average of respective features in a speech segment of the current frame; extracting a fundamental frequency of the current frame, a difference between the fundamental frequency of the current frame and a fundamental frequency of the previous frame, and a difference between the fundamental frequency of the current frame and an average of respective fundamental frequencies in the speech segment of the current frame; and recognizing speech associated with the speech signal based on said calculating for each said prosodic feature and said extracting the fundamental frequency, wherein said calculating the respective prosodic features for different frequency ranges includes one or more of the following: calculating a thickness feature of the speech signal for a first frequency range, wherein the thickness feature is based on frequency domain energy of the first frequency range; calculating a strength feature of the speech signal for a second frequency range, wherein the strength feature is based on time domain energy of the second frequency range; and calculating a contour feature of the speech signal for a third frequency range, wherein the contour feature is based on a time domain envelope of the third frequency range. 2. The method according to claim 1 , wherein said calculating the thickness feature of the speech signal for the first frequency range includes calculating the thickness feature based on amplitude spectrums corresponding to all spectral bins in the first frequency range.
0.816396
9,880,999
7
8
7. A computer-implemented method for processing text, the computer-implemented method comprising: using one or more processors configured to execute a natural language processing application, including: the one or more processors receiving a candidate term via a user interface, the candidate term comprising one or more natural language words; the one or more processors applying a first semantic analysis technique to a digital corpus based on the candidate term, the digital corpus comprising natural language, thereby discovering a first set of concepts associated with the candidate term, and each concept included in the first set of concepts comprising a respective one or more natural language terms, each of which is at least one of explicitly or implicitly associated with the candidate term; the one or more processors applying a second semantic analysis technique to the first set of concepts discovered by the first semantic analysis technique, thereby discovering a second set of concepts associated with the candidate term, the application of the second semantic analysis technique including: mining a set of concept association rules from the digital corpus, the set of concept association rules generated based on record-links included in a plurality of records of the digital corpus, and the mining of the set of concept association rules including: for each candidate rule corresponding to the set of concept association rules, (i) determining a respective measure of support based on a number of occurrences, in a set of transactions of the digital corpus, of a set of antecedent concepts of the each candidate rule together with a set of consequence concepts of the each candidate rule; and (ii) determining a respective measure of confidence based on the respective measure of support and a number of occurrences, in the set of corpus transactions, of the set of antecedent concepts of the each candidate rule; and determining a set of candidate rules as the set of concept association rules, the size of the set of concept association rules limited based on a set of rule-limiting parameters, and the set of rule-limiting parameters including at least one of: a number of concepts included in the set of consequence concepts, a minimum strength of the respective measure of support, or a minimum strength of the respective measure of confidence; and mining the set of concept association rules for the second set of concepts, each concept included in the second set of concepts comprising a respective one or more natural language terms, each of which is latently associated with the candidate term; the one or more processors generating a concept space for the candidate term from the first set of concepts and the second set of concepts, the concept space for the candidate term being a subset of a total set of concepts included in the digital corpus; the one or more processors searching, using the generated concept space, the digital corpus for a first set of records corresponding to at least a portion of the first set of concepts of the generated concept space and a second set of records corresponding to at least a portion of the second set of concepts of the generated concept space; the one or more processors retrieving, from the digital corpus, at least a portion of each record included in the second set of records corresponding to the at least the portion of the expansion subset of concepts of the generated concept space; and the one or more processors displaying, at the user interface, the retrieved at least the portion of the each record included in the second set of records corresponding to the at least the portion of the expansion subset of concepts of the generated concept space.
7. A computer-implemented method for processing text, the computer-implemented method comprising: using one or more processors configured to execute a natural language processing application, including: the one or more processors receiving a candidate term via a user interface, the candidate term comprising one or more natural language words; the one or more processors applying a first semantic analysis technique to a digital corpus based on the candidate term, the digital corpus comprising natural language, thereby discovering a first set of concepts associated with the candidate term, and each concept included in the first set of concepts comprising a respective one or more natural language terms, each of which is at least one of explicitly or implicitly associated with the candidate term; the one or more processors applying a second semantic analysis technique to the first set of concepts discovered by the first semantic analysis technique, thereby discovering a second set of concepts associated with the candidate term, the application of the second semantic analysis technique including: mining a set of concept association rules from the digital corpus, the set of concept association rules generated based on record-links included in a plurality of records of the digital corpus, and the mining of the set of concept association rules including: for each candidate rule corresponding to the set of concept association rules, (i) determining a respective measure of support based on a number of occurrences, in a set of transactions of the digital corpus, of a set of antecedent concepts of the each candidate rule together with a set of consequence concepts of the each candidate rule; and (ii) determining a respective measure of confidence based on the respective measure of support and a number of occurrences, in the set of corpus transactions, of the set of antecedent concepts of the each candidate rule; and determining a set of candidate rules as the set of concept association rules, the size of the set of concept association rules limited based on a set of rule-limiting parameters, and the set of rule-limiting parameters including at least one of: a number of concepts included in the set of consequence concepts, a minimum strength of the respective measure of support, or a minimum strength of the respective measure of confidence; and mining the set of concept association rules for the second set of concepts, each concept included in the second set of concepts comprising a respective one or more natural language terms, each of which is latently associated with the candidate term; the one or more processors generating a concept space for the candidate term from the first set of concepts and the second set of concepts, the concept space for the candidate term being a subset of a total set of concepts included in the digital corpus; the one or more processors searching, using the generated concept space, the digital corpus for a first set of records corresponding to at least a portion of the first set of concepts of the generated concept space and a second set of records corresponding to at least a portion of the second set of concepts of the generated concept space; the one or more processors retrieving, from the digital corpus, at least a portion of each record included in the second set of records corresponding to the at least the portion of the expansion subset of concepts of the generated concept space; and the one or more processors displaying, at the user interface, the retrieved at least the portion of the each record included in the second set of records corresponding to the at least the portion of the expansion subset of concepts of the generated concept space. 8. The computer-implemented method of claim 7 , wherein using the one or more processors configured to execute the natural language processing application further includes: the one or more processors displaying, on the user interface, a representation of the concept space of the candidate term, the representation including, for each one or more latently-associated concepts of the candidate term, a respective indication of its association with a respective explicitly-associated or implicitly-associated concept from which the respective latent association was derived; and the one or more processors optionally displaying, on the user interface, a representation of knowledge other than the concept space that is discovered as being associated with the candidate term.
0.644894
9,324,338
1
11
1. A method for enhancing an input noisy signal, wherein the input noisy signal is a mixture of a clean speech signal and a noise signal, comprising: determining from the input noisy signal, using a model of the clean speech signal and a model of the noise signal, sequences of hidden variables including at least one sequence of hidden variables representing an excitation component of the clean speech signal, at least one sequence of hidden variables representing a filter component of the clean speech signal, and at least one sequence of hidden variables representing the noise signal, wherein the model of the clean speech signal includes a non-negative source-filter dynamical system (NSFDS) constraining the hidden variables representing the excitation component to be statistically dependent over time and constraining the hidden variables representing the filter component to be statistically dependent over time, and wherein the sequences of hidden variables include hidden variables determined as a non-negative linear combination of non-negative basis functions; and generating an output signal using a product of corresponding hidden variables representing the excitation and the filter components, wherein steps of the method are performed by a processor.
1. A method for enhancing an input noisy signal, wherein the input noisy signal is a mixture of a clean speech signal and a noise signal, comprising: determining from the input noisy signal, using a model of the clean speech signal and a model of the noise signal, sequences of hidden variables including at least one sequence of hidden variables representing an excitation component of the clean speech signal, at least one sequence of hidden variables representing a filter component of the clean speech signal, and at least one sequence of hidden variables representing the noise signal, wherein the model of the clean speech signal includes a non-negative source-filter dynamical system (NSFDS) constraining the hidden variables representing the excitation component to be statistically dependent over time and constraining the hidden variables representing the filter component to be statistically dependent over time, and wherein the sequences of hidden variables include hidden variables determined as a non-negative linear combination of non-negative basis functions; and generating an output signal using a product of corresponding hidden variables representing the excitation and the filter components, wherein steps of the method are performed by a processor. 11. The method of claim 1 , wherein parameters of the model of the noise signal are estimated from a database of training noise signals.
0.777049
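The product-of-components step in claim 1 of 9,324,338 above can be illustrated with a short sketch. Everything below is an assumption-laden illustration, not the patent's NSFDS inference: the basis matrices, activation vectors, shapes, and the Wiener-style gain are hypothetical, and only the elementwise excitation-times-filter product and the non-negative linear combination of bases come from the claim language.

```python
import numpy as np

def enhance_frame(noisy_spec, W_ex, h_ex, W_fil, h_fil, noise_psd, eps=1e-8):
    """Clean-speech power estimated as the product of the excitation and the
    filter components, each a non-negative linear combination of basis
    functions; the Wiener-style gain at the end is an added assumption."""
    excitation = W_ex @ h_ex            # non-negative combination of excitation bases
    spectral_envelope = W_fil @ h_fil   # non-negative combination of filter bases
    clean_psd = (excitation * spectral_envelope) ** 2
    gain = clean_psd / (clean_psd + noise_psd + eps)
    return gain * noisy_spec

# Hypothetical shapes: 257 frequency bins, 20 excitation and 8 filter bases.
rng = np.random.default_rng(0)
enhanced = enhance_frame(
    noisy_spec=rng.random(257),
    W_ex=rng.random((257, 20)), h_ex=rng.random(20),
    W_fil=rng.random((257, 8)), h_fil=rng.random(8),
    noise_psd=rng.random(257),
)
```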
7,797,673
1
10
1. A computer-implemented method for applying a coding standard to a simulatable graphical model in a graphical modeling environment, the method comprising the steps of: providing a coding standard in the graphical modeling environment; applying the coding standard to the simulatable graphical model to detect violations of the coding standard in the simulatable graphical model; displaying violating segments of the simulatable graphical model differently than non-violating segments of the simulatable graphical model; and in response to users' selection of a selected one of violating segments, displaying information on a violation of the coding standard in the selected violating segment.
1. A computer-implemented method for applying a coding standard to a simulatable graphical model in a graphical modeling environment, the method comprising the steps of: providing a coding standard in the graphical modeling environment; applying the coding standard to the simulatable graphical model to detect violations of the coding standard in the simulatable graphical model; displaying violating segments of the simulatable graphical model differently than non-violating segments of the simulatable graphical model; and in response to users' selection of a selected one of violating segments, displaying information on a violation of the coding standard in the selected violating segment. 10. The method of claim 1 , further comprising the step of: automatically correcting the simulatable graphical model to remove the violations of the coding standard in the simulatable graphical model.
0.775785
9,501,298
10
13
10. A computer system comprising: a memory resource that stores a set of instructions and a schema, the schema logically representing a nodal hierarchy relating to execution of an application that is operable on a computing device in communication with the computing system, the nodal hierarchy including multiple nodes, including one or more category nodes and one or more content nodes; wherein at least one of the nodes of the nodal hierarchy includes an executable script; and one or more processors that use instructions from the memory resource to: access the schema in response to a selection from an operator of the computing device; using the schema and in response to an input from the operator of the computing device, providing a series of user interface content to the computing device, each user interface content corresponding to one of (i) one or more nodes, or (ii) a script content, generated as an output of an executed script that is associated with a selected node; and operate the application on the computing device using the schema.
10. A computer system comprising: a memory resource that stores a set of instructions and a schema, the schema logically representing a nodal hierarchy relating to execution of an application that is operable on a computing device in communication with the computing system, the nodal hierarchy including multiple nodes, including one or more category nodes and one or more content nodes; wherein at least one of the nodes of the nodal hierarchy includes an executable script; and one or more processors that use instructions from the memory resource to: access the schema in response to a selection from an operator of the computing device; using the schema and in response to an input from the operator of the computing device, providing a series of user interface content to the computing device, each user interface content corresponding to one of (i) one or more nodes, or (ii) a script content, generated as an output of an executed script that is associated with a selected node; and operate the application on the computing device using the schema. 13. The computer system of claim 10 , further comprising instructions for identifying and integrating an additional set of nodes in the schema in response to a pre-determined condition or event.
0.739946
8,028,226
9
11
9. A computer program product for analyzing document content for display with reduced cognitive load, the computer program product having a tangible computer-readable medium with a computer program embodied thereon, the computer program executed on a computer comprising: computer code for receiving a document for analysis; computer code for analyzing document content of the document; computer code for generating a set of salient words and phrases from the document content based upon the linguistic content of the words and phrases in the document; computer code for tagging the salient words and phrases in the set of salient words and phrases; and computer code for reading the set of salient words and phrases.
9. A computer program product for analyzing document content for display with reduced cognitive load, the computer program product having a tangible computer-readable medium with a computer program embodied thereon, the computer program executed on a computer comprising: computer code for receiving a document for analysis; computer code for analyzing document content of the document; computer code for generating a set of salient words and phrases from the document content based upon the linguistic content of the words and phrases in the document; computer code for tagging the salient words and phrases in the set of salient words and phrases; and computer code for reading the set of salient words and phrases. 11. The computer program product of claim 9 , further comprising computer program code for contextualizing the salient words and phrases.
0.62973
9,705,966
5
14
5. A system comprising: one or more processing units; computer-readable memory; instructions stored in the computer-readable memory to implement an asset service, executable on the one or more processing units, to: receive a work by an author; receive an alternate version of the work, the alternate version of the work including a change to the work; and receive data from an author device including a poll asking for indication of a preference between the work and the alternate version of the work; instructions stored in the computer-readable memory to implement a comment module, executable on the one or more processing units, to receive answers to the poll; instructions stored in the computer-readable memory to implement an activity service, executable on the one or more processing units, to create an event based on the asset service receiving the work and to store a record of the event in an author activity log, the event comprising at least an identification of the author, an identification of the work, and a type of the event; and instructions stored in the computer-readable memory to implement a notification service, executable on the one or more processing units, to create a notification about the event and send the notification to a device associated with a user who has previously selected to receive notifications for activities associated with one or more of the author or with the work.
5. A system comprising: one or more processing units; computer-readable memory; instructions stored in the computer-readable memory to implement an asset service, executable on the one or more processing units, to: receive a work by an author; receive an alternate version of the work, the alternate version of the work including a change to the work; and receive data from an author device including a poll asking for indication of a preference between the work and the alternate version of the work; instructions stored in the computer-readable memory to implement a comment module, executable on the one or more processing units, to receive answers to the poll; instructions stored in the computer-readable memory to implement an activity service, executable on the one or more processing units, to create an event based on the asset service receiving the work and to store a record of the event in an author activity log, the event comprising at least an identification of the author, an identification of the work, and a type of the event; and instructions stored in the computer-readable memory to implement a notification service, executable on the one or more processing units, to create a notification about the event and send the notification to a device associated with a user who has previously selected to receive notifications for activities associated with one or more of the author or with the work. 14. The system of claim 5 , further comprising instructions stored in the computer-readable memory to implement a preference service, executable on the one or more processing units, to store identification of one or more of other authors or other works for which a device associated with the author has generated an indication of interest in receiving notifications.
0.878647
9,817,824
11
13
11. A system for providing an electronic target document in a data communication network, the system comprising a first domain on a first server system, a second domain on a second server system and a third domain on a third server system in the data communication network, each domain comprising at least one computer program containing computer instructions to cause a processor to perform specific actions, wherein: the at least one computer program of the first domain is configured to provide a link to open a digital first form in the first domain, and to receive an activation of the link by a user from a user computer device; the at least one computer program of the second domain is configured to: provide a digital second form upon activation of the link by the user from the user computer device, the second form comprising a retrieval field whereby, when the retrieval field is activated by the user from the user computer device, the following steps are performed: providing a plurality of domain access fields at the user computer device; receiving an activation of a selected one of the domain access fields by the user from the user computer device; accessing a third domain linked to the selected domain access field; and retrieving target document data from the third domain, and upload the target document associated with the target document data to the first form of the first domain.
11. A system for providing an electronic target document in a data communication network, the system comprising a first domain on a first server system, a second domain on a second server system and a third domain on a third server system in the data communication network, each domain comprising at least one computer program containing computer instructions to cause a processor to perform specific actions, wherein: the at least one computer program of the first domain is configured to provide a link to open a digital first form in the first domain, and to receive an activation of the link by a user from a user computer device; the at least one computer program of the second domain is configured to: provide a digital second form upon activation of the link by the user from the user computer device, the second form comprising a retrieval field whereby, when the retrieval field is activated by the user from the user computer device, the following steps are performed: providing a plurality of domain access fields at the user computer device; receiving an activation of a selected one of the domain access fields by the user from the user computer device; accessing a third domain linked to the selected domain access field; and retrieving target document data from the third domain, and upload the target document associated with the target document data to the first form of the first domain. 13. The system of claim 11 , wherein the at least one computer program of the second domain further is configured to provide the second form by: retrieving the first form from the first domain; converting the first form into the second form, wherein the retrieval field is included in the second form.
0.65873
10,126,010
1
3
1. A controlling system for environmental comfort degree, comprising: a plurality of sensors for sensing a plurality of environment parameters indoor or outdoor; a plurality of indoor apparatuses for adjusting temperature and humidity of an indoor space; a controlling apparatus, operatively connected with the sensors and the indoor apparatuses, executing an auto-calculation procedure for calculating a current comfort-index of the indoor space based on the environment parameters, and calculating a target comfort temperature adjustment value and a target comfort humidity adjustment value for an indoor environment of the indoor space to reach a best comfort-index based on the current comfort-index; wherein the controlling apparatus controls the indoor apparatuses based on the target comfort temperature adjustment value and the target comfort humidity adjustment value for the indoor environment to reach a target temperature and a target humidity; and wherein, the controlling apparatus determines whether the controlling system is shut down and whether a person in the indoor space is left, if the controlling system is not shut down and the person stays in the indoor space, then the plurality of sensors re-sense the environment parameters, the controlling apparatus re-calculates the current comfort-index based on the environment parameters, re-calculates the target comfort temperature adjustment value and the target comfort humidity adjustment value based on the current comfort-index, and re-controls the indoor apparatuses based on the target comfort temperature adjustment value and the target comfort humidity adjustment value.
1. A controlling system for environmental comfort degree, comprising: a plurality of sensors for sensing a plurality of environment parameters indoor or outdoor; a plurality of indoor apparatuses for adjusting temperature and humidity of an indoor space; a controlling apparatus, operatively connected with the sensors and the indoor apparatuses, executing an auto-calculation procedure for calculating a current comfort-index of the indoor space based on the environment parameters, and calculating a target comfort temperature adjustment value and a target comfort humidity adjustment value for an indoor environment of the indoor space to reach a best comfort-index based on the current comfort-index; wherein the controlling apparatus controls the indoor apparatuses based on the target comfort temperature adjustment value and the target comfort humidity adjustment value for the indoor environment to reach a target temperature and a target humidity; and wherein, the controlling apparatus determines whether the controlling system is shut down and whether a person in the indoor space is left, if the controlling system is not shut down and the person stays in the indoor space, then the plurality of sensors re-sense the environment parameters, the controlling apparatus re-calculates the current comfort-index based on the environment parameters, re-calculates the target comfort temperature adjustment value and the target comfort humidity adjustment value based on the current comfort-index, and re-controls the indoor apparatuses based on the target comfort temperature adjustment value and the target comfort humidity adjustment value. 3. The controlling system in claim 1 , wherein the plurality of sensors comprises a thermos sensor, an IR sensor or a monitor, for detecting if a person enters the indoor space, and the controlling apparatus calculates the target comfort temperature adjustment value and the target comfort humidity adjustment value when the person enters the indoor space.
0.858618
8,296,666
1
17
1. A computer-implemented visualization system for information analysis, the system comprising: a user interface; a processor and a memory coupled thereto, the memory storing instructions and data therein to configure the execution of the processor to configure a space on the user interface for marshalling evidence therein, the processor further configured to: visually represent a plurality of information excerpts from at least one information source in a spatial arrangement in the space on the user interface; and receive user input to manipulate the spatial arrangement of the plurality of information excerpts with respect to one another on the user interface as directed by the user for defining the evidence; receive analysis content on the user interface for associating with the plurality of information excerpts to facilitate visual cognition of the evidence in accordance with the manipulated spatial arrangement.
1. A computer-implemented visualization system for information analysis, the system comprising: a user interface; a processor and a memory coupled thereto, the memory storing instructions and data therein to configure the execution of the processor to configure a space on the user interface for marshalling evidence therein, the processor further configured to: visually represent a plurality of information excerpts from at least one information source in a spatial arrangement in the space on the user interface; and receive user input to manipulate the spatial arrangement of the plurality of information excerpts with respect to one another on the user interface as directed by the user for defining the evidence; receive analysis content on the user interface for associating with the plurality of information excerpts to facilitate visual cognition of the evidence in accordance with the manipulated spatial arrangement. 17. The visualization system of claim 1 wherein the processor is further configured to represent the information on the user interface as data objects movable in said space and, selectively for a particular data object, to visualize a radial menu on the user interface comprising a plurality of slices presenting selectable actions arranged radially about the particular data object.
0.762112
8,577,872
1
2
1. One or more computer-readable storage media that store executable instructions to select photos, wherein the executable instructions, when executed by a computer, cause the computer to perform acts comprising: for each photo in a database configured to contain a plurality of photos: determining whether a user has tagged a first person in said photo; determining whether said user has been tagged in said photo by said first person; and increasing a score associated with said first person based on whether said user has tagged said first person in said photo, based on whether said user has been tagged by said first person in said photo, or based on whether said user and said first person appear in a photo together, and said score increasing by a greater amount for said user's tagging, or being tagged by, said first person than for said user's appearing in said photo with said first person; identifying a first set of people that comprises the people in said photos whose score exceeds a threshold; selecting a second set of said photos, wherein each photo in said second set is selected based on the fact that said photo contains a person from said first set of people; and displaying said second set of photos.
1. One or more computer-readable storage media that store executable instructions to select photos, wherein the executable instructions, when executed by a computer, cause the computer to perform acts comprising: for each photo in a database configured to contain a plurality of photos: determining whether a user has tagged a first person in said photo; determining whether said user has been tagged in said photo by said first person; and increasing a score associated with said first person based on whether said user has tagged said first person in said photo, based on whether said user has been tagged by said first person in said photo, or based on whether said user and said first person appear in a photo together, and said score increasing by a greater amount for said user's tagging, or being tagged by, said first person than for said user's appearing in said photo with said first person; identifying a first set of people that comprises the people in said photos whose score exceeds a threshold; selecting a second set of said photos, wherein each photo in said second set is selected based on the fact that said photo contains a person from said first set of people; and displaying said second set of photos. 2. The one or more computer-readable storage media of claim 1 , wherein said acts further comprise: removing a first photo from said second set based on said photo including a person who is not in said first set.
0.706371
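A minimal sketch of the scoring-and-selection flow in claim 1 of 8,577,872 above, assuming a hypothetical photo layout (a dict with "people" and "tags" fields) and arbitrary example weights; the claim only requires that tagging interactions increase the score by more than mere co-appearance.

```python
from collections import defaultdict

TAG_WEIGHT = 2.0            # tagging, or being tagged by, a person counts more ...
CO_APPEARANCE_WEIGHT = 1.0  # ... than merely appearing in a photo together
THRESHOLD = 3.0             # arbitrary example threshold

def select_photos(photos, user):
    """Score people by their tag interactions with `user`, then return the
    photos containing at least one person whose score exceeds the threshold."""
    scores = defaultdict(float)
    for photo in photos:
        for person in photo["people"] - {user}:
            if (user, person) in photo["tags"]:      # user tagged this person
                scores[person] += TAG_WEIGHT
            if (person, user) in photo["tags"]:      # this person tagged user
                scores[person] += TAG_WEIGHT
            if user in photo["people"]:              # they appear together
                scores[person] += CO_APPEARANCE_WEIGHT
    first_set = {p for p, s in scores.items() if s > THRESHOLD}
    return [photo for photo in photos if photo["people"] & first_set]
```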
7,796,142
11
13
11. The digital television decoder as claimed in claim 9 , wherein the pixmaps are chopped into rectangles which are drawn successively with each call of a background task.
11. The digital television decoder as claimed in claim 9 , wherein the pixmaps are chopped into rectangles which are drawn successively with each call of a background task. 13. The digital television decoder as claimed in claim 11 , wherein each call of the background task, comprises: reorganization of the pixmaps after a scroll of the document has been performed, and when no repositioning of the pixmaps has occurred, drawing of the first rectangle of a pixmap determined as a function of distance away from the visible part of the documents.
0.883072
9,922,344
17
18
17. The system of claim 16 , wherein the one or more processors are further configured to provide instructions that cause the user device to present the one or more advertisements in clusters, each cluster being for an individual query suggestion and including advertisements provided for the individual query suggestion.
17. The system of claim 16 , wherein the one or more processors are further configured to provide instructions that cause the user device to present the one or more advertisements in clusters, each cluster being for an individual query suggestion and including advertisements provided for the individual query suggestion. 18. The system of claim 17 , wherein the instructions further cause the user device to present a label for each cluster, the label for a cluster identifying the individual query suggestion corresponding to the cluster.
0.967656
10,013,454
26
28
26. One or more non-transitory computer-storage media storing computer-useable instructions that, when executed by a computing device, perform a method, the method comprising: causing display of a set of events that are search results of a search query that specifies a plurality of commands, each event corresponding to a portion of raw machine data associated with a timestamp extracted from the portion of raw machine data, the display of the set of events being in a table format that includes: one or more columns, each column comprising data items of an event attribute, the data items being of the set of events; and a plurality of rows forming cells with the one or more columns, each cell displaying a textual representation of at least one of the data items of the event attribute of a corresponding column, the textual representation being selectable by a user, the textual representation including at least some of the portion of raw machine data of a corresponding event; based on a user selection of a text portion of the textual representation in a corresponding cell: causing display of a list of options corresponding to the selected text portion of the textual representation in the corresponding cell; and causing one or more commands to be added to the plurality of commands specified in the search query, wherein the one or more commands are based on an option that is selected from the list of options and the selected text portion of the textual representation in the corresponding cell.
26. One or more non-transitory computer-storage media storing computer-useable instructions that, when executed by a computing device, perform a method, the method comprising: causing display of a set of events that are search results of a search query that specifies a plurality of commands, each event corresponding to a portion of raw machine data associated with a timestamp extracted from the portion of raw machine data, the display of the set of events being in a table format that includes: one or more columns, each column comprising data items of an event attribute, the data items being of the set of events; and a plurality of rows forming cells with the one or more columns, each cell displaying a textual representation of at least one of the data items of the event attribute of a corresponding column, the textual representation being selectable by a user, the textual representation including at least some of the portion of raw machine data of a corresponding event; based on a user selection of a text portion of the textual representation in a corresponding cell: causing display of a list of options corresponding to the selected text portion of the textual representation in the corresponding cell; and causing one or more commands to be added to the plurality of commands specified in the search query, wherein the one or more commands are based on an option that is selected from the list of options and the selected text portion of the textual representation in the corresponding cell. 28. The one or more computer-storage media of claim 26 , the method further comprising receiving one or more command elements of the one or more commands entered into a form by the user, the option being displayed in the list of options as the form.
0.769444
9,411,327
17
18
17. The system of claim 12 , wherein generating the first matrix further comprises: deconstructing the first matrix using singular value decomposition into the product of a second matrix, a third matrix, and a fourth matrix; constructing a fifth matrix using portions of the second matrix, the third matrix, and the fourth matrix, wherein the portions are defined by a quality control constant, and wherein the fifth matrix is an approximation of the first matrix.
17. The system of claim 12 , wherein generating the first matrix further comprises: deconstructing the first matrix using singular value decomposition into the product of a second matrix, a third matrix, and a fourth matrix; constructing a fifth matrix using portions of the second matrix, the third matrix, and the fourth matrix, wherein the portions are defined by a quality control constant, and wherein the fifth matrix is an approximation of the first matrix. 18. The system of claim 17 , wherein the indicator of the probability, for each of the substrings and for each of the plurality of building automation system point types, is equal to the cosine distance of a vector describing the selected substring and a vector describing the selected building automation system point type.
0.944444
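A minimal sketch of the rank-k approximation and cosine scoring described in claims 17–18 of 9,411,327 above. The variable names are illustrative, k stands in for the claim's "quality control constant", the second, third, and fourth matrices correspond to u, diag(s), and vt of the decomposition, and the "cosine distance" indicator is computed here as the cosine of the angle between the two vectors.

```python
import numpy as np

def rank_k_approximation(first_matrix, k):
    """Rebuild the matrix from the k largest singular values and vectors."""
    u, s, vt = np.linalg.svd(first_matrix, full_matrices=False)
    return u[:, :k] @ np.diag(s[:k]) @ vt[:k, :]

def cosine_score(substring_vec, point_type_vec):
    """Cosine of the angle between a substring vector and a point-type vector."""
    denom = np.linalg.norm(substring_vec) * np.linalg.norm(point_type_vec)
    return float(substring_vec @ point_type_vec / denom) if denom else 0.0
```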
8,400,313
6
7
6. An arousal state classifying device for classifying an arousal state of an object person, the arousal state classifying device characterized by comprising: blink data acquisition means for acquiring blink data of at least one eye of the object person at the time of blinking; a first pattern model generated by the arousal state classification model generating device according to claim 1 ; first feature data extraction means for extracting first feature data corresponding to the first pattern model from the blink data acquired by the blink data acquisition means; blink waveform identification means for identifying a specific type of blink waveform corresponding to the first feature data extracted by the first feature data extraction means based on the first feature data and the first pattern model; second feature data generation means for generating second feature data including data on an occurrence ratio of each of the specific types of blink waveforms based on an identification result by the blink waveform identification means with respect to the blink data of the object person acquired in a sequence of analysis intervals; a second pattern model generated by the arousal state classification model generating device; and arousal state classification means for classifying the arousal state of the object person based on the second feature data generated by the second feature data generation means and the second pattern model.
6. An arousal state classifying device for classifying an arousal state of an object person, the arousal state classifying device characterized by comprising: blink data acquisition means for acquiring blink data of at least one eye of the object person at the time of blinking; a first pattern model generated by the arousal state classification model generating device according to claim 1 ; first feature data extraction means for extracting first feature data corresponding to the first pattern model from the blink data acquired by the blink data acquisition means; blink waveform identification means for identifying a specific type of blink waveform corresponding to the first feature data extracted by the first feature data extraction means based on the first feature data and the first pattern model; second feature data generation means for generating second feature data including data on an occurrence ratio of each of the specific types of blink waveforms based on an identification result by the blink waveform identification means with respect to the blink data of the object person acquired in a sequence of analysis intervals; a second pattern model generated by the arousal state classification model generating device; and arousal state classification means for classifying the arousal state of the object person based on the second feature data generated by the second feature data generation means and the second pattern model. 7. The arousal state classifying device according to claim 6 , characterized in that the blink data is electro-oculogram (EOG) waveform data or moving picture of eye region.
0.923179
7,836,425
1
2
1. A computer readable storage medium containing instructions for implementing a source code generator comprising: an interface configured to receive a user-specification of at least one spreadsheet that includes spreadsheet data for which source code is to be generated; a data acquisition interface configured to receive the spreadsheet data; a parser configured to extract information from the spreadsheet data received by the data acquisition interface and identify at least one formula included in the spreadsheet data, the at least one formula comprising at least one function; an information processor configured to perform a data transformation of the information extracted by the parser into source code representative of the spreadsheet data, the information processor being further operative to transform the at least one identified formula into source code representative thereof, the source code representative of the identified formula being operative to call at least one function from a library of available functions representative of available spreadsheet functions; and wherein the source code representative of the spreadsheet data represents the general data and behavior aspects of the at least one spreadsheet at runtime.
1. A computer readable storage medium containing instructions for implementing a source code generator comprising: an interface configured to receive a user-specification of at least one spreadsheet that includes spreadsheet data for which source code is to be generated; a data acquisition interface configured to receive the spreadsheet data; a parser configured to extract information from the spreadsheet data received by the data acquisition interface and identify at least one formula included in the spreadsheet data, the at least one formula comprising at least one function; an information processor configured to perform a data transformation of the information extracted by the parser into source code representative of the spreadsheet data, the information processor being further operative to transform the at least one identified formula into source code representative thereof, the source code representative of the identified formula being operative to call at least one function from a library of available functions representative of available spreadsheet functions; and wherein the source code representative of the spreadsheet data represents the general data and behavior aspects of the at least one spreadsheet at runtime. 2. The computer readable storage medium as recited in claim 1 , wherein the available spreadsheet functions further comprise at least one user defined function.
0.723183
8,135,755
8
10
8. A computer-implemented system for editing schema for a database, the system comprising: a display that is configured to display an image representing a received list of template schema definitions, the list of template schema definitions comprising at least a two-level hierarchy of database field data types and at least one a plurality of data format specifying how stored data is to be displayed, the data types comprising at least one of attachment field type, date/time, identification number, and type of currency, wherein the list of template schema definitions is displayed in at least of a ribbon and a gallery, the ribbon comprising a top level including a plurality of user interface tabs and a bottom level including a plurality of user interface control groups, the plurality of user interface tabs comprising one or more of an editing tab, a format tab, a page layout tab, and an external data tab, the plurality of user interface control groups comprising a first user interface control group comprising controls for sorting operations, a second user interface group control group comprising controls for filtering operations, and a third user interface control group comprising at least one control for adding a field to a database table generated by a schema editor, the gallery comprising a plurality of menu items and icons for graphically displaying the list of template schema definition, wherein the at least one of the ribbon and the gallery further represents the schema in a template form; in response to a selection of the template schema, dragging and dropping a selected template schema field onto a displayed table grid to create a new schema in accordance with the selected template schema; a user interface for receiving commands from the user for modifying the schema and to change the displayed image in response to the received commands, wherein the schema is modified by: receiving a command in the user interface to add the field to the database table; generating a dialog box overlaying the user interface, the dialog box prompting a confirmation of the command to add the field to the database table; generating the database table in another user interface, in response to receiving the confirmation of the command, the database table comprising the added field, the added field residing in an Add New Field column in the database table, the another user interface further comprising a task pane adjacent to the database table, the task pane generated in response to the generation of the database table, the database table comprising the added field and the task pane being simultaneously displayed in the another user interface thereby facilitating user entry of additional fields to the database table without having to navigate away from the another user interface; and closing the database table in the another user interface to make the added field available in the user interface comprising the ribbon; a file reader that is configured to receive the list of template schema definitions; and a file generator that is configured to output an output file in response to the modified schema, wherein the output file is configured to be received by the file reader, and wherein the output file comprises instructions for generating a database structure and for manipulating data stored in the database structure, the instructions comprising an export command to export a current database table as a markup language file, wherein the output file is utilized to populate the list of schema definitions, wherein 
the output file is utilized to re-edit the schema.
8. A computer-implemented system for editing schema for a database, the system comprising: a display that is configured to display an image representing a received list of template schema definitions, the list of template schema definitions comprising at least a two-level hierarchy of database field data types and at least one a plurality of data format specifying how stored data is to be displayed, the data types comprising at least one of attachment field type, date/time, identification number, and type of currency, wherein the list of template schema definitions is displayed in at least of a ribbon and a gallery, the ribbon comprising a top level including a plurality of user interface tabs and a bottom level including a plurality of user interface control groups, the plurality of user interface tabs comprising one or more of an editing tab, a format tab, a page layout tab, and an external data tab, the plurality of user interface control groups comprising a first user interface control group comprising controls for sorting operations, a second user interface group control group comprising controls for filtering operations, and a third user interface control group comprising at least one control for adding a field to a database table generated by a schema editor, the gallery comprising a plurality of menu items and icons for graphically displaying the list of template schema definition, wherein the at least one of the ribbon and the gallery further represents the schema in a template form; in response to a selection of the template schema, dragging and dropping a selected template schema field onto a displayed table grid to create a new schema in accordance with the selected template schema; a user interface for receiving commands from the user for modifying the schema and to change the displayed image in response to the received commands, wherein the schema is modified by: receiving a command in the user interface to add the field to the database table; generating a dialog box overlaying the user interface, the dialog box prompting a confirmation of the command to add the field to the database table; generating the database table in another user interface, in response to receiving the confirmation of the command, the database table comprising the added field, the added field residing in an Add New Field column in the database table, the another user interface further comprising a task pane adjacent to the database table, the task pane generated in response to the generation of the database table, the database table comprising the added field and the task pane being simultaneously displayed in the another user interface thereby facilitating user entry of additional fields to the database table without having to navigate away from the another user interface; and closing the database table in the another user interface to make the added field available in the user interface comprising the ribbon; a file reader that is configured to receive the list of template schema definitions; and a file generator that is configured to output an output file in response to the modified schema, wherein the output file is configured to be received by the file reader, and wherein the output file comprises instructions for generating a database structure and for manipulating data stored in the database structure, the instructions comprising an export command to export a current database table as a markup language file, wherein the output file is utilized to populate the list of schema definitions, wherein 
the output file is utilized to re-edit the schema. 10. The system of claim 8 wherein the display is further configured to display data from the database.
0.87561
9,836,502
1
16
1. A computer-implemented method comprising: receiving a selection of one or more identifiers of panel templates among a plurality of identifiers of panel templates, wherein each identifier of the plurality of identifiers is associated with a panel template that includes a query and a format for displaying an associated panel in a dashboard, wherein selecting the one or more identifiers of panel templates comprises: dragging each identifier of the one or more identifiers of panel templates onto a representation of a dashboard in a displayed dashboard-creation page; and dropping each dragged identifier at an associated position in the dashboard-creation page, each associated position being indicative of where the associated panel appears when the dashboard is displayed; in response to selecting an identifier of the one or more identifiers of panel templates: adding a reference to an associated panel template of the selected identifier in the associated panel in the dashboard-creation page; and adding to the dashboard-creation page an indication of the panel associated with the selected identifier; in response to a user action for a particular panel in the dashboard-creation page, executing a query included in a panel template referenced by the particular panel to generate data for display in that particular panel within the dashboard-creation page; and visualizing, within the particular panel within the dashboard-creation page, data resulting from execution of the query in the panel template referenced by the particular panel.
1. A computer-implemented method comprising: receiving a selection of one or more identifiers of panel templates among a plurality of identifiers of panel templates, wherein each identifier of the plurality of identifiers is associated with a panel template that includes a query and a format for displaying an associated panel in a dashboard, wherein selecting the one or more identifiers of panel templates comprises: dragging each identifier of the one or more identifiers of panel templates onto a representation of a dashboard in a displayed dashboard-creation page; and dropping each dragged identifier at an associated position in the dashboard-creation page, each associated position being indicative of where the associated panel appears when the dashboard is displayed; in response to selecting an identifier of the one or more identifiers of panel templates: adding a reference to an associated panel template of the selected identifier in the associated panel in the dashboard-creation page; and adding to the dashboard-creation page an indication of the panel associated with the selected identifier; in response to a user action for a particular panel in the dashboard-creation page, executing a query included in a panel template referenced by the particular panel to generate data for display in that particular panel within the dashboard-creation page; and visualizing, within the particular panel within the dashboard-creation page, data resulting from execution of the query in the panel template referenced by the particular panel. 16. The method of claim 1 , wherein the format for displaying the visualization of data specified in a panel's referenced panel template corresponds to a bar chart, a pie chart, a line graph, a scatter plot, a bubble chart, data visualization, or a table.
0.768182
9,459,995
5
6
5. The apparatus of claim 1 , wherein the interconnected components include different types of hardware components, and the object model expresses physical and functional relationships among the interconnected components including the different types of hardware components.
5. The apparatus of claim 1 , wherein the interconnected components include different types of hardware components, and the object model expresses physical and functional relationships among the interconnected components including the different types of hardware components. 6. The apparatus of claim 5 , wherein the interconnected components further include different types of software components, and the object model expresses physical and functional relationships among the interconnected components including the different types of hardware components and the different types of software components, and wherein a rule of the collection of rules specifies a relationship between a hardware component and software component of respectively the different types of hardware components and the different types of software components, and a test of the tests is for compliance with the rule.
0.850847
10,061,866
1
4
1. A server that fulfills a literal query of a user, the server comprising: a processor; and a memory storing instructions that, when executed by the processor, cause the server to: identify, for the literal query, at least two literal query results; generate result probabilities for the at least two literal query results based on results that were previously selected by other users who previously submitted the literal query, the result probabilities reflecting a probability that a corresponding literal query result matches an intent of the user in submitting the literal query; identify a sort order according to the result probabilities of the at least two literal query results; determines, for the literal query, an adjusted query; evaluate the adjusted query in order to identify, for the adjusted query, one or more adjusted query results; generate an interpreted probability for the adjusted query based on result probabilities of at least some of the adjusted query results, the interpreted probability reflecting a probability that the user intended the adjusted query; identify, within the sort order, an adjustment position that is between a first literal query result having a higher result probability than the interpreted probability, and a second literal query result having a lower result probability than the interpreted probability; and present the at least two literal query results and insert, at the adjustment position, an adjustment option describing the adjusted query.
1. A server that fulfills a literal query of a user, the server comprising: a processor; and a memory storing instructions that, when executed by the processor, cause the server to: identify, for the literal query, at least two literal query results; generate result probabilities for the at least two literal query results based on results that were previously selected by other users who previously submitted the literal query, the result probabilities reflecting a probability that a corresponding literal query result matches an intent of the user in submitting the literal query; identify a sort order according to the result probabilities of the at least two literal query results; determines, for the literal query, an adjusted query; evaluate the adjusted query in order to identify, for the adjusted query, one or more adjusted query results; generate an interpreted probability for the adjusted query based on result probabilities of at least some of the adjusted query results, the interpreted probability reflecting a probability that the user intended the adjusted query; identify, within the sort order, an adjustment position that is between a first literal query result having a higher result probability than the interpreted probability, and a second literal query result having a lower result probability than the interpreted probability; and present the at least two literal query results and insert, at the adjustment position, an adjustment option describing the adjusted query. 4. The server of claim 1 , wherein the memory stores further instructions that, when executed by the processor, cause the server to further: generate a query adjustment set that correlates literal queries and corresponding adjusted queries, each adjusted query having an interpreted probability higher than a result probability of at least one result of a corresponding literal query; and determining the adjusted query by reference to a previously generated query adjustment set.
0.612278
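The placement of the adjustment option described in claim 1 of 10,061,866 above reduces to finding the first literal result whose probability falls below the interpreted probability. A hedged sketch, assuming the results arrive as (probability, result) pairs already sorted in descending order:

```python
def insert_adjustment(sorted_results, interpreted_probability, adjustment_option):
    """Insert the adjustment option between the last literal result whose
    probability is above the interpreted probability and the first one below it."""
    position = next(
        (i for i, (prob, _) in enumerate(sorted_results)
         if prob < interpreted_probability),
        len(sorted_results),
    )
    merged = list(sorted_results)
    merged.insert(position, (interpreted_probability, adjustment_option))
    return merged

# e.g. insert_adjustment([(0.9, "a"), (0.5, "b")], 0.7, "adjusted query")
# -> [(0.9, "a"), (0.7, "adjusted query"), (0.5, "b")]
```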
8,719,295
1
5
1. A method, comprising: for a temporal hierarchy of aggregation statistics associated with a plurality of database records, wherein the temporal hierarchy comprises two or more aggregation statistics levels and each level has a different temporal granularity associated therewith, iteratively modifying the temporal hierarchy to at least one of: (a) minimize a storage usage cost while satisfying a temporal hierarchy update constraint and a query response time constraint by removing one or more levels of the temporal hierarchy; (b) reduce a temporal hierarchy update time and a query response time while satisfying a storage usage constraint by adding one or more levels to the temporal hierarchy; and (c) minimize a query response time for frequently applied queries that do not shift in time while satisfying the storage usage constraint by adding one or more nodes to the temporal hierarchy, wherein the resulting temporal hierarchy that achieves at least one of (a), (b) and (c) is identified as an optimal temporal hierarchy; wherein the temporal hierarchy is stored in a memory and the iterative modifying of the temporal hierarchy is performed by a processor device.
1. A method, comprising: for a temporal hierarchy of aggregation statistics associated with a plurality of database records, wherein the temporal hierarchy comprises two or more aggregation statistics levels and each level has a different temporal granularity associated therewith, iteratively modifying the temporal hierarchy to at least one of: (a) minimize a storage usage cost while satisfying a temporal hierarchy update constraint and a query response time constraint by removing one or more levels of the temporal hierarchy; (b) reduce a temporal hierarchy update time and a query response time while satisfying a storage usage constraint by adding one or more levels to the temporal hierarchy; and (c) minimize a query response time for frequently applied queries that do not shift in time while satisfying the storage usage constraint by adding one or more nodes to the temporal hierarchy, wherein the resulting temporal hierarchy that achieves at least one of (a), (b) and (c) is identified as an optimal temporal hierarchy; wherein the temporal hierarchy is stored in a memory and the iterative modifying of the temporal hierarchy is performed by a processor device. 5. The method of claim 1 , wherein the optimal temporal hierarchy is used to materialize pre-computed results that are used to accelerate a response time to a query workload in the presence of newly inserted database records.
0.508734
7,577,901
24
25
24. A computer system, comprising: a bus; a data storage device coupled to the bus; and a processor coupled to the data storage device, the processor operable to receive instructions which, when executed by the processor, cause the processor to perform a method comprising: reproducing a paper document via a document reproduction system; creating a multimedia annotation for the paper document during the reproduction, wherein the multimedia annotation is an audio sound that is input by a user via a microphone of the document reproduction system; creating a first multimedia document by combining the paper document and the multimedia annotation, wherein the first multimedia document includes a first bar code that encodes the audio sound therein; processing the first multimedia document; decoding the multimedia annotation from the first bar code; extracting the audio sound; and playing the audio sound via a multimedia player.
24. A computer system, comprising: a bus; a data storage device coupled to the bus; and a processor coupled to the data storage device, the processor operable to receive instructions which, when executed by the processor, cause the processor to perform a method comprising: reproducing a paper document via a document reproduction system; creating a multimedia annotation for the paper document during the reproduction, wherein the multimedia annotation is an audio sound that is input by a user via a microphone of the document reproduction system; creating a first multimedia document by combining the paper document and the multimedia annotation, wherein the first multimedia document includes a first bar code that encodes the audio sound therein; processing the first multimedia document; decoding the multimedia annotation from the first bar code; extracting the audio sound; and playing the audio sound via a multimedia player. 25. The computer system of claim 24 , wherein a location indicator associated with the multimedia annotation is placed on the first multimedia document, wherein the location indicator indicates where the multimedia annotation can be retrieved and played.
0.813783
10,147,051
1
8
1. A method, in a data processing system comprising a processor and a memory having instructions which, when executed by the processor, cause the processor to generate candidate answers to an explanatory question, the method comprising: responsive to identifying an input question as the explanatory question, decomposing, by the data processing system, the explanatory question into one or more explanatory queries; identifying, by the data processing system, one or more passages within a corpus of information that comprise an explanatory clause that provides an explanatory answer to the explanatory question based on pre-determined explanatory clause terms, wherein a passage within the one or more passages within the corpus of information that comprises the explanatory clause is identified by the method comprising: comparing, by the data processing system, each identified clause within a passage to a set of previously identified explanatory clauses; and responsive to the identified clause within a passage corresponding to one of the set of previously identified explanatory clauses, tagging, by the data processing system, the clause within the passage with an ‘EXPLANATORY’ tag; receiving, by the data processing system, hypothesis evidence with one or more passages comprising explanatory clauses from the corpus of information; generating, by the data processing system, one or more candidate explanatory answers based on hypothesis evidence; ranking and merging, by the data processing system, the one or more candidate explanatory answers; and outputting, by the data processing system, the one or more candidate explanatory answers.
1. A method, in a data processing system comprising a processor and a memory having instructions which, when executed by the processor, cause the processor to generate candidate answers to an explanatory question, the method comprising: responsive to identifying an input question as the explanatory question, decomposing, by the data processing system, the explanatory question into one or more explanatory queries; identifying, by the data processing system, one or more passages within a corpus of information that comprise an explanatory clause that provides an explanatory answer to the explanatory question based on pre-determined explanatory clause terms, wherein a passage within the one or more passages within the corpus of information that comprises the explanatory clause is identified by the method comprising: comparing, by the data processing system, each identified clause within a passage to a set of previously identified explanatory clauses; and responsive to the identified clause within a passage corresponding to one of the set of previously identified explanatory clauses, tagging, by the data processing system, the clause within the passage with an ‘EXPLANATORY’ tag; receiving, by the data processing system, hypothesis evidence with one or more passages comprising explanatory clauses from the corpus of information; generating, by the data processing system, one or more candidate explanatory answers based on hypothesis evidence; ranking and merging, by the data processing system, the one or more candidate explanatory answers; and outputting, by the data processing system, the one or more candidate explanatory answers. 8. The method of claim 1 , wherein identifying the passage within the one or more passages within the corpus of information that comprises the explanatory clause is further identified by the method comprising: appending, by the data processing system, the identified clause to a set of previously identified explanatory clauses.
0.714783
7,840,577
1
6
1. A system for processing search query submissions, the system comprising: a data set that maps individual terms to sets of related terms, said data set stored in a computer memory and being based at least partly on a term co-occurrence analysis, said data set comprising a plurality of entries, each of which maps an individual term to a respective set of related terms; and a query processing system that uses the data set to identify alternative spellings of misspelled search terms of multiple-term search queries, said query processing system comprising computer hardware that executes software.
1. A system for processing search query submissions, the system comprising: a data set that maps individual terms to sets of related terms, said data set stored in a computer memory and being based at least partly on a term co-occurrence analysis, said data set comprising a plurality of entries, each of which maps an individual term to a respective set of related terms; and a query processing system that uses the data set to identify alternative spellings of misspelled search terms of multiple-term search queries, said query processing system comprising computer hardware that executes software. 6. The system of claim 1 , wherein the data set comprises search term correlation data for each of a plurality of search fields.
0.834197
8,631,028
8
9
8. A computer implemented method for processing one or more inputted XPath queries against one or more inputted XML documents stored in a plurality of computer hardware processors (CPU) and memory, comprising: loading an XML document into computer memory; generating a first index that comprises unique root to leaf paths (SUM-Index), a second index that comprises tree nodes grouped by unique path identifiers (PS-Index), and a third index that comprises values of the tree nodes grouped by path identifiers (PV-Index) from the XML document, wherein the SUM-Index, the PS-Index and the PV-Index each have at least one root to leaf unique path identifier (PID) and are linked together by at least one PID originating from the SUM-Index; annotating the SUM-Index with PID for each unique root to leaf path; storing the SUM-Index, PS-Index, and PV-Index on column stores distributed across a plurality of CPUs partitioned by PID; parsing and splitting an XPath query at articulation points into multiple partial queries; determining cursor type index access methods; executing the multiple partial queries against the SUM-Index to generate a list of applicable one or more PID values in the PS-Index and the PV-Index that satisfies partial query segments; generating a set of ancestor-descendant PID identifiers list from an associated SUM-Index tree by extracting annotated PID values to initialize a simple cursor or a multi-predicate branching path cursor (MPBP); searching the PS-Index that is a search index that is partitioned on PID; generating a result sequence using the simple cursor or the MPBP cursor from a PS-Index tree; searching the PV-Index that is a search index that is partitioned on PID; filtering the result sequence of nodes by using a PV-Index tree; and producing one or more outputted XML documents from a final result sequence of nodes.
8. A computer implemented method for processing one or more inputted XPath queries against one or more inputted XML documents stored in a plurality of computer hardware processors (CPU) and memory, comprising: loading an XML document into computer memory; generating a first index that comprises unique root to leaf paths (SUM-Index), a second index that comprises tree nodes grouped by unique path identifiers (PS-Index), and a third index that comprises values of the tree nodes grouped by path identifiers (PV-Index) from the XML document, wherein the SUM-Index, the PS-Index and the PV-Index each have at least one root to leaf unique path identifier (PID) and are linked together by at least one PID originating from the SUM-Index; annotating the SUM-Index with PID for each unique root to leaf path; storing the SUM-Index, PS-Index, and PV-Index on column stores distributed across a plurality of CPUs partitioned by PID; parsing and splitting an XPath query at articulation points into multiple partial queries; determining cursor type index access methods; executing the multiple partial queries against the SUM-Index to generate a list of applicable one or more PID values in the PS-Index and the PV-Index that satisfies partial query segments; generating a set of ancestor-descendant PID identifiers list from an associated SUM-Index tree by extracting annotated PID values to initialize a simple cursor or a multi-predicate branching path cursor (MPBP); searching the PS-Index that is a search index that is partitioned on PID; generating a result sequence using the simple cursor or the MPBP cursor from a PS-Index tree; searching the PV-Index that is a search index that is partitioned on PID; filtering the result sequence of nodes by using a PV-Index tree; and producing one or more outputted XML documents from a final result sequence of nodes. 9. The method of claim 8 , wherein the executing query searches the PS-Index that is a search index that is partitioned on PID, or has a composite key of PID and preorder.
0.734472
8,972,322
1
6
1. An apparatus for extending a default model for user context reasoning, the apparatus comprising: a first generating unit configured to generate first relationship information about a relationship between a concept included in the default model and a concept included in a linked model by matching the concept of the default model with the concept of the linked model based on conception relationship information, the conception relationship information comprising at least one of information about identical concepts, information about similar concepts and information about upper and lower concepts; a second generating unit configured to generate second relationship information about a relationship between a concept included in the linked model and a concept included in linked data by matching the concept of the linked model with the concept of the linked data; and a combining unit configured to combine the default model and the linked model based on the first relationship information, and to combine the linked model and the linked data based on the second relationship information; and an internal memory located inside of the apparatus and configured to store the default model.
1. An apparatus for extending a default model for user context reasoning, the apparatus comprising: a first generating unit configured to generate first relationship information about a relationship between a concept included in the default model and a concept included in a linked model by matching the concept of the default model with the concept of the linked model based on conception relationship information, the conception relationship information comprising at least one of information about identical concepts, information about similar concepts and information about upper and lower concepts; a second generating unit configured to generate second relationship information about a relationship between a concept included in the linked model and a concept included in linked data by matching the concept of the linked model with the concept of the linked data; and a combining unit configured to combine the default model and the linked model based on the first relationship information, and to combine the linked model and the linked data based on the second relationship information; and an internal memory located inside of the apparatus and configured to store the default model. 6. The apparatus of claim 1 , wherein the first relationship information and the second relationship information include at least one of identity information, similarity information, and hierarchy information.
0.748193
9,292,522
1
2
1. A method for performing operations on structured text stored in a computing system, the method comprising: converting structured text in a computing system into strings of tokens representing text formats; identifying repeating patterns of said text formats within said structured computer text; determining pattern transformation procedures for transforming text strings within said structured computer text; building transformation algorithms for performing said pattern transformation procedures on said text strings; applying said algorithms to said text strings within said structured computer text that match a pattern; whereby said structured computer text is transformed from a first pattern to a second pattern; accepting a first input array of strings containing actual lexeme types; determining all possible text patterns within said first input array; building a third output array of strings representing text patterns in said structured computer text; accepting said third output array of strings representing text patterns in said structured computer text; removing all text patterns within said third output array containing text patterns of smaller size; removing all patterns within said third output array containing text patterns that can be generated by shifting elements in other text patterns.
1. A method for performing operations on structured text stored in a computing system, the method comprising: converting structured text in a computing system into strings of tokens representing text formats; identifying repeating patterns of said text formats within said structured computer text; determining pattern transformation procedures for transforming text strings within said structured computer text; building transformation algorithms for performing said pattern transformation procedures on said text strings; applying said algorithms to said text strings within said structured computer text that match a pattern; whereby said structured computer text is transformed from a first pattern to a second pattern; accepting a first input array of strings containing actual lexeme types; determining all possible text patterns within said first input array; building a third output array of strings representing text patterns in said structured computer text; accepting said third output array of strings representing text patterns in said structured computer text; removing all text patterns within said third output array containing text patterns of smaller size; removing all patterns within said third output array containing text patterns that can be generated by shifting elements in other text patterns. 2. The method according to claim 1 , further including: accepting a first input string of said structured computer text; transforming said structured computer text to lexeme strings containing lexeme types; building a first output array of said lexeme strings containing all the lexeme types of said first input string.
0.72309
9,367,526
14
15
14. The method of claim 13 further comprising reassigning a group of words to another class by: identifying, based on the received word, a set of words occurring in a similar language context; and reassigning the received word and the set of words to another class by assigning a common class identifier.
14. The method of claim 13 further comprising reassigning a group of words to another class by: identifying, based on the received word, a set of words occurring in a similar language context; and reassigning the received word and the set of words to another class by assigning a common class identifier. 15. The method of claim 14 wherein optimizing further comprises receiving, in an iterative manner, clusters of words and reassigning class identifiers for increasing the likelihood of the language model predicting a cluster in the production application.
0.962724
9,870,591
10
11
10. The system of claim 9 , wherein said credentialing engine evaluates said credentialed expertise (E) for said expert based on an empirical relation, said empirical relation being: E = (P_F11 + P_F12 + . . . + P_F1N) × (P_F21 + P_F22 + . . . + P_F2N) × . . . × (P_FZ1 + P_FZ2 + . . . + P_FZN), wherein: P_F11 is nonzero and represents a credentialed segmented profile score for a first segmented profile of a first expert by a first respondent, P_F12 is nonzero and represents a credentialed segmented profile score for said first segmented profile of said first expert by a second respondent, P_F1N is nonzero and represents a credentialed segmented profile score for said first segmented profile of said first expert by an Nth respondent, P_F21 is nonzero and represents a credentialed segmented profile score for a second segmented profile of said first expert by said first respondent, P_F22 is nonzero and represents a credentialed segmented profile score for said second segmented profile of said first expert by said second respondent, P_F2N is nonzero and represents a credentialed segmented profile score for said second segmented profile of said first expert by said Nth respondent, P_FZ1 is nonzero and represents a credentialed segmented profile score for a Zth segmented profile of said first expert by said first respondent, P_FZ2 is nonzero and represents a credentialed segmented profile score for said Zth segmented profile of said first expert by said second respondent, P_FZN is nonzero and represents a credentialed segmented profile score for said Zth segmented profile of said first expert by said Nth respondent, and wherein said empirical relation above considers profile scores for entire segmented digital profiles from 1 to Z, wherein said empirical relation above considers all respondents from 1 to N.
10. The system of claim 9 , wherein said credentialing engine evaluates said credentialed expertise (E) for said expert based on an empirical relation, said empirical relation being: E = (P_F11 + P_F12 + . . . + P_F1N) × (P_F21 + P_F22 + . . . + P_F2N) × . . . × (P_FZ1 + P_FZ2 + . . . + P_FZN), wherein: P_F11 is nonzero and represents a credentialed segmented profile score for a first segmented profile of a first expert by a first respondent, P_F12 is nonzero and represents a credentialed segmented profile score for said first segmented profile of said first expert by a second respondent, P_F1N is nonzero and represents a credentialed segmented profile score for said first segmented profile of said first expert by an Nth respondent, P_F21 is nonzero and represents a credentialed segmented profile score for a second segmented profile of said first expert by said first respondent, P_F22 is nonzero and represents a credentialed segmented profile score for said second segmented profile of said first expert by said second respondent, P_F2N is nonzero and represents a credentialed segmented profile score for said second segmented profile of said first expert by said Nth respondent, P_FZ1 is nonzero and represents a credentialed segmented profile score for a Zth segmented profile of said first expert by said first respondent, P_FZ2 is nonzero and represents a credentialed segmented profile score for said Zth segmented profile of said first expert by said second respondent, P_FZN is nonzero and represents a credentialed segmented profile score for said Zth segmented profile of said first expert by said Nth respondent, and wherein said empirical relation above considers profile scores for entire segmented digital profiles from 1 to Z, wherein said empirical relation above considers all respondents from 1 to N. 11. The system of claim 10 , wherein said document reviewing and scoring engine evaluates aggregate crowdsourced document score (ACDS) based on credentialed expertise and other attributes of said crowdsourced experts, based on an empirical relation, said empirical relation being: ACDS = {(E_1 + E_2 + E_3 + . . . + E_X)W_1 + (R_1 + R_2 + R_3 + . . . + R_X)W_2 + (O_1 + O_2 + O_3 + . . . + O_X)W_3}(D_1 + D_2 + D_3 + . . . + D_X)CI, wherein: E_1, E_2, E_3, . . . E_X represent respective credentialed expertise of X number of crowdsourced experts, R_1, R_2, R_3, . . . R_X represent respective reputation of said X number of crowdsourced experts, O_1, O_2, O_3, . . . O_X represent respective officiality of said X number of crowdsourced experts, D_1, D_2, D_3, . . . D_X represent respective document scores earned by said X number of crowdsourced experts, and CI represents a non-linear crowdsourcing index.
0.901632
8,219,226
1
2
1. A device comprising: a codec configured to be coupled to a High Definition Audio (HDA) bus; wherein the codec includes a memory configured to store one or more overriding responses, each of which is associated with a corresponding HDA verb; an HDA interface configured to receive a first HDA verb from the HDA bus; a programmable processor configured to determine whether the first HDA verb is associated with one of the overriding responses stored in the memory; wherein if the programmable processor determines that the first HDA verb is associated with one of the overriding responses stored in the memory, then the programmable processor causes the associated one of the stored overriding responses to be provided to the HDA bus; and wherein if the programmable processor determines that the first HDA verb is not associated with one of the overriding responses stored in memory, then a hardwired response associated with the first HDA verb is provided to the HDA bus.
1. A device comprising: a codec configured to be coupled to a High Definition Audio (HDA) bus; wherein the codec includes a memory configured to store one or more overriding responses, each of which is associated with a corresponding HDA verb; an HDA interface configured to receive a first HDA verb from the HDA bus; a programmable processor configured to determine whether the first HDA verb is associated with one of the overriding responses stored in the memory; wherein if the programmable processor determines that the first HDA verb is associated with one of the overriding responses stored in the memory, then the programmable processor causes the associated one of the stored overriding responses to be provided to the HDA bus; and wherein if the programmable processor determines that the first HDA verb is not associated with one of the overriding responses stored in memory, then a hardwired response associated with the first HDA verb is provided to the HDA bus. 2. The device of claim 1 , wherein the codec is configured to generate an interrupt associated with receipt of the first HDA verb, and wherein the programmable processor of the codec is configured to determine whether the first HDA verb is associated with one of the overriding responses stored in the memory in response to the interrupt.
0.665347
8,892,417
7
8
7. The computer program product of claim 6 wherein the story generation request comprises data that is indicative of at least one story angle from the angle set data structure.
7. The computer program product of claim 6 wherein the story generation request comprises data that is indicative of at least one story angle from the angle set data structure. 8. The computer program product of claim 7 wherein the story generation request further comprises data indicative of at least one member of the group consisting of (1) the subject, and (2) at least a portion of the processed data.
0.915441
8,306,806
18
19
18. A tangible computer readable memory storing computer-executable instructions that, when executed, cause one or more processors to perform acts comprising: retrieving one or more bilingual web pages based at least on a search term, each bilingual web page including a plurality of terms that comprise the search term and one or more additional terms in a first language; forming a plurality of candidate translation pairs by selecting one or more candidate translations for each of the plurality of terms, and forming each candidate translation pair by including one of the one or more candidate translations and one of the plurality of terms; extracting one or more translation layout patterns from the plurality of candidate translation pairs; computing one or more features based on the plurality of terms, the one or more features including a feature that identifies lexical boundaries of terms, the feature being a product of a symmetric conditional probability and a context dependency that are derived from frequencies of words in the plurality of terms; deriving a term translation in a second language for the search term in the first language based on a hidden conditional random field (HCRF) model that includes the one or more candidate translations, the one or more translation layout patterns, and the one or more features, the HCRF model including a hidden variable that represents the one or more translation layout patterns; and displaying the term translation and the search term to a user.
18. A tangible computer readable memory storing computer-executable instructions that, when executed, cause one or more processors to perform acts comprising: retrieving one or more bilingual web pages based at least on a search term, each bilingual web page including a plurality of terms that comprise the search term and one or more additional terms in a first language; forming a plurality of candidate translation pairs by selecting one or more candidate translations for each of the plurality of terms, and forming each candidate translation pair by including one of the one or more candidate translations and one of the plurality of terms; extracting one or more translation layout patterns from the plurality of candidate translation pairs; computing one or more features based on the plurality of terms, the one or more features including a feature that identifies lexical boundaries of terms, the feature being a product of a symmetric conditional probability and a context dependency that are derived from frequencies of words in the plurality of terms; deriving a term translation in a second language for the search term in the first language based on a hidden conditional random field (HCRF) model that includes the one or more candidate translations, the one or more translation layout patterns, and the one or more features, the HCRF model including a hidden variable that represents the one or more translation layout patterns; and displaying the term translation and the search term to a user. 19. The tangible computer readable memory of claim 18 , wherein the deriving includes: forming a plurality of label sequences based on the one or more candidate translations, the one or more translation layout patterns, and one or more features; computing a probability for each label sequence that indicates likelihood that the label sequence contains the term translation; and obtaining the term translation from one of the plurality label sequences that has a highest probability.
0.501033
9,786,201
5
6
5. A method for transforming audio information to a haptic language expressed through a wearable article, including a vest or one or more straps, comprising: using a signal processor to receive an audio input and simultaneously generate a plurality of electrical driving signals according to a predefined mapping from audio signals comprising portions of said audio input to each of said plurality of electrical driving signals; and using the plurality of electrical driving signals to drive a network of a plurality of vibratory motors incorporated into the wearable article, wherein the plurality of electrical driving signals generated by the signal processor are used to drive the plurality of electrical vibratory motors according to a predefined mapping of audio signals comprising portions of said audio input to a plurality of different regions of the wearable article.
5. A method for transforming audio information to a haptic language expressed through a wearable article, including a vest or one or more straps, comprising: using a signal processor to receive an audio input and simultaneously generate a plurality of electrical driving signals according to a predefined mapping from audio signals comprising portions of said audio input to each of said plurality of electrical driving signals; and using the plurality of electrical driving signals to drive a network of a plurality of vibratory motors incorporated into the wearable article, wherein the plurality of electrical driving signals generated by the signal processor are used to drive the plurality of electrical vibratory motors according to a predefined mapping of audio signals comprising portions of said audio input to a plurality of different regions of the wearable article. 6. The method of claim 5 , further comprising transforming the audio data into the plurality of electrical driving signals by obtaining audio data in a prescribed format; organizing the audio data into tracks; mapping the tracks to one of the plurality of different regions of the wearable article; and using the respective tracks of data to drive the motors in the corresponding region.
0.501289
8,321,371
25
26
25. The article of claim 24 , wherein instructions causing a machine to determine comprises: performing an initialization.
25. The article of claim 24 , wherein instructions causing a machine to determine comprises: performing an initialization. 26. The article of claim 25 wherein instructions causing a machine to link the plurality of attributes to the plurality of response templates comprises instructions to: form a megacategory, with the megacategory linking a combination of attributes to one of the plurality of response templates using one of the plurality of Boolean expressions.
0.789474
9,244,921
1
5
1. A term relationship identification method used in connection with a social network having a plurality of social networking partners, the method comprising: identifying by a processor at least a first term in a first search phrase of a first electronic search performed by a first one of the social networking partners; identifying by the processor a first document as a search result of the first electronic search; identifying by the processor at least a second term in a second search phrase of a second electronic search performed by a second one of the social networking partners, wherein the second term is not included in the first search phrase and wherein the second search phrase includes the first term and the second term; identifying by the processor a search result set generated in response to the second search; determining by the processor an association between the first search phrase and the second search phrase responsive to the second one of the social networking partners making a selection from the search result set, the selection being the same document as the first document from the search result of the first electronic search, wherein the first document does not include the second term; and improving by the processor a subsequent search result based upon the determined association of the first search phrase with the second search phrase, wherein the second term not included in the first search phrase and not included in the first document is added to an index of the first document to include the first document in the subsequent search result.
1. A term relationship identification method used in connection with a social network having a plurality of social networking partners, the method comprising: identifying by a processor at least a first term in a first search phrase of a first electronic search performed by a first one of the social networking partners; identifying by the processor a first document as a search result of the first electronic search; identifying by the processor at least a second term in a second search phrase of a second electronic search performed by a second one of the social networking partners, wherein the second term is not included in the first search phrase and wherein the second search phrase includes the first term and the second term; identifying by the processor a search result set generated in response to the second search; determining by the processor an association between the first search phrase and the second search phrase responsive to the second one of the social networking partners making a selection from the search result set, the selection being the same document as the first document from the search result of the first electronic search, wherein the first document does not include the second term; and improving by the processor a subsequent search result based upon the determined association of the first search phrase with the second search phrase, wherein the second term not included in the first search phrase and not included in the first document is added to an index of the first document to include the first document in the subsequent search result. 5. The method of claim 1 , wherein the association between the first search phrase and the second search phrase only influences relevance and inclusion in search results for a subset of social networking partners.
0.620996