content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
SELF CHECK-IN SET UP STEPS Since Self Check-in is completely automated from start to finish and integrated with the booking process in MyPMS, there are only three steps for you to complete in MyPMS to set up Self Check-in for your site. How you customize these Letters is dependent on how you want run your Self Check-in process. To setup Self Check-in for your property, follow these steps. Read below for instructions on each step. STEP 3: Set Default Email Letters (SMS Messages)Letters used for Self Check-in "Self Check-in Exempt" and "eSign Exempt" Each booking has an option to be flagged as "Self Check-in Exempt" and/or "eSign Exempt" and these settings are managed via the Booking Data tab on the specific booking desired to be 'exempted' from the automated Self Check-in process. When flagged "Self Check-in Exempt", that particular booking will not send Self Check-in Letter(s) and/or SMS messages. When flagged "eSign Exempt", that particular booking will not require the eSign document be completed to enable a Self Checkin-in, but it will still be allowed Self Check unless "Self Check-in Exempt" is also flagged on. To lean how to use this setting see Self Check-in | Exempt Settings STEP 1: Enter Room Entry Information in MyPMS The Room Entry Instructions are displayed to the Guest in MyBooking and automatically sent to the Guest via email and/or SMS in the "Self Check-in Complete" Letter when the guest is checked-in. The instructions displayed and sent to guests is customized to each Room in the “Room Notes” field of each Room. The “Room Notes” merge field is then inserted as into the Email Letter and/or SMS Message. Therefore, you need to enter the Room Entry Instructions you want sent to guests in the “Room Notes” field for every Room in MyPMS, even if the instructions are the same for all Rooms. For step-by-step instructions, see Add or Edit Rooms STEP 2: Customize the Default Letters used in the Self Check-in Process. These are the Letters that will be used to communicate to Guests during the Self Check-in process. They are a very important part of the process as Guest Communication is crucial for the the Self Check-in process to work as seamlessly as possible. The Letters used for email are required for the Self Check-in process to function. SMS Messaging is optional and requires a subscription. There are three Letters that need to be created for the Self Check-in process, four if you are using eSign Digital Document Signing (Note: an SMS Message must also be created for each Letter if you are using SMS). See detailed information on Creating Default Letters We have provided Default Template Letters for both email and SMS Messages for you to customize in SETUP | LETTERS area of MyPMS, they are listed at the bottom of your Letters and SMS Messages Lists. Please review the Templates and feel free to use them “as-is” or customize to your property by adding or editing text, images or merge fields. See detailed information on Creating Default Letters If you are using SMS Messages, then see Add or Edit SMS Messages. STEP 3: Set Default Email Letters (SMS Messages)Letters used for Self Check-in". For step-by-step instructions, see Self Check-in | Default Letters Optionally, you can also schedule an additional "Self Check-in Start" Letter to be sent to the Guest before the check-in date using Auto Letters To get started, see SMS Module Pricing Feel free to use the Default Templates “as-is” or customize to your property by adding or editing text, images or merge fields. STEP 4: Set the Settings used to control the timing of the Self Check-in Process and the "eSign Digital Signature" Settings Self Check-in Communication Timing The "Communication" setting is used to control the START TIME of the Email and/ SMS sent to the guest to start the Self Check-in process on the arrival date of the booking. This setting controls the automated system trigger that automatically sends the "Start Self Check-in" email and/or SMS to the all guests on arrival date. For bookings made before or after this time setting, see There are two settings, one to control the time that the email and one to control when the SMS message. "eSign Digital Signature" Settings You can enable eSign Digital Document Signing as a required part of the Self Check-in process. This setting will automatically send a request for digital signature and can become a requirement for Self Check-in. Use the "esign" setting to control how the esign communication process functions with Self Check-in. There are four settings to choose from:
https://docs.bookingcenter.com/display/MYPMS/Self+Check-in+%7C+Setup
2021-01-15T20:46:56
CC-MAIN-2021-04
1610703496947.2
[]
docs.bookingcenter.com
Last modified: May 13, 2020 Overview The /usr/local/cpanel/scripts/balance_linked_node_quotas script lets you enforce disk use quotas for distributed cPanel accounts. These accounts use linked cPanel & WHM nodes. The system aggregates the disk usage for each cPanel account from the both the parent node and its child nodes. The script then adjusts each cPanel account’s disk use quota on both the parent and child nodes. The quota is closer to the actual disk space the cPanel account uses on each server. This helps ensure that distributed cPanel accounts do not exceed their quota limits. For example, you have a cPanel user with a disk use quota of 10 gigabytes (GB). This account uses 7.5 GB for mail services. You decide to offload the mail services to a cPanel & WHM child node. When you run this script, the system assigns 75% of the user’s disk quota to the child node, and 25% to the parent node. Run the script To run this script on the command line, use the following format: /usr/local/cpanel/scripts/balance_linked_node_quotas [options] Options Use the following options with this script:
https://docs.cpanel.net/whm/scripts/the-balance_linked_node_quotas-script/
2021-01-15T20:13:30
CC-MAIN-2021-04
1610703496947.2
[]
docs.cpanel.net
Overview Since NativeScript for Android embeds a JavaScript virtual machine (namely Google's V8) to process the JavaScript code, it also takes advantage of the debugger tools available for this virtual machine - the Chrome Developer Tools. The article assumes that you are familiar with JavaScript debugging in the Chrome Developer Tools. You will need the Chrome web browser installed locally. The current implementation supports two major scenarios: - Start debugging - starts an application with the debugger enabled - Attach/Detach debugger - attach/detach the debugger to a running application Start an application with the debugger attached The following command will build, deploy and run the application with the debugger attached: tns debug android Behind the scenes the debug command will build and start the target application and then it will find an available port and enable V8's debugger on that port. Finally, you'll get a url starting with chrome-devtools:// to copy/paste into Chrome to start the debug session. Features - Breakpoint debugging, stepping - Inline source maps support for transpiled code - Console evaluation Setting breakpoints in JavaScript The global debugger sets a V8 breakpointIn the script source. It is equivalent to setting a "manual" breakpoint in the Sources tab of Chrome DevTools. See this article for more information. Attach the debugger If you have a running application you can attach the debugger with the following command: tns debug android --start As in the previous scenario, the debugcommand will configure the V8 debugger port, forward the port, and output a url to paste into Chrome. Detach the debugger Detaching the debugger is as simple as closing the chrome-devtools tab. Notes The current implementation has hard-coded 30 seconds timeout for establishing a connection between the command line tool and the device/emulator. Debugging sources different than JavaScript (TypeScript, CoffeeScript, etc.) is only possible when source maps are inlined by the transpiler.
https://docs.nativescript.org/angular/core-concepts/android-runtime/debug/debug-cli
2021-01-15T21:23:29
CC-MAIN-2021-04
1610703496947.2
[array(['./debug-cli-screenshot.png', 'Image1'], dtype=object)]
docs.nativescript.org
Stacker has a flexible permissions system designed to allow you to share records, fields and actions with the right users. It can take a moment to get your head around how it works. This page will take you through it one step at a time. To set the permissions for a table navigate to Setup Home, select the table and then pick the Permissions tab. You will see the permissions that are currently being applied for your table: This is a single permission rule and it is broken down into sections: Which records: this is determined by the permission filter Which actions: will users be able to update and create records Which fields: exactly which fields are included The records, actions and fields all work together to define the permissions. The permission rule says that users will be able to perform these actions and these records with this access to these fields. A permission rule can grant access to All Records of a table, or only Some Records. In the case of Some Records a permissions filter is used to determine which records are available to each user. The format of these filters is a single condition that must match between the record and a users record. For example if you would like a user to only see Books records for which they are the writer the permissions filter would be: Property > Writer must match User Another common example would be if you want a user to only see items that belong to the same team as the user. Imagine in a B2B scenario your users work for agencies and you want all of your users who work at a particular agency to be able to see all of the Video records that are related to that agency. In this case the permissions filter would be: Video > Agency must match User > Agency You can match on any relationship fields. For the Airtable data source this includes Airtable lookup fields which are treated as read-only relationships in Stacker. You may need to make some changes to your Airtable structure to permission the records in the way that you want. If you need help doing this, get in touch. Every permission rule includes the read action, so your users will be able to read the records that have been matched by the permissions filter. There are two additional actions that you can toggle on and off: Update records Create records These settings control whether Stacker will show your users an edit button or a create button respectively. Field permissions is where we get into the real detail. For every enabled field on your table you can choose whether your users will be able to read the field, edit the field or set the field while creating a record. Stacker automatically disallows combinations that don't make sense here, for example you can't edit data that you can't read. Simply click on the ticks to turn them into crosses and vice versa. One key principle of permissions in Stacker is that they are additive. Permission rules can grant permissions but they cannot take access away. This means that if a user cannot do something it is because they don't have a permission rule enabling them to do it, it is not because there is a permission rule stopping them doing it. This may seem like a subtle difference but with multiple permission rules applying to users the distinction does make a difference. Understanding this point will help you avoid any permission surprises. Permission rules get more functionality in Advanced Permissions. Read more: About the differences between Standard and Advanced Permissions.
https://docs.stacker.app/customizing-your-app/controlling-access/how-do-permissions-work
2021-01-15T20:24:06
CC-MAIN-2021-04
1610703496947.2
[]
docs.stacker.app
Converts each character in the text into proper case, meaning it will capitalize the first first letter of every word and convert the rest into lowercase. proper( text ) text: (Text) The text to convert into proper case. Text You can experiment with this function in the test box below. Test Input Test Output Test Output proper("coNvert eaCH cHaRacter iNTo ProPeR caSe") returns Convert Each Character Into Proper Case On This Page
https://docs.appian.com/suite/help/18.4/fnc_text_proper.html
2021-01-15T20:28:41
CC-MAIN-2021-04
1610703496947.2
[]
docs.appian.com
You are viewing the RapidMiner Studio documentation for version 9.5 - Check here for latest version Forecast Validation (Time Series) SynopsisThis operator performs a validation of a forecast model, which predicts the future values of a time series. Description The operator creates sliding windows from the input time series, specified by the time series attribute parameter. In each validation step the training window is provided at the inner training set port of the Training subprocess. Its size is defined by the parameter window size. The training window can be used to train a forecast model (e.g. an ARIMA model, by the ARIMA operator), which has to be provided to the model port of the Training subprocess. The inner test set port of the Testing subprocess, contains the values of the test window. Its size is defined by the parameter horizon size. The forecast model of the Training subprocess is used to predict these values. Contrary to the Cross Validation operator the number of values which has to be forecasted by the forecast model has to be equal to the horizon size. Thus, the forecasted values are already added to the ExampleSet provided at the test set port, an additional Apply Forecast operator is not necessary. The attribute holding the test window values has the label role, while the attribute holding the forecasted values has the prediction role. Thus a Performance operator (e.g. Performance (Regression)) can be used to calculate the performance of the forecast. For the next validation fold, the training and the test windows are shifted by k values, defined by the parameter step size. If the parameter no overlapping windows is set to true, the step size is set to a value so that neither the training window nor the test window are overlapping (step size = window size + horizon size). The Forecast Validation operator delivers the forecast model of the last fold, which was trained on the last training window in the time series. It also deliver all test set ExampleSets, appended to one ExampleSet and the averaged Performance Vector. This operator works on all time series (numerical, nominal and time series with date time values). Input example set (IOObject) The ExampleSet which contains the time series data as an attribute. Output model (Model) The forecast model of the last fold, which was trained on the last training window in the time series. example set (IOObject) The ExampleSet that was given as input is passed through without changes. test result set (IOObject) All test set ExampleSets, appended to one ExampleSet. performance (Performance Vector) This is an expandable port. You can connect any performance vector (result of a Performance operator) to the result port of the inner Testing subprocess. The performance output port delivers the average of the performances over all folds of the validation Parameters - time_series_attribute The time series attribute holding the time series values for which the forecast model shall be build. The required attribute can be selected from this option. The attribute name can be selected from the drop down box of the parameter if the meta data is known the training window. The ExampleSet provided at the training set port of the Training subprocess will have window size number of examples. The window size has to be smaller or equal to the length of the time series.Range: - no_overlapping_windows If this parameter is set to true, the parameter stepsize is determined automatically, so that all windows and horizons don't overlap. The stepsize is set to window size + horizon size and horizon size.Range: - horizon_size The number of values in the test window. The ExampleSet provided at the test set port of the Testing subprocess will have horizon size number of examples. It will have an attribute holding the original time series values in the test window (attribute name is the name of the time series attribute parameter), and an attribute holding the values in the test window, forecasted by the forecast model from the Training subprocess (attribute name is forecast of <time series attribute>). In addition, the ExampleSet has an attribute with the forecast position, ranging from 1 to horizon size. If the parameter has indices is set to true the ExampleSet has also an attribute holding the last index value of the training window.Range: - enable_parallel_execution This parameter enables the parallel execution of the inner processes. Please disable the parallel execution if you run into memory problems.Range: Tutorial Processes Validate the performance of an ARIMA model for Lake Huron In this process the Forecast Validation operator is used to validate the performance of an ARIMA model for the Lake Huron data set. The ARIMA model is trained on a training window with a size of 20. This model is used to forecast the next 5 ( horizon size ) values of the time series. The forecasted values are compared to the original ones, to calculate the performance of the forecast model. The step size is set to 5, so the training and test windows are shifted by 5 in each validation fold.
https://docs.rapidminer.com/9.5/studio/operators/modeling/time_series/validation/forecast_validation.html
2021-01-15T21:20:46
CC-MAIN-2021-04
1610703496947.2
[]
docs.rapidminer.com
Stacker allows you to create instant apps you can use to share data with and interact with your customers, partners, students, staff or anyone else. Stacker lets you make apps that look like this: Or like this: Here's a quick demo of what Stacker can do. You can see more tutorial videos on our Learn page. Stacker already has the features that every app needs: Support for the data you already have in Airtable and Google sheets Lists to display data Forms to collect or edit data User management, registration & login Stacker works on mobile, tablets and desktop. You can connect the data you already have from Airtable or Google Sheets. Stacker's pricing plans are available on our website. If you have any questions about pricing then don't hesitate to contact us to discuss.
https://docs.stacker.app/
2021-01-15T21:36:39
CC-MAIN-2021-04
1610703496947.2
[]
docs.stacker.app
Parsing Text, JSON, YAML, XML, HTML Files and Strings¶ Introduction¶ Text, JSON, YAML, XML and HTML are widely used and handled formats in test automation. Accordingly, parsing these formats is a very common need that test automation engineers need to cater to. Arjuna provides its own objects to easy handle these content types in its helper classes in Tester Programming Interface. The corresponding objects are also returned by its other objects. Text¶ - Arjuna’s Textclass provides with various factory methods to easily create a Text file object to read content in various formats: - file_content: Returns content as string. - file_lines: Returns TextFileAsLinesobject to read file line by line. - delimited_file: Returns DelimTextFileWithLineAsMapor DelimTextFileWithLineAsSeqobject to read file line by line, parsed based on delimiter. Following sections show the usage. Reading Text File in One Go¶ Reading complete content of a text file is pretty simple: content = Text.file_content('/some/file/path/abc.text') Reading Text File Line by Line¶ Quite often you deal with reading of a text file line by line rather than as a text blob: file = Text.file_lines('/some/file/path/abc.text') for line in file: # line is a Python **str** object. # Do something about the line print(line) file.close() What Are Delimited Text Files?¶ Delimited files are in widespread use in test automation. These files contain line-wise content where different parts of a line are separated by a delimiter. For example: Tab Delimited File CSV File (Comma as the delimiter/separator) In the above examples, note that the first line is a header line which tells what each corresponding part of a line contains in subsequent lines. The delimited files can also be created without the header line. For example: Although the above is not suggested, however at times you consume files from an external source as such and do not have much of an option. Arjuna provides features to handle all of the above situations. Reading Delimited Text File with Header¶ Consider the following tab-delimited file (let’s name it abc.txt): To read the above file, you can use the following Python code: file = Text.delimited_file('/some/file/path/abc.text') for line in file: # line is a Python **dict** object e.g. {'Left' : '1', 'Right': 2, 'Sum' : 3} # Do something about the line print(line) file.close() Tab is the default delimiter. If any other delimiter is used, then it needs to be specified by passing the delimiter argument. For example, consider the following CSV file (let’s call it abc.csv): To read the above file, you can use the following Python code: file = Text.delimited_file('/some/file/path/abc.text', delimiter=',') for line in file: # line is a Python **dict** object e.g. {'Left' : '1', 'Right': 2, 'Sum' : 3} # Do something about the line print(line) file.close() Reading Delimited Text File WITHOUT Header¶ If the input file is without header line, you need to specify the same by passing header_line_present as False. The line is returned as a Python tuple object in this case instead of a dictionary object. Consider the following tab-delimited file without header line (let’s name it abc.txt): To read the above file, you can use the following Python code: file = Text.delimited_file('/some/file/path/abc.text', header_line_present=False) for line in file: # line is a Python **tuple** object e.g. (1,2,3) # Do something about the line print(line) file.close() JSON (Javascript Object Notation)¶ Json is a popular format used in RESTful services and configurations. Creating JSON Objects¶ Arjuna’s Json class provides with various helper methods to easily create a Json object from various sources: - from_file: Load Json from a file. - from_str: Load Json from a string. - from_map: Load Json from a mapping type object. - from_iter: Load Json from an iterable. - from_object: Load Json from a Python built-in data type object. Json Class Assertions¶ Json class provides the following assertions: - assert_list_type: Validate that the object is a JsonList or Python list - assert_dict_type: Validate that the object is a JsonDict or Python dict Automatic Json Schema Extraction¶ Given a Json object, you can extract its schema automatically: Json.extract_schema(jsonobject_or_str) This schema can be used for schema validation for another Json object. JsonDict Object¶ JsonDict encapsulates the Json dictionary and provides higher level methods for interaction. - It has the following properties: - raw_object: The underlying dictionary - size: Number of keys in the JsonDict - schema: The Json schema of this JsonDict (as a JsonSchema object) Finding Json elements in a JsonDict Object¶ You can find Json elements in JsonDict by using a key name or by creating a more involved JsonPath query. - find: Find first match using a key or JsonPath - findall Find all matches using a JsonPath Matching Schema of a JsonDict object¶ You can use a custom Json schema dictionary or a JsonSchema object to validate schema of a JsonDict object. json_dict.matches_schema(schema) It returns True/False depending on the match success. Asserting JsonDict Object¶ JsonDict object provides various assertions to validate its contents: - assert_contents: Validate arbitary key-value pairs in its root. - assert_keys_present: Validate arbitrary keys - assert_match: Assert if it matches another Python dict or JsonDict. - assert_schema Assert if it matches provided schema dict or JsonSchema. - assert_match_schema Assert if it has the same schema as that of the provided dict or JsonDict. JsonList Object¶ JsonList encapsulates the Json list and provides higher level methods for interaction. - It has the following properties: - raw_object: The underlying dictionary - size: Number of keys in the JsonList == Operator with JsonDict and JsonList Objects¶ == operator is overridden for JsonDict and JsonList objects. JsonDict supports comparison with a JsonDict or Python dict. JsonList supports comparision with a JsonList or Python list. json_dict_1 == json_dict_2 json_dict_1 == py_dict json_list_1 == json_list_2 json_list_1 == py_list Modifying a JsonSchema object¶ JsonSchema object is primarily targeted to be created using auto-extraction using Json.extract_schema. You can currently make two modifications to the JsonSchema once created: - mark_optional: Mark arbitrary keys as optional in the root of the schema. - allow_null: Allow null value for the arbitrary keys. YAML¶ YAML is a popular format used in configurations. It is also the default format for Arjuna configuration and definition files. Creating YAML Objects¶ Arjuna’s Json class provides with various helper methods to easily create a YAML object from various sources: - from_file: Load YAML from a file. - from_str: Load YAML from a string. - from_object: Load YAML from a Python built-in data type object. YamlDict Object¶ YamlDict encapsulates the YAML dictionary and provides higher level methods for interaction. - It has the following properties: - raw_object: The underlying dictionary - size: Number of keys in the YamlDict YamlList Object¶ YamlList encapsulates the YAML list and provides higher level methods for interaction. - It has the following properties: - raw_object: The underlying dictionary - size: Number of keys in the JsonList == Operator with YamlDict and YamlList Objects¶ == operator is overridden for YamlDict and YamlList objects. YamlDict supports comparison with a YamlDict or Python dict. YamlList supports comparision with a YamlList or Python list. yaml_dict_1 == yaml_dict_2 yaml_dict_1 == py_dict yaml_list_1 == yaml_list_2 yaml_list_1 == py_list Using !join construct¶ Arjuna provides !join construct to easily construct strings by concatenating the provided list. For example: root: &BASE /path/to/root patha: !join [*BASE, a] pathb: !join [*BASE, b] Once loaded this YAML is equivalent to the following Python dictionary: { 'root': '/path/to/root', 'pathaa': '/path/to/roota', 'pathb': '/path/to/rootb' } XML¶ XML is another popular format used for data exchange. Creating an XmlNode Object¶ A loaded full Xml or a part of it is represented using an XmlNode object. Arjuna’s Xml class provides various helper methods to easily create an XmlNode object from various sources: - from_file: Load XmlNode from a file. - from_str: Load XmlNode from a string. - from_lxml_element: From an lxml element. The loaded object is returned as an XmlNode. Inquiring an XmlNode Object¶ XmlNode object provides the following properties for inquiry: - node: The underlying lxml element. - text: Unaltered text content. Text of all children is clubbed. - normalized_text: Text of this node with empty lines removed and individual lines trimmed. - texts: Texts returned as a sequence. - inner_xml: Xml of children. - normalized_inner_xml: Normalized inner XML of this node, with empty lines removed between children nodes. - source: String representation of this node’s XML. - normalized_source: String representation of this node with all new lines removed and more than one conseuctive space converted to a single space. - tag: Tag name - chidlren: All Children of this node as a Tuple of XmlNodes - parent: Parent XmlNode - preceding_sibling: The XmlNode before this node at same hierarchial level. - following_sibling: The XmlNode after this node at same hierarchial level. - attrs: All attributes as a mapping. - value: Content of value attribute. - Following inquiry methods are available: - attr: Get value of an attribute by name. - has_attr: Check presence of an attribute. Finding XmlNodes in an XmlNode Object using XPath¶ You can find XmlNodes in a given XmlNode object using XPath: - find_with_xpath: Find first match using XPath - findall_with_xpath Find all matches using XPath Finding XmlNodes in an XmlNode Object using XML.node_locator¶ Arjuna’s NodeLocator object helps you in easily defining locating criteria. # XmlNode with tag input locator = Xml.node_locator(tags='input') # XmlNode with attr 'a' with value 1 locator = Xml.node_locator(a=1) # XmlNode with tag input and attr 'a' with value 1 locator = Xml.node_locator(tags='input', a=1) Note ‘tags’ can be provided as: - A string containing a single tag - A string containing multiple tags - A list/tuple containing multiple tags. When multiple tags are provided, they are treated as a sequential descendant tags. # XmlNode with tag input and attr 'a' with value 1 locator = Xml.node_locator(tags='form input', a=1) locator = Xml.node_locator(tags=('form', 'input'), a=1) You can search for all XMlNodes using this locator in an XmlNode: locator.search_node(node=some_xml_node) For finer control, you can use finder methods in XmlNode object itself and provide the locator: - find: Find first match using XPath - findall Find all matches using XPathnode.findall(locator) # Returns None if not found node.find(locator) # Raise Exception if not found node.find(locator, strict=True) Providing Alternative NodeLocators (OR Relationship)¶ In some situations, you might want to find XmlNode(s) which match any of the provided locators. You can provide any number of locators in XmlNode finder methods. node.find(locator1, locator2, locator3) node.findall(locator1, locator2, locator3) HTML¶ In Web UI automation and HTTP Automation, extracting data from and matching data are common needs. Creating an HtmlNode Object¶ A loaded full HTML or a part of it is represented using an HtmlNode object. Arjuna’s Html class provides various helper methods to easily create an HtmlNode object from various sources: - from_file: Load HtmlNode from a file. - from_str: Load HtmlNode from a string. - from_lxml_element: Load HtmlNode from an lxml element. Arjuna uses BeautifulSoup based lxml parser to fix broken HTML while loading. While using from_file or from_file methods of Html object, you can load pass partial HTML content to be loaded as an HtmlNode For this provide partial=True as the keyword argument. node = Html.from_str(partial_html_str, partial=True) An HtmlNode is an XmlNode¶ As the HtmlNode inherits from XmlNode, it supports all properties, methods and flexbilities that are discussed above for XmlNode object. Additionally, it has the following properties: - inner_html: HTML of children. - normalized_inner_html: Normalized inner HTML of this node, with empty lines removed between children nodes.
https://arjuna-taf.readthedocs.io/en/latest/data/textparsing.html
2021-01-15T21:23:30
CC-MAIN-2021-04
1610703496947.2
[]
arjuna-taf.readthedocs.io
Logging¶ Introduction¶ Arjuna’s logging features as provided by log module, give you precise control over what is included in console display and log file. For each test run, Arjuna creates a log file with the name arjuna.log in <Test Project Directory>/report/<run report dir>/log directory. Arjuna’s Logging Functions to Support Python Logging Levels¶ Arjuna provides individual logging functions to support the default Python logging levels: - DEBUG - log_debug - INFO - log_info - WARNING - log_warning - ERROR - log_error - FATAL - log_fatal The levels work just like Python logging levels. FATAL has the top priority and DEBUG has the least. Controlling Which Log Messages Are Included on Console and in Log File¶ When you set a log level, only the log messages that are of same or higher priority get logged. For example, setting a log level to WARNING would mean: - Calls to log_warning, log_error and log_fatal will be entertained. - Calls to log_info and log_debug will be ignored. Default Logging Levels¶ Default logging level for console is INFO. Default logging level for log file (arjuna.log) is DEBUG. Arjuna’s TRACE Log Level¶ Arjuna adds an additional level of logging - TRACE - which is of lower priority than DEBUG. This is parimarily added so that Arjuna’s internal logging is kept to a minimum even when the logging is taking place at DEBUG level. You can use it in your test project with a similar goal by using log_trace call. Overriding Logging Level Defaults¶ You can change Arjuna’s logging level defaults using the following commands in command line: - –display-level: Set level for console logging. - –logger-level: Set level for file logging (arjuna.log) Contextual Logging¶ This is an advanced feature provided by Arjuna. You can set one or more contexts (strings) to log messages. log_info("test context 2", contexts="test2") log_info("test context 4", contexts={"test3", "test4"}) A log message with a context(s) set for it does not get logged by default in Arjuna. It is only logged when ArjunaOption.LOG_ALLOWED_CONTEXTS has been accordingly set to include the context string. This can be done by any of the following means: - Passing --ao LOG_ALLOWED_CONTEXTS <comma separated context strings>to the command line. - Overrding LOG_ALLOWED_CONTEXTS in the reference configuration. Note: Log messages without contexts set for them will work as usual and are not impacted by the LOG_ALLOWED_CONTEXTS option. Auto-Logging using @track Decorator¶ Many a times, you want to log messages at the beginning and end of a Python function/method call. This is a primary use case and usually depends on test author’s commitment to logging (and needs conscious efforts.) Tracking Methods, Functions, Properties¶ Arjuna’s solves this by provding auto-logging using its @track decorator. It will log: - Beginning of the call with provided arguments. - End of the call with return value (Long return values are truncated for brevity.) - Exceptions and exception trace if any exception is raised in calling the given function/method/property. - You can use @track with: - Functions - Bound Methods in a class - Class Methods in a class - Static Methods in a class - Properties in a class Following are some samples: # Function @track def test1(self, a, *vargs, b=None, **kwargs): log_debug("in test1") class MethodTrack: # Bound Method @track def test1(self, a, *vargs, b=None, **kwargs): log_debug("in test1") # Class method @track @classmethod def cls_method_1(cls, a): log_debug("in cls_method") # Static Method @track @staticmethod def stat_method_1(a): log_debug("in stat_method") # Property getter @track @property def prop1(self): log_debug("prop1 getter") return self._p # Property setter. Note that just setting this will also decorate the getter. @track @prop1.setter def prop1(self, value): log_debug("prop1 setter") self._p = value Tracking All Methods in a Class¶ If you want to track all methods in a class, you can decorate the class with @track rather than decorating all individual methods. This will: - - Track all - - Bound Methods in a class - Class Methods in a class - Static Methods in a class - - NOT track: - - properties (They still need to be individually decorated.) Following is a sample: @track class ClassTrack: def __init__(self, a, *vargs, b=None, **kwargs): log_debug("in __init__") def test1(self, a, *vargs, b=None, **kwargs): log_debug("in test1") @classmethod def cls_method(cls, a): log_debug("in cls_method") @staticmethod def stat_method(a): log_debug("in stat_method") Default Logging Level for @track¶ To control verbosity of logging, @track uses the following default logging levels: - DEBUG for all public methods. - TRACE for all protected (begin with “_”), private (begin with “__”) and magic methods (the dunder methods begin and end with “__”)
https://arjuna-taf.readthedocs.io/en/latest/fundamentals/logging.html
2021-01-15T20:40:16
CC-MAIN-2021-04
1610703496947.2
[]
arjuna-taf.readthedocs.io
The Agent Dashboard is the heart and soul of ICC. This is where the Twilio field is integrated into an interface to provide all of the necessary views and actions that an Agent needs. This is also where many of the tie-ins to Google Dialogflow for AI are integrated. To understand the details of the Agent Dashboard, it is important to know that this interface is designed using in-page navigation. Unlike chained forms in a process model, in-page navigation is driven entirely by SAIL and what the end user sees is strictly controlled through local variables. For purposes of organization, readability, reusability and concurrent development, the SAIL code for the dashboard is split up into many interface objects. Interfaces are then nested to create a logical structure of navigation. Designing an application with in-page navigation presents some unique challenges. These following sections will help you understand important concepts of how the Agent Dashboard is built and how to customize the interface. Local variables need to be defined purposefully with in-page navigation. "Load" variables cannot always be used in deeply nested rules because higher level interfaces cannot directly change the values of those variables. "With" variables will reset back to their initial definition on variable changes from higher level interfaces when their definition depends on variables from higher level interfaces. While this is helpful, having too many with variables reloading constantly will impact performance. A common design with in-page navigation is to include most local variables in the highest level interface and pass them down as rule inputs. This way the designer can control when variables change given different actions on a page with confidence that the variable will update in all of the other interfaces that reference it. To prevent a large number of rule inputs, local variables of primitive types can be defined inside a dictionary. The design can then pass the dictionary down as a rule input of type "Any Type". However, it is not recommended to store a list of CDTs within a dictionary. The ICC Application stores a variety of parameters in the local!stateVariables dictionary. These variables are the backbone of the Agent Dashboard structure. These perform a variety of functions, including controlling the state of the interface and storing information regarding the interaction. Please note that modifications to this local variable may cause errors to the out of the box functionality of Agent Dashboard. There are a multitude of SAIL comments in the rule input that defines this local variable, helping designers understand the intent of each item. The ICC Application also contains the local!customVariables dictionary. In the ICC Sample Application, this is used to store all local variables that are related to the credit card use case. In the default application, this is left empty. It is recommended that this dictionary be used to store all of the variables needed when defining your custom use case. This section will walk through the structure of the Agent Dashboard, starting with the highest-level interface. The 'ICC_RP_mainAgentDashboard' object is the top-level interface and starting point for the Agent Dashboard. This interface serves two primary functions: As described in the Local Variables section, any variables that are needed in both the Twilio field and in the remainder of the Agent Dashboard need to be defined here. These variables are then passed as rule input references to other interfaces. The main interfaces to consider when modifying the ICC App are described below. Navigating from the 'ICC_RP_mainAgentDashboard' to 'ICC_CP_connectionPanel' to 'ICC_CP_twilioField' brings you to the interface for the field. The Twilio field consists of a multitude of parameters which are used to define the functionality of the component, connections to Twilio, and various saves under certain conditions. These saves are critical for the rest of the application, and are described in more detail below. This save gets called once a task in Twilio reaches the Appian user in the Agent Dashboard. This is the initial "screenpop" that the agent sees, as it kicks off the rest of the application and begins the verification process. This parameter is also used to allow for graceful behavior if the screen is ever refreshed while on a call. In order to do this, we call "ICC_QE_doesTaskExist" and if the length is greater than 0, we query the database for that task. This data includes interaction logs which tells us what page the agent was on prior to refresh. If the agent has not verified the contact's identity, we bring the agent to the verification steps. If a contact's information has been verified, we query the contact's information and bring the agent to the standard customer view. During a chat or SMS interaction, this save gets called every time a new message is sent or received by the agent. In addition to saving the new messages into the interaction log, there are two other functions that happen: This save gets called when the agent ends an interaction and enters the wrap-up phase. In the ICC App, this fires off a process model which closes the interaction and for chat/sms, writes each message to the database as an interaction log. This save gets called when an agents finishes their wrap-up phase. In the ICC App, local variables are reset and the updated list of recent interactions is pulled from the database. The 'ICC_CP_leftColumn' interface is for all of the UI components on the left side of the agent dashboard below the Twilio component. Depending on whether an agent is on a live interaction or not, different child interfaces are rendered. The 'ICC_CP_offCallMargin' interface displays the list of recent interactions the agent has had. This can be customized to display any other navigational element an agent might need when not on a live interaction. The 'ICC_CP_onCallLayouts' interface renders the following elements when an agent is on a live interaction: The 'ICC_CP_agentDash' interface contains everything to the right of the Twilio component and left sidebar. This is the main content of the Agent Dashboard and is the most dynamic. Similar to "ICC_CP_leftColumn", it has two child interfaces depending on whether an agent is on an interaction or not. The 'agentOffConference' is the agent "landing page" that they will see upon initial rendering of the Agent Dashboard. For the ICC App, this contains several sections: The default view renders various statistics an Agent may want to see throughout the course of the day as well as links. Displays the interaction record interface when a recent interaction is selected from the 'ICC_CP_leftColumn'. The 'ICC_CP_activeConferenceInterface' is the main interface that is loaded when an agent is on a live interaction. Of the many elements of this interface, the most important parts are described below. The 'ICC_CP_activeInteractionRecommendedResponse' interface controls the recommended response that the agent sees at the top of the screen. What is rendered here is determined by the state of local variables. The 'ICC_UT_decideRecommendedResponse' rule input returns the appropriate response given the stateVariables dictionary. For example, the ICC App uses the following variables as guides for the rule input to display different responses: By changing these values in other interfaces, the designer can customize the recommended responses that an Agent sees at any point in the interaction. The 'ICC_CP_AI_recommendedActions' interface controls the recommended action capability to provide the agent a navigation action when appropriate. Similar to the recommended response, a button will render only under certain conditions as defined in stateVariables.intent and stateVariables.contactIsVerified. Depending on the intent that is recognized, a button will become visible to navigate the agent to a specific screen. The button navigation is really just a list of saves for certain local variables that control the state of the interface. An example is provided below for a specific Agent Assist action: If stateVariables.intent = updateContact, then a button will be rendered to Update Contact Info. If the button is clicked, the following local variables are updated, which will navigate the agent to the customer record view to update their contact information. Additionally, certain queries are run to update other local variables to display the customer's information. For details on how to modify Agent Assist with Dialogflow, please refer to the section below. The 'ICC_CP_confirmIdentityOfKnownIncomingNumber' interface is the customer verification screen if the phone number of the customer (voice/SMS) is recognized from the customer record. Please note the phone number must be stored in the 'ICC_CONTACT' table in the following format to be properly matched by default in the app: +(country code)(phone number). For example, for US phone numbers, the syntax would be: +11234567890. The 'ICC_CP_searchForAndSelectContact' interface is rendered if the phone number that of the customer (voice/SMS) is not recognized from the customer record. This brings the agent to the customer search interface. The 'ICC_CP_fullCustomerView' is the full customer view after a customer has been verified. This interface serves as the main navigational interface for viewing a customer's record, cases, interactions, etc. This page can be customized to display any relevant information an agent would want to see about a customer, as well as any actions an agent would want to take. As a designer, you would want to consider making local variable updates to trigger updates to recommended responses as well as the activity log from here. Recommended response logic can be found in the 'ICC_UT_decideRecommendedResponse' rule input, while appending another entry to the interactionLogs local variable will update the activity log on the left column. Agent assist is powered by Dialogflow and consists of an “agent” and a list of intents. Dialogflow uses intents to categorize a user's intentions. Intents have Training Phrases, which are examples of what a user might say to your agent. The ICC app utilizes intents to suggest things to the agents as phrases are given by the customer, this is called Agent Assist. In the app Agent Assist can do things like offer a direct link to the dispute a credit card transaction if the customer says they are having an issue with a card transaction. For more information see the intents documentation. As you develop your ICC app further you can add additional intents to allow the Agent Assist to cover more areas. To do this, first create a new intent. Once a new intent is configured the ICC dashboard needs to know what to do with the new intent response. To configure Agent Assist, open the Intelligent Contact Center App. On This Page
https://docs.appian.com/suite/help/18.4/icc/modifying_agent_dashboard.html
2021-01-15T20:44:14
CC-MAIN-2021-04
1610703496947.2
[]
docs.appian.com
Interface patterns give you an opportunity to explore different interface designs. To learn how to directly use patterns within your interfaces, see How to Adapt a Pattern for Your Application. Display an array of CDT data in a read-only paging grid. This scenario demonstrates: This expression shows how to modify the above expression for offline use. The only difference is that all rows are displayed initially since grid paging is not available when offline. local!pagingInfois a load()variable and local!datasubsetis a with()variable. This allows us to save a new value into local!pagingInfowhen the user interacts with the grid and then use that value to recalculate local!datasubset See also: load(), with(), Paging and Sorting Configurations
https://docs.appian.com/suite/help/19.1/recipe_display_array_of_data_in_a_grid.html
2021-01-15T21:00:55
CC-MAIN-2021-04
1610703496947.2
[]
docs.appian.com
Installing Form Vibes There are two ways to install the Form Vibes Via the WordPress Dashboard: - From the Dashboard, click Plugins → Add New. - In the Search field, enter Form Vibes and choose Form Vibes to install. - After installation, click Activate. Via WordPress.org - Go to. - Either go to plugins and search for Form Vibes or click on this link and click download. - In the WordPress dashboard, click Plugins → Add New. - Click Upload Plugin, and choose the file you’ve downloaded for Form Vibes.
https://docs.formvibes.com/article/76-installing-form-vibes
2021-01-15T20:26:58
CC-MAIN-2021-04
1610703496947.2
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5923ee3b0428634b4a335ad3/images/5d9341692c7d3a7e9ae1d933/file-RNUsOuQhbf.png', None], dtype=object) ]
docs.formvibes.com
GetAccessControlEffect Gets the effects of an organization's access control rules as they apply to a specified IPv4 address, access protocol action, or user ID. Request Syntax { "Action": " string", "IpAddress": " string", "OrganizationId": " string", "UserId": " string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. - Action The access protocol action. Valid values include ActiveSync, AutoDiscover, EWS, IMAP, SMTP, WindowsOutlook, and WebMail. Type: String Length Constraints: Minimum length of 1. Maximum length of 64. Pattern: [a-zA-Z]+ Required: Yes - IpAddress The IPv4 address. Type: String Length Constraints: Minimum length of 1. Maximum length of 15. Pattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])$ Required: Yes - OrganizationId The identifier for the organization. Type: String Pattern: ^m-[0-9a-f]{32}$ Required: Yes - UserId The user ID. Type: String Length Constraints: Minimum length of 12. Maximum length of 256. Pattern: [\S\s]*|[A-Za-z0-9-]+ Required: Yes Response Syntax { "Effect": "string", "MatchedRules": [ "string" ] } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. - Effect The rule effect. Type: String Valid Values: ALLOW | DENY - MatchedRules The rules that match the given parameters, resulting in an effect. Type: Array of strings Array Members: Minimum number of 0 items. Maximum number of 10 items. Length Constraints: Minimum length of 1. Maximum length of 64. Pattern: [a-zA-Z0-9_-]+ Errors For information about the errors that are common to all actions, see Common Errors. - EntityNotFoundException The identifier supplied for the user, group, or resource does not exist in your organization.:
https://docs.aws.amazon.com/workmail/latest/APIReference/API_GetAccessControlEffect.html
2020-07-02T17:18:20
CC-MAIN-2020-29
1593655879532.0
[]
docs.aws.amazon.com
Some of the process flow objects have properties that they share in common. Each of these properties will be discussed in more detail in the following sections. All the activities and process flow objects have a Name box that you can use to edit its name. You can find this setting in Quick Properties under the Activity Properties group. By default, the name of the activities and other process flow objects will be based on what kind of activity or object it is, as shown in the following image: In the example above, the activity is named Batch because it is a Batch activity. You can change the name of the activity by clicking inside the Name box and typing a new name. After you've changed the name, it will update the display name on the activity. Be aware that you can also change an activity's name by double-clicking the activity name in the process flow. Clicking the Font button opens a pop-up box where you can edit the activity's visual properties, as shown in the following image: This pop-up box has the following properties: The Statistics button opens the activity's statistics dialog box. You can use this dialog box to turn recording for statistics on or off for this particular activity. See Process Flow Statistics for more information. Some activities have the Max Wait Timer properties, as shown in the following image: The Max Wait Timer properties are available for activities that might hold a token for a period of time until a specific condition or event occurs in the process flow or simulation model. You can use this timer to: For example, an Acquire Resource has a Max Wait Timer. Tokens that enter the Acquire Resource will request access to the resource and will wait in the Acquire Resource activity until the resource becomes available. You could set the Max Wait Timer to expire if the token is unable to get access to the resource after 5 minutes. Then, when timer goes off, you could possibly set the token to create a label named failed and then continue to the next downstream activity. The Max Wait Timer properties are available on the following activities: By default, only the Use Max Wait Timer checkbox is available at first. Then, when you check the Use Max Wait Timer checkbox, the other properties will become available: The Start Criteria box is only available for Batch, Join, and Synchronize activities. The Batch activity collects incoming tokens and sorts them into groups of tokens (batches). When a batch is ready, the Batch activity will release it to a downstream activity. If you decide to use the Max Wait Timer with a Batch activity, you can cause the Batch activity to release a batch early if a certain amount of time has passed. You'd also use the Start Criteria box to determine when the timer should begin running. By default, the timer is set to begin running as soon as a batch is created. You can change the value in this box if needed. For example, if you change the Start Criteria to collected > 3, the timer will begin when the fourth token in the batch is collected. This behavior is similar for the Join and Synchronize activities, except that they form waves (a group representing one token from each incoming connector) instead of batches. You can similarly release the entire wave early with this timer. Use the Time box to set the length of time the Max Wait timer will run. The time is measured in simulation time units. You can enter in a fixed time or create a time dynamically using the menu next to the box. Use the OnWaitTimerFired settings to determine what should happen to the token if the Max Wait Timer expires. You can click the View Properties button to view and edit the default settings, as shown in the following image: By default, the Set Label operation will create a label on the token that is called failed and assign it a value of 1 (which will represent a value that is set to true). The Release token operation will then release the token through connector 1. (See Adding and Connecting Activities - Number of Outgoing Connectors for more information about connector numbers.) You can edit these default operations or delete them using the Delete button next to each one. You can also add your own custom operations using the Add button to open a menu and select other operations. The Max Idle Timer works almost identically to the Max Wait Timer. It has many of the same available settings, as shown in the following image: The main difference is that it measures how long a token has been idle in the activity, meaning how long it has gone without receiving any tokens and/or values. See Max Wait Timer for more information about these properties. This property is available on the following activities: The Executer / Task Sequence box is available on most of the Task Sequence activities. You can use this box to determine which task executer or task sequence should receive the task. If you choose to give this task to a task executer, a new task sequence will be automatically created with this task and then it will be sent to the task executer. You can: Each of these different options will be explained in the following sections. To assign this task to a specific task executer, use the Sampler button to select a task executer in the 3D model. During the simulation run, the assigned task executer will always perform this task. If needed, you can make sure this task is assigned dynamically to a task executer during a simulation run. In other words, you can change these settings so that a different task executer might be assigned to the task based on different conditions during the simulation run. One way to dynamically assign a task executer is to use the task executer that is listed in a label on a token. To reference a label on a token, you can use the Label keyword: You can use the current command in Executer/Task Sequence box to dynamically assign the task to the task executer that is currently attached to a specific instance of the process flow. Be aware that this command can only be used for the Task Executer or Sub Flow process flow types. The keyword current will reference the task executer object attached to the process flow. See Process Flow Instances for more information. If needed, you can add this task to an existing task sequence that was previously created by a Create Task Sequence activity. The Create Task Sequence activity will create a reference to the created task sequence and assign it to a label on the token. This label can then be used in the Executer / Task Sequence box to add the task to the end of that task sequence. If you add any of the task activities to the end of a Create Task Sequence activity or another task activity, the newly added task activity will automatically put the correct label name in the Executer / Task Sequence box. This can make creating task sequences more convenient. However, if you need to add it manually: The Assign To property creates a reference to a new value(s) or object(s) that is created by the activity. These references are usually assigned to a label on a token, but they may be assigned to other labels or nodes. When using the token.LabelName syntax, the label will be created on the token if it does not currently exist. Otherwise, you will need to ensure that the node you pass in to the Assign To already exists. This can be done by using the object.labels.assert("LabelName") or aNode.subnodes.assert("NodeName") commands. The reference may be to a single object or value, or it may be to multiple. For instance, pulling entries off of a list may result in one entry or multiple. If multiple entries are pulled, an array will be created with each entry in the array being one of the pulled values. Creating a reference point allows other activities to easily reference created objects, values pulled from a list, task sequences etc. However, an Assign To label/node is not required for and may be removed by clicking the Remove button . The value(s) will be set in one of two methods: If the Assign box is checked, any data stored on the label or node that was passed into the Assign To box will be overwritten by the new value. If the Insert at Front box is checked, any data stored on the label or node that was passed into the Assign To box will remain and the new value(s) will be added to the front. This will cause the data to become an array with the most recent value as the first entry in the array. The Label Matching/Assignment table will become available on an event-listening activity when that activity is listening for a standard event. You can use this table to assign the value to a token label, match a token label's value, or match a value to. The Event-Triggered Source will assign the label to the token it creates; the Wait for Event activity will assign the label to the token that entered the activity and triggered the event-listening. You can leave this table blank if you don't need to do any operations. The rows in the Label Assignment table will vary depending on the specific event that the activity is listening for. Each simulation event has a set of parameters (sets of information) that it uses. For example, the following image shows the Label Assignment table for the OnEntry event on a fixed resource: In this example, notice that the first row is the Entering Item. This row is a reference to the item that is entering the fixed resource. The second row is Input Port which is a reference to the port number from which the fixed resource received the flow item. Most of the time, the name of the row will be descriptive enough to give you a good idea of its reference point. The text in the cells under the Label Name or Value column depends on the option set in the Operation column (as described below). If the operation is set to match, assign or insert at front then the text defines the name of the token label. When using a Wait for Event you can use dot syntax (periods), to separate label names to reference labels on objects that the token has a reference to. For example, if the entering token has a label called operator which references another token or a task executer object in the model you can enter operator.item to reference a label on the operator. If the operation is match value then the text defines the number, string or object reference. This value is evaluated on reset. To define an object reference use FlexScript. For example, Model.find("Processor1") You can also define dynamic numbers or strings using FlexScript. For example, Table("GlobalTable1")[1][1] or "Text" + string.fromNum(Table("GlobalTable1")[1][1]) To use the previous example, you could create a label called item or itemID to refer to the entering item. You could also match the input port the item entered through to port 1, or 2. If you click on any cell under the Operation column, it will open a menu. The option you choose will determine what operation will be performed when the event is triggered. The menu has the following options: Event-listening activities have the ability to override the return value of events they listen to. For example, you may want to perform some complex logic using Process Flow to define the Process Time of a Processor. To do this, first check the Will Override Return Value checkbox in the activity's properties. Next define the set of activities that determine the return value. Here you must make sure that none of these activities will cause any type of delay, such as a waiting operation or explicit Delay activity. Otherwise the return value will not be calculated properly. At the end of your block of activities place a Finish activity. The Finish activity allows you to define a return value for your overridden function and then destroys the token. In our example, the return value will become the Process Time of the Processor. The Change Rule properties will become available on an event-listening activity when that activity is listening for a value change event. You can use these properties to determine the conditions that will trigger the event. Usually this will be a statistical change of some sort. When these conditions are in place, the Event-Triggered Source activity will create a token and release it to the next downstream activity; the Wait for Event activity will release the token to the next downstream activity. The following image shows the Change Rule properties: The following sections describe the different Change Rule properties: You will use the Change Rule menu to determine the conditions that will trigger an event. The menu has the following options: Defines the user-defined value associated with the Change Rule. This is only needed for change rules with required values, such as Arrive at Value. If checked, the event will fire immediately (finishing the activity) if when the token first arrives, the value already meets the defined rule. For example, if are listening to a Zone's OnContentChange event, and you've defined the Change Rule as Arrive At Value with a Value of 5, then when the token arrives, if the Zone's content is already at 5, the event will immediately fire and the token will pass through the activity. This field is only applicable if the Change Rule uses an involved value, and if the activity is a Wait for Event.
https://docs.flexsim.com/en/19.1/Reference/ProcessFlowObjects/SharedProperties/SharedProperties.html
2020-07-02T16:04:02
CC-MAIN-2020-29
1593655879532.0
[]
docs.flexsim.com
Released on: Monday, December 4, 2017 - 11:15 Notes The Browser agent, sometimes called the JavaScript agent, has multiple variants: Lite, Pro, and Pro+SPA. Unless noted otherwise, all features/improvements/bug fixes are available in all variants of the agent. New features - Link JS Errors to Browser Interactions (relevant only for PRO+SPA): When a JS error occurs inside a browser interaction event, the error will now be associated with the interaction via Insights attributes. BrowserInteraction, AjaxRequest, and BrowserTiming events will now have the following attributes: browserInteractionId, eventId, and parentEventId. Bug fixes - JSONP Tracking Breaks in some versions of Safari (relevant only for PRO+SPA): Previously, the agent would cause Safari browsers to lock up when JSONP requests returned large data. The agent no longer calculates JSONP response size. How to upgrade To upgrade your agent to the latest version, see Upgrade the Browser agent.
https://docs.newrelic.com/docs/release-notes/new-relic-browser-release-notes/browser-agent-release-notes/browser-agent-v1071
2020-07-02T15:56:14
CC-MAIN-2020-29
1593655879532.0
[]
docs.newrelic.com
By default, the view frustum is arranged symmetrically around the camera’s center line, but it doesn’t necessarily need to be. You can make the frustum oblique, which means that one side is at a smaller angle to the centre line than the opposite side. This makes the perspective on one side of the image seem more condensed, giving the impression that the viewer is very close to the object visible at that edge. An example of how this can be used is a car racing game; if the frustum is flattened at its bottom edge, it appears to the viewer that they are closer to the road, accentuating the feeling of speed. In the Built-in Render Pipeline, a Camera that uses an oblique frustum can only use the Forward rendering path. If your Camera is set to use the Deferred Shading rendering path and you make its frustum oblique, Unity forces that Camera to use the Forward rendering path. Although the Camera component does not have functions specifically for setting the obliqueness of the frustum, you can do it by either enabling the camera’s Physical Camera properties and applying a Lens Shift, or by adding a script to alter the camera’s projection matrix. Enable a camera’s Physical Camera properties to expose the Lens Shift options. You can use these to offset the camera’s focal center along the X and Y axes in a way that minimizes distortion of the rendered image. Shifting the lens reduces the frustum angle on the side opposite the direction of the shift. For example, as you shift the lens up, the angle between the bottom of the frustum and the camera’s center line gets smaller. For further information about the Physical Camera options, see documentation on Physical Cameras. For further information about setting individual Physical Camera properties, see the Camera Component reference. The following script example shows how to quickly achieve an oblique frustum by altering the camera’s projection matrix. Note that you can only see the effect of the script while the game is running Play mode. using UnityEngine; using System.Collections; public class ExampleScript : MonoBehaviour { void SetObliqueness(float horizObl, float vertObl) { Matrix4x4 mat = Camera.main.projectionMatrix; mat[0, 2] = horizObl; mat[1, 2] = vertObl; Camera.main.projectionMatrix = mat; } } _Ejemplo de un C# script It is not necessary to understand how the projection matrix works to make use of this. The horizObl and vertObl values set the amount of horizontal and vertical obliqueness, respectively. A value of zero indicates no obliqueness. A positive value shifts the frustum rightwards or upwards, thereby flattening the left or bottom side. A negative value shifts leftwards or downwards and consequently flattens the right or top side of the frustum. The effect can be seen directly if this script is added to a camera and the game is switched to the scene view while the game runs; the wireframe depiction of the camera’s frustum will change as you vary the values of horizObl and vertObl in the inspector. A value of 1 or –1 in either variable indicates that one side of the frustum is completely flat against the centreline. It is possible although seldom necessary to use values outside this range.
https://docs.unity3d.com/es/2020.1/Manual/ObliqueFrustum.html
2020-07-02T15:48:01
CC-MAIN-2020-29
1593655879532.0
[]
docs.unity3d.com
or while data are available to read ( while (client.connected() || client.available()), see below) we can read line by line and print out server’s response: while (client.connected() || client.available()) { if (client.available()) { String line = client.readStringUntil('\n'); Serial.println(line); } } The inner if (client.available()) is checking if there are any data available from the server. If so, then they are printed out. Data can be unavailable while the TCP connection is still alive. That means data could be later received. Once server sends all requested data it will disconnect, then once all received data are read,!] General client loop¶ Here is a general TCP sending / receiving loop: while (client.available() || client.connected()) { if (client.available()) { // client.available() bytes are immediately available for reading // warning: reading them *allows* peer to send more, so they should // be read *only* when they can immediately be processed, see below // for flow control } if (client.connected()) { if (client.availableForWrite() >= N) { // TCP layer will be able to *bufferize* our N bytes, // and send them *asynchronously*, with by default // a small delay if those data are small // because Nagle is around (~200ms) // unless client.setNoDelay(true) was called. // // In any case client.write() will *never* block here } else { // or we can send but it will take time because data are too // big to be asynchronously bufferized: TCP needs to receive // some ACKs to release its buffers. // That means that write() will block until it receives // authorization to send because we are not in a // multitasking environment // It is always OK to do this, client.availableForWrite() is // only needed when efficiency is a priority and when data // to send can wait where they currently are, especially // when they are in another tcp client. // Flow control: // It is also important to know that the ACKs we are sending // to remote are directly generated from client.read(). // It means that: // Not immediately reading available data can be good for // flow control and avoid useless memory filling/overflow by // preventing peer from sending more data, and slow down // incoming bandwidth // (tcp is really a nice and efficient beast) } } // this is necessary for long duration loops (harmless otherwise) yield(); }.
https://arduino-esp8266.readthedocs.io/en/latest/esp8266wifi/client-examples.html?highlight=http
2020-07-02T17:03:50
CC-MAIN-2020-29
1593655879532.0
[array(['../_images/client-example-domain.png', 'A web page to be retreived by the clinet program'], dtype=object)]
arduino-esp8266.readthedocs.io
Included with each installation of edgeCore is a System Metrics Connection. This connection allows users to access various metrics that are tracked. The connection is read-only, and therefore it can be neither edited nor deleted. The System Metrics connection comes with its own list of predefined feeds. These feeds are configured like most other feeds with a name, an optional description, and update/polling settings. The distinguishing factor between the feeds is the Metrics Group that each feed represents. The following table lists the available available internal metrics by EdgeCore release versions: Name: *
https://docs.edge-technologies.com/docs/edgecore-system-metrics/
2020-07-02T15:08:23
CC-MAIN-2020-29
1593655879532.0
[]
docs.edge-technologies.com
1 Description Clicks [x]-button on a Confirmation, Error, Warning or Info Dialog. 2 Supported Widgets - Window - DialogMessage - ConfirmationDialog 3 Usage Optionally you can provide the dialog title and dialog type, to specify which dialog you want to close. Otherwise this action will close the first found active dialog. This action is equivalent to pressing the [x]-button on top of the dialog.
https://docs.mendix.com/addons/ats-addon/rg-one-close-dialog
2020-07-02T16:43:38
CC-MAIN-2020-29
1593655879532.0
[]
docs.mendix.com
What is Cluster Aware Updating in Windows Server 2012? (Part 2) [VIDEO] Summary: This article is contributed by Ryan Doon , a Premier Field Engineer based in Canada. In this post (the second of a two part series), he walks us through a video demonstration of the new Cluster Aware Updating feature of Windows Server 2012, and how it can be used to easily update clustered servers while maintaining availability of service to clients. Enjoy! And now it’s time to present Part 2 of What is Cluster Aware Updating in Windows Server 2012? In this video, I will take you through a live demonstration of cluster aware updating, to show how this excellent new feature in Windows Server 2012 will greatly simplify the process of updating your clusters. And for those that asked, this is not a feature that can be ported into Windows Server 2008 R2 . Written by Ryan Doon; Posted by Frank Battiston, MSPFE Editor
https://docs.microsoft.com/en-us/archive/blogs/technet/mspfe/what-is-cluster-aware-updating-in-windows-server-2012-part-2-video
2020-07-02T16:55:33
CC-MAIN-2020-29
1593655879532.0
[]
docs.microsoft.com
Use the Windows 10 Resource Management System in a legacy app or game .NET and Win32 apps and games are often localized into different languages to expand their total addressable market. For more info about the value proposition of localizing your app, see Globalization and localization. By packaging your .NET or Win32 app or game as an .msix or .appx package, you can leverage the Resource Management System to load app resources tailored to the run-time context. This in-depth topic describes the techniques. There are many ways to localize a traditional Win32 application, but Windows 8 introduced a new resource-management system that works across programming languages, across application types, and provides functionality over and above simple localization. This system will be referred to as "MRT" in this topic. Historically, that stood for "Modern Resource Technology" but the term "Modern" has been discontinued. The resource manager might also be known as MRM (Modern Resource Manager) or PRI (Package Resource Index). Combined with MSIX-based or .appx-based deployment (for example, from the Microsoft Store), MRT can automatically deliver the most-applicable resources for a given user / device which minimizes the download and install size of your application. This size reduction can be significant for applications with a large amount of localized content, perhaps on the order of several gigabytes for AAA games. Additional benefits of MRT include localized listings in the Windows Shell and the Microsoft Store, automatic fallback logic when a user's preferred language doesn't match your available resources. This document describes the high-level architecture of MRT and provides a porting guide to help move legacy Win32 applications to MRT with minimal code changes. Once the move to MRT is made, additional benefits (such as the ability to segment resources by scale factor or system theme) become available to the developer. Note that MRT-based localization works for both UWP applications and Win32 applications processed by the Desktop Bridge (aka "Centennial"). In many situations, you can continue to use your existing localization formats and source code whilst integrating with MRT for resolving resources at runtime and minimizing download sizes - it's not an all-or-nothing approach. The following table summarizes the work and estimated cost/benefit of each stage. This table doesn't include non-localization tasks, such as providing high-resolution or high-contrast application icons. For more info about providing multiple assets for tiles, icons, etc., See Tailor your resources for language, scale, high contrast, and other qualifiers. Introduction Most non-trivial applications contain user-interface elements known as resources that are decoupled from the application's code (contrasted with hard-coded values that are authored in the source code itself). There are several reasons to prefer resources over hard-coded values - ease of editing by non-developers, for example - but one of the key reasons is to enable the application to pick different representations of the same logical resource at runtime. For example, the text to display on a button (or the image to display in an icon) might differ depending on the language(s) the user understands, the characteristics of the display device, or whether the user has any assistive technologies enabled. Thus the primary purpose of any resource-management technology is to translate, at runtime, a request for a logical or symbolic resource name (such as SAVE_BUTTON_LABEL) into the best possible actual value (eg, "Save") from a set of possible candidates (eg, "Save", "Speichern", or "저장"). MRT provides such a function, and enables applications to identify resource candidates using a wide variety of attributes, called qualifiers, such as the user's language, the display scale-factor, the user's selected theme, and other environmental factors. MRT even supports custom qualifiers for applications that need it (for example, an application could provide different graphic assets for users that had logged in with an account vs. guest users, without explicitly adding this check into every part of their application). MRT works with both string resources and file-based resources, where file-based resources are implemented as references to the external data (the files themselves). Example Here's a simple example of an application that has text labels on two buttons ( openButton and saveButton) and a PNG file used for a logo ( logoImage). The text labels are localized into English and German, and the logo is optimized for normal desktop displays (100% scale factor) and high-resolution phones (300% scale factor). Note that this diagram presents a high-level, conceptual view of the model; it does not map exactly to implementation. In the graphic, the application code references the three logical resource names. At runtime, the GetResource pseudo-function uses MRT to look those resource names up in the resource table (known as PRI file) and find the most appropriate candidate based on the ambient conditions (the user's language and the display's scale-factor). In the case of the labels, the strings are used directly. In the case of the logo image, the strings are interpreted as filenames and the files are read off disk. If the user speaks a language other than English or German, or has a display scale-factor other than 100% or 300%, MRT picks the "closest" matching candidate based on a set of fallback rules (see Resource Management System for more background). Note that MRT supports resources that are tailored to more than one qualifier - for example, if the logo image contained embedded text that also needed to be localized, the logo would have four candidates: EN/Scale-100, DE/Scale-100, EN/Scale-300 and DE/Scale-300. Sections in this document The following sections outline the high-level tasks required to integrate MRT with your application. Phase 0: Build an application package This section outlines how to get your existing Desktop application building as an application package. No MRT features are used at this stage. Phase 1: Localize the application manifest This section outlines how to localize your application's manifest (so that it appears correctly in the Windows Shell) whilst still using your legacy resource format and API to package and locate resources. Phase 2: Use MRT to identify and locate resources This section outlines how to modify your application code (and possibly resource layout) to locate resources using MRT, whilst still using your existing resource formats and APIs to load and consume the resources. Phase 3: Build resource packs This section outlines the final changes needed to separate your resources into separate resource packs, minimizing the download (and install) size of your app. Not covered in this document After completing Phases 0-3 above, you will have an application "bundle" that can be submitted to the Microsoft Store and that will minimize the download and install size for users by omitting the resources they don't need (eg, languages they don't speak). Further improvements in application size and functionality can be made by taking one final step. Phase 4: Migrate to MRT resource formats and APIs This phase is beyond the scope of this document; it entails moving your resources (particularly strings) from legacy formats such as MUI DLLs or .NET resource assemblies into PRI files. This can lead to further space savings for download & install sizes. It also allows use of other MRT features such as minimizing the download and install of image files by based on scale factor, accessibility settings, and so on. Phase 0: Build an application package Before you make any changes to your application's resources, you must first replace your current packaging and installation technology with the standard UWP packaging and deployment technology. There are three ways to do this: - If you have a large desktop application with a complex installer or you utilize lots of OS extensibility points, you can use the Desktop App Converter tool to generate the UWP file layout and manifest information from your existing app installer (for example, an MSI). - If you have a smaller desktop application with relatively few files or a simple installer and no extensibility hooks, you can create the file layout and manifest information manually. - If you're rebuilding from source and want to update your app to be a pure UWP application, you can create a new project in Visual Studio and rely on the IDE to do much of the work for you. If you want to use the Desktop App Converter, see Package a desktop application using the Desktop App Converter for more information on the conversion process. A complete set of Desktop Converter samples can be found on the Desktop Bridge to UWP samples GitHub repo. If you want to manually create the package, you will need to create a directory structure that includes all your application's files (executables and content, but not source code) and a package manifest file (.appxmanifest). An example can be found in the Hello, World GitHub sample, but a basic package manifest file that runs the desktop executable named ContosoDemo.exe is as follows, where the highlighted text would be replaced by your own values. <?xml version="1.0" encoding="utf-8" ?> <Package xmlns="" xmlns: <Identity Name="Contoso.Demo" Publisher="CN=Contoso.Demo" Version="1.0.0.0" /> <Properties> <DisplayName>Contoso App</DisplayName> <PublisherDisplayName>Contoso, Inc</PublisherDisplayName> <Logo>Assets\StoreLogo.png</Logo> </Properties> <Dependencies> <TargetDeviceFamily Name="Windows.Desktop" MinVersion="10.0.14393.0" MaxVersionTested="10.0.14393.0" /> </Dependencies> <Resources> <Resource Language="en-US" /> </Resources> <Applications> <Application Id="ContosoDemo" Executable="ContosoDemo.exe" EntryPoint="Windows.FullTrustApplication"> <uap:VisualElements </uap:VisualElements> </Application> </Applications> <Capabilities> <rescap:Capability </Capabilities> </Package> For more information about the package manifest file and package layout, see App package manifest. Finally, if you're using Visual Studio to create a new project and migrate your existing code across, see Create a "Hello, world" app. You can include your existing code into the new project, but you will likely have to make significant code changes (particularly in the user interface) in order to run as a pure UWP app. These changes are outside the scope of this document. Phase 1: Localize the manifest Step 1.1: Update strings & assets in the manifest In Phase 0 you created a basic package manifest (.appxmanifest) file for your application (based on values provided to the converter, extracted from the MSI, or manually entered into the manifest) but it will not contain localized information, nor will it support additional features like high-resolution Start tile assets, etc. To ensure your application's name and description are correctly localized, you must define some resources in a set of resource files, and update the package manifest to reference them. Creating a default resource file The first step is to create a default resource file in your default language (eg, US English). You can do this either manually with a text editor, or via the Resource Designer in Visual Studio. If you want to create the resources manually: - Create an XML file named resources.reswand place it in a Strings\en-ussubfolder of your project. Use the appropriate BCP-47 code if your default language is not US English. - In the XML file, add the following content, where the highlighted text is replaced with the appropriate text for your app, in your default language. Note There are restrictions on the lengths of some of these strings. For more info, see VisualElements. <?xml version="1.0" encoding="utf-8"?> <root> <data name="ApplicationDescription"> <value>Contoso Demo app with localized resources (English)</value> </data> <data name="ApplicationDisplayName"> <value>Contoso Demo Sample (English)</value> </data> <data name="PackageDisplayName"> <value>Contoso Demo Package (English)</value> </data> <data name="PublisherDisplayName"> <value>Contoso Samples, USA</value> </data> <data name="TileShortName"> <value>Contoso (EN)</value> </data> </root> If you want to use the designer in Visual Studio: - Create the Strings\en-usfolder (or other language as appropriate) in your project and add a New Item to the root folder of your project, using the default name of resources.resw. Be sure to choose Resources File (.resw) and not Resource Dictionary - a Resource Dictionary is a file used by XAML applications. - Using the designer, enter the following strings (use the same Namesbut replace the Valueswith the appropriate text for your application): Note If you start with the Visual Studio designer, you can always edit the XML directly by pressing F7. But if you start with a minimal XML file, the designer will not recognize the file because it's missing a lot of additional metadata; you can fix this by copying the boilerplate XSD information from a designer-generated file into your hand-edited XML file. Update the manifest to reference the resources After you have the values defined in the .resw file, the next step is to update the manifest to reference the resource strings. Again, you can edit an XML file directly, or rely on the Visual Studio Manifest Designer. If you are editing XML directly, open the AppxManifest.xml file and make the following changes to the highlighted values - use this exact text, not text specific to your application. There is no requirement to use these exact resource names—you can choose your own—but whatever you choose must exactly match whatever is in the .resw file. These names should match the Names you created in the .resw file, prefixed with the ms-resource: scheme and the Resources/ namespace. Note Many elements of the manifest have been omitted from this snippet - do not delete anything! <?xml version="1.0" encoding="utf-8"?> <Package> <Properties> <DisplayName>ms-resource:Resources/PackageDisplayName</DisplayName> <PublisherDisplayName>ms-resource:Resources/PublisherDisplayName</PublisherDisplayName> </Properties> <Applications> <Application> <uap:VisualElements <uap:DefaultTile <uap:ShowNameOnTiles> <uap:ShowOn </uap:ShowNameOnTiles> </uap:DefaultTile> </uap:VisualElements> </Application> </Applications> </Package> If you are using the Visual Studio manifest designer, open the .appxmanifest file and change the highlighted values values in the *Application tab and the Packaging tab: Step 1.2: Build PRI file, make an MSIX package, and verify it's working You should now be able to build the .pri file and deploy the application to verify that the correct information (in your default language) is appearing in the Start Menu. If you're building in Visual Studio, simply press Ctrl+Shift+B to build the project and then right-click on the project and choose Deploy from the context menu. If you're building manually, follow these steps to create a configuration file for MakePRI tool and to generate the .pri file itself (more information can be found in Manual app packaging): Open a developer command prompt from the Visual Studio 2017 or Visual Studio 2019 folder in the Start menu. Switch to the project root directory (the one that contains the .appxmanifest file and the Strings folder). Type the following command, replacing "contoso_demo.xml" with a name suitable for your project, and "en-US" with the default language of your app (or keep it en-US if applicable). Note the XML file is created in the parent directory (not in the project directory) since it's not part of the application (you can choose any other directory you want, but be sure to substitute that in future commands). makepri createconfig /cf ..\contoso_demo.xml /dq en-US /pv 10.0 /o You can type makepri createconfig /?to see what each parameter does, but in summary: /cfsets the Configuration Filename (the output of this command) /dqsets the Default Qualifiers, in this case the language en-US /pvsets the Platform Version, in this case Windows 10 /osets it to Overwrite the output file if it exists Now you have a configuration file, run MakePRIagain to actually search the disk for resources and package them into a PRI file. Replace "contoso_demop.xml" with the XML filename you used in the previous step, and be sure to specify the parent directory for both input and output: makepri new /pr . /cf ..\contoso_demo.xml /of ..\resources.pri /mf AppX /o You can type makepri new /?to see what each parameter does, but in a nutshell: /prsets the Project Root (in this case, the current directory) /cfsets the Configuration Filename, created in the previous step /ofsets the Output File /mfcreates a Mapping File (so we can exclude files in the package in a later step) /osets it to Overwrite the output file if it exists Now you have a .prifile with the default language resources (eg, en-US). To verify that it worked correctly, you can run the following command: makepri dump /if ..\resources.pri /of ..\resources /o You can type makepri dump /?to see what each parameter does, but in a nutshell: /ifsets the Input Filename /ofsets the Output Filename ( .xmlwill be appended automatically) /osets it to Overwrite the output file if it exists Finally, you can open ..\resources.xmlin a text editor and verify it lists your <NamedResource>values (like ApplicationDescriptionand PublisherDisplayName) along with <Candidate>values for your chosen default language (there will be other content in the beginning of the file; ignore that for now). You can open the mapping file ..\resources.map.txt to verify it contains the files needed for your project (including the PRI file, which is not part of the project's directory). Importantly, the mapping file will not include a reference to your resources.resw file because the contents of that file have already been embedded in the PRI file. It will, however, contain other resources like the filenames of your images. Building and signing the package Now the PRI file is built, you can build and sign the package: To create the app package, run the following command replacing contoso_demo.appxwith the name of the .msix/.appx file you want to create and making sure to choose a different directory for the file (this sample uses the parent directory; it can be anywhere but should not be the project directory). makeappx pack /m AppXManifest.xml /f ..\resources.map.txt /p ..\contoso_demo.appx /o You can type makeappx pack /?to see what each parameter does, but in a nutshell: /msets the Manifest file to use /fsets the mapping File to use (created in the previous step) /psets the output Package name /osets it to Overwrite the output file if it exists After the package is created, it must be signed. The easiest way to get a signing certificate is by creating an empty Universal Windows project in Visual Studio and copying the .pfxfile it creates, but you can create one manually using the MakeCertand Pvk2Pfxutilities as described in How to create an app package signing certificate. Important If you manually create a signing certificate, make sure you place the files in a different directory than your source project or your package source, otherwise it might get included as part of the package, including the private key! To sign the package, use the following command. Note that the Publisherspecified in the Identityelement of the AppxManifest.xmlmust match the Subjectof the certificate (this is not the <PublisherDisplayName>element, which is the localized display name to show to users). As usual, replace the contoso_demo...filenames with the names appropriate for your project, and (very important) make sure the .pfxfile is not in the current directory (otherwise it would have been created as part of your package, including the private signing key!): signtool sign /fd SHA256 /a /f ..\contoso_demo_key.pfx ..\contoso_demo.appx You can type signtool sign /?to see what each parameter does, but in a nutshell: /fdsets the File Digest algorithm (SHA256 is the default for .appx) /awill Automatically select the best certificate /fspecifies the input File that contains the signing certificate Finally, you can now double-click on the .appx file to install it, or if you prefer the command-line you can open a PowerShell prompt, change to the directory containing the package, and type the following (replacing contoso_demo.appx with your package name): add-appxpackage contoso_demo.appx If you receive errors about the certificate not being trusted, make sure it is added to the machine store (not the user store). To add the certificate to the machine store, you can either use the command-line or Windows Explorer. To use the command-line: Run a Visual Studio 2017 or Visual Studio 2019 command prompt as an Administrator. Switch to the directory that contains the .cerfile (remember to ensure this is outside of your source or project directories!) Type the following command, replacing contoso_demo.cerwith your filename: certutil -addstore TrustedPeople contoso_demo.cer You can run certutil -addstore /?to see what each parameter does, but in a nutshell: -addstoreadds a certificate to a certificate store TrustedPeopleindicates the store into which the certificate is placed To use Windows Explorer: - Navigate to the folder that contains the .pfxfile - Double-click on the .pfxfile and the Certicicate Import Wizard should appear - Choose Local Machineand click - Accept the User Account Control admin elevation prompt, if it appears, and click - Enter the password for the private key, if there is one, and click Place all certificates in the following store - Click Browse, and choose the Trusted Peoplefolder (not "Trusted Publishers") - Click Nextand then Finish After adding the certificate to the Trusted People store, try installing the package again. You should now see your app appear in the Start Menu's "All Apps" list, with the correct information from the .resw / .pri file. If you see a blank string or the string ms-resource:... then something has gone wrong - double check your edits and make sure they're correct. If you right-click on your app in the Start Menu, you can Pin it as a tile and verify the correct information is displayed there also. Step 1.3: Add more supported languages After the changes have been made to the package manifest and the initial resources.resw file has been created, adding additional languages is easy. Create additional localized resources First, create the additional localized resource values. Within the Strings folder, create additional folders for each language you support using the appropriate BCP-47 code (for example, Strings\de-DE). Within each of these folders, create a resources.resw file (using either an XML editor or the Visual Studio designer) that includes the translated resource values. It is assumed you already have the localized strings available somewhere, and you just need to copy them into the .resw file; this document does not cover the translation step itself. For example, the Strings\de-DE\resources.resw file might look like this, with the highlighted text changed from en-US: <?xml version="1.0" encoding="utf-8"?> <root> <data name="ApplicationDescription"> <value>Contoso Demo app with localized resources (German)</value> </data> <data name="ApplicationDisplayName"> <value>Contoso Demo Sample (German)</value> </data> <data name="PackageDisplayName"> <value>Contoso Demo Package (German)</value> </data> <data name="PublisherDisplayName"> <value>Contoso Samples, DE</value> </data> <data name="TileShortName"> <value>Contoso (DE)</value> </data> </root> The following steps assume you added resources for both de-DE and fr-FR, but the same pattern can be followed for any language. Update the package manifest to list supported languages The package manifest must be updated to list the languages supported by the app. The Desktop App Converter adds the default language, but the others must be added explicitly. If you're editing the AppxManifest.xml file directly, update the Resources node as follows, adding as many elements as you need, and substituting the appropriate languages you support and making sure the first entry in the list is the default (fallback) language. In this example, the default is English (US) with additional support for both German (Germany) and French (France): <Resources> <Resource Language="EN-US" /> <Resource Language="DE-DE" /> <Resource Language="FR-FR" /> </Resources> If you are using Visual Studio, you shouldn't need to do anything; if you look at Package.appxmanifest you should see the special x-generate value, which causes the build process to insert the languages it finds in your project (based on the folders named with BCP-47 codes). Note that this is not a valid value for a real package manifest; it only works for Visual Studio projects: <Resources> <Resource Language="x-generate" /> </Resources> Re-build with the localized values Now you can build and deploy your application, again, and if you change your language preference in Windows you should see the newly-localized values appear in the Start menu (instructions for how to change your language are below). For Visual Studio, again you can just use Ctrl+Shift+B to build, and right-click the project to Deploy. If you're manually building the project, follow the same steps as above but add the additional languages, separated by underscores, to the default qualifiers list ( /dq) when creating the configuration file. For example, to support the English, German, and French resources added in the previous step: makepri createconfig /cf ..\contoso_demo.xml /dq en-US_de-DE_fr-FR /pv 10.0 /o This will create a PRI file that contains all the specified languagesthat you can easily use for testing. If the total size of your resources is small, or you only support a small number of languages, this might be acceptable for your shipping app; it's only if you want the benefits of minimizing install / download size for your resources that you need to do the additional work of building separate language packs. Test with the localized values To test the new localized changes, you simply add a new preferred UI language to Windows. There is no need to download language packs, reboot the system, or have your entire Windows UI appear in a foreign language. - Run the Settingsapp ( Windows + I) - Go to Time & language - Go to Region & language - Click Add a language - Type (or select) the language you want (eg Deutschor German) - If there are sub-languages, choose the one you want (eg, Deutsch / Deutschland) - Select the new language in the language list - Click Set as default Now open the Start menu and search for your application, and you should see the localized values for the selected language (other apps might also appear localized). If you don't see the localized name right away, wait a few minutes until the Start Menu's cache is refreshed. To return to your native language, just make it the default language in the language list. Step 1.4: Localizing more parts of the package manifest (optional) Other sections of the package manifest can be localized. For example, if your application handles file-extensions then it should have a windows.fileTypeAssociation extension in the manifest, using the green highlighted text exactly as shown (since it will refer to resources), and replacing the yellow highlighted text with information specific to your application: <Extensions> <uap:Extension <uap:FileTypeAssociation <uap:DisplayName>ms-resource:Resources/FileTypeDisplayName</uap:DisplayName> <uap:Logo>Assets\StoreLogo.png</uap:Logo> <uap:InfoTip>ms-resource:Resources/FileTypeInfoTip</uap:InfoTip> <uap:SupportedFileTypes> <uap:FileType.contoso</uap:FileType> </uap:SupportedFileTypes> </uap:FileTypeAssociation> </uap:Extension> </Extensions> You can also add this information using the Visual Studio Manifest Designer, using the Declarations tab, taking note of the highlighted values: Now add the corresponding resource names to each of your .resw files, replacing the highlighted text with the appropriate text for your app (remember to do this for each supported language!): ... existing content... <data name="FileTypeDisplayName"> <value>Contoso Demo File</value> </data> <data name="FileTypeInfoTip"> <value>Files used by Contoso Demo App</value> </data> This will then show up in parts of the Windows shell, such as File Explorer: Build and test the package as before, exercising any new scenarios that should show the new UI strings. Phase 2: Use MRT to identify and locate resources The previous section showed how to use MRT to localize your app's manifest file so that the Windows Shell can correctly display the app's name and other metadata. No code changes were required for this; it simply required the use of .resw files and some additional tools. This section will show how to use MRT to locate resources in your existing resource formats and using your existing resource-handling code with minimal changes. Assumptions about existing file layout & application code Because there are many ways to localize Win32 Desktop apps, this paper will make some simplifying assumptions about the existing application's structure that you will need to map to your specific environment. You might need to make some changes to your existing codebase or resource layout to conform to the requirements of MRT, and those are largely out of scope for this document. Resource file layout This article assumes your localized resources all have the same filenames (eg, contoso_demo.exe.mui or contoso_strings.dll or contoso.strings.xml) but that they are placed in different folders with BCP-47 names ( en-US, de-DE, etc.). It doesn't matter how many resource files you have, what their names are, what their file-formats / associated APIs are, etc. The only thing that matters is that every logical resource has the same filename (but placed in a different physical directory). As a counter-example, if your application uses a flat file-structure with a single Resources directory containing the files english_strings.dll and french_strings.dll, it would not map well to MRT. A better structure would be a Resources directory with subdirectories and files en\strings.dll and fr\strings.dll. It's also possible to use the same base filename but with embedded qualifiers, such as strings.lang-en.dll and strings.lang-fr.dll, but using directories with the language codes is conceptually simpler so it's what we'll focus on. Note It is still possible to use MRT and the benefits of packaging even if you can't follow this file naming convention; it just requires more work. For example, the application might have a set of custom UI commands (used for button labels etc.) in a simple text file named ui.txt, laid out under a UICommands folder: + ProjectRoot |--+ Strings | |--+ en-US | | \--- resources.resw | \--+ de-DE | \--- resources.resw |--+ UICommands | |--+ en-US | | \--- ui.txt | \--+ de-DE | \--- ui.txt |--- AppxManifest.xml |--- ...rest of project... Resource loading code This article assumes that at some point in your code you want to locate the file that contains a localized resource, load it, and then use it. The APIs used to load the resources, the APIs used to extract the resources, etc. are not important. In pseudocode, there are basically three steps: set userLanguage = GetUsersPreferredLanguage() set resourceFile = FindResourceFileForLanguage(MY_RESOURCE_NAME, userLanguage) set resource = LoadResource(resourceFile) // now use 'resource' however you want MRT only requires changing the first two steps in this process - how you determine the best candidate resources and how you locate them. It doesn't require you to change how you load or use the resources (although it provides facilities for doing that if you want to take advantage of them). For example, the application might use the Win32 API GetUserPreferredUILanguages, the CRT function sprintf, and the Win32 API CreateFile to replace the three pseudocode functions above, then manually parse the text file looking for name=value pairs. (The details are not important; this is merely to illustrate that MRT has no impact on the techniques used to handle resources once they have been located). Step 2.1: Code changes to use MRT to locate files Switching your code to use MRT for locating resources is not difficult. It requires using a handful of WinRT types and a few lines of code. The main types that you will use are as follows: - ResourceContext, which encapsulates the currently active set of qualifier values (language, scale factor, etc.) - ResourceManager (the WinRT version, not the .NET version), which enables access to all the resources from the PRI file - ResourceMap, which represents a specific subset of the resources in the PRI file (in this example, the file-based resources vs. the string resources) - NamedResource, which represents a logical resource and all its possible candidates - ResourceCandidate, which represents a single concrete candidate resource In pseudo-code, the way you would resolve a given resource file name (like UICommands\ui.txt in the sample above) is as follows: // Get the ResourceContext that applies to this app set resourceContext = ResourceContext.GetForViewIndependentUse() // Get the current ResourceManager (there's one per app) set resourceManager = ResourceManager.Current // Get the "Files" ResourceMap from the ResourceManager set fileResources = resourceManager.MainResourceMap.GetSubtree("Files") // Find the NamedResource with the logical filename we're looking for, // by indexing into the ResourceMap set desiredResource = fileResources["UICommands\ui.txt"] // Get the ResourceCandidate that best matches our ResourceContext set bestCandidate = desiredResource.Resolve(resourceContext) // Get the string value (the filename) from the ResourceCandidate set absoluteFileName = bestCandidate.ValueAsString Note in particular that the code does not request a specific language folder - like UICommands\en-US\ui.txt - even though that is how the files exist on-disk. Instead, it asks for the logical filename UICommands\ui.txt and relies on MRT to find the appropriate on-disk file in one of the language directories. From here, the sample app could continue to use CreateFile to load the absoluteFileName and parse the name=value pairs just as before; none of that logic needs to change in the app. If you are writing in C# or C++/CX, the actual code is not much more complicated than this (and in fact many of the intermediate variables can be elided) - see the section on Loading .NET resources, below. C++/WRL-based applications will be more complex due to the low-level COM-based APIs used to activate and call the WinRT APIs, but the fundamental steps you take are the same - see the section on Loading Win32 MUI resources, below. Because .NET has a built-in mechanism for locating and loading resources (known as "Satellite Assemblies"), there is no explicit code to replace as in the synthetic example above - in .NET you just need your resource DLLs in the appropriate directories and they are automatically located for you. When an app is packaged as an MSIX or .appx using resource packs, the directory structure is somewhat different - rather than having the resource directories be subdirectories of the main application directory, they are peers of it (or not present at all if the user doesn't have the language listed in their preferences). For example, imagine a .NET application with the following layout, where all the files exist under the MainApp folder: + MainApp |--+ en-us | \--- MainApp.resources.dll |--+ de-de | \--- MainApp.resources.dll |--+ fr-fr | \--- MainApp.resources.dll \--- MainApp.exe After conversion to .appx, the layout will look something like this, assuming en-US was the default language and the user has both German and French listed in their language list: + WindowsAppsRoot |--+ MainApp_neutral | |--+ en-us | | \--- MainApp.resources.dll | \--- MainApp.exe |--+ MainApp_neutral_resources.language_de | \--+ de-de | \--- MainApp.resources.dll \--+ MainApp_neutral_resources.language_fr \--+ fr-fr \--- MainApp.resources.dll Because the localized resources no longer exist in sub-directories underneath the main executable's install location, the built-in .NET resource resolution fails. Luckily, .NET has a well-defined mechanism for handling failed assembly load attempts - the AssemblyResolve event. A .NET app using MRT must register for this event and provide the missing assembly for the .NET resource subsystem. A concise example of how to use the WinRT APIs to locate satellite assemblies used by .NET is as follows; the code as-presented is intentionally compressed to show a minimal implementation, although you can see it maps closely to the pseudo-code above, with the passed-in ResolveEventArgs providing the name of the assembly we need to locate. A runnable version of this code (with detailed comments and error-handling) can be found in the file PriResourceRsolver.cs in the .NET Assembly Resolver sample on GitHub. static class PriResourceResolver { internal static Assembly ResolveResourceDll(object sender, ResolveEventArgs args) { var fullAssemblyName = new AssemblyName(args.Name); var fileName = string.Format(@"{0}.dll", fullAssemblyName.Name); var resourceContext = ResourceContext.GetForViewIndependentUse(); resourceContext.Languages = new[] { fullAssemblyName.CultureName }; var resource = ResourceManager.Current.MainResourceMap.GetSubtree("Files")[fileName]; // Note use of 'UnsafeLoadFrom' - this is required for apps installed with .appx, but // in general is discouraged. The full sample provides a safer wrapper of this method return Assembly.UnsafeLoadFrom(resource.Resolve(resourceContext).ValueAsString); } } Given the class above, you would add the following somewhere early-on in your application's startup code (before any localized resources would need to load): void EnableMrtResourceLookup() { AppDomain.CurrentDomain.AssemblyResolve += PriResourceResolver.ResolveResourceDll; } The .NET runtime will raise the AssemblyResolve event whenever it can't find the resource DLLs, at which point the provided event handler will locate the desired file via MRT and return the assembly. Note If your app already has an AssemblyResolve handler for other purposes, you will need to integrate the resource-resolving code with your existing code. Loading Win32 MUI resources is essentially the same as loading .NET Satellite Assemblies, but using either C++/CX or C++/WRL code instead. Using C++/CX allows for much simpler code that closely matches the C# code above, but it uses C++ language extensions, compiler switches, and additional runtime overheard you might wish to avoid. If that is the case, using C++/WRL provides a much lower-impact solution at the cost of more verbose code. Nevertheless, if you are familiar with ATL programming (or COM in general) then WRL should feel familiar. The following sample function shows how to use C++/WRL to load a specific resource DLL and return an HINSTANCE that can be used to load further resources using the usual Win32 resource APIs. Note that unlike the C# sample that explicitly initializes the ResourceContext with the language requested by the .NET runtime, this code relies on the user's current language. #include <roapi.h> #include <wrl\client.h> #include <wrl\wrappers\corewrappers.h> #include <Windows.ApplicationModel.resources.core.h> #include <Windows.Foundation.h> #define IF_FAIL_RETURN(hr) if (FAILED((hr))) return hr; HRESULT GetMrtResourceHandle(LPCWSTR resourceFilePath, HINSTANCE* resourceHandle) { using namespace Microsoft::WRL; using namespace Microsoft::WRL::Wrappers; using namespace ABI::Windows::ApplicationModel::Resources::Core; using namespace ABI::Windows::Foundation; *resourceHandle = nullptr; HRESULT hr{ S_OK }; RoInitializeWrapper roInit{ RO_INIT_SINGLETHREADED }; IF_FAIL_RETURN(roInit); // Get Windows.ApplicationModel.Resources.Core.ResourceManager statics ComPtr<IResourceManagerStatics> resourceManagerStatics; IF_FAIL_RETURN(GetActivationFactory( HStringReference( RuntimeClass_Windows_ApplicationModel_Resources_Core_ResourceManager).Get(), &resourceManagerStatics)); // Get .Current property ComPtr<IResourceManager> resourceManager; IF_FAIL_RETURN(resourceManagerStatics->get_Current(&resourceManager)); // get .MainResourceMap property ComPtr<IResourceMap> resourceMap; IF_FAIL_RETURN(resourceManager->get_MainResourceMap(&resourceMap)); // Call .GetValue with supplied filename ComPtr<IResourceCandidate> resourceCandidate; IF_FAIL_RETURN(resourceMap->GetValue(HStringReference(resourceFilePath).Get(), &resourceCandidate)); // Get .ValueAsString property HString resolvedResourceFilePath; IF_FAIL_RETURN(resourceCandidate->get_ValueAsString( resolvedResourceFilePath.GetAddressOf())); // Finally, load the DLL and return the hInst. *resourceHandle = LoadLibraryEx(resolvedResourceFilePath.GetRawBuffer(nullptr), nullptr, LOAD_LIBRARY_AS_DATAFILE | LOAD_LIBRARY_AS_IMAGE_RESOURCE); return S_OK; } Phase 3: Building resource packs Now that you have a "fat pack" that contains all resources, there are two paths towards building separate main package and resource packages in order to minimize download and install sizes: - Take an existing fat pack and run it through the Bundle Generator tool to automatically create resource packs. This is the preferred approach if you have a build system that already produces a fat pack and you want to post-process it to generate the resource packs. - Directly produce the individual resource packages and build them into a bundle. This is the preferred approach if you have more control over your build system and can build the packages directly. Step 3.1: Creating the bundle Using the Bundle Generator tool In order to use the Bundle Generator tool, the PRI config file created for the package needs to be manually updated to remove the <packaging> section. If you're using Visual Studio, refer to Ensure that resources are installed on a device regardless of whether a device requires them for information on how to build all languages into the main package by creating the files priconfig.packaging.xml and priconfig.default.xml. If you're manually editing files, follow these steps: Create the config file the same way as before, substituting the correct path, file name and languages: makepri createconfig /cf ..\contoso_demo.xml /dq en-US_de-DE_es-MX /pv 10.0 /o Manually open the created .xmlfile and delete the entire <packaging&rt;section (but keep everything else intact): <?xml version="1.0" encoding="UTF-8" standalone="yes" ?> <resources targetOsVersion="10.0.0" majorVersion="1"> <!-- Packaging section has been deleted... --> <index root="\" startIndexAt="\"> <default> ... ... Build the .prifile and the .appxpackage as before, using the updated configuration file and the appropriate directory and file names (see above for more information on these commands): makepri new /pr . /cf ..\contoso_demo.xml /of ..\resources.pri /mf AppX /o makeappx pack /m AppXManifest.xml /f ..\resources.map.txt /p ..\contoso_demo.appx /o AFter the package has been created, use the following command to create the bundle, using the appropriate directory and file names: BundleGenerator.exe -Package ..\contoso_demo.appx -Destination ..\bundle -BundleName contoso_demo Now you can move to the final step, signing (see below). Manually creating resource packages Manually creating resource packages requires running a slightly different set of commands to build separate .pri and .appx files - these are all similar to the commands used above to create fat packages, so minimal explanation is given. Note: All the commands assume that the current directory is the directory containing the AppXManifest.xml file, but all files are placed into the parent directory (you can use a different directory, if necessary, but you shouldn't pollute the project directory with any of these files). As always, replace the "Contoso" filenames with your own file names. Use the following command to create a config file that names only the default language as the default qualifier - in this case, en-US: makepri createconfig /cf ..\contoso_demo.xml /dq en-US /pv 10.0 /o Create a default .priand .map.txtfile for the main package, plus an additional set of files for each language found in your project, with the following command: makepri new /pr . /cf ..\contoso_demo.xml /of ..\resources.pri /mf AppX /o Use the following command to create the main package (which contains the executable code and default language resources). As always, change the name as you see fit, although you should put the package in a separate directory to make creating the bundle easier later (this example uses the ..\bundledirectory): makeappx pack /m .\AppXManifest.xml /f ..\resources.map.txt /p ..\bundle\contoso_demo.main.appx /o After the main package has been created, use the following command once for each additional language (ie, repeat this command for each language map file generated in the previous step). Again, the output should be in a separate directory (the same one as the main package). Note the language is specified both in the /foption and the /poption, and the use of the new /rargument (which indicates a Resource Package is desired): makeappx pack /r /m .\AppXManifest.xml /f ..\resources.language-de.map.txt /p ..\bundle\contoso_demo.de.appx /o Combine all the packages from the bundle directory into a single .appxbundlefile. The new /doption specifies the directory to use for all the files in the bundle (this is why the .appxfiles are put into a separate directory in the previous step): makeappx bundle /d ..\bundle /p ..\contoso_demo.appxbundle /o The final step to building the package is signing. Step 3.2: Signing the bundle Once you have created the .appxbundle file (either through the Bundle Generator tool or manually) you will have a single file that contains the main package plus all the resource packages. The final step is to sign the file so that Windows will install it: signtool sign /fd SHA256 /a /f ..\contoso_demo_key.pfx ..\contoso_demo.appxbundle This will produce a signed .appxbundle file that contains the main package plus all the language-specific resource packages. It can be double-clicked just like a package file to install the app plus any appropriate language(s) based on the user's Windows language preferences.
https://docs.microsoft.com/en-us/windows/uwp/app-resources/using-mrt-for-converted-desktop-apps-and-games
2020-07-02T15:49:02
CC-MAIN-2020-29
1593655879532.0
[array(['images/conceptual-resource-model.png', None], dtype=object) array(['images/editing-resources-resw.png', None], dtype=object) array(['images/editing-application-info.png', None], dtype=object) array(['images/editing-packaging-info.png', None], dtype=object) array(['images/editing-declarations-info.png', None], dtype=object) array(['images/file-type-tool-tip.png', None], dtype=object)]
docs.microsoft.com
Using Existing Libraries Similar to a uses clause in Delphi or a using statement in C#, Remoting SDK libraries can use existing services. By using an existing library in your project, you can use and extend all its elements. To use an existing library, select Edit|Include Existing RODL. In the resulting dialog, browse to or enter the file path of the desired RODL file. The Service Builder will then add a new item to the Library Panel to indicate that you are using the Library. After choosing the RODL file to use, you will see this item listed in the Library tree with all the elements provided by this library listed underneath it: From here, you can change the name, the file name and the documentation for the use of the RODL file. Note that all elements defined in the RODL used are listed in the Library pane in gray and are in read-only mode; you can select them to view their details, but you cannot modify them.
https://docs.remotingsdk.com/Tools/ServiceBuilder/UsingExistingLibraries/
2020-07-02T15:17:10
CC-MAIN-2020-29
1593655879532.0
[array(['../../../Tools/ServiceBuilder/Using_Existing_Libraries.png', None], dtype=object) ]
docs.remotingsdk.com
TOPICS× Reporting Second Party and Third Party Data Usage in Audience Marketplace at the Segment Level This video shows a new method of reporting data usage within the Payables section of the Audience Marketplace UI. In addition to the existing process of feed-level reporting, monthly impressions can now be submitted at the segment level, which eliminates the need for offline calculations for cost attribution. This release will provide more flexibility and an improved workflow for second party and third party data usage that allows customers to report on impressions at the segment level via the UI or using bulk upload functionality. Additionally, customers purchasing second party or third party data from Audience Marketplace will benefit from an improved cost attribution policy. This new policy will attribute costs to data providers based on the unique user counts of traits in a rule-based segment, resulting in more transparency and equitable usage billing. More details on the billing algorithm can be found in the documentation . For more information about reporting CPM usage, please visit the documentation .
https://docs.adobe.com/content/help/en/audience-manager-learn/tutorials/audience-marketplace/buying-data/reporting-2nd-and-3rd-party-data-usage-in-the-audience-marketplace-at-the-segment-level.html
2020-07-02T15:50:40
CC-MAIN-2020-29
1593655879532.0
[]
docs.adobe.com
8.5.201.00 Workforce Management Data Aggregator Release Notes Helpful Links Releases Info Product Documentation Genesys Products What's New This release includes the following new features and enhancements: - The WFM ETL Database now stores Agent Hourly Wage and Rank information in its WFM_AGENT Dimension table. (WFM-23895) - Added support for virtual platform ESXi 5.5. Resolved Issues This release contains the following resolved issues: Agents now correctly display as non-adherent to predefined Vacation schedule states when that state is included in a Schedule State Group, other than the general Time-Off Fixed state. User configured Time-Off type adherence works as expected. (WFM-23249) Upgrade Notes No special procedure is required to upgrade to release 8.5.201.00. This page was last edited on April 27, 2016, at 01:27. Feedback Comment on this article:
https://docs.genesys.com/Documentation/RN/8.5.x/wm-da85rn/wm-da8520100
2020-07-02T14:29:00
CC-MAIN-2020-29
1593655879532.0
[]
docs.genesys.com
Windows 7 refreshed media creation The). To install most of the other patches in between April 2016 and October 2016 we will include the following updates: - KB3020369 (April 2015 Servicing Stack Update) - KB3125574 (April 2016 Convenience Update Rollup) - KB3177467 (September 2016 SSU) - KB3172605 (July 2016 Functional Update Rollup, 7C* package) - KB3179573 (August 2016 FUR, 8C* package) - KB2841134 (Internet Explorer 11, Optional) - KB3185330 (October 2016 Monthly Quality Rollup, 10B' package [contains September 2016 FUR, 9C* package]) *3rd Tuesday package of that month. Download all the packages from the Microsoft Update Catalog (now updated to work on all browsers) to a folder: Expand the .MSU files to extract the .CAB file which will be used with DISM: Command used: expand -f:*Windows*.cab C:\files\Window7MediaRefresh\*.msu C:\files\Window7MediaRefresh\CABs (thank you abbodi1406) Afterwards we should have the following: For this guide we will use the install.wim file from a Windows 7 Enterprise x64 SP1 Media: Mount the image as described in the offline servicing article below: Add or Remove Packages Offline Using DISM Dism /Mount-Image /ImageFile:C:\test\images\install.wim /Name:"Windows 7 ENTERPRISE" /MountDir:C:\test\offline And now we can start adding the first package, the KB3020369 SSU: Dism /Image:C:\test\offline /Add-Package /PackagePath:C:\files\Window7MediaRefresh\CABs\Windows6.1-KB3020369-x64.cab We can check the installation state using the command below: Dism /Image:C:\test\offline /Get-Packages It is in a "Install Pending" state: Package Identity : Package_for_KB3020369~31bf3856ad364e35~amd64~~6.1.1.1 State : Install Pending Release Type : Update Install Time : 11/4/2016 9:11 AM The next update to install is the Convenience Update Rollup KB3125574: Dism /Image:C:\test\offline /Add-Package /PackagePath:C:\files\Window7MediaRefresh\CABs\Windows6.1-KB3125574-v4-x64.cab Package Identity : Package_for_KB3125574~31bf3856ad364e35~amd64~~6.1.4.4 State : Install Pending Release Type : Update Install Time : 11/4/2016 10:33 AM We cannot continue to install updates offline, because of the DISM limitation below:. Commit the changes with the command below to seal the image: Dism /Unmount-Image /MountDir:C:\test\offline /Commit Now we need to install a Windows 7 VM/PC with this image to finish the outstanding servicing actions and install the other updates. Do this either by hand or using your favorite deployment tool. After the image is installed we should see the following (SP1 + KB3020369 + KB3125574 only): C:\Windows\system32>Dism /Online /Get-Packages Deployment Image Servicing and Management tool Version: 6.1.7600.16385 Image Version: 6.1.7601.23403 Packages listing: Package Identity : Package_for_KB3020369~31bf3856ad364e35~amd64~~6.1.1.1 State : Installed Release Type : Update Install Time : 11/4/2016 12:29 AM Package Identity : Package_for_KB3125574~31bf3856ad364e35~amd64~~6.1.4.4 State : Installed Release Type : Update Install Time : 11/4/2016 12:29 AM Package Identity : Package_for_KB976902~31bf3856ad364e35~amd64~~6.1.1.17514 State : Installed Release Type : Update Install Time : 11/21/2010 3:01 AM Continue to install updates 3-5 using the DISM commands below: Step 6, which is installing IE11, is optional, in case you have business critical applications that are dependent on an older Internet Explorer version. The security-only, monthly rollup, and preview rollup will not install or upgrade to these versions of Internet Explorer if they are not already present, so we will install IE11 next, before we apply the monthly rollup. This will be the IE version installed, if we do not upgrade it: The last update to install is KB3185330, at the moment of writing this guide, in the future just replace it with the latest monthly/preview rollup, found on the page below: Windows 7 SP1 and Windows Server 2008 R2 SP1 update history Dism /Online /Add-Package /PackagePath:C:\temp\Windows6.1-KB3185330-x64.cab Now we can search online for the remainder of the updates or just capture the image after generalizing it, if we want to finish as fast as possible. Online search results after the above updates (34/39 optional updates are language packs): TL;DR version: Download the updates: Extract CAB files from MSU: expand -f:* C:\files\Window7MediaRefresh\AMD64-all-windows6.1-kb3020369-x64_5393066469758e619f21731fc31ff2d109595445.msu C:\files\Window7MediaRefresh\CABs expand -f:* C:\files\Window7MediaRefresh\AMD64-all-windows6.1-kb3125574-v4-x64_2dafb1d203c8964239af3048b5dd4b1264cd93b9.msu C:\files\Window7MediaRefresh\CABs expand -f:* C:\files\Window7MediaRefresh\AMD64-all-windows6.1-kb3177467-x64_42467e48b4cfeb44112d819f50b0557d4f9bbb2f.msu C:\files\Window7MediaRefresh\CABs expand -f:* C:\files\Window7MediaRefresh\AMD64-all-windows6.1-kb3172605-x64_2bb9bc55f347eee34b1454b50c436eb6fd9301fc.msu C:\files\Window7MediaRefresh\CABs expand -f:* C:\files\Window7MediaRefresh\AMD64-all-windows6.1-kb3179573-x64_0ec541490b3f7b02e41f26cb2c444cbd9e13df4d.msu C:\files\Window7MediaRefresh\CABs expand -f:* C:\files\Window7MediaRefresh\AMD64-all-windows6.1-kb3185330-x64_8738d0ef3718b8b05659454cff898e8c4f0433d7.msu C:\files\Window7MediaRefresh\CABs Mount the image: Dism /Mount-Image /ImageFile:C:\test\images\install.wim /Name:"Windows 7 ENTERPRISE" /MountDir:C:\test\offline Install the first two updates offline: Dism /Image:C:\test\offline /Add-Package /PackagePath:C:\files\Window7MediaRefresh\CABs\Windows6.1-KB3020369-x64.cab Dism /Image:C:\test\offline /Add-Package /PackagePath:C:\files\Window7MediaRefresh\CABs\Windows6.1-KB3125574-v4-x64.cab Unmount and commit the changes to the install.wim file: Dism /Unmount-Image /MountDir:C:\test\offline /Commit Install a reference machine with the above install.wim file and continue to install the updates: Dism /Online /Add-Package /PackagePath:C:\temp\Windows6.1-KB3185330-x64.cab Search online to install the remainder of the updates. Sysprep the newly patched machine and capture an updated wim file. If you have any questions or feedback, please use the comment section. Thank you and see you next time! Andrei Stoica, Windows Deployment Engineer
https://docs.microsoft.com/en-us/archive/blogs/astoica/windows-7-refreshed-media-creation
2020-07-02T16:57:21
CC-MAIN-2020-29
1593655879532.0
[]
docs.microsoft.com
The Dark Side of Virtualization Over the years I've been engaged in several AD disaster recovery scenarios where things ultimately boiled down to the same root cause; a single point of failure had been introduced into the IT environment. When the single point of failure failed catastrophically - it consequently took down the entire environment with it. With good backups that can be restored to recover this may not be an End of Days scenario - but as the 3rd principle of Murphy's Law dicates chances are the backups available are either unusable, unrestorable or non-existent when you actually need them (in the same sense that they will always work when you don't need them). Now... in most virtualization scenarios the admin responsible for the virtual server is typically completely removed from the storage layer - this has been a conscious push by most of the virtualization providers as part of the drive towards virtualization being intended to simplify IT environments by making the storage medium unimportant. In itself that may be a valid selling point - but the thing is that even if the Admin is removed from the storage medium the virtual hard disk of the virtual server still needs to be physically stored somewhere. Even the Cloud has mechanical moving parts... For large hosting providers this "somewhere" is typically a centralized SAN with redundant gizmos, thingamagicks and bells and whistles. SAN's are tried and tested storage devices that have been around for years before the idea of using them to store virtual machine images was ever conceived - but with today's storage capacity by far outweighing today's backup or restore capability and the cost of a decent SAN with full redundancy being relatively high it becomes very tempting to build SAN's that are large enough to hold the entire mass of virtual machines you are hosting to save money and increase ROI from that SAN. Consider the following hypothetical but all too likely disaster recovery scenario in today's SAN-based virtualization environments: - You store 2000 virtual machines on the same SAN. - The SAN fails catastrophically Even at best, with a perfect backup strategy in place, a bulletproof Disaster Recovery plan and a small army of trained ninjas that spring from the shadows and start Disaster Recovery procedures at the very instant that the failure has been detected and quantified.... you're still looking at a lengthy recovery process. If you're missing one of these...you're looking at an even longer process. Morale: Every Cloud has a Silver lining - even Private Clouds :)
https://docs.microsoft.com/en-us/archive/blogs/instan/the-dark-side-of-virtualization
2020-07-02T15:50:28
CC-MAIN-2020-29
1593655879532.0
[]
docs.microsoft.com
Use New Relic Infrastructure's Host not reporting alert condition to notify you when we have stopped receiving data from an Infrastructure agent. This Infrastructure feature allows you to dynamically alert on groups of hosts, configure the time window from five to 60 minutes, and take full advantage of New Relic Alerts. Anyone can view alerts tied to your account. Only Owner, Admins, or Add-on Managers can create, modify, or delete conditions. Features You can define conditions based on the sets of hosts most important to you, and configure thresholds appropriate for each filter set. The Host not reporting event triggers when data from the Infrastructure agent does not reach our collector within the time frame you specify. This feature's flexibility allows you to easily customize what to monitor and when to notify selected individuals or teams. In addition, the email notification includes links to help you quickly troubleshoot the situation. Create "host not reporting" condition To define the Host not reporting alert criteria: - Follow standard procedures to create an Infrastructure alert condition. - Select Host not reporting as the Alert type. - Define the Critical threshold for triggering the alert notification: minimum 5 minutes, maximum 60 minutes. - Enable 'Don't trigger alerts for hosts that perform a clean shutdown' option, if you want to prevent false alerts when you have hosts set to shut down via command line. Currently this feature is supported on all Windows systems and Linux systems using systemd. Depending on the alert policy's incident preferences, the policy defines which notification channels we use when the defined Critical threshold for the alert condition passes. To avoid "false positives," the host must stop reporting for the entire time period before a violation is opened. Example: You create a condition to open a violation when any of the filtered set of hosts stop reporting data for seven minutes. - If any host stops reporting for five minutes, then resumes reporting, the condition does not open a violation. - If any host stops reporting for seven minutes, even if the others are fine, the condition does open a violation. Investigate the problem To further investigate why a host is not reporting data: - Review the details in the alert email notification. - Use the link from the email notification to monitor ongoing changes in your environment from Infrastructure's Events page. For example, use the Events page to help determine if a host disconnected right after a root user made a configuration change to the host. - Optional: Use the email notification's Acknowledge link to verify you are aware of and taking ownership of the alerting incident. - Use the email links to examine additional details in the Incident details page in Alerts. Intentional outages We can distinguish between unexpected situations and planned situations with the option Don't trigger alerts for hosts that perform a clean shutdown. Use this option for situations such as: - Host has been taken offline intentionally. - Host has planned downtime for maintenance. - Host has been shut down or decommissioned. - Autoscaling hosts or shutting down instances in a cloud console. We rely on Linux and Windows shutdown signals to flag a clean shutdown. We confirmed that these scenarios are detected by the agent: - AWS Auto-scaling event with EC2 instances that use systemd (Amazon Linux, CentOs/RedHat 7 and newer, Ubuntu 16 and newer, Suse 12 and newer, Debian 9 and newer) - User-initiated shutdown of Windows systems - User-initiated shutdown of Linux systems that use systemd (Amazon Linux, CentOs/RedHat 7 and newer, Ubuntu 16 and newer, Suse 12 and newer, Debian 9 and newer) We know that these scenarios are not detected by the agent: - User-initiated shutdown of Linux systems that don't use systemd (CentOs/RedHat 6 and earlier, Ubuntu 14, Debian 8). This includes other modern Linux systems that still use Upstart or SysV init systems. - AWS Auto-scaling event with EC2 instances that don't use systemd (CentOs/RedHat 6 and earlier, Ubuntu 14, Debian 8). This includes other more modern Linux systems that still use Upstart or SysV init systems.
https://docs.newrelic.com/docs/infrastructure/new-relic-infrastructure/infrastructure-alert-conditions/create-infrastructure-host-not-reporting-condition
2020-07-02T17:14:25
CC-MAIN-2020-29
1593655879532.0
[]
docs.newrelic.com
New Relic Diagnostics is a utility that automatically detects common problems with New Relic agents. If Diagnostics detects a problem, it suggests troubleshooting steps. New Relic Diagnostics can also automatically attach troubleshooting data to a New Relic Support ticket. For additional troubleshooting steps for your agent, see Not seeing data. Compatibility New Relic Diagnostics is available for Linux, macOS, and Windows. It can detect common configuration issues for: New Relic APM: Available for all APM agents except C SDK. For the Go agent, only basic connectivity checks are available. - New Relic Browser: Browser agent detection - New Relic Infrastructure: Linux and Windows agents Diagnostics does not require superuser permissions to run, although New Relic does recommend those permissions for some checks. It will return an error if it does not have permissions to read the files it scans. Run New Relic Diagnostics To use New Relic Diagnostics: - Review the release notes, to make sure you have the latest version. - Download the latest version, which contains executable files for Linux, macOS, and Windows. - Move the executable for your platform into the location of your application's root directory. - Recommendation: Temporarily raise the logging level for the New Relic agent for more accurate troubleshooting. Note that changing the logging level requires you to restart your application. - Run the executable. - Recommendation: Run with a task suite (CLI option) to scope your troubleshooting. New Relic Diagnostics automatically searches its root directory and subdirectories for agent configuration files and other relevant data. To run Diagnostics, follow the procedures for your platform: - Linux Ensure you have New Relic Diagnostics: From the command line, change the directory to your application's root directory and ensure that the nrdiag.zipfile is present. OR Manually download the latest version. - Unzip nrdiag.zipif necessary. - From the nrdiag_1.2.3/linuxdirectory, move nrdiaginto the application's root directory. Run nrdiag(along with any CLI options and/or a ticket attachment key): ./nrdiag CLI_OPTIONS New Relic Diagnostics outputs issues it discovers to your terminal. If it doesn't, then include an attachment key as a CLI option. This will add relevant files to your support ticket for troubleshooting. - macOS/macdirectory, move nrdiaginto the application's root directory. Run nrdiag(along with any CLI options, or a ticket attachment key): ./nrdiag CLI_OPTIONS New Relic Diagnostics outputs any issues it discovers, and uploads relevant files to your New Relic Support ticket if you include a ticket attachment key. - Windows/windirectory, move nrdiag.exeor nrdiag_x64.exeinto the application's root directory. - For troubleshooting web applications, ensure you are running the executable from your project's parent directory, or specify your config file location with the -coption. Run the executable (along with any CLI options or a ticket attachment key) from the directory you placed the binary. Since some checks require elevated permissions, for best results run from an Admin shell. Run via PowerShell if you add any CLI_OPTIONS: ./nrdiag.exe CLI_OPTIONS OR, for x64 systems: ./nrdiag_x64.exe CLI_OPTIONS New Relic Diagnostics outputs any issues it discovers, and it uploads relevant files to your New Relic Support ticket if you include a ticket attachment key. - New Relic Browser Ensure you have the latest version of New Relic Diagnostics. If necessary, manually download the latest version. - Unzip nrdiag.zipif necessary. - From the nrdiag_1.2.3/OS directory, run nrdiag(along with any CLI options or a ticket attachment key): ./nrdiag -browser-url WEBSITE_URL CLI_OPTIONS New Relic Diagnostics outputs any issues it discovers, and uploads relevant files to your New Relic Support ticket if you include a ticket attachment key. Suites flag (highly recommended CLI option) A suite is a collection of health checks that target specific products or issues. Using a suite can help narrow the scope of troubleshooting and reduce the occurrence of false positives. View suites To review a list of available suites, run New Relic Diagnostics with the -help suites option. ./nrdiag --help suites Run suites Suites are run by providing the -suites flag and one or more suite names (for example, java) to run as arguments. View or copy attachment key New Relic Support generates an attachment key for your support ticket, which is used with Diagnostics. To view or copy your attachment key: - Log in to your New Relic account at rpm.newrelic.com, then select Help > Get support. - Select View open tickets, then select the ticket. - Copy the NR Diagnostics attachment keythat appears at the top of the ticket. If you do not see the attachment key code, notify Support. Use this attachment key to upload your Diagnostics results to your support ticket. Upload results to a support ticket If your system is configured to not connect to external IP addresses, this method will not work. Instead, attach the output files in an email to New Relic Support. Automatic upload To upload your results automatically to a New Relic Support ticket when New Relic Diagnostics is executed, use the -attachment-key command line flag with your ticket's attachment key. Uploading your results to a support ticket will automatically upload the contents of nrdiag-output.zip. If you would like to inspect or modify the contents of this file before upload, follow the manual upload process. Pass command line options These command line options are available for New Relic Diagnostics: Interpret the output After executing New Relic Diagnostics from your terminal, you will see the results for each task as they are completed. Tasks that result in a Warning or Failure status code will log additional details regarding possible issues found during execution, along with troubleshooting suggestions and relevant links to documentation: File output New Relic Diagnostics outputs three files: Result status codes New Relic Diagnostics returns the following status codes after running: Licensing and security The use of New Relic Diagnostics is subject to this license agreement, as well as licensing agreements for open-source software used by New Relic Diagnostics. Like any other New Relic product or service, the Diagnostic service is designed to protect you and your customers' data privacy. New Relic Diagnostics inspects system information and New Relic product artifacts (logs, config files) that are relevant for performing diagnostic checks that assess New Relic product configuration and operability. By default, this data is not transmitted to New Relic. You do have the option to upload this information to a support ticket over HTTPS. - Data uploaded to support tickets Diagnostics does allow uploading of this information to a support ticket over HTTPS, with the use of a specific command-line argument. You will be prompted before collection of any files that we expect to have a likelihood of containing sensitive information. Before the collected files contained within nrdiag-output.zipare uploaded to New Relic, you will also be prompted. This allows you to review and edit any information that you do not want to provide. (For example, the nrdiag-output.zipwill include your user name.) You also have the option to cancel the upload altogether. - Data storage Any support ticket attachments made using Diagnostics or containing Diagnostics data that are less than or equal to 20MB are stored by Zendesk. Otherwise these attachments are stored by New Relic. For more information, see Zendesk's privacy and data protection policies. - Environment variables New Relic Diagnostics examines the following environment variables to perform diagnostic checks. The values of these variables are recorded locally in the nrdiag-output.jsonfile. - Any environment variable containing NEWRELICor NEW_RELIC - Any environment variable beginning with NRIA PATH RUBY_ENV RAILS_ENV APP_ENV RACK_ENV LOCALAPPDATA DOTNET_SDK_VERSION DOTNET_INSTALL_PATH COR_PROFILER COR_PROFILER_PATH COR_ENABLE_PROFILER CORECLR_ENABLE_PROFILING CORECLR_PROFILER CORECLR_PROFILER_PATH ProgramFiles ProgramData APPDATA JBOSS_HOME JAVA_VERSION_MAJOR JAVA_VERSION_MINOR JAVA_VERSION_BUILD JAVA_PACKAGE JAVA_JCE JAVA_HOME GLIBC_REPO GLIBC_VERSION LANG WORKDIR MINION_JAR DEFAULT_LOCALE_CFG_FILE MINION_PROVIDER MINION_USER MINION_GROUP MINION_LOG_LEVEL DOCKER_API_VERSION DOCKER_HOST MINION_API_ENDPOINT MINION_DOCKER_RUNNER_REGISTRY_ENDPOINT MINION_API_PROXY MINION_API_PROXY_SELF_SIGNED_CERT MINION_CHECK_TIMEOUT MINION_DOCKER_API_VERSION MINION_DOCKER_HOST MINION_DOCKER_RUNNER_APPARMOR MINION_JVM_MB MINION_JVM_OPTS For more information about New Relic's security measures, see our security and privacy documentation, or visit the New Relic security website.
https://docs.newrelic.com/docs/using-new-relic/cross-product-functions/troubleshooting/new-relic-diagnostics
2020-07-02T17:02:35
CC-MAIN-2020-29
1593655879532.0
[array(['https://docs.newrelic.com/sites/default/files/thumbnails/image/2020-06-03%2016.01.42.gif', 'New Relic Diagnostics - Ubuntu Linux New Relic Diagnostics - Ubuntu Linux'], dtype=object) array(['https://docs.newrelic.com/sites/default/files/styles/inline_660px/public/thumbnails/image/screen-nrdiag-output_0.png?itok=chbRj-vb', 'screen-nrdiag-output.png screen-nrdiag-output.png'], dtype=object) ]
docs.newrelic.com
This is an older version of Search Guard. Switch to Latest version Index Action Content Use index actions to store data in an Elasticsearch index. Basic Functionality A typical index action looks like this: { "actions": [ { "type": "index", "checks": [ { "type": "transform", "source": "['flight_num': data.source.FlightNum, 'dest': data.source.DestAirportID]" } ], "index": "testindex" } ] } Index actions write a complete snapshot of the current runtime data as one JSON document into the specified index. Therefore, as shown in the example above, index actions are typically accompanied by transforms which can explicitly define the data to be indexed using Painless scripts - or any other installed script engine. The script should return a map which will converted to a JSON document by the action. Specifying the Document ID Normally, documents will be indexed with an automatically generated ID. You can however also explicitly define the ID of the document by providing an additional attribute in the runtime data called _id. Indexing Multiple Documents If you want to index multiple documents by one action execution, you need to prepare the runtime data in a special way: Store the documents to be indexed in an array and store this array in an attribute called _doc at the top level of the runtime data. The following example stores two documents: { "actions": [ { "type": "index", "checks": [ { "type": "transform", "source": "['_doc': [ [ 'x': 1 ], [ 'x': 2 ] ] ]" } ], "index": "testindex" } ] } Authorization The index operation will be executed with the privileges the user had when creating or updating the watch. So, you must make sure to have all the privileges necessary to write to the respective indexes when creating or updating a watch. Advanced Attributes Further configuration attributes are: refresh: The Elasticsearch index refresh policy. One of false, true or wait_for. Optional; default is false. timeout: If the index operation does not complete in the specified time (in seconds), it will be aborted. Optional; default is 60 seconds.
https://docs.search-guard.com/7.x-40/elasticsearch-alerting-actions-index
2020-07-02T16:05:47
CC-MAIN-2020-29
1593655879532.0
[]
docs.search-guard.com
Getting out of a no boot situation after installing updates on Windows 7-2008R2 A.
https://docs.microsoft.com/en-us/archive/blogs/joscon/getting-out-of-a-no-boot-situation-after-installing-updates-on-windows-7-2008r2
2020-07-02T17:19:28
CC-MAIN-2020-29
1593655879532.0
[]
docs.microsoft.com
users. Let us study should be shown with the new created object..3<<
https://docs.mendix.com/studio7/page-editor-widgets-events-section
2020-07-02T16:52:32
CC-MAIN-2020-29
1593655879532.0
[array(['attachments/page-editor-widgets-events-section/events-section.png', None], dtype=object) array(['attachments/consistency-errors-pages/data-view-customer.png', 'Data View Expects the Customer Object'], dtype=object) array(['attachments/page-editor-widgets-events-section/create-object-example.png', None], dtype=object) array(['attachments/page-editor-widgets-events-section/open-link-action.png', None], dtype=object) array(['attachments/page-editor-widgets-events-section/list-view-delete.png', None], dtype=object) array(['attachments/page-editor-widgets-events-section/data-view-delete.png', None], dtype=object) ]
docs.mendix.com
Segovia Wallets Funds that are available to be paid to recipients are tracked in Segovia wallets. This section describes how a wallet works. Available vs. Current Balance There are two values describing the amount of money in a wallet. The current balance is the amount of money that has not yet been sent to recipients or paid as fees. The available balance is the amount of money that is available to send to recipients. When a payment request arrives and there is enough money in the wallet to cover it, its amount (plus fees) is deducted from the available balance. If the payment succeeds, the amount is deducted from the current balance. If the payment fails, the amount is added back to the available balance. Generally, when there are no payments in the process of being sent, the available and current balance should be the same, but they may differ while payments are in flight, including in cases where a payment provider issue causes the status of a payment request to be unknown for some period of time. Minimum Balance The minimum balance controls whether or not new payment requests are accepted. If the available balance is less than the minimum balance, no new payments may be initiated. If the available balance is at or above the minimum, a payment may cause it to drop below the minimum, but see the next section. Negative Balance Limit For some customers, a wallet may be configured with a negative balance limit. This is the absolute minimum a wallet balance is allowed to reach. A payment that would cause the available balance to drop below this limit will be refused even if the available balance is greater than the minimum balance. In most cases this limit will be zero, that is, negative balances aren't generally permitted. Example Suppose a wallet starts out with - Available balance = $9000 - Current balance = $9000 - Minimum balance = $1000 - Negative balance limit = -$2000 The client requests a batch of payments totaling $7000 including fees. When the request is received, the available balance is reduced: - Available balance = $2000 - Current balance = $9000 - Minimum balance = $1000 - Negative balance limit = -$2000 The payment succeeds: - Available balance = $2000 - Current balance = $2000 - Minimum balance = $1000 - Negative balance limit = -$2000 Now the client requests a payment that totals $5000 including fees. This would cause the available balance to drop below the negative balance limit, so the payment request is refused. Later, the client requests a payment that totals $3000 including fees. The available balance is greater than the minimum, so this is accepted: - Available balance = $-1000 - Current balance = $2000 - Minimum balance = $1000 - Negative balance limit = -$2000 While that payment is in process, the client requests another payment of $50. The available balance is lower than the minimum balance, so the request is refused. Subwallets To subdivide the funds in a wallet, it's possible to create "subwallets" that can be funded by moving money from a main wallet. Each subwallet is associated with a single main wallet and has its own available and current balances, which work as described above. Subwallets always have minimum balances of zero and negative balance limits of zero. Moving money into subwallets is subject to the minimum balance and negative balance limits on the main wallet: it's not possible to move money into a subwallet if the originating wallet is already below its minimum balance, and it's not possible to move money into a subwallet if it would cause the originating wallet's balance to go below its negative balance limit. In other words, moving money into a subwallet follows the same balance limit rules as making a payment from the main wallet. When a payment request specifies a subwallet, only the balances of that particular subwallet are affected. A payment request will be rejected if the subwallet's available balance would drop below zero, even when there are sufficient funds in the main wallet. A subwallet may be deactivated. Deactivated subwallets retain their balances but don't allow new payments. A deactivated subwallet may be reactivated later. Funds may be transferred out of a deactivated subwallet, but not into one. If a reversal of a payment from a subwallet is processed after the subwallet has been deactivated, the funds from the reversal will be credited to the main wallet rather than to the deactivated subwallet. Each subwallet has a name, which is a string supplied by the client when the subwallet is created. A subwallet name must be unique among the subwallets associated with a particular main wallet, but different main wallets can have subwallets with the same names. A subwallet name may not contain the forward slash character / or the asterisk * but may contain any other valid Unicode character. Subwallet names may be up to 40 characters in length. In the API, subwallets and main wallets are represented using a path syntax. For example, a main wallet might be called safaricom-kenya. If it has a subwallet called agent-123, this would be passed to the payment gateway as safaricom-kenya/agent-123 in API calls. The same subwallet name under a different main wallet might be mtn-rwanda/agent-123 and would refer to a completely distinct subwallet with its own separate balances. Only one level of subwallets is supported; it is not legal for a subwallet to have subwallets of its own.
https://docs.thesegovia.com/wallets/
2020-07-02T16:48:27
CC-MAIN-2020-29
1593655879532.0
[]
docs.thesegovia.com
Tech Writer Resources Unable to render {children}. Page not found: KBSS:Tech Writer Resources. - In the Armor Management Portal (AMP), on the left-side navigation, click Infrastructure. - Click SSL/VPN. - If you user has multiple virtual data centers, select the desired virtual data center. - Locate the desired user. - Click the corresponding gear icon, and then select Enable. - This action will enable the Download SSL/VPN Client in your user's account. The user is responsible for installing SSL/VPN in their machine. - (Optional) Repeat these steps for any additional users. You must repeat these steps if the same user uses multiple virtual data centers.
https://docs.armor.com/pages/viewpage.action?pageId=21530482
2020-07-02T14:56:44
CC-MAIN-2020-29
1593655879532.0
[]
docs.armor.com
Install the Power Pack plugin - The Bitnami Review Board + Power Pack Stack provides a one-click install solution for Review Board and Power Pack..
https://docs.bitnami.com/azure/apps/reviewboard/configuration/install-plugins-powerpack/
2020-07-02T17:05:14
CC-MAIN-2020-29
1593655879532.0
[]
docs.bitnami.com
Configure Desktop Appliance sites The tasks below describe how to create, remove, and modify Desktop Appliance sites. To create or remove sites, you execute Windows PowerShell commands. Changes to Desktop Appliance site settings are made by editing the site configuration: The StoreFront and PowerShell consoles cannot be open at the same time. Always close the StoreFront admin console before using the PowerShell console to administer your StoreFront configuration. Likewise, close all instances of PowerShell before opening the StoreFront console. To create or remove Desktop Appliance sitesTo create or remove Desktop Appliance sites. Use an account with local administrator permissions to start Windows PowerShell and, at a command prompt, type the following command to import the StoreFront modules. pre codeblock & "installationlocation\Scripts\ImportModules.ps1" Where installationlocation is the directory in which StoreFront is installed, typically C:\Program Files\Citrix\Receiver StoreFront\. To create a new Desktop Appliance site, type the following command. pre codeblock To remove an existing Desktop Appliance site, type the following command. pre codeblock. To list the Desktop Appliance sites currently available from your StoreFront deployment, type the following command. pre codeblock Get-DSDesktopAppliancesSummary To configure user authenticationTo configure user authentication enabled attribute to false to disable explicit authentication for the site. Locate the following element in the file. ``` pre codeblock ``` Set the value of the enabled attribute to true to enable smart card authentication. To enable pass-through with smart card authentication, you must also set the value of the useEmbeddedSmartcardSso attribute to true. Use the embeddedSmartcardSsoPinTimeout attribute to set the time in hours, minutes, and seconds for which the PIN entry screen is displayed before it times out. When the PIN entry screen times out, users are returned to the logon screen and must remove and reinsert their smart cards to access the PIN entry screen again. The time-out period is set to 20 seconds by default. To enable users to choose between multiple desktopsTo enable users to choose between multiple desktops showMultiDesktop attribute to true to enable users to see and select from all the desktops available to them in the store when they log on to the Desktop Appliance site.
https://docs.citrix.com/en-us/storefront/3-12/advanced-configurations/configure-desktop-appliance-sites.html
2020-07-02T16:14:49
CC-MAIN-2020-29
1593655879532.0
[]
docs.citrix.com
Application Debugging In some cases, you might want to monitor the activity of the JVM running as part of your .NET application. The jconsole is a great tool that allows you to troubleshoot the JVM internals. Opening the JMX port The following is used to open the JMX port to view and monitor the JVM loaded into the .NET process memory address. Have the following settings as part of your app.config file: <?xml version="1.0" encoding="utf-8" ?> <configuration> <configSections> <section name="GigaSpaces" type="GigaSpaces.Core.Configuration.GigaSpacesCoreConfiguration, GigaSpaces.Core"/> </configSections> <GigaSpaces> <JvmSettings> <JvmCustomOptions IgnoreUnrecognized="false"> <add Option="-Dcom.sun.management.jmxremote.port=5144"/> <add Option="-Dcom.sun.management.jmxremote.ssl=false"/> <add Option="-Dcom.sun.management.jmxremote.authenticate=false"/> </JvmCustomOptions> </JvmSettings> </GigaSpaces> </configuration> See .NET JVM Configuration for more details. Viewing in JConsole - Start jconsole– jconsole is located under the bin directory of the Java home, by default it is under <Installation dir>\Runtime\java\bin - Once the jconsoleis started, select the Local tab: - This shows the status of the JVM running in your .NET application: See also: For more details on JMX and For more details on JMX and jconsole, refer to: - Sun - Monitoring and Management Using JMX - Sun - Using jconsole Viewing in JVisualVM As an alternative to viewing in JConsole, you can also use JVisualVM. - Start JVisualVM. The default location is the C:\GigaSpaces\XAP.NET-12.2.1-x64\Runtime\Javadirectory. - Connect via JMX. Go to File | Add JMX Connection… - Enter the Connection: localhost:5144 - For thread dumps, go to the “Threads” tab, and click on the “Thread Dump” button.Or: Right click on the application in the left pane, and select “Thread Dump”. - Similarly for heap dumps, go to the “Monitor” tab, click on the “Heap Dump” button. Or right click on the application in the left pane and select “Heap Dump”.
https://docs.gigaspaces.com/xap/12.2/dev-dotnet/debugging-a-xapnet-application.html
2020-07-02T16:50:37
CC-MAIN-2020-29
1593655879532.0
[array(['/attachment_files/dotnet/jcon11.jpg', 'jcon1.jpg'], dtype=object) array(['/attachment_files/dotnet/jcon21.jpg', 'jcon2.jpg'], dtype=object)]
docs.gigaspaces.com
Use New Relic One's entity explorer to access the performance data from all your monitored applications, services, and hosts. For more information, see What is an entity? View entities To use the entity explorer: Go to one.newrelic.com and select Entity explorer. All your monitored entities appear on the left. You may need to scroll your list of entities to see them all. The entity explorer brings together data reported from across all of New Relic. Entity categories include: - Services: applications and services monitored by New Relic APM. - Hosts: servers and hosts monitored by New Relic Infrastructure. - Mobile applications: apps monitored by New Relic Mobile. - Browser applications: apps monitored by New Relic Browser. - Integration-reported data: data from services monitored by New Relic integrations will also be listed, including on-host integrations, like Kubernetes, and cloud integrations, like Amazon AWS or Microsoft Azure. Want to learn more about entities in New Relic One? Read the Accounts in New Relic One post in New Relic's Explorers Hub. Or, watch this video (less than 4 min). You can also learn more about New Relic One in the full online course. Health (alert) status The entity explorer shows a color-coded alert status from New Relic Alerts by tag or entity name There are a couple ways to filter down to specific types of entities: - Filter entities by tags: Use Filter with tags at the top of the page. For example, you may want to filter down to only entities tagged with production, or only entities with a specific AWS region tag. For more about tags, see Tagging. - Filter by entity name: Use Search services by name at the top of the page. New Relic One entity data retention Data retention for entities depends on these factors: As a result of these factors, a short-lived entity (like a cloud host) may not be available in the New Relic One entity list or by search, but its data may still be available via NRQL query.
https://docs.newrelic.com/docs/new-relic-one/use-new-relic-one/ui-data/new-relic-one-entity-explorer-view-performance-across-apps-services-hosts
2020-07-02T16:55:39
CC-MAIN-2020-29
1593655879532.0
[array(['https://docs.newrelic.com/sites/default/files/thumbnails/image/new-relic-one-entity-explorer-entities_0.png', 'new-relic-one-entity-explorer-entities.png New Relic One entity explorer'], dtype=object) array(['https://docs.newrelic.com/sites/default/files/thumbnails/image/new-relic-one-entity-alert-status-red.png', 'new-relic-one-entity-alert-status.png New Relic One entity alert status'], dtype=object) ]
docs.newrelic.com
TypeActivator Overview The TypeActivator class is an abstract class used in conjunction with the TypeManager class to provide an abstract factory for the Serializer. While generating the _Intf file CodeGen, this class defines descendants of the TypeActivator class which implement a CreateInstance method for each complex type in your RODL. These generated classes are marked with an ActivatorAttribute attribute and are linked with corresponding complex types via the RemotableAttribute attribute ActivatorClass property. Location - Reference: RemObjects.SDK.dll - Namespace: RemObjects.SDK Implements Instance Methods constructor protected Creates a new instance of the TypeActivator class. constructor TypeActivator() Sub New CreateInstance Abstract method. In descendants generated by the CodeGen, this method creates a new instance of the corresponding complex type. method CreateInstance: Object Object CreateInstance() Function CreateInstance As Object See Also - TypeManager - Serializer - RODL - Struct - CodeGen - TypeActivator Class:
https://docs.remotingsdk.com/API/NET/Classes/TypeActivator/
2020-07-02T16:02:20
CC-MAIN-2020-29
1593655879532.0
[]
docs.remotingsdk.com
abstract fun subSequence( startIndex: Int, endIndex: Int ): CharSequence Returns a new character sequence that is a subsequence of this character sequence, starting at the specified startIndex and ending right before the specified endIndex. startIndex - the start index (inclusive). endIndex - the end index (exclusive). © 2010–2019 JetBrains s.r.o. Licensed under the Apache License, Version 2.0.
https://docs.w3cub.com/kotlin/api/latest/jvm/stdlib/kotlin/-char-sequence/sub-sequence/
2020-07-02T16:58:10
CC-MAIN-2020-29
1593655879532.0
[]
docs.w3cub.com
Use. © 2010–2019 JetBrains s.r.o. Licensed under the Apache License, Version 2.0.
https://docs.w3cub.com/kotlin/api/latest/jvm/stdlib/kotlin/-use-experimental/-init-/
2020-07-02T16:59:18
CC-MAIN-2020-29
1593655879532.0
[]
docs.w3cub.com
Dinoflagellaten-Zysten als Paläoumweltindikatoren im Spätquartär des Europäischen Nordmeeres Baumann, Astrid Univ. Bremen Monography Verlagsversion Deutsch Baumann, Astrid, 2007: Dinoflagellaten-Zysten als Paläoumweltindikatoren im Spätquartär des Europäischen Nordmeeres. Univ. Bremen, DOI. Dinoflagellate cysts, paleoceanography, paleoenvironment, Late Quaternary, Norwegian-Greenland Sea. - Dinoflagellate cysts have been investigated in nine short sediment cores as well as two long sediment cores from the Norwegian-Greenland Sea and the North Atlantic to reconstruct the surface water paleoenvironment of the last climatic cycle and the Holocene. Holocene sea-surface temperatures and salinities during summer and winter and the extent of sea-ice cover were reconstructed. On the Rockall Plateau, higher cyst concentrations indicating favourable conditions and increased productivity only occur during parts of stage 5, 4-2, and the Holocene. Only sparse occurrences of dinocysts have been observed in the Norwegian-Greenland Sea before 10000 yr BP. Later, high abundances of O. centrocarpum and N. labyrinthus indicate the increased inflow of relatively warm Atlantic. A change in dominance of these species as well as a distinct increase in cyst concentrations marks the onset of the recent circulation system. In the Norwegian Sea, O. centrocarpum dominates the assemblages since about 7000 yr BP, while assemblages in the Iceland and the Greenland Sea are more complex due to the influence of different surface currents. Statistik:View Statistics Collection - Paläontologie [99]
https://e-docs.geo-leo.de/handle/11858/00-1735-0000-0001-31C5-6
2020-07-02T16:36:51
CC-MAIN-2020-29
1593655879532.0
[]
e-docs.geo-leo.de
What you will learn This article synthesizes what you need to check before you publish an iOS application that interacts with beacons. Xcodeproj and Capabilities Our SDK does not need any specific iOS 'capabilities' to run properly, and in particular, it does not require any location background mode capability or BLE accessories background mode capability (unless you need to perform background long beacon beacons. You have to provide the link to this video with an explanation in the App Review Application section of the iTunes Connect publication page. Provide a beacon for tests You must provide a UUID / Major / Minor beacons. You may test these interactions with a beacon configured with the following UUID / Major / Minor: [X / X / X]. You may discover all of the possible interactions between our application and the beacons in the following video: [video link]" Declare your application using the advertisingIdentifier Our SDK analytics library uses the advertisingIdentifier. Do not forget to declare it when you publish your application.
https://docs.connecthings.com/2.7/ios/to-publish-an-ios-application.html
2017-08-16T23:28:26
CC-MAIN-2017-34
1502886102757.45
[]
docs.connecthings.com
If you want to extend the functionality of an existing control, you can create a control derived from an existing control through inheritance. When inheriting from an existing control, you inherit all of the functionality and visual properties of that control. For example, if you were creating a control that inherited from Button, your new control would look and act exactly like a standard Button control. You could then extend or modify the functionality of your new control through the implementation of custom methods and properties. In some controls, you can also change the visual appearance of your inherited control by overriding its OnPaint method. Note The dialog boxes and menu commands you see might differ from those described in Help depending on your active settings or edition. To change your settings, choose Import and Export Settings on the Tools menu. For more information, see Customizing Development Settings in Visual Studio. To create an inherited control Create a new Windows Forms Application project. From the Project menu, choose Add New Item. The Add New Item dialog box appears. In the Add New Item dialog box, double-click Custom Control. A new custom control is added to your project. If you using Visual Basic, at the top of Solution Explorer, click Show All Files. Expand CustomControl1.vb and then open CustomControl1.Designer.vb in the Code Editor. If you are using C#, open CustomControl1.cs in the Code Editor. Locate the class declaration, which inherits from Control. Change the base class to the control that you want to inherit from. For example, if you want to inherit from Button, change the class declaration to the following: Partial Class CustomControl1 Inherits System.Windows.Forms.Button public partial class CustomControl1 : System.Windows.Forms.Button If you are using Visual Basic, save and close CustomControl1.Designer.vb. Open CustomControl1.vb in the Code Editor. Implement any custom methods or properties that your control will incorporate. If you want to modify the graphical appearance of your control, override the OnPaint method. Note Overriding OnPaint will not allow you to modify the appearance of all controls. Those controls that have all of their painting done by Windows (for example, TextBox) never call their OnPaint method, and thus will never use the custom code. Refer to the Help documentation for the particular control you want to modify to see if the OnPaint method is available. For a list of all the Windows Form Controls, see Controls to Use on Windows Forms. If a control does not have OnPaint listed as a member method, you cannot alter its appearance by overriding this method. For more information about custom painting, see Custom Control Painting and Rendering. Protected Overrides Sub OnPaint(ByVal e As _ System.Windows.Forms.PaintEventArgs) MyBase.OnPaint(e) ' Insert code to do custom painting. ' If you want to completely change the appearance of your control, ' do not call MyBase.OnPaint(e). End Sub protected override void OnPaint(PaintEventArgs pe) { base.OnPaint(pe); // Insert code to do custom painting. // If you want to completely change the appearance of your control, // do not call base.OnPaint(pe). } Save and test your control. See Also Varieties of Custom Controls How to: Inherit from the Control Class How to: Inherit from the UserControl Class How to: Author Controls for Windows Forms Troubleshooting Inherited Event Handlers in Visual Basic Walkthrough: Inheriting from a Windows Forms Control with Visual Basic Walkthrough: Inheriting from a Windows Forms Control with Visual C#
https://docs.microsoft.com/en-us/dotnet/framework/winforms/controls/how-to-inherit-from-existing-windows-forms-controls
2017-08-17T00:44:43
CC-MAIN-2017-34
1502886102757.45
[]
docs.microsoft.com
Installing¶ Using Git¶ To create a new local repository go to and fork the repository to your own username account. Check out your clone at a URL like this: git clone [email protected]:username/chirpradio.git You can use your local fork to create topic branches and make pull requests into the main repo. Here is a guide on working with topic branches. Prerequisites¶ Everything should run in Python 2.5 or greater Note: Recent Ubuntu Linux versions (at least after Jaunty) ship with Python 2.6. Many have reported problems running the Google App Engine SDK with a non-2.5.* version of Python. To install Python 2.5 without breaking the default Python install, you can use this command: sudo apt-get install python2.5 Install the Google App Engine SDK from If on Mac OS X be sure to start up the launcher once so that it prompts you to create symbolic links in /usr/local/google_appengine Unlike the Google App Engine Python SDK for Mac OS X/Windows, the Linux version comes as a zip archive rather than an installer. To install, just unpack the archive into /usr/local/google_appengine. Or you can unpack it to your home directory and create a symlink in /usr/local/google_appengine. It’s a good idea to install PyCrypto for pushing code to Google and so that the SDK works as expected. On a Debian/Ubuntu system, use this command: sudo apt-get install python-crypto On Mac OS X you need to grab the PyCrypto source and run: sudo python setup.py install To run the JavaScript lint tests (which will fail otherwise) you will need the jsl command line tool, aka javascript-lint. On a Mac OS X system with homebrew, type: brew install jsl (there is probably something similar for Linux) Running The Development Server¶ Note The Google App Engine SDK currently does not run inside a virtualenv. This is a known bug. To start up a local server, run python manage.py runserver Note: If you are running on a system with multiple versions of Python installed, make sure that you are using the 2.5 version, e.g.: python2.5 manage.py runserver You can reach your local server by going to in your web browser. If you are running this server on a different computer, you need to run the server with python manage.py runserver 0.0.0.0 instead. This tells Django to bind to your external IP address and accept remote connections. Below, we refer to local URLs like this: You should replace “HOST:PORT” with the appropriate host name/port combination. Running The Test Suite¶ To run all unit tests: python manage.py test You can also use python manage.py test [application name] to only run a single application’s tests.
http://chirpradio.readthedocs.io/en/latest/topics/install.html
2017-08-16T23:35:44
CC-MAIN-2017-34
1502886102757.45
[]
chirpradio.readthedocs.io
Support Tickets From PhpCOIN Documentation The HelpDesk is a system whereby a client can enter a request for support, and all subsequent messages to and from the client dealing with that issue will appear on a single web-page. Other issues will have their own web-page, thereby making it much easier to track support issues via phpCOIN than via email. If a client does send an email rather than logging into phpCOIN, the /coin_cron/helpdesk.php file can be set to run via cron (scheduled task on Windows) as often as you want. It will check an email (POP or IMAP) box and if an email is an email address listed as a client or client additional email within phpCOIN, then phpCOIN will either create a new HelpDesk ticket, or append the email as the response to an existing ticket, as appropriate. A logged-in admin can also add a ticket on behalf of a client if you enable [Admin] -> [Parameters] -> [operation] -> [helpdesk] -> [HelpDesk Admin: Admin Can Enter Tickets]. phpCOIN will treat these tickets exactly as if they were entered by that logged-in client. Another option is to show in the information for a ticket the username of each admin that added a response to the ticket, or to leave the generic "support" as the responder. Enabling the admin makes it easier in larger organizations to track down who did what. The option can be enable/disabled in [Admin] -> [Parameters] -> [operation] -> [helpdesk] -> [HelpDesk Admin: Reveal Admin Identity] By default, every email dealing with a helpdesk issue will contain the original request as well as every subsequent response. This has the potential to make messages quite long, so phpCOIN has an option to limit the number of responses appended to the message, as well as specify the number of responses to append. These settings are located at [Admin] -> [Parameters] -> [operation] -> [helpdesk] -> [HelpDesk Reply: Limit Messages Sent] and [Admin] -> [Parameters] -> [operation] -> [helpdesk] -> [Helpdesk Reply: Email Messages Limit ] Another option is to send an email to a pager when a support ticket is entered. Enabling/Disabling this option, as well as specifying the email/pager address to receive the message, is found at [Admin] -> [Parameters] -> [enable] -> [helpdesk] -> [Helpdesk TT Alert Email: Enable] and [Admin] -> [Parameters] -> [operation] -> [helpdesk] -> [HelpDesk TT Alert Email: Address]
http://docs.phpcoin.com/index.php?title=Support_Tickets
2017-08-16T23:25:10
CC-MAIN-2017-34
1502886102757.45
[]
docs.phpcoin.com
To assist VMware Technical Support in troubleshooting Horizon Agent, you might need to use the vdmadmin command to create a Data Collection Tool (DCT) bundle. You can also obtain the DCT bundle manually, without using vdmadmin. About this task For your convenience, you can use the vdmadmin command on a Connection Server instance to request a DCT bundle from a remote desktop. The bundle is returned to Connection Server. You can alternatively log in to a specific remote desktop and run a support command that creates the DCT bundle on that desktop. If User Account Control (UAC) is turned on, you must obtain the DCT bundle in this fashion. Procedure - Log in as a user with the required privileges. - Open a command prompt and run the command to generate the DCT bundle. Results The command writes the bundle to the specified output file. Using vdmadmin to Create a Bundle File for Horizon Agent What to do next If you have an existing support request, you can update it by attaching the DCT bundle file.
https://docs.vmware.com/en/VMware-Horizon-7/7.5/horizon-administration/GUID-1621BB82-8175-4F27-A33E-37B2B0DA9763.html
2018-07-16T01:28:29
CC-MAIN-2018-30
1531676589029.26
[]
docs.vmware.com
- - Mechanics API - Quantum Mechanics - Quantum Functions - States and Operators - Quantum Computation - Analytic Solutions - Optics Module - Unit systems - Continuum Mechanics
http://docs.sympy.org/dev/modules/physics/index.html
2017-06-22T20:40:41
CC-MAIN-2017-26
1498128319902.52
[]
docs.sympy.org
Exception Handling Handling errors is a normal part of coding any application, especially an ever-connected app. Depending on the .NET execution methods you use, you do this differently. This article is organized as follows: When Using Execute Methods You can handle errors using try...catch blocks around the invocation when using Execute, ExecuteSync or ExecuteAsync. try { var allBooks = await app.WorkWith().Data<Book>().GetAll().ExecuteAsync(); } catch (EverliveException ex) { ... } When Using TryExecute Methods The TryExecute, TryExecuteAsync, and TryExecuteSync methods allow you to handle errors without a try...catch block. Instead of the actual request result, they return an object that holds either the actual result or the error that occurred. var allBooksResult = await app.WorkWith().Data<Book>().GetAll().TryExecuteAsync(); if (allBooksResult.Success) { var allBooks = allBooksResult.Result; } else { EverliveException error = allBooksResult.Error; } The EverliveException Class Whenever an error occurs while using the SDK, the EverliveException exception is thrown or instantiated. It contains the following members that provide information about the error. Code—an integer value specifying the error code. Use it to build additional error handling logic in your application. Message—a string describing the error. It is not recommended to directly display this message to your app users, but you can log it somewhere for your own reference.
http://docs.telerik.com/platform/backend-services/dotnet/data/data-exception-handling
2017-06-22T20:32:38
CC-MAIN-2017-26
1498128319902.52
[]
docs.telerik.com
distributed¶ Distributed is a lightweight library for distributed computing in Python. It extends both the concurrent.futures and dask APIs to moderate sized clusters. Distributed provides data-local computation by keeping data on worker nodes, running computations where data lives, and by managing complex data dependencies between tasks. See the quickstart to get started. Motivation¶ Why build yet-another-distributed-system?.
http://distributed.readthedocs.io/en/1.9.5/
2017-06-22T20:36:56
CC-MAIN-2017-26
1498128319902.52
[]
distributed.readthedocs.io
Scenario: Implementing a New Tagging Strategy Consider a situation where you have a medium to large working environment with multiple resources used by various employees. You decide to use tagging to help you organize and get better oversight of your account’s resources. But how to proceed when there are dozens of resources to tag? Fortunately, Tag Editor can simplify the process. Make a plan. Before you begin, sketch out a plan of the tag keys and values that will help you organize your resources. For example, you might want all resources to have tag keys like Project, Cost Center, and Environment. Remember, too, that each resource cannot have more than 50 tags. Open Tag Editor. Sign into the AWS Management Console and open Tag Editor at. Find all resources in your account. For Regions, select all regions that apply. For Resource types, select All resource types. Leave both Tags boxes empty. Then choose Find resources. For more information, see Searching for Resources to Tag. Select all the found resources. The Tag Editor search results appear at the bottom of the page. When the list shows the resources that you want to tag, select the top check box to select all resources. Choose Edit tags for selected. Apply tag keys with empty elements. In Add/edit tags, under Add tags, in the space provided, type the key name that you want to add, such as Project. Repeat for your other new keys, such as Cost Centerand Environment. Choose Apply changes. Tip If any of your selected resources have reached the maximum of 50 tags, a message warns you before you choose Apply changes. You can pause the pointer over the number of affected resources in the message to see a pop-up list of the specific resources. Add values for each tag key. The next step is to add tag values that will help you distinguish individual resources that share tag keys. There are a couple of ways to do this depending on whether you plan on adding the same values to many resources or just a few. Bulk add values. Start by selecting the check box at the top of the table again to clear all the check boxes. Then select individual check boxes for just those resources that need a specific tag value. Choose Edit tags for selected. In Add/edit tags, under Applied tags, type a new value in the Value column next to a tag key. For example, you might add a billing code in the value for the Cost Center key or type Productionfor the Environment key. Note that if the Value column shows Multiple values, you can still type in a new value. However, your new value will replace all the key’s existing values for the selected resources. When you’re done, choose Apply changes. Add individual tag values. If you want each resource to have its own unique value, you can edit tag values right in the search results table. Start by choosing the cog icon above the table and selecting the check boxes for your new keys. To continue our example, you might select the check boxes for Project, Cost Center, and Environment. This makes your keys appear as columns in the search results table. For a given resource, locate the column that displays the tag key whose value you want to edit. Choose the pencil icon, and then type the new value in the box. Press Enter to complete the editing. Repeat Step 6 for other resources in your list.
http://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/scenario-implementing-tagging.html
2017-06-22T20:21:04
CC-MAIN-2017-26
1498128319902.52
[]
docs.aws.amazon.com
The keys documented on this page are for development use. There is no GUI option to configure them; a developer can set them by manually editing the configuration files. Integer to set the maximum number of undo items on the stack. If zero, undo items are unlimited. Present as: Referenced by EDA_DRAW_FRAME::LoadSettings(), and EDA_DRAW_FRAME::SaveSettings().
http://docs.kicad-pcb.org/doxygen/group__develconfig.html
2017-06-22T20:22:14
CC-MAIN-2017-26
1498128319902.52
[]
docs.kicad-pcb.org
How to Safely Modify the Cache from an Event Handler Callback How. GemFire is a highly distributed system and many operations that may seem local invoke distributed operations. - Calling Region methods, on the event's region or any other region. - Using the GemFire DistributedLockService. - Modifying region attributes. - Executing a function through the GemFire FunctionService. To be on the safe side, do not make any calls to the GemFire API directly from your event handler. Make all GemFire API calls from within a separate thread or executor. How to Perform Distributed Operations Based on Events If you need to use the GemFire API from your handlers, make your work asynchronous to the event handler. You can spawn a separate thread or use a solution like the java.util.concurrent.Executor interface. The Executor interface is available in JDK version 1.5 and above. public void afterCreate(EntryEvent event) { final Region otherRegion = cache.getRegion("/otherRegion"); final Object key = event.getKey(); final Object val = event.getNewValue(); serialExecutor.execute(new Runnable() { public void run() { try { otherRegion.create(key, val); } catch (com.gemstone.gemfire.cache.RegionDestroyedException e) { ... } catch (com.gemstone.gemfire.cache.EntryExistsException e) { ... } } }); } For additional information on the Executor, see the SerialExecutor example on the Oracle Java web site.
http://gemfire.docs.gopivotal.com/docs-gemfire/latest/developing/events/writing_callbacks_that_modify_the_cache.html
2017-06-22T20:40:09
CC-MAIN-2017-26
1498128319902.52
[]
gemfire.docs.gopivotal.com
Measuring sub-processes¶ Complex test suites may spawn sub-processes to run tests, either to run them in parallel, or because sub-process behavior is an important part of the system under test. Measuring coverage in those sub-processes. Configuring Python for sub-process coverage¶ Measuring coverage in sub-processes is a little tricky. When you spawn a sub-process, you are invoking Python to run your program. Usually, to get coverage measurement, you have to use coverage.py to run your program. Your sub-process won’t be using coverage.py, so we have to convince Python to use coverage.py even when not explicitly invoked.-processes. As long as the environment variable is visible in your sub-process, it will work. You can configure your Python installation to invoke the process_startup function in two ways: Create or append to sitecustomize.py to add these lines: import coverage coverage. Note that if you use one of these techniques, you must undo them if you uninstall coverage.py, since you will be trying to import it during Python start-up. Be sure to remove the change when you uninstall coverage.py, or use a more defensive approach to importing it. Signal handlers and atexit¶ To successfully write a coverage data file, the Python sub-process under analysis must shut down cleanly and have a chance for coverage.py to run the atexit handler it registers. For example if you send SIGTERM to end the sub-process, but your sub-process has never registered any SIGTERM handler, then a coverage file won’t be written. See the atexit docs for details of when the handler isn’t run.
http://coverage.readthedocs.io/en/coverage-4.3.4/subprocess.html
2017-06-22T20:28:20
CC-MAIN-2017-26
1498128319902.52
[]
coverage.readthedocs.io
This tutorial is intended for beginner AppBuilder users who need to create a cryptographic identity for iOS development of Apple Watch bundles for the first time. You will learn: - The benefits and the limitations of using a cryptographic identity and provisioning profile for development - How to create a certificate for development - How to create the provisioning profiles required for development of Apple Watch bundles - How to create a cryptographic identity for development run your Apple Watch bundles on a pair of devices, you need a pair of matching cryptographic identity and provisioning profiles. A cryptographic identity matches a provisioning profile, if both include the same Apple-signed certificate. A pair of matching cryptographic identity and provisioning profiles for development provides the following benefits. - You can build and run your app on selected devices - Your apps are enabled for debugging on device A pair of matching cryptographic identity and provisioning profiles for development has the following limitations. - You cannot publish your app in the App Store - You can build and run your app on a limited number of predefined devices only Prerequisites To complete this tutorial, you need to be enrolled in the Apple Developer Program. To be able to create a new certificate for development, you must not have any certificates for iOS App Development created with your account. You need to be logged in the iOS Dev Center. Step 1: Create a cryptographic identity The cryptographic identity is a pair of matching public key certificate and private key. In AppBuilder, you can create a cryptographic identity or import an existing one. To create a new cryptographic identity, you need to complete a certificate signing request for an Apple-signed certificate. Alternatively, if you want to use an existing cryptographic identity that you created earlier, you can import it. Start by running AppBuilder to create a certificate signing request. Make sure you have stored the CSR file on your disk. Next, create a certificate for development in the iOS Dev Center. Make sure you have downloaded the CER file on your disk. Last, complete the certificate signing request in AppBuilder by uploading the CER file for your certificate. Your new cryptographic identity is added to the list. You cannot upload your CERfile if you do not have a matching pending certificate signing request in AppBuilder. Step 2: Obtain the provisioning profiles Each provisioning profile is stored as a mobileprovision file. This file contains information about the identity of the app author, the identity of the app and its distribution purpose. You can obtain a provisioning profile by exporting an existing one or by creating a new one. To create a development provisioning profile, you need a registered iOS App ID, one or more development a development cryptographic identity and provisioning profile, you can run it only on registered devices included in the provisioning profile. If your devices for testing are already registered in the iOS Dev Center, you can skip this step. Last, create the development provisioning profiles for your Apple Watch bundle components and download the mobileprovision files on your disk. Step 3: Add the provisioning profiles in AppBuilder Repeat this for all components of the Apple Watch bundle: the host app, the watch extension and the watch app. If you are an classic Windows desktop client user, run classic Windows desktop client, open your app and in the title bar, click your user name and select Options. In the Mobile tab, expand iOS, select Mobile Provisions, click Import and select the mobileprovision file from your disk. If you are an in-browser client user, run in-browser client, open your app, click the cogwheel icon and select Options. Select iOS → Provisioning Profiles, click Import and select the mobileprovision file from your disk. If you are an extension for Visual Studio user, run Microsoft Visual Studio, open your app and in the main menu bar, click AppBuilder → Options. In the Mobile tab, expand iOS, select Mobile Provisions, click Import and select the mobileprovision file from your disk. If you are an command-line interface user, in the command prompt, run the following command. appbuilder provision import <File Path> Where <File Path> is the complete file path to the mobileprovision file for your provisioning profile. Next Steps After configuring your pair of matching cryptographic identity and provisioning profile, you can build and run your app on an iOS device.
http://docs.telerik.com/platform/appbuilder/cordova/code-signing-your-app/configuring-code-signing-for-apple-watch-bundles/scenarios-and-tutorials/create-dev-cryptographic-id
2017-06-22T20:33:33
CC-MAIN-2017-26
1498128319902.52
[]
docs.telerik.com
The SPDY router (uWSGI 1.9)¶ Starting from uWSGI 1.9 the HTTPS router has been extended to support version 3 of the SPDY protocol. To run the HTTPS router with SPDY support, use the --https2 option: uwsgi --https2 addr=0.0.0.0:8443,cert=foobart.crt,key=foobar.key,spdy=1 --module werkzeug.testapp:test_app This will start an HTTPS router on port 8443 with SPDY support, forwarding requests to the Werkzeug’s test app the instance is running. If you’ll go to with a SPDY-enabled browser, you will see additional WSGI variables reported by Werkzeug: SPDY– on SPDY.version– protocol version (generally 3) SPDY.stream– stream identifier (an odd number). Opening privileged ports as a non-root user will require the use of the shared-socket option and a slightly different syntax: uwsgi --shared-socket :443 --https2 addr==0,cert=foobart.crt,key=foobar.key,spdy=1 --module werkzeug.testapp:test_app --uid user Both HTTP and HTTPS can be used at the same time (=0 and =1 are references to the privileged ports opened by shared-socket commands): uwsgi --shared-socket :80 --shared-socket :443 --http =0 --https2 addr==1,cert=foobart.crt,key=foobar.key,spdy=1 --module werkzeug.testapp:test_app --uid user Notes¶ - You need at least OpenSSL 1.x to use SPDY (all modern Linux distributions should have it). - During uploads, the window size is constantly updated. - The --http-timeoutdirective is used to set the SPDY timeout. This is the maximum amount of inactivity after the SPDY connection is closed. PINGrequests from the browsers are all acknowledged. - On connect, the SPDY router sends a settings packet to the client with optimal values. - If a stream fails in some catastrophic way, the whole connection is closed hard. RSTmessages are always honoured.
http://uwsgi.readthedocs.io/en/latest/SPDY.html
2017-06-22T20:41:47
CC-MAIN-2017-26
1498128319902.52
[]
uwsgi.readthedocs.io
This is the official documentation of the solveset module in solvers. It contains the frequently asked questions about our new module to solve equations. SymPy already has a pretty powerful solve function. But it has a lot of major issues module). -atically object: \ here solves the problem. - It also cannot represent solutions for equations like \(|x| < 1\), which is a disk of radius 1 in the Argand Plane. This problem is solved using complex sets implemented as ComplexRegion. removes the flags argument of solve, which had made the input API messy and output API inconsistent. Solveset is designed to be independent of the assumptions on the variable being solved for and instead, uses the domain argument to decide the solver to dispatch the equation to, namely solveset_real or solveset_complex. It’s unlike the old solve which considers the assumption on the variable.>>> from sympy import solveset, S >>> from sympy.abc import x >>> solveset(x**2 + 1, x) # domain=S.Complexes is default {-I, I} >>> solveset(x**2 + 1, x, domain=S.Reals) EmptySet() Solveset uses various methods to solve an equation, here is a brief overview of the methodology: - The domain argument is first considered to know the domain in which the user is interested to get the solution. - If the given function is a relational (>=, <=, >, <), and the domain is real, then solve_univariate_inequality and solutions are returned. Solving for complex solutions of inequalities, like \(x^2 < 0\) is not yet supported. - Based on the domain, the equation is dispatched to one of the two functions solveset_real is called, which solves it by converting it to complex exponential form. - The function is now checked if there is any instance of a Piecewise expression, if it is, then it’s converted to explict expression and set pairs and then solved recursively. - The respective solver now tries to invert the equation using the routines invert_real tries to simplify the radical, by removing it using techniques like squarring, cubing etc, and _solve_abs solves or _solve_as_poly_complex is called to solve f as a polynomial. - The underlying method _solve_as_poly solves the equation using polynomial techniques if it’s already a polynomial equation or, with a change of variables, can be made so. - The final solution set returned by solveset is the intersection of the set of solutions found above and the input domain. - In the real domain, we use our ImageSet class in the sets module to return infinite solutions. ImageSet is an image of a set under a mathematical function. For example, to represent the solution of the equation \(\sin{(x)} = 0\), we can use the ImageSet as:>>> from sympy import ImageSet, Lambda, pi, S, Dummy, pprint >>> n = Dummy('n') >>> pprint(ImageSet(Lambda(n, 2*pi*n), S.Integers), use_unicode=True) {2⋅n⋅π | n ∊ ℤ} Where n is a dummy variable. It is basically the image of the set of integers under the function \(2\pi n\). - In the complex domain, we use complex sets, which are implemented as the ComplexRegion class in the sets module, to represent infinite solution in the Argand plane. For example to represent the solution of the equation \(|z| = 1\), which is a unit circle, we can use the ComplexRegion in the ProductSet is the range of the value of \(r\), which is the radius of the circle and the Interval gives us the ability to represent unevaluated equations and inequalities in forms like \(\{x|f(x)=0; x \in S\}\) and \(\{x|f(x)>0; x \in S\}\) but a more powerful thing about ConditionSet. Creating a universal equation solver, which can solve each and every equation we encounter in mathematics is an ideal case for solvers in a Computer Algebra System. When cases which are not solved or can only be solved incompletely, a ConditionSet is used and acts as an unevaluated solveset object. Note that, mathematically, finding a complete set of solutions for an equation is undecidable. See Richardson’s theorem. ConditionSet\}\) There are still a few things solveset can’t do, which the old solve can, such as solving non linear multivariate & LambertW type equations. Hence, it’s not yet a perfect replacement for old solve. The ultimate goal is to: - Replace solve with solveset once solveset is at least as powerful as solve, i.e., solveset does everything that solve can do currently, and - eventually rename solveset to solve.) [0, 2] >>> not_empty_in(FiniteSet(x, x**2).intersect(Interval(1, 2)), x) [-sqrt(2), -1] U [1, 2]. Solves a given inequality or equation with set as output See also >>> x = Symbol('x') >>> pprint(solveset(exp(x) - 1, x), use_unicode=False) {2*n*I*pi | n in Integers()} >>> x = Symbol('x', real=True) >>> pprint(solveset(exp(x) - 1, x), use_unicode=False) {2*n*I*pi | n in Integers()} >>> Integers()} \ {0}) U ({2*n*pi + pi | n in Integers()} \ {0}) >>> p = Symbol('p', positive=True) >>> pprint(solveset(sin(p)/p, p), use_unicode=False) {2*n*pi | n in Integers()} U {2*n*pi + pi | n in Integers()} >>> solveset(exp(x) > 1, x, R) (0, oo) Reduce the complex valued equation f(x) = y to a set of equations {g(x) = h_1(y), g(x) = h_2(y), ..., g(x) = h_n(y) } where g(x) is a simpler function than f(x). The return value is a tuple (g(x), set_h), where g(x) is a function of x and set_h is the set of function {h_1(y), h_2(y), ..., h_n(y)}. Here, y is not necessarily a symbol. The set_h contains the functions along with the information about their domain in which they are valid, through set operations. For instance, if y = Abs(x) - n, is inverted in the real domain, then, the set_h doesn’t simply return \({-n, n}\), as the nature of \(n\) is unknown; rather it will return: \(Intersection([0, oo) {n}) U Intersection((-oo, 0], {-n})\) By default, the complex domain is used but note that inverting even seemingly simple functions like exp(x) can give very different result in the complex domain than are obtained in the real domain. (In the case of exp(x), the inversion via log is multi-valued in the complex domain, having infinitely many branches.) If you are working with real values only (or you are not sure which function to use) you should probably use))), Integers())) >>> invert_real(exp(x), y, x) (x, Intersection((-oo, oo), {log(y)})) When does exp(x) == 1? >>> invert_complex(exp(x), 1, x) (x, ImageSet(Lambda(_n, 2*_n*I*pi), Integers())) >>> invert_real(exp(x), 1, x) (x, {0}) >>> domain_check(x/x, x, 0) # x/x is automatically simplified to 1 True >>> domain_check(Mul(x, 1/x, evaluate=False), x, 0) False Converts a given System of Equations into Matrix form. Here \(equations\) must be a linear system of equations in \(symbols\). The order of symbols in input \(symbols\) will determine the order of coefficients in the returned Matrix. The Matrix form corresponds to the augmented matrix form. For example:]]) >>> a, b, c, d, e, f = symbols('a, b, c, d, e, f') >>> eqns = [a*x + b*y - c, d*x + e*y - f] >>> A, B = linear_eq_to_matrix(eqns, x, y) >>> A Matrix([ [a, b], [d, e]]) >>> B Matrix([ [c], [f]]) 2 -1 1] system = [2 -2 4 -2] [2 -1 2 0] \(system = [3x + 2y - z - 1, 2x - 2y + 4z + 2, 2x - y + 2z]\) [3 2 -1 ] [ 1 ] A = [2 -2 4 ] b = [ -2 ] [2 -1 2 ] [ 0 ] \(system = (A, b)\) Symbols to solve for should be given as input in all the cases either in an iterable or as comma separated arguments. This is done to maintain consistency in returning solutions in the form of variable input by the user. The algorithm used here is Gauss-Jordan elimination, which results, after elimination, in)} >>> A = Matrix([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) >>> b = Matrix([3, 6, 9]) >>> linsolve((A, b), [x, y, z]) {(z - 1, -2*z + 2, z)} >>> Eqns = [3*x + 2*y - z - 1, 2*x - 2*y + 4*z + 2, - x + S(1)/2*y - z] >>> linsolve(Eqns, x, y, z) {(1, -2, -2)} >>> aug = Matrix([[2, 1, 3, 1], [2, 6, 8, 3], [6, 8, 18, 5]]) >>> aug Matrix([ [2, 1, 3, 1], [2, 6, 8, 3], [6, 8, 18, 5]]) >>> linsolve(aug, x, y, z) {(3/10, 2/5, 0)} >>> a, b, c, d, e, f = symbols('a, b, c, d, e, f') >>> eqns = [a*x + b*y - c, d*x + e*y - f] >>> linsolve(eqns, x, y) {((-b*f + c*e)/(a*e - b*d), (a*f - c*d)/(a*e - b*d))} >>> system = Matrix(([0,0,0], [0,0,0], [0,0,0])) >>> linsolve(system, x, y) {(x, y)} >>> linsolve([ ], x) EmptySet() See Diophantine
http://docs.sympy.org/latest/modules/solvers/solveset.html
2017-06-22T20:44:01
CC-MAIN-2017-26
1498128319902.52
[]
docs.sympy.org
Hively Hively is a great way to consistently measure customer happiness. With Hively's snippet embedded in your signature, customers can rate your response with just one click. This article will guide you through integrating Hively with Help Scout. Activation instructions - 1 Log in to Hively, and click on the menu next to your name in the upper right-hand corner of the page. Select Integrations from the dropdown menu. - 2 - On the next page, select Help Scout listed under the Integrations list. - 3 You'll need to enter a Help Scout User ID for each person who has a Hively and Help Scout profile. If you're logged in to Help Scout, you can grab User IDs from this page. After you've added the User IDs to those blank text fields, click on the blue Update button. - 4 Once the page reloads, click the Get Help Scout snippet button. Click on the Show HTML link and copy the code to your clipboard, then head back to Help Scout. - 5 On the Hively installation page, toggle the HTML editor by clicking on the </> icon, then paste your signature code in to the Rating Text box. You can also modify where the rating appears, and which mailboxes will use your Hively ratings link. Don't forget to click the blue Save button. - 6 - After you've saved, you'll be able to see how your ratings will appear to your customers in the Rating Text field.
http://docs.helpscout.net/article/257-hively
2017-06-22T20:26:04
CC-MAIN-2017-26
1498128319902.52
[]
docs.helpscout.net
Product version: AppBuilder 2.7.3 Released: 2015, January 21 AppBuilder 2.7.3 is an update release. For a list of the new features and updates introduced in the earlier major release, AppBuilder 2.7, see AppBuilder 2.7 Release Notes. This release introduces the following updates in AppBuilder. - Significant increase in the speed and performance of the command-line interface simulateand debugoperations when run for the first time. - Optimized cloud builds for apps containing large-sized custom Apache Cordova plugins.
http://docs.telerik.com/platform/appbuilder/release-notes/2x/v2-7-3
2017-06-22T20:35:31
CC-MAIN-2017-26
1498128319902.52
[]
docs.telerik.com
Type inference and type annotations¶ Type inference¶ The initial assignment defines a variable. If you do not explicitly specify the type of the variable, mypy infers the type based on the static type of the value expression: i = 1 # Infer type int for i l = [1, 2] # Infer type List[int] for l Type inference is bidirectional and takes context into account. For example, the following is valid: def f(l: List[object]) -> None: l = [1, 2] # Infer type List[object] for [1, 2] In an assignment, the type context is determined by the assignment target. In this case this is l, which has the type List[object]. The value expression [1, 2] is type checked in this context and given the type List[object]. In the previous example we introduced a new variable l, and here the type context was empty. Note that the following is not valid, since List[int] is not compatible with List[object]: def f(l: List[object], k: List[int]) -> None: l = k # Type check error: incompatible types in assignment The reason why the above assignment is disallowed is that allowing the assignment could result in non-int values stored in a list of int: def f(l: List[object], k: List[int]) -> None: l = k l.append('x') print(k[-1]) # Ouch; a string in List[int] You can still run the above program; it prints x. This illustrates the fact that static types are used during type checking, but they do not affect the runtime behavior of programs. You can run programs with type check failures, which is often very handy when performing a large refactoring. Thus you can always ‘work around’ the type system, and it doesn’t really limit what you can do in your program. Type inference is not used in dynamically typed functions (those without an explicit return type) — every local variable type defaults to Any, which is discussed later. Explicit types for variables¶ You can override the inferred type of a variable by using a special type comment after an assignment statement: x = 1 # type: Union[int, str] Without the type comment, the type of x would be just int. We use an annotation to give it a more general type Union[int, str]. Mypy checks that the type of the initializer is compatible with the declared type. The following example is not valid, since the initializer is a floating point number, and this is incompatible with the declared type: x = 1.1 # type: Union[int, str] # Error! Note The best way to think about this is that the type comment sets the type of the variable, not the type of the expression. To force the type of an expression you can use cast(<type>, <expression>). Explicit types for collections¶ The type checker cannot always infer the type of a list or a dictionary. This often arises when creating an empty list or dictionary and assigning it to a new variable that doesn’t have an explicit variable type. In these cases you can give the type explicitly using a type annotation comment: l = [] # type: List[int] # Create empty list with type List[int] d = {} # type: Dict[str, int] # Create empty dictionary (str -> int) Similarly, you can also give an explicit type when creating an empty set: s = set() # type: Set[int] Declaring multiple variable types at a time¶ You can declare more than a single variable at a time. In order to nicely work with multiple assignment, you must give each variable a type separately: i, found = 0, False # type: int, bool You can optionally use parentheses around the types, assignment targets and assigned expression: i, found = 0, False # type: (int, bool) # OK (i, found) = 0, False # type: int, bool # OK i, found = (0, False) # type: int, bool # OK (i, found) = (0, False) # type: (int, bool) # OK Starred expressions¶ In most cases, mypy can infer the type of starred expressions from the right-hand side of an assignment, but not always: a, *bs = 1, 2, 3 # OK p, q, *rs = 1, 2 # Error: Type of rs cannot be inferred On first line, the type of bs is inferred to be List[int]. However, on the second line, mypy cannot infer the type of rs, because there is no right-hand side value for rs to infer the type from. In cases like these, the starred expression needs to be annotated with a starred type: p, q, *rs = 1, 2 # type: int, int, *List[int] Here, the type of rs is set to List[int]. Types in stub files¶ Stub files are written in normal Python 3 syntax, but generally leaving out runtime logic like variable initializers, function bodies, and default arguments, replacing them with ellipses. In this example, each ellipsis ... is literally written in the stub file as three dots: x = ... # type: int def afunc(code: str) -> int: ... def afunc(a: int, b: int=...) -> int: ... Note The ellipsis ... is also used with a different meaning in callable types and tuple types.
http://mypy.readthedocs.io/en/stable/type_inference_and_annotations.html
2017-06-22T22:10:15
CC-MAIN-2017-26
1498128319912.4
[]
mypy.readthedocs.io
Contains a Multidimensional Expressions (MDX) expression that returns a trend indicator for a Kpi element. Syntax <Kpi> ... <Trend>...</Trend> ... </Kpi> Element Characteristics Element Relationships Remarks The Trend element contains an MDX expression that evaluates to a number between -1 and 1. The element that corresponds to the parent of Trend in the Analysis Management Objects (AMO) object model is Kpi.
https://docs.microsoft.com/en-us/sql/analysis-services/scripting/properties/trend-element-assl
2017-06-22T23:09:23
CC-MAIN-2017-26
1498128319912.4
[]
docs.microsoft.com
WebUIFileOpenPickerActivatedEventArgs WebUIFileOpenPickerActivatedEventArgs WebUIFileOpenPickerActivatedEventArgs Class Definition Provides information about an activated event that fires when the user tries to pick files or folders that are provided by the app. C#/C++/VB This type appears as FileOpenPickerActivatedEventArgs. public : sealed class WebUIFileOpenPickerActivatedEventArgs : IActivatedEventArgs, IActivatedEventArgsWithUser, IFileOpenPickerActivatedEventArgs, IFileOpenPickerActivatedEventArgs2, IActivatedEventArgsDeferral public sealed class WebUIFileOpenPickerActivatedEventArgs : IActivatedEventArgs, IActivatedEventArgsWithUser, IFileOpenPickerActivatedEventArgs, IFileOpenPickerActivatedEventArgs2, IActivatedEventArgsDeferral Public NotInheritable Class WebUIFileOpenPickerActivatedEventArgs Implements IActivatedEventArgs, IActivatedEventArgsWithUser, IFileOpenPickerActivatedEventArgs, IFileOpenPickerActivatedEventArgs2, IActivatedEventArgsDeferral - Attributes - Examples The File picker sample demonstrates how to respond to a fileOpenPicker activated event. //. Remarks Learn how to offer files for the user to pick from your app in Quickstart: Providing file services through and in the Windows.Storage.Pickers.Provider namespace reference. This object is accessed when you implement an event handler for the WinJS.Application.Onactivated or the Windows.UI.WebUI.WebUIApplication.activated events when ActivationKind is fileOpenPicker. Note : This class is not agile, which means that you need to consider its threading model and marshaling behavior. For more info, see Threading and Marshaling (C++/CX) . Properties Gets the app activated operation. public : ActivatedOperation ActivatedOperation { get; } public ActivatedOperation ActivatedOperation { get; } Public ReadOnly Property ActivatedOperation As ActivatedOperation The activation operation. - Attributes - Gets the family name of the caller's package. public : PlatForm::String CallerPackageFamilyName { get; } public string CallerPackageFamilyName { get; } Public ReadOnly Property CallerPackageFamilyName As string - Value - PlatForm::String string string The family name of the caller's package - Attributes - The letterbox UI of the file picker that is displayed when the user wants to pick files or folders that are provided by the app. - Attributes - Gets the activation type. public : ActivationKind Kind { get; } public ActivationKind Kind { get; } Public ReadOnly Property Kind As ActivationKind The activationKind.fileOpenPicker enumeration value. - Attributes - One of the enumeration values. - Attributes - Gets the splash screen object that provides information about the transition from the splash screen to the activated app. public : SplashScreen SplashScreen { get; } public SplashScreen SplashScreen { get; } Public ReadOnly Property SplashScreen As SplashScreen The object that provides splash screen information. - Attributes -. Gets the user that the app was activated for. public : User User { get; } public User User { get; } Public ReadOnly Property User As User - Attributes -.
https://docs.microsoft.com/en-us/uwp/api/Windows.UI.WebUI.WebUIFileOpenPickerActivatedEventArgs
2017-06-22T23:26:59
CC-MAIN-2017-26
1498128319912.4
[]
docs.microsoft.com
Setup webhooks Webhook endpoints are URLs defined by users to which HyperTrack sends events. An event is an account occurrence, such as, user arrived at a destination, user is delayed by X minutes, and user turned off location. Each occurrence has a corresponding Event object. HyperTrack creates an Event object for each event. This object contains all the relevant information about what just happened, including the type of event and the data associated with that event. HyperTrack then sends the Event object to the URL in your account's webhooks setting via an HTTP POST request. You can find a full list of all event types in the API docs. You might use a webhook to trigger a notification to your customer that a delivery may be delayed, or the user is arriving soon, or send a receipt email when the action has been completed. Configuring webhooks Webhooks are configured in the account settings section of the Dashboard. You can enter any URL you'd like to have events sent to. This should be a dedicated endpoint on your server, coded per the instructions below. You can choose the events that you are interested in and only those would be sent to you. Receiving a webhook notification Creating a webhoook endpoint on your server is no different from creating any other route or API on your server. Webhook data is sent as JSON in the POST request body. The full event details are included and can be used directly, after parsing the JSON into an Event object. Here is an example JSON of the body of the request. { "id":"000225ce-5841-4cfa-afa0-5c61697d4e0c", "user_id":"809c85e8-e1ea-4545-9c6a-58664190d163", "recorded_at":"2017-04-06T22:59:44.465916Z", "type":"action.on_the_way", "data":{ "object":{ "id":"229a1aa9-0726-40fa-8a2e-c59f6ce79baf", "user":{ "id":"809c85e8-e1ea-4545-9c6a-58664190d163", "group_id":null, "lookup_id":null, "name":"Toivo Pollari", "phone":null, "photo":"", "availability_status":"offline", "vehicle_type":"unknown", "pending_actions":[ ], "last_location":{ "geojson":{ "type":"Point", "coordinates":[ -122.4640349, 37.7687697 ] }, "recorded_at":"2017-04-07T06:49:33.787879+00:00" }, "last_online_at":"2017-04-07T04:51:24.787879Z", "last_heartbeat_at":"2017-04-07T06:51:44.862080Z", "location_status":"location_available", "display":{ "activity_text":"", "status_text":"Offline", "sub_status_text":"Last updated 36 minutes ago", "has_new_sdk":true, "is_warning":false }, "created_at":"2017-04-06T13:01:32.181804Z", "modified_at":"2017-04-07T06:51:44.862723Z" }, "type":"pickup", "vehicle_type":"unknown", "started_place":{ "id":"a18ad5e1-4c3e-4b1a-8786-5b2cc6b035f9", "name":"", "location":{ "type":"Point", "coordinates":[ -122.42813323979513, 37.80378573707798 ] }, "address":"1263 Bay St, San Francisco, CA 94123, USA", "landmark":"", "zip_code":"94123", "city":"SF", "state":"CA", "country":"US", "created_at":"2017-04-06T22:59:21.776499Z", "modified_at":"2017-04-06T22:59:21.776529Z" }, "started_at":"2017-04-06T22:58:48.157260Z", "expected_place":{ "id":"812fdbcc-31e8-44b3-8245-e5b7167c9407", "name":"", "location":{ "type":"Point", "coordinates":[ -122.41729555196481, 37.76292051400294 ] }, "address":"637-659 S Van Ness Ave, San Francisco, CA 94110, USA", "landmark":"", "zip_code":"94110", "city":"SF", "state":"CA", "country":"US", "created_at":"2017-04-06T22:59:21.402646Z", "modified_at":"2017-04-06T22:59:21.402674Z" }, "expected_at":null, "completed_place":null, "completed_at":"2017-04-07T00:38:37.362650Z", "assigned_at":"2017-04-06T22:59:21.500233Z", "suspended_at":"2017-04-07T00:24:51.602806Z", "canceled_at":null, "status":"completed", "eta":null, "initial_eta":"2017-04-06T23:17:40.782848Z", "short_code":"QynE7jlo", "tracking_url":"", "lookup_id":"HMKFO", "created_at":"2017-04-06T22:59:21.412968Z", "modified_at":"2017-04-07T00:38:37.362833Z", "display":{ "duration_remaining":0, "status_text":"Completed", "sub_status_text":"", "show_summary":true } } }, "created_at":"2017-04-06T22:59:44.466227Z" } The webhook endpoint should not have any authentication. Also, if you're using Rails, Django, or another web framework, your site may automatically check that every POST request contains a CSRF token. If so, you may need to exempt the webhooks route from CSRF protection to receive webhooks. If you use an HTTPS URL for your webhook endpoint, we will validate that the connection to your server is secure before sending your webhook data. For this to work, your server must be correctly configured to support HTTPS with a valid server certificate. Responding to a webhook To acknowledge receipt of a webhook, your endpoint should return a 2xx HTTP status code. Any other information returned in the request headers or request body is ignored. All response codes outside the this range, including 3xx codes, will indicate that you did not receive the webhook. If a webhook is not successfully received for any reason, we will continue trying to send it once every 3 minutes up to a maximum of 3 retries. Webhooks cannot be manually retried after this time, though you can query for the event to reconcile your data with any missed events. Best practices If your webhook script performs complex logic, or makes network calls, it's possible the script would timeout before we see its complete execution. For that reason, you may want to have your webhook endpoint immediately acknowledge receipt by returning a 2xx HTTP status code, and then perform the rest of its duties. Webhook endpoints may occasionally receive the same event more than once. We advise you to guard against duplicated event receipts by making your event processing idempotent. One way of doing this is logging the events you've processed, and then not processing already-logged events. For optimum security, you can confirm the event data with us before acting upon it. To do so: - Parse the JSON data as above. - Grab the received Event object ID value. - Use the Event object ID in a retrieve event API call. - Take action using the returned Event object.
https://docs.hypertrack.com/events/webhook.html
2017-06-22T22:24:10
CC-MAIN-2017-26
1498128319912.4
[array(['../assets/webhook.png', 'Webhook config'], dtype=object)]
docs.hypertrack.com
Routing¶ SiteSupra routing is heavily based on Symfony’s routing component and uses very similar syntax. However, there are some minor differences. For example, you have to load all your routing files manually in your package’s inject() method: Function locateConfigFile searches routes.yml in your package’s Resources\config directory. Common Example¶ Let’s take a look at some routing definition examples. The most simple would be SupraPackageFramework main routing file: configuration section at line 1 defines global prefix and defaults (default parameter values) keys. prefix must be explicitly defined even with default ~ value. routes section, starting from line 3, defines actual routes. Each route may contain the following fields: patterndefines actual URI that will trigger the route; controllerspecifies the controller (in the example above, Framework:Combo:comboresolves into SupraPackageFramework→ ComboController→ comboAction(just like Symfony does!); filtersdefines Symfony route filters; requirementshere you can specify per-parameter regex requirements; defaultsprovides default parameter values; optionsat the moment supports frontendkey only. where only pattern and controller are required. Tip Due to the fact SiteSupra’s routing is based on Symfony Routing component, everything written in Symfony documentation applies to SiteSupra as well - we did not reinvent the wheel here.
http://sitesupra-docs.readthedocs.io/en/latest/docs/routing.html
2017-06-22T22:04:02
CC-MAIN-2017-26
1498128319912.4
[]
sitesupra-docs.readthedocs.io
8 Ways to Increase Your Mean Time Between Loss of Sleep Want even longer periods of uninterrupted sleep? Here are eight best practices to make dynamic infrastructure and server monitoring even easier with New Relic Infrastructure. 1. Install the Infrastructure agent across your entire environment New Relic Infrastructure was designed to help enterprise customers monitor their large and dynamically changing environments at scale. In order to facilitate this, the UI is completely driven by tags that let you visualize aggregated metrics, events, and inventory for a large number of servers. To really get the most out of Infrastructure monitoring, we recommend installing it across your entire environment, preferably even across multiple regions and clusters. This will provide a more accurate picture of the health of your host ecosystem and the impact your infrastructure has on your applications. Want to achieve faster Mean Time To Resolution (MTTR)? Install Infrastructure on database servers, web servers, and any other host that supports your applications. When deploying the agent, leverage custom attributes to tag your hosts so that you can use those for filtering the data presented in the UI and for setting alerts. This is in addition to any Amazon EC2 tags you may be using which will auto-import when you enable the EC2 integration. You may also prefer to keep the agent logs separate from the system logs, which you can do through the configuration. How to do it - Leverage our install modules for config management tools such as Chef, Puppet and Ansible to easily deploy your agent across all your infrastructure. - Read the instructions in the github repo for your config management tool referenced in the link above and define the custom_attributes you want to use to tag your hosts. - Set the ‘log_file` attribute to your preferred location for the New Relic Infrastructure agent logs. If you are installing the agent on a single host, the process should only take a few minutes and you can find detailed instructions in our documentation. 2. Configure the native EC2 integration If you have an AWS environment, in addition to installing the Infrastructure agent on your EC2 instances to monitor them, we also recommend configuring the EC2 integration so that Infrastructure can automatically import all the tags and metadata associated with your AWS instances. This allows you to filter down to a part of your infrastructure using the same AWS tags (example, ECTag_Role=’Kafka’), and slice-and-dice your data in multiple ways. Additionally, our ‘Alerts’ and ‘Saved Filter Sets’ are completely tag-driven and dynamic, so they automatically add/remove instances matching these tags to give our users the most real-time views that scale with your cloud infrastructure. How to do it - From the New Relic Infrastructure menu bar, select Integrations. - Enable the EC2 integration. - If you have not configured any AWS integration yet, click on the Amazon Web Services EC2 button and follow the steps. - If you have already configured other AWS integrations: - Click on the Manage Services link on the top right. - Select the EC2 checkbox. - Click on Save Changes on the bottom right. 3. Activate integrations with your Pro subscription Monitoring your Infrastructure extends beyond just CPU, memory, and storage utilization. That’s why Infrastructure Pro has out-of-the-box Integrations that allow you to monitor all the services that support your hosts as well. Activate any of our 20+ integrations, including AWS Billing, AWS ELB, Amazon S3, MySQL, NGINX, and more, to extend monitoring to your AWS or on-host applications, and access the pre-configured dashboards that appear for each of them. How to do it AWS Integrations: - From the New Relic Infrastructure menu bar, select Integrations. - Click on an integration you want to configure. - Follow the on-screen instructions to connect New Relic to your AWS account. - Select the integrations you want to enable. On-Host Integrations: - From the New Relic Infrastructure menu bar, select Integrations. - Click on the On Host Integrations tab. - Click on the link to Configure any of the integrations and follow the steps in the documentation. - Once the integration starts reporting data it will automatically show as “Active.” 4. Create filter sets With New Relic Infrastructure, users can create filter sets to organize hosts, cluster roles, and other resources based on criteria that matter the most to users. This allows you to optimize your resources by using a focused view to monitor, detect, and resolve any problems proactively. The attributes for filtering are populated from the auto-imported EC2 tags or custom tags that may be applied to hosts. You can combine as many filters as you want in a filter set, and save them to share with other people in your account. You’ll also be able to see the color-coded health status of each host inside the filter set, so you can quickly identify problematic areas of your infrastructure. Additionally, filter sets can be used in the health map to get an overview of your infrastructure performance at a glance based on the filters that matter to your teams. How to do it - From the New Relic Infrastructure menu bar, select Compute. - Click on the Add Filter button on the left and specify your filtering criteria. - Click on the edit icon next to New Filter Set and set the name for your filter set. - Click on Save. - Access your filter set by clicking on Saved Filter Sets at the top of the left sidebar. 5. Create alert conditions With New Relic Infrastructure, you can create alert conditions directly within the context of what you are currently monitoring with New Relic. For example, if you are viewing a filter set comprised of a large number of hosts and notice a problem, you don’t need to create an individual alert condition for every host within. Instead, we recommend initiating the alert condition directly from the chart of the metric you are viewing and creating it based on the filter tags. This will create an alert condition for any hosts that match those tags, allowing Infrastructure to automatically remove hosts that go offline and add new hosts to the alert condition if they match those tags. Alerts configured once for the appropriate tags will scale correctly across all future hosts. And know that you can also leverage existing alert policies for Infrastructure’s alert conditions. How to do it For Compute, Network, Storage and Processes Metrics - From the New Relic Infrastructure menu bar, select the tab that contains the metrics you want to alert on. - Click on the bell icon (“Set Alert”) at the top right of a chart. - Name your alert condition. - Add additional filters using the Narrow Down Entities drop down. - Select a metric and provide the threshold details. - Choose an existing alert policy or create a new one. - Click on Create. For Integrations - From the New Relic Infrastructure menu bar, select Integrations. - Click on the Set Alert link for the integration you want to create an alert for. - Name your alert condition. - Add additional filters using the Narrow Down Entities drop down. - Select a metric and provide the threshold details. - Choose an existing alert policy or create a new one. - Click on Create. 6. View Infrastructure data alongside APM data The integration between New Relic APM and Infrastructure lets you see your APM data and infrastructure data side by side, so you can find the root cause of problems more quickly, no matter where they originate. This allows users to view the performance relationship of your hosts and the applications running on them, allowing for quicker diagnosis of the issue and impact on the business’ health. Use health maps to quickly spot any issues or alerts related to the health of your applications and how that connects to the supporting infrastructure. The first boxes starting from the top left are those that require your attention. How to do it - Click on Health map in the top navigation to access health map. - Select the appropriate application-centric or host-centric view, based on your infrastructure filter sets. - Use the filter to narrow down the cards to the ones you are interested in. - Mouse over the cards to get a tooltip with additional information about the current issues. - Click on the title of a card to navigate to the appropriate APM or Infrastructure page with more details about the application or hosts. 7. Access Infrastructure data in New Relic Insights Teams that use multiple New Relic products find it useful to create a single dashboard to visually correlate the Infrastructure’s health with Application, Browser and Synthetics metrics. That’s where New Relic Insights comes in. All the granular metrics and events collected by Infrastructure are stored in New Relic Insights and are accessible to you immediately. This data is retained for three months for an Essential subscriptions and 13 months for a Pro subscription. Having access to the raw metrics means you can run more custom queries using NRQL, and also create dashboards to share Infrastructure metrics with your team. Simply click on the ‘View in Insights’ icon above any of the Infrastructure charts to view the query that drives the data. How to do it - From the New Relic Infrastructure menu bar, select the screen with the chart you are interested in. - Click on the View in Insights icon on the top right of any Infrastructure chart. - Edit the query in Insights to customize it at will. - Use the Add to Dashboard button to save the query for later use. 8. Update your agents regularly New Relic’s software engineering team is constantly pushing out improvements and new features to improve our customers’ overall monitoring experience. In order to take advantage of all the awesomeness they’re delivering, we recommend regularly updating to the latest version of the Infrastructure agent. How to do it - From the New Relic Infrastructure menu bar, select Settings. - From the left sidebar, select Agents to check what agent versions are you using. - If needed, update the agent using the instructions for your operating system. Want more user tips? View training videos at New Relic University. - Read the documentation. - Ask a question in the New Relic Community Forum.
https://docs.newrelic.com/docs/infrastructure/new-relic-infrastructure/guides/new-relic-infrastructure-best-practices-guide
2018-12-09T21:38:11
CC-MAIN-2018-51
1544376823183.3
[array(['https://docs.newrelic.com/sites/default/files/thumbnails/image/infra-setup_0.png', 'infra-setup.png Infrastructure integration setup page'], dtype=object) array(['https://docs.newrelic.com/sites/default/files/thumbnails/image/infra-integration-enable_0.png', 'infra-integration-enable.png Infrastructure integration enable'], dtype=object) array(['https://docs.newrelic.com/sites/default/files/thumbnails/image/infra-filter-set.png', 'infra-filter-set.png infra-filter-set.png'], dtype=object) array(['https://docs.newrelic.com/sites/default/files/thumbnails/image/infra-alerts.png', 'infra-alerts.png infra-alerts.png'], dtype=object) array(['https://docs.newrelic.com/sites/default/files/thumbnails/image/infra-health-map.png', 'infra-health-map.png infra-health-map.png'], dtype=object) array(['https://docs.newrelic.com/sites/default/files/thumbnails/image/infra-insights.png', 'infra-insights.png infra-insights.png'], dtype=object) ]
docs.newrelic.com
Arduino MKR WiFi 101 mkrwifi1010 ID for board option in “platformio.ini” (Project Configuration File): [env:mkrwifi1010] platform = atmelsam board = mkrwifi1010 You can override default Arduino MKR WiFi 1010 settings per build environment using board_*** option, where *** is a JSON object path from board manifest mkrwifi1010.json. For example, board_build.mcu, board_build.f_cpu, etc. [env:mkrwifi1010] platform = atmelsam board = mkrwifi1010 ; change microcontroller board_build.mcu = samd21g18a ; change MCU frequency board_build.f_cpu = 48000000L Uploading¶ Arduino MKR WiFi 1010 supports the next uploading protocols: sam-ba blackmagic jlink atmel-ice Default protocol is sam-ba You can change upload protocol using upload_protocol option: [env:mkrwifi1010] platform = atmelsam board = mkrwifi101 MKR WiFi 1010 does not have on-board debug probe and IS NOT READY for debugging. You will need to use/buy one of external probe listed below.
https://docs.platformio.org/en/latest/boards/atmelsam/mkrwifi1010.html
2018-12-09T21:24:24
CC-MAIN-2018-51
1544376823183.3
[]
docs.platformio.org
Arduino Pro Micro¶ Enter the bootloader¶ Recover a bricked board by entering the bootloader. - Power up the board. - Connect RST to GND for a second to enter the bootloader and stay in it for 8 seconds. Default system features¶ The default configuration includes those major features. They are all initialized by sys_start() at the startup of the application. Drivers¶ Supported drivers for this board. - adc — Analog to digital convertion - analog_input_pin — Analog input pin - analog_output_pin — Analog output pin - bmp280 — BMP280 temperature and pressure sensor - dht — DHT temperature and humidity sensor - ds18b20 — One-wire temperature sensor - ds3231 — RTC clock - eeprom_i2c — I2C EEPROM - exti — External interrupts - gnss — Global Navigation Satellite System - hd44780 — Dot matrix LCD - hx711 — HX711 ADC for weigh scales - i2c — I2C - i2c_soft — Software I2C - jtag_soft — Software JTAG - mcp2515 — CAN BUS chipset - nrf24l01 — Wireless communication - owi — One-Wire Interface - pin — Digital pins - pwm — Pulse width modulation - sd — Secure Digital memory - sht3xd — SHT3x-D Humidity and Temperature Sensor - spi — Serial Peripheral Interface - uart — Universal Asynchronous Receiver/Transmitter - uart_soft — Software Universal Asynchronous Receiver/Transmitter - usb — Universal Serial Bus - usb_device — Universal Serial Bus - Device - watchdog — Hardware watchdog - xbee — XBee - xbee_client — XBee client Library Reference¶ Read more about board specific functionality in the Arduino Pro Micro.
https://simba-os.readthedocs.io/en/latest/boards/arduino_pro_micro.html
2018-12-09T22:18:41
CC-MAIN-2018-51
1544376823183.3
[]
simba-os.readthedocs.io
HeroController class A Navigator observer that manages Hero transitions. An instance of HeroController should be used in Navigator.observers. This is done automatically by MaterialApp. - Inheritance - Object - NavigatorObserver - HeroController Constructors - HeroController({CreateRectTween createRectTween }) - Creates a hero controller with the given RectTween constructor if any. [...] Properties - createRectTween → CreateRectTween - Used to create RectTweens that interpolate the position of heroes in flight. [...]final -StartUserGesture( Route route, Route previousRoute) → void - The Navigator's route routeis being moved by a user gesture. [...]override - didRemove( Route route, Route previousRoute) → void - The Navigator removed route. [...]inherited - didReplace( {Route newRoute, Route oldRoute }) → void - The Navigator replaced oldRoutewith newRoute
https://docs.flutter.io/flutter/widgets/HeroController-class.html
2018-12-09T22:07:53
CC-MAIN-2018-51
1544376823183.3
[]
docs.flutter.io
mysql_username and database_name: mysql -u <mysql_username> -p <database_name> SHOW VARIABLES LIKE 'character_set%'; If the value of variable character_set_connection is not UTF-8, you need to convert. If the current encoding is latin1, please follow the next steps to convert it to UTF-8. First, make a dump of the database using the following: mysqldump -u <mysql_username> -p --opt --default-character-set=latin1 --skip-set-charset <database_name> > seek_db.sql Now a couple of commands to change the contents of the dump sed -e 's/CHARSET=latin1/CHARSET=utf8/g' seek_db.sql > seek_db_utf8.sql sed -e 's/COLLATE=utf8_unicode_ci//g' seek_db_utf8.sql > seek_db_converted.sql Now refresh the database from the dump: mysql -u <mysql_username> -p <database_name> < seek_db_converted.sql If you have started up SEEK before doing this conversion you may need to clear the SEEK cache: bundle exec rake tmp:clear You can now clear out the intermediate files: rm seek_db.sql seek_db_utf8.sql seek_db_converted.sql Updating the init.d scripts If you use init.d scripts to start and stop the Delayed Job, Solr Search and Soffice services, you may need to update these (you will need to be a user with sudo access to update these scripts). Solr Search - Delayed Job - Soffice - Starting up SEEK and the Services You can now startup the services, either using the init.d scripts or by running: bundle exec rake sunspot:solr:start ./script/delayed_job start If you don’t use SEEK with Apache, the command to start it is now: bundle exec rails server Updating Passenger Phusion If you run SEEK with Apache, you may find you need to update and reconfigure Apache and Passenger Phusion. Please follow the steps in this section of the Production Installation Guide
https://docs.seek4science.org/tech/upgrading-to-0.18.html
2018-12-09T22:04:35
CC-MAIN-2018-51
1544376823183.3
[]
docs.seek4science.org
Review Workflow¶ The following provides a list of steps with the goal of reviewing a popper pipeline that someone else has created: - The popper workflowcommand generates a graph that gives a high-level view of what the pipeline does. - Inspect the content of each of the scripts from the pipeline. - Test that the pipeline works on your machine by running popper run. - Check that the git repo was archived (snapshotted) to zenodo or figshare by running popper zenodoor popper figshare.
https://popper.readthedocs.io/en/v1.0.0/protocol/review_workflow.html
2018-12-09T22:18:17
CC-MAIN-2018-51
1544376823183.3
[]
popper.readthedocs.io
» Administrative Rules Related » Administrative Code » Department of Public Instruction (PI) » Chapter PI 11 Up Up. (2) ,) . Down Down /code/admin_code/pi/11 true administrativecode /code/admin_code/pi/11/36/6 administrativecode/PI 11.36(6) administrativecode/PI 11.36(6) section true » Administrative Rules Related » Administrative Code » Department of Public Instruction (PI) » Chapter PI 11.
http://docs.legis.wisconsin.gov/code/admin_code/pi/11/36/6
2017-02-19T16:28:58
CC-MAIN-2017-09
1487501170186.50
[]
docs.legis.wisconsin.gov
Blue Toolbar¶ Toolbar General¶ Individual Options¶ - Compact Directory - Compact Picture Directory - Contact Report - Custom Reports - Directories - Export Members - Export Organizations to Excel - Family Export - Family Directory - Family Directory, Word Merge Document - Family Picture Directory - Labels - Family and Group By Address - Printing Avery 5160 Labels - Picture Directory - Status Flag Export - Word Merge - Word Merge Picture Directory
http://docs.touchpointsoftware.com/BlueToolbar/index.html
2017-02-19T16:33:08
CC-MAIN-2017-09
1487501170186.50
[]
docs.touchpointsoftware.com
SSL and TLS Transports Reference The following Mule transports provide access to TCP connections: The TCP Transport, which uses the basic TCP transport. The Secure Sockets Layer (SSL) and Transport Layer Security (TLS) transports use TCP with socket-level security. Other than the type of socket used, these transports all behave quite similarly. The SSL transport allows sending or receiving messages over SSL connections. SSL is a layer over IP and implements many other reliable protocols such as HTTPS and SMTPS. However, you may want to use the SSL transport directly if you require a specific protocol for reading the message payload that is not supported by one of these higher level protocols. This is often the case when communicating with legacy or native system applications that don’t support web services. Namespace and Syntax XML namespace: XML schema location: Connector syntax: Protocol Types PROTOCOL-TYPE defines how messages in Mule are reconstituted from the data packets. The protocol types are: Endpoint syntax: You can define your endpoints 2 different ways: Prefixed endpoint: <ssl:inbound-endpoint Non-prefixed URI: <inbound-endpoint See the sections below for more information. Considerations SSL is one of the standard communication protocols used on the Internet, and supports secure communication both across the internet and within a local area network. The Mule SSL transport uses native Java socket support, adding no communication overhead to the classes in java.net, while allowing many of the advanced features of SSL programming to be specified in the Mule configuration rather than coded in Java. Use this transport when communicating using low-level SSL using. As shown in the examples below, the SSL transport can be used to Create an SSL Server an SSL server Send Messages to an SSL Server messages to an SSL server The use of SSL with Java is described in detail in the JSSE Reference Guide. In particular, it describes the keystores used by SSL, how the certificates they contain are used, and how to create and maintain them. Features The SSL module allows a Mule application both to send and receive messages over SSL connections, and to declaratively customize the following features of SSL SSL endpoints can be used in one of two ways: To create an SSL server that accepts incoming connections, declare an inbound ssl endpoint with an ssl:connector. This creates an SSL server socket that reads requests from and optionally writes responses to client sockets. To write to an SSL server, create an outbound endpoint with an ssl:connector. This creates an SSL client socket that writes requests to and optionally reads responses from a server socket. To use SSL endpoints, follow the following steps: Add the MULE SSL namespace to your configuration: Define the SSL prefix using xmlns:ssl="" Define the schema location with Define one or more connectors for SSL endpoints. Create an SSL Server To act as a server that listens for and accepts SSL connections from clients, create an SSL connector that inbound endpoints use: <ssl:connector Send Messages to an SSL Server To send messages on an SSL connection, create a simple TCP connector that outbound endpoints use: SSL endpoints. Messages are received on inbound endpoints. Messages are sent to outbound endpoints. Both kinds of endpoints are identified by a host name and a port. By default, SSL endpoints use the request-response exchange pattern, but they can be explicitly configured as one-way. The decision should be straightforward: Example Configurations This shows how to create an SSL server in Mule. The connector at ❶ defines that a server socket is created that accepts connections from clients. Complete mule messages are read from the connection (direct protocol) becomes the payload of a Mule message (since payload only is false). The endpoint at ❷ applies these definitions to create a server at port 4444 on the local host. The messages read from there are then sent to a remote ssl endpoint at ❸. The flow version uses the eof protocol (❹), so that every byte sent on the connection is part of the same Mule message. Note that both connectors specify separate keystores to be used by the client (outbound) and server (inbound) endpoints. Configuration Options For more details about creating protocol handlers in Java, see . Configuration Reference SSL Transport Javadoc API Reference Reference the SSL Javadoc for this module. accepts (and you can then log) bytes as they are received.
https://docs.mulesoft.com/mule-user-guide/v/3.7/ssl-and-tls-transports-reference
2017-02-19T16:30:39
CC-MAIN-2017-09
1487501170186.50
[]
docs.mulesoft.com
The MathML input processor¶ The options below control the operation of the MathML input processor that is run when you include "input/MathML" in the jax array of your configuration or load a combined configuration file that includes the MathML input jax. They are listed with their default values. To set any of these options, include a MathML section in your MathJax.Hub.Config() call. For example MathJax.Hub.Config({ MathML: { useMathMLspacing: true } }); would set the useMathMLspacing option so that the MathML rules for spacing would be used (rather than TeX spacing rules). useMathMLspacing: false Specifies whether to use TeX spacing or MathML spacing when the HTML-CSS output jax is used.
http://docs.mathjax.org/en/latest/options/MathML.html
2017-02-19T16:34:19
CC-MAIN-2017-09
1487501170186.50
[]
docs.mathjax.org
48). Overriding the error page with your own..
http://docs.spring.io/spring-boot/docs/current/reference/html/howto-actuator.html
2017-02-19T16:38:05
CC-MAIN-2017-09
1487501170186.50
[]
docs.spring.io
: How long is each Joomla! version supported?..
http://docs.joomla.org/index.php?title=Joomla_3_FAQ&oldid=84413
2014-07-10T08:31:34
CC-MAIN-2014-23
1404776407319.89
[]
docs.joomla.org
It has been suggested that this article or section be split into specific version Namespaces. (Discuss). If version split is not obvious, please allow split request to remain for 1 week pending discussions. Proposed since. script found in administrator/components/com_admin/sql. You will also need to manually delete the Flash uploader files from the media folder.... Native Recaptcha was added into Joomla in version . Using Recaptcha is a great way of preventing bots from making fake accounts and content on your site. There are five steps to setting up Recaptcha: That's it! You're done! To. The reply-to email address is not used when replying to emails sent from the Contact component and the email is sent to yourself. Joomla sends emails from the Contacts component with the following headers From: email address set as the site email address in the global configuration '"To:'" email address set in the contact component '"Reply-to:'" email address of the person who submitted the contact form On receiving the email you should be able to hit reply and the email is sent to the email address set in the reply-to field. This is the correct behaviour as defined in the Internet (). IF you are using Gmail or GoogleApps AND the "From" address is one of your "Send mail as" addresses then the "Reply-to" email address is ignored and the reply is sent to the "From" address. The best solution to this is to set the site email address in the global configuration for your website to an email address that is NOT one of your "Send mail as" addresses in Gmail.: Some , [3]..." Joomla! 2.5 eliminates sections and instead allows you to use categories within categories, with as many levels of categories as you choose. All content must be in a category, but it is not necessary to have multiple categories or levels. This. Extensions');
http://docs.joomla.org/User:Hutchy68/DPL
2014-07-10T09:21:08
CC-MAIN-2014-23
1404776407319.89
[]
docs.joomla.org
Aurora MySQL database engine updates 2018-03-13 Version: 2.01.1 Aurora MySQL 2.01.1 is generally available. Aurora MySQL 2.x versions are compatible with MySQL 5.7 and Aurora MySQL 1.x versions are compatible with MySQL 5.6. When creating a new Aurora MySQL DB cluster, you can choose compatibility with either MySQL 5.7 or MySQL 5.6. When restoring a MySQL 5.6-compatible snapshot, you can choose compatibility with either MySQL 5.7 or MySQL 5.6. You can restore snapshots of Aurora MySQL 1.14*, 1.15*, 1.16*, and 1.17* into Aurora MySQL 2.01.1. We don this release of Aurora MySQL 5.7. Upgrade to Aurora 2.03 for performance schema support. Comparison with Aurora MySQL 5.6 The following Amazon Aurora MySQL features are supported in Aurora MySQL 5.6, but these features are currently not supported in Aurora. Currently, Aurora MySQL 2.01.1 does not support features added in Aurora MySQL version 1.16 and later. For information about Aurora MySQL version 1.16, see Aurora MySQL database engine updates 2017-12-11. MySQL 5.7 compatibility Aurora MySQL 2.01.1.01.1 does not currently support the following MySQL 5.7 features: Global transaction identifiers (GTIDs). Aurora MySQL supports GTIDs in version 2.04 and.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Updates.2011.html
2020-11-23T22:58:49
CC-MAIN-2020-50
1606141168074.3
[]
docs.aws.amazon.com
Encrypt a MySQL): installdir/mysql/conf/my.cnf - For Bitnami installations following Approach B (self-contained installations): installdir/mysql/my.cnf Add the following lines to the configuration file, within the [mysqld] section, to activate the keyring_file plugin: early-plugin-load=keyring_file.so keyring_file_data=installdir/mysql/data/keyring NOTE: The keyring file will be automatically created in the above location when the first table is encrypted. Keep a backup of this file as the data stored in the encrypted tables cannot be recovered without it. Restart the MySQL server: $ sudo installdir'
https://docs.bitnami.com/installer/apps/redmine-plus-agile/administration/encrypt-tables/
2020-11-23T22:59:37
CC-MAIN-2020-50
1606141168074.3
[]
docs.bitnami.com
R/import_countries.R import_country.Rd Download integrated rounds separately for countries from the European Social Survey import_country(country, rounds, ess_email = NULL, format = NULL) import_all_cntrounds(country, ess_email = NULL, format = NULL) download_country( country, rounds, ess_email = NULL, output_dir = getwd(), format = "stata" ) for import_country if length(rounds) is 1, it returns a tibble with the latest version of that round. Otherwise it returns a list of length(rounds) containing the latest version of each round. For download_country, if output_dir is a valid directory, it returns the saved directories invisibly and saves all the rounds in the chosen format in output_dir Use import_country to download specified rounds for a given country and import them to R. import_all_cntrounds will download all rounds for a given country by default and download_country will download rounds and save them in a specified format in the supplied directory. The format argument from import_country for Denmark dk_three <- import_country("Denmark", 1:3) # Only download the files, this will return nothing temp_dir <- tempdir() download_country( "Turkey", rounds = c(2, 4), output_dir = temp_dir ) # By default, download_country downloads 'stata' files but # you can also download 'spss' or 'sas' files. download_country( "Turkey", rounds = c(2, 4), output_dir = temp_dir, format = 'spss' ) # If email is not registered at ESS website, error will arise uk_one <- import_country("United Kingdom", 5, "[email protected]") # Error in authenticate(ess_email) : # The email address you provided is not associated with any registered user. # Create an account at # If selected rounds don't exist, error will arise czech_two <- import_country("Czech Republic", c(1, 22)) # Error in country_url(country, rounds) : # Only rounds ESS1, ESS2, ESS4, ESS5, ESS6, ESS7, ESS8 available # for Czech Republic }
https://docs.ropensci.org/essurvey/reference/import_country.html
2020-11-23T21:37:39
CC-MAIN-2020-50
1606141168074.3
[]
docs.ropensci.org
Table of Contents Configure your new Slackware System We'll assume you've read the Installation Guide, and you have a clean install of Slackware on your machine that you're happy with. This beginner's guide is meant to put you firmly on the Slackware path. If you installed Slackware for the first time, you may be daunted by the sight of the blinking cursor at a console login. Let this page guide you through the initial configuration of a freshly installed Slackware system. Before we continue, it is important to realize that the Slackware package manager does not perform any dependency checks. If you are new to Slackware, then performing a full installation (with the possible exception of the KDEI series) could prevent a lot of problems later on. The official Slackware recommendation 1) is “If you have the disk space, we encourage you to do a full installation for best results”. Post Installation Overview When Slackware starts for the first time after completing the installation and rebooting, you will notice that it boots to a console log in screen - not the graphical login screen you may expect from using other distributions. Do not let that discourage you. It is the first stage in a learning experience which will make you a lot more knowledgeable in Linux after as little as a few weeks. The installation did not offer to create a user account. At this stage, there is only the “ root” account. You should remember the root password, which you set at the very end of the installation procedure. Login as “root” now - you will find yourself at a “#” console root-prompt. So now what? The “ root” user is not the account which you are going to use as a matter of routine. Root is meant for system maintenance and configuration, software upgrades and the like. The first thing to do is create a fresh user account for yourself, without the root privileges. After that, it is time to start considering the installation of “Proprietary Graphics Drivers” (if you own a Nvidia or Ati card), setting up a wireless network connection or starting a graphical desktop environment. There is a lot that you can do with Slackware! Let's start with the basics. Create a User Account The first thing you will need to do is create your own non-root user account. There are two ways you can do this, both from the console. The recommended way is to use Slackware's own interactive adduser script, thus: # adduser and follow the prompts. Read the user management page for more detail on the adduser script. You can use the non-interactive standard Linux program useradd too: # useradd -m -g users -G wheel,floppy,audio,video,cdrom,plugdev,power,netdev,lp,scanner -s /bin/bash slacker Once that’s done you can log in to your user account. Log out of the root account (type logout at the root prompt) and then login using the new account you just created. Now come the really interesting adventures! Make Slackware Speak your Language Slackware's installer is English-only and it will also assume that English is the language in which you want to be addressed by the programs on your computer. If you are a non-English speaker and want your Slackware system to “talk” to you in your own language, you should check out our instruction article “Localization: Adapt Slackware to your own Language” Configure a Package Manager Now that you have Slackware running, you should consider spending a bit of time caring for your computer's good health. The software which was installed as part of the Slackware release you are running, may develop vulnerabilities over time. When those vulnerabilities are critical to the health of your computer, then Slackware will usually publish a patched version of the software package. These patched packages are made available online (in the /patches directory of the release) and announced on the Slackware Security mailing list. You have various options in order to keep your Slackware installation up-to-date. It's not advised to make the process of applying security updates fully automatic, but it is possible to do so using a cron job. slackpkg Your best option is to use slackpkg, which is a package manager on top of Slackware's own pkgtools. Before you can use slackpkg you will need to define an online mirror from which it will download updates to your computer. A list of available mirrors for your Slackware version can be found in this file: /etc/slackpkg/mirrors Open the file in a text editor such as nano or vi and uncomment a single mirror URL. Make sure that the URL mentions the release number for the version of Slackware you are running! Also, pick a mirror which is close to you or of which you know it is fast. When you have done that, you need to initialize slackpkg's database by running # slackpkg update gpg # slackpkg update Note that package management is done as the “ root” user! You will need to update the slackpkg database from time to time, when you learn about the availability of new patches for your distribution. After updating the database you can let it download and install the updates. Again, see the slackpkg page for guidelines about the use of this tool. The “install-new”, “upgrade-all” and “clean-system” commands will always show you a list of candidate packages to act on before excecuting anything. This allows you to review the suggested package alterations and select/deselect anything you do not agree with. The “clean-system” is technically only needed after you upgrade from one Slackware release to the next (say, from 14.1 to 14.2) and it is meant to remove any Slackware package which is not (or no longer) part of the core distribution. slackpkg clean-systemcommand regards any 3rd package as a candidate for removal! Therefore, be smart with your blacklist ( /etc/slackpkg/blacklist) Watching for Updated Packages The Slackware Essentials book has a chapter about keeping up to date. It would be good if you read it now if you have not done so already. - One way to look out for updated packages (patches) is to subscribe yourself to the Slackware Security mailing list and act when you read about new patches. - Another way is to setup a script to check for updates once a day and make the script email you when updates are available. For this to work you need to have sendmail configured (although it usually runs out of the box) and know how to create a cron job. And of course, have a script that does the work. An example of such a script is rsync_slackware_patches.sh which watches the Slackware ChangeLog.txt for updates. You download the script, edit it to use your favorite mirror server and make it executable so that it can be used in a cron job: # wget -O /usr/local/bin/rsync_slackware_patches.sh # chmod +x /usr/local/bin/rsync_slackware_patches.sh The script uses a couple of defaults which you may want to change to suit your environment - such as the location where the script will download the patches to. Simply run the script once, and see what it reports: # /usr/local/bin/rsync_slackware_patches.sh [rsync_slackware_patches.sh:] Syncing patches for slackware version '13.37'. [rsync_slackware_patches.sh:] Target directory /home/ftp/pub/Linux/Slackware/slackware-13.37/patches does not exist! [rsync_slackware_patches.sh:] Please create it first, and then re-run this script. You notice that you will have to edit the script and define a local directory (and create that directory too!) for the script to use. When that is done, you should run the script once - for a first-time download of patches. Then you can use cron to run the script once a day. For instance, schedule the script to run at 05:33 every day, and let it check for updates to the 64-bit version of Slackware-13.37. Open the crontab editor by typing crontab -e and then you add the following line to your cron table: 33 5 * * * /usr/local/bin/rsync_slackware_patches.sh -q -r 13.37 -a x86_64 This command will be executed silently (meaning you will not get emailed) if no new patches are found. However when the script finds updates it will download them and email you the script's output. You will get an email like this: [rsync_slackware_patches.sh:] New patches have arrived for Slackware 13.37 (x86_64)! ....................................................................... 0a1,10 > Mon Sep 10 20:26:44 UTC 2012 > patches/packages/seamonkey-2.12.1-x86_64-1_slack13.37.txz: Upgraded. > This is a bugfix release. > patches/packages/seamonkey-solibs-2.12.1-x86_64-1_slack13.37.txz: Upgraded. > This is a bugfix release. > +--------------------------+ > Sun Sep 9 19:11:35 UTC 2012 > patches/packages/mozilla-thunderbird-15.0.1-x86_64-1_slack13.37.txz: Upgraded. > This is a bugfix release. > +--------------------------+ And then you know you have to update slackpkg and make it install the latest patches. This gives you control over your updates (you decide when you update) while being automatically warned about the availability of new patches (which will already have been downloaded for you). Configure your Network If you installed the network packages, then at the end of the Slackware installation, you will have been asked a couple of simple questions, like: - do you use DHCP; - or else, what IP address do you want to use; - what is your computer's hostname; - do you have a (DNS) nameserver in the network? All of these questions have resulted in the setup of a few network related configuration files. /etc/rc.d/rc.inet1.conf This is where the details for your network interfaces go. Slackware's netconfigtool will only configure your eth0interface. If you have additional network interfaces, you can edit the file with a text editor such as nanoor viand add you configuration details. There is a man page for this: man rc.inet1.conf /etc/resolv.conf This is where your nameserver and domain search list are added. If you use DHCP then the DHCP client will update the file. If you use static IP addresses, then you are supposed to edit the file yourself. There is a man page for this: man resolv.conf /etc/HOSTNAME This is where your computer's hostname is defined. /etc/hosts This is where you will find a definition for your loopback interface which connects that to your hostname. You can add further hostname-to-IP-address mappings in this file if you do not use a DNS server or if you need specific mappings which the DNS server does not provide. There is a man page for this: man hosts If you want to read in more detail about how to configure your network, have a look at this online comprehensive guide to networking in Slackware. Traditional Network Configuration Wired Network To configure your wired network interface eth0, run (as root) # netconfig The same script which was run during the installation process. netconfigonly deals with the wired connection for eth0. On the assumption that you configured your wired connection with netconfig, your network should be connected automatically without the need for any post-installation configuration. If you didn’t enter your network configuration details during installation, just run netconfig as root; then run # /etc/rc.d/rc.inet1 eth0_start and you should have a working network connection instantaneously. Wireless Most common wireless hardware is supported by Linux these days. You can search online if your wireless hardware is supported by 3rd parties that have written Linux drivers. If you want to know if your computer recognizes your wireless card, simply run # iwconfig as root. If that tool reports “no wireless extensions” for all your network interfaces then the kernel does not have a driver for your wireless card and you'll have to find one online. As with the wired network interfaces, your wireless card is traditionally configured in the file /etc/rc.d/rc.inet1.conf. You can read a lot more about it in this wireless configuration guide. There is also the man page: # man rc.inet1.conf You will also need to take steps to include wireless security, whether WEP or WPA2. Unencrypted wireless connections are strongly discouraged. Note that WPA/WPA2 encryption is not configured just in /etc/rc.d/rc.inet1.conf, you will also need to edit /etc/wpa_supplicant.conf and add an encryption key there. Wireless encryption issues, in particular for WPA, can be hard to troubleshoot. Some basic troubleshooting steps are detailed in the above networking guide, just in case you do not get your computer associated to the Access Point. Graphical Network Configuration Services Slackware currently has some alternatives to configure and monitor your network connections. These install a daemon (aka a background service) which will allow you to switch between wired and wireless connections easily. That makes them perfectly suited for mobile users. They come with graphical configuration utilities and do not depend on the traditional Slackware configuration files - in fact, those files will cause conflicts if they contain network configuration. - You will find wicd in the extra section of the Slackware release tree (the word extra means that it is not part of the core distribution and will not have been installed as part of a full installation). After installing the wicd package, you have to make its init script executable so that the network daemon automatically starts at boot: # chmod +x /etc/rc.d/rc.wicd You can then configure your network using the graphical tool wicd-clientor if you are running Slackware 14 you can use the KDE widget for wicd instead. For console lovers, there is also wicd-curseswhich offers the same configuration capabilities as the X-based counterparts. - Starting with Slackware 14, there is also Networkmanager. It will be installed as part of a full install, but the network daemon will not be started by default. As with wicd, you have to make its init script executable: # chmod +x /etc/rc.d/rc.networkmanager which will make NetworkManager start at boot. You will have to configure NetworkManager using an X-based graphical utility. Slackware 14 includes a KDE widget for Networkmanager. If you are using another Desktop Environment like XFCE, you can install the Gnome network-manager-applet from SlackBuilds.org. Switch to a generic kernel It's recommended that you switch to Slackware's generic kernel. This is easy to do but there are a few steps to follow. The “huge” kernel is essentially a kernel which has every hardware driver built in which you might need for a successful installation of your computer. Think of storage and (wired) network drivers, filesystem and encryption drivers and a lot more. All these built-in drivers result in a big kernel image (hence the name “huge”). When this kernel boots it will use up a lot your RAM (relatively speaking… with 1 GB of RAM you will not really be troubled by a few MB less RAM). The “generic” kernel on the other hand, is a kernel which has virtually no drivers built in. All drivers will be loaded into RAM on demand. This will make your kernel's memory consumption lower and the boot process a bit faster. The smaller size allows for the use of an initial RAM disk or “initrd”. An initial RAMdisk is required in certain configurations, like software RAID, or a fully encrypted hard drive. For now, you need to remember that a “huge” kernel will not support an intial RAM disk, but the “generic” kernel will. We go for maximum flexibility and use a “generic” kernel. - You will need to create an initial RAM disk (“initrd” for short). The initrd functions as a temporary root file system during the intial stage of the kernel booting, and it helps get the actual root system mounted when your system boots. Run this, as root: # /usr/share/mkinitrd/mkinitrd_command_generator.sh This command will not actually do anything. It is informational only, and will output something like this - depending on your kernel version, your hardware configuration, the root filesystem you chose when you installed Slackware and so on: # # 3.2.29 -f ext4 -r /dev/sdb2 -m usb-storage:ehci-hcd:usbhid:ohci-hcd:mbcache:jbd2:ext4 -u -o /boot/initrd.gz Run the script's suggested mkinitrdcommandline (as root) to generate the initrd.gzimage. - If you have installed LILO (the default bootloader of Slackware), then you will also need to make changes to its configuration file /etc/lilo.confby adding a section to your Slackware entry as follows: image = /boot/vmlinuz-generic-3.2.29 initrd = /boot/initrd.gz # add this line so that lilo sees initrd.gz root = /dev/sda1 label = Slackware read-only Actually, the “ mkinitrd_command_generator.sh” script will show an example section which can be added to /etc/lilo.confif you pass it the name of the generic kernel as an argument, like this: # /usr/share/mkinitrd/mkinitrd_command_generator.sh -l /boot/vmlinuz-generic-3.2.29 Note that it is recommended to add a new section instead of editing the existing kernel image section. Assign a unique label to your new section. After reboot, LILO will give you two options: to boot into your freshly added generic kernel, or to boot into the failsafe huge kernel (of which you are certain that it will work). - After making the changes to /etc/lilo.confyou have to save the file and then run # lilo -v to make your change permanent. Then, reboot. - Have a look at mkinitrdmanual page ( man mkinitrd) for more information. - If you use grub or another bootloader, then make changes which are applicable to the program you use. - If you try to use the generic kernel without creating an initrd.gz, then booting will fail with a kernel panic. Start a Graphical Desktop Environment Configure X If Required X.Org is the X-Window framework used in Slackware. The X server will usually auto-detect your graphics card and load applicable drivers. If auto-detect does not work (X crashes on startup), you will need to create a file /etc/X11/xorg.conf and set the correct options for your graphics card and display resolution. You can use # X -configure to generate a basic xorg.conf configuration file in your current directory. This file can then be customized and placed in the /etc/X11/ directory. For a detailed overview of X configuration, check the xorg.conf manual page ( man xorg.conf). Non-free Display Drivers Many people use computers with a modern graphics card powered by a Nvidia or Ati GPU (graphics processing unit). The vendors of these high-performance graphics card offer non-free (proprietary binary-only) drivers for their cards. These binary-only drivers will boost your computer's graphical and in particular OpenGL performance. If you own such a card you may want to read our Wiki article “Proprietary Graphics Drivers”. Choosing a Desktop Environment/Window Manager To choose the Window Manager or Desktop Environment you wish to use, run the xwmconfig utility: $ xwmconfig and select one of the available options. Note that you can run the xwmconfig command as the root user which will set a global default for all users. By running the same command as your ordinary user account, you override that global default and pick your own. After making your choice you can simply run $ startx Your preferred Desktop Environment or Window Manager will then start up. Graphical Login To start with a graphical login screen on boot instead of Slackware's default console login, change the default runlevel to 4. Edit the file /etc/inittab and change the line that looks like id:3:initdefault: to id:4:initdefault: Note the difference from other Linux distributions; many of those use runlevel 5 for their graphical login. In Slackware, runlevel 5 is identical to runlevel 3 (console boot). In the graphical runlevel, you will be greeted by one of the available display (login session) managers. Slackware will by default look for the availability of GDM (Gnome Display Manager), KDM (KDE Display Manager) and XDM (X Display Manager) - in that order. You can also install a third-party login manager like SliM but you will have to edit /etc/rc.d/rc.4 and add a call to your new session manager all the way at the top. Further Exploration The Command Line It may be of interest to new Linux users to explore the command line a bit more before installing a graphical desktop, just to learn some shell commands and applications available in non-graphical mode. Slackware excels in having an abundance of command line programs for a wide range of tasks. For instance, web browsing can be done with lynx or links, which are console based web browsers. You can listen to music (even network audio streams) on the console using audio players like moc, mpg123, ogg123. Mixing 64-bit with 32-bit If you just installed the 64-bit version of Slackware (often called slackware64 or Slackware for x86_64) you will soon discover that it will refuse to run 32-bit programs like Wine. You may want to read our page on adding multilib capabilities in that case. Slackware Documentation Even a Slackware user can benefit from good documentation (why else are you reading this?). Our suggestion is that you browse this Wiki for additional tips and HOWTOs. And don't forget to check out the root directory of the Slackware DVD or CD1! You'll find Slackware's own main documentation there. Every text file there is worth a read. Upgrading the System If you have been using Slackware for a while and want to upgrade to the next release once that becomes available, we have a nice HOWTO available here: Upgrading Slackware to a New Release When tracking current, you should always read the latest ChangeLog.txt before upgrading the system, to see whether any additional steps are required to be performed before or after upgrading. For upgrades to a stable release, it is a good idea to read the UPGRADE.TXT and CHANGES_AND_HINTS.TXT files located on the CD/DVD or the official mirror.
https://docs.slackware.com/slackware:beginners_guide?rev=1508062122
2020-11-23T22:49:35
CC-MAIN-2020-50
1606141168074.3
[array(['https://docs.slackware.com/lib/plugins/bookcreator/images/add.png', None], dtype=object) array(['https://docs.slackware.com/lib/plugins/bookcreator/images/del.png', None], dtype=object) ]
docs.slackware.com
Fixed issues Splunk Enterprise 8.0.2.1 Splunk Enterprise 8.0.2.1 was released on March 11, 2020. This release fixes the following issue. Splunk Enterprise 8.0.2 Splunk Enterprise 8.0.2 was released on February 11, 2020. This release includes fixes for the following issues. Issues are listed in all relevant sections. Some issues might appear more than once. To check for additional security issues related to this release, visit the Splunk Security Portal. Search issues Saved search, alerting, scheduling, and job management issues Data model and pivot issues Indexer and indexer clustering issues Distributed search and search head clustering issues Data Fabric Search issues Splunk Web and interface issues Windows-specific issues Admin and CLI issues Uncategorized issues Splunk Analytics Workspace This documentation applies to the following versions of Splunk® Enterprise: 8.0.2 Feedback submitted, thanks!
https://docs.splunk.com/Documentation/Splunk/8.0.2/ReleaseNotes/Fixedissues
2020-11-23T23:07:30
CC-MAIN-2020-50
1606141168074.3
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
DeleteTrafficPolicy Deletes a traffic policy. When you delete a traffic policy, Route 53 sets a flag on the policy to indicate that it has been deleted. However, Route 53 never fully deletes the traffic policy. Note the following: Deleted traffic policies aren't listed if you run ListTrafficPolicies. There's no way to get a list of deleted policies. If you retain the ID of the policy, you can get information about the policy, including the traffic policy document, by running GetTrafficPolicy. Request Syntax DELETE /2013-04-01/trafficpolicy/ Id/ Version This example illustrates one usage of DeleteTrafficPolicy. DELETE /2013-04-01/trafficpolicy/12345678-abcd-9876-fedc-1a2b3c4de5f6/2 Example Response This example illustrates one usage of DeleteTrafficPolicy.:
https://docs.aws.amazon.com/Route53/latest/APIReference/API_DeleteTrafficPolicy.html
2020-11-23T22:57:53
CC-MAIN-2020-50
1606141168074.3
[]
docs.aws.amazon.com
OpenVPN VPN data allows you to track user activity while they are connected to the virtual private network, and additionally populates the location map with ingress activity. Before You Begin By default, some OpenVPN deployments will log to syslog automatically. Others, like OpenVPN AS, require a change to the configuration. To enable automatic syslog logging for OpenVPN AS: - Stop the OpenVPN AS service on your machine. - Find the as.conffile, add SYSLOG=trueto the file, and save it. - Restart the service. Rsyslog If you are using rsyslog, you also need to enable automatically logging over TCP or UDP. To enable automatic logging for rsyslog: - Stop the service. - Open the configuration file. - If you are using TCP, add in @@IP:port, such as *.info @@10.10.10.1:514. - If you are using UDP, add in *.info @10.10.10.1:514, - Save the file, and restart the service. You can read more information about this rsyslog configuration.?
https://docs.rapid7.com/insightidr/open-vpn/
2020-11-23T22:31:35
CC-MAIN-2020-50
1606141168074.3
[]
docs.rapid7.com
Wiki - Sprint 152 Update Features New modern user experience. Tip You can quickly navigate to the edit page by pressing e on your keyboard. We also made the following changes to the menu items: The menu actions have been consolidated into the following three categories: Wiki level actions are next to the wiki picker Tree level actions Page level actions The New page button has been moved into the tree. You can also press n on the keyboard to create a new page. We have also added count to the Follow functionality to tell you how many people are following a page. This can give you an idea of how important a page is.. In addition, you can add a caption to your images using the figure and figcaption tags. These tags let you add alternate text for images and create associated image blocks. The figcaption tag can be added above or below the image. For more information on the figcaption tag, see the documentation here. Finally, you can highlight parts of text in your wiki pages by using the mark tag. This lets you highlight important text in your wiki pages to draw readers attention. For more information about the mark tag, see the documentation here., Todd Manion
https://docs.microsoft.com/en-us/azure/devops/release-notes/2019/wiki/sprint-152-update
2020-11-23T23:38:16
CC-MAIN-2020-50
1606141168074.3
[array(['../media/152_04.png', 'Toggle word wrap for your editor.'], dtype=object) array(['../../media/make-a-suggestion.png', 'Make a suggestion'], dtype=object) ]
docs.microsoft.com
If a resource is not in a resource group, you can back up the resource on demand from the Resources page. If you want to back up a resource that has a SnapMirror relationship with secondary storage, the role assigned to the storage user should include the snapmirror all privilege. However, if you are using the vsadmin role, then the snapmirror all privilege is not required.
https://docs.netapp.com/ocsc-43/topic/com.netapp.doc.ocsc-dpg-wfs/GUID-60CDE41F-1CEA-4287-A7AF-0AEFB70A1C0C.html?lang=en
2020-11-23T22:25:33
CC-MAIN-2020-50
1606141168074.3
[]
docs.netapp.com
Hortonworks JDBC driver from the Hive JDBC driver archive. - Download the ODBC driver > Addons..
https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.1/integrating-hive/content/hive_locate_the_jdbc_driver.html
2019-05-19T13:22:18
CC-MAIN-2019-22
1558232254882.18
[]
docs.hortonworks.com
Upgrading Central¶ We release new versions of Central regularly. You do not have to upgrade to the latest version immediately, but we generally recommend that you do so to get access to the newest features, bug fixes, and security updates. are logged into your server, navigate back to the project folder ( cd central). Then, get the latest version of the infrastructure: git pull. (If you have made local changes to the files, you may have to start with git stash, then run git stash pop after you perform the pull. If you aren't sure, just run git pull anyway and it will tell you.) Now, get the latest client and server: git submodule update -i. Then, build your server from the latest code you just fetched: docker-compose build. Note If you run into problems with this step, try stopping the Central software ( systemctl stop docker-compose@central) and retry docker-compose build after it has shut down the Central website. Finally, restart the running server to pick up the changes: systemctl restart docker-compose@central.
https://docs.opendatakit.org/central-upgrade/
2019-05-19T12:34:35
CC-MAIN-2019-22
1558232254882.18
[]
docs.opendatakit.org
Responds to changes in the Active property. procedure ActiveChanged; virtual; virtual __fastcall ActiveChanged(); The ActiveChanged method defined by TDataLink merely provides an interface for a method that can respond to changes in the Active property. Derived objects that do not need to respond to such changes can allow the inherited method to ignore them. Active
http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/delphivclwin32/DB_TDataLink_ActiveChanged.html
2019-05-19T12:28:19
CC-MAIN-2019-22
1558232254882.18
[]
docs.embarcadero.com
New in version 2.5. ansible_net_<fact>. The facts module will always collect a base set of facts from the device and can enable or disable collection of additional facts. --- - name: Collect all facts from the device onyx_facts: gather_subset: all - name: Collect only the interfaces facts onyx_facts: gather_subset: - interfaces - name: Do not collect version facts onyx_facts: gather_subset: - "!version" Common return values are documented here, the following are the fields unique to this module:
https://docs.ansible.com/ansible/latest/modules/onyx_facts_module.html
2019-05-19T13:35:11
CC-MAIN-2019-22
1558232254882.18
[]
docs.ansible.com
Fieldsets are cool. They group a bunch of fields together. You can control a whole block at once too! 1. Go to Marketing Activities. 2. Select your form and click Edit Form. 3. Click the + sign and select Fieldset. 4. Select the fieldset and enter a Label. 5. Drag the fields you want into the fieldset. 6. Here's what it should look like when done. There you have it! Tip You can dynamically hide/show the entire fieldset depending on another field. Learn about visibility rules.
https://docs.marketo.com/display/public/DOCS/Add+a+FieldSet+to+a+Form
2019-05-19T12:22:45
CC-MAIN-2019-22
1558232254882.18
[array(['/download/attachments/557070/pin_red.png', None], dtype=object)]
docs.marketo.com
Advanced Search This section provides information on how to use the viewneo Butler to respond to external events or switch external devices. You can find more information about all of the possible ways the system can be used in the section VIEWNEO BUTLER. In order to use the viewneo butler, it needs to be configured correctly and linked to a viewneo user account. Additiona informaiton on the set up process can be found here.
https://docs.viewneo.com/en/users-guide/viewneo-butler/butler-verwenden
2019-05-19T12:36:39
CC-MAIN-2019-22
1558232254882.18
[]
docs.viewneo.com
This page has been moved to the wiki here. Please use the wiki, as this information here is out of date. A server is a computer with the Asterisk/app_rpt software installed. Servers host one or more nodes. Servers run the Linux OS. There is no Windows version of Asterisk/app_rpt. However, a fully functional system can be built with almost no Linux knowledge from the Allstart portal web site.
https://docs.allstarlink.org/node/159
2019-05-19T12:57:49
CC-MAIN-2019-22
1558232254882.18
[]
docs.allstarlink.org
Marketo manages your GoToWebinar registration and attendance. Admin Permissions Required Reminder An existing subscription to GoToWebinar and administration rights are necessary for this step. Have the email and password you use to sign on to GoToWebinar at hand. Note GoToMeeting, GoToWebcast, and GoToTraining are not currently supported. 1. Go to Admin and select LaunchPoint. 2. Select New and New Service. 3. Enter a Display Name. Under Service, select GoToWebinar. 4. Next, click Log Into GoToWebinar. Note If you want to sync Company Name and Job Title from your Marketo form to GoToWebinar, select the Enable Additional Fields box. 5. In the GoToWebinar Sign In pop-up window, enter your GoToWebinar email and password and click Sign In. 6. After the window closes, click Create. 7. Great! Your GoToWebinar account is now synced with Marketo. Caution When you update your password in GoToWebinar, you must update your password in Marketo as well. Related Articles Learn how to create an event with GotoWebinar.
http://docs.marketo.com/display/public/DOCS/Add+GoToWebinar+as+a+LaunchPoint+Service
2019-05-19T12:16:31
CC-MAIN-2019-22
1558232254882.18
[array(['/download/attachments/557070/magic_wand.png', None], dtype=object) array(['/download/attachments/557070/alert.png', None], dtype=object) array(['/download/attachments/557070/attach.png', None], dtype=object) array(['/download/attachments/557070/attach.png', None], dtype=object) array(['/download/attachments/557070/burn.png', None], dtype=object)]
docs.marketo.com
To log in to Tower, browse to the Tower interface at: http://<Tower server name>/ Log in using a valid Tower username and password. The default username and password set during installation are admin and password, but the Tower administrator may have changed these settings during installation. If the default settings have not been changed, you can do so by accessing the Users link from the Settings (
https://docs.ansible.com/ansible-tower/latest/html/userguide/logging_in.html
2019-05-19T13:34:35
CC-MAIN-2019-22
1558232254882.18
[]
docs.ansible.com
Default concrete TagHelperContent. Used to override an ITagHelper property's HTML attribute name. Indicates the associated ITagHelper property should not be bound to HTML attributes. Provides an ITagHelper's target. A HtmlEncoder that does not encode. Should not be used when writing directly to a response expected to contain valid HTML. Provides a hint of the ITagHelper's output element. A read-only collection of TagHelperAttributes. Restricts children of the ITagHelper's element. An abstract base class for ITagHelper. An HTML tag helper attribute. A collection of TagHelperAttributes. An abstract base class for ITagHelperComponent. Abstract class used to buffer content returned by ITagHelpers. Contains information related to the execution of ITagHelpers. Class used to represent the output of an ITagHelper. Contract used to filter matching HTML elements. Marker interface for TagHelpers. Contract used to modify an HTML element. The mode in which an element should render. The structure the element should be written in.
https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.razor.taghelpers?view=aspnetcore-2.1
2019-05-19T12:37:25
CC-MAIN-2019-22
1558232254882.18
[]
docs.microsoft.com
Bulk edit Content Searches You can use the Bulk Search Editor in the Content Search tool to edit multiple searches at the same time. Using this tool lets you quickly change the query and content locations for one or more searches. Then you can re-run the searches and get new estimated search results for the revised searches. The editor also lets you copy and paste queries and content locations from a Microsoft Excel file or text file. This means you can use the Search Statistics tool to view the statistics of one or more searches, export the statistics to a CSV file where you can edit the queries and content locations in Excel. Then you use the Bulk Search Editor to add the revised queries and content locations to the searches. After you've revised one or more searches, you can re-start them and get new estimated search results. For more information about using the Search Statistics tool, see View keyword statistics for Content Search results. Use the Bulk Search Editor to change queries Go to, and then click Search > Content search. In the list of searches, select one or more searches, and then click Bulk Search Editor . The following information is displayed on the Queries page of the Bulk Search Editor. a. The Search column displays the name of the Content Search. As previously stated, you can edit the query for multiple searches. b. The Query column displays the query for the Content Search listed in the Search column. If the query was created using the keyword list feature, the keywords are separated by the text ** (c:s). This indicates that the keywords are connected by the OR operator. Additionally, if the query includes conditions, the keywords and the conditions are separated by the text ** (c:c). This indicates that the keywords (or keyword phases) are connected to the conditions by the AND operator. For example, in the previous screenshot the for search ContosoSearch1, the KQL query that is equivalent to customer (c:s) pricing(c:c)(date=2000-01-01..2016-09-30)would be (customer OR pricing) AND (date=2002-01-01..2016-09-30). To edit a query, click in the cell of the query that you want to change and doing one of the following things. Note that the cell is bordered by a blue box when you click it. Type the new query in the cell. Note that you can't edit a portion of the query. You have to type the entire query. Or Paste a new query in the cell. This assumes that you've copied the query text from a file, such as a text file or an Excel file. After you've edited one or more queries on the Queries page, click Save. The revised query is displayed in the Query column for the selected search. Click Close to close the Bulk Search Editor. On the Content search page, select the search that you edited, and click Start search to restart the search using the revised query. Here are some tips for editing queries using the Bulk Search Editor: Copy the existing query (by using Ctrl C ) to a text file. Edit the query in the text file, and then copy the revised query and paste it (using Ctrl V ) back into the cell on the Queries page. You can also copy queries from other applications (such as Microsoft Word or Microsoft Excel). However, be aware that you might inadvertently add unsupported characters to a query using the Bulk Search Editor. The best way to prevent unsupported characters is to just type the query in a cell on the Queries page. Queries page. Use the Bulk Search Editor to change content locations In the Bulk Search Editor for one or more selected searches, click Enable bulk location editor, and then click the Locations link that is displayed on the page. The following information is displayed on the Locations page of the Bulk Search Editor. a. Mailboxes to searchThis section displays a column for each selected Content Search, and row for each mailbox that's included in the search. A checkmark indicates that the mailbox is included in the search. You can add additional mailboxes to a search by typing the email address of the mailbox in a blank row and then clicking the checkbox for the Content Search that you want to add it to. Or you can remove a mailbox from a search by clearing the checkbox. b. SharePoint sites to searchThis section displays a row for each SharePoint and OneDrive site that included in each selected Content Search. A checkmark indicates that the site is included in the search. You can add additional sites to a search by typing the URL for the site in a blank row and then and clicking the checkbox for the Content Search that you want to add it to. Or you can remove a site from a search by clearing the checkbox. c. Other search optionsThis section indicates whether unindexed items and public folders are included in the search. To include these, make sure the checkbox is selected. To remove them, clear the checkbox. After you've edited one or more of the sections on the Locations page, click Save. The revised content locations are displayed in the appropriate section for the selected searches. Click Close to close the Bulk Search Editor. On the Content search page, select the search that you edited, and click Start search to restart the search using the revised content locations. Here are some tips for editing content locations using the Bulk Search Editor: You can edit Content Searches to search all mailboxes or sites in the organization by typing All in a blank row in the Mailboxes to search or SharePoint sites to search section and then clicking the checkbox. You can add multiple content locations to one or more searches by copying multiple rows from a text file or an Excel file and then pasting them to a section on the Locations page. After you add new locations, be sure to select the checkbox for each search that you want add the location to. Tip To generate a list of email addresses for all the users in your organization, run the PowerShell command in Step 2 in Use Content Search to search the mailbox and OneDrive for Business site for a list of users. Or use the script in Create a list of all OneDrive locations in your organization to generate a list of all OneDrive for Business sites in your organization. Note that you'll have to append the URL for your's organization's MySite domain (for example,) to the OneDrive for Business sites that's created by the script. After you have list of email addresses or OneDrive for Business sites, you can copy and paste them to the Locations page in the Bulk Search Editor. After you click Save to save changes in Bulk Search Editor, the email address for mailboxes that you added to a search will be validated. If the email address doesn't exist, an error message is displayed saying the mailbox can't be located. Note that URLs for sites aren't validated. Feedback Send feedback about:
https://docs.microsoft.com/en-us/office365/securitycompliance/bulk-edit-content-searches?redirectSourcePath=%252fbg-bg%252farticle%252f%2525D0%252593%2525D1%252580%2525D1%252583%2525D0%2525BF%2525D0%2525BE%2525D0%2525B2%2525D0%2525BE-%2525D1%252580%2525D0%2525B5%2525D0%2525B4%2525D0%2525B0%2525D0%2525BA%2525D1%252582%2525D0%2525B8%2525D1%252580%2525D0%2525B0%2525D0%2525BD%2525D0%2525B5-%2525D1%252581%2525D1%25258A%2525D0%2525B4%2525D1%25258A%2525D1%252580%2525D0%2525B6%2525D0%2525B0%2525D0%2525BD%2525D0%2525B8%2525D0%2525B5-%2525D1%252582%2525D1%25258A%2525D1%252580%2525D1%252581%2525D0%2525B5%2525D0%2525BD%2525D0%2525B8%2525D1%25258F-%2525D0%2525B2-office-365-%2525D0%2525B7%2525D0%2525B0%2525D1%252589%2525D0%2525B8%2525D1%252582%2525D0%2525B0-%2525D0%2525B8-%2525D1%252586%2525D0%2525B5%2525D0%2525BD%2525D1%252582%2525D1%25258A%2525D1%252580-%2525D0%2525B7%2525D0%2525B0-%2525D1%252581%2525D1%25258A%2525D0%2525BE%2525D1%252582%2525D0%2525B2%2525D0%2525B5%2525D1%252582%2525D1%252581%2525D1%252582%2525D0%2525B2%2525D0%2525B8%2525D0%2525B5-39e4654a-9588-41f6-892b-c33ab57bfbe2
2019-05-19T12:34:19
CC-MAIN-2019-22
1558232254882.18
[]
docs.microsoft.com
Binary type converter. Bool type converter. Datetime type converter. Datetime type converter. Decimal type converter. Float type converter. Integer type converter. Json type converter. String type converter. Time type converter. Provides behavior for the UUID type An interface used by Type objects to signal whether the value should be converted to an ExpressionInterface instead of a string when sent to the database. An interface used by Type objects to signal whether the casting is actually required. Offers a method to convert values to ExpressionInterface objects if the type they should be converted to implements ExpressionTypeInterface © 2005–2018 The Cake Software Foundation, Inc. Licensed under the MIT License. CakePHP is a registered trademark of Cake Software Foundation, Inc. We are not endorsed by or affiliated with CakePHP.
https://docs.w3cub.com/cakephp~3.5/namespace-cake.database.type/
2019-05-19T12:20:03
CC-MAIN-2019-22
1558232254882.18
[]
docs.w3cub.com
SEARCH Returns the number of the character at which a specific character or text string is first found, reading left to right. Search is case-insensitive and accent sensitive. Syntax SEARCH(<find_text>, <within_text>[, [<start_num>][, <NotFoundValue>]]) Parameters Return value The number of the starting position of the first text string from the first character of the second text string. Remarks The search function is case insensitive. Searching for "N" will find the first occurrence of 'N' or 'n'. The search function is accent sensitive. Searching for "á" will find the first occurrence of 'á' but no occurrences of 'a', 'à', or the capitalized versions 'A', 'Á'.. Example: Search within a String Description The following formula finds the position of the letter "n" in the word "printer". Code =SEARCH("n","printer") The formula returns 4 because "n" is the fourth character in the word "printer." Example: Search within a Column Description You can use a column reference as an argument to SEARCH. The following formula finds the position of the character "-" (hyphen) in the column, [PostalCode]. Code =SEARCH("-",[PostalCode]) The return result is a column of numbers, indicating the index position of the hyphen. Example: Error-Handling with SEARCH Description The formula in the preceding example will fail if the search string is not found in every row of the source column. Therefore, the next example demonstrates how to use IFERROR with the SEARCH function, to ensure that a valid result is returned for every row. The following formula finds the position of the character "-" within the column, and returns -1 if the string is not found. Code = IFERROR(SEARCH("-",[PostalCode]),-1) Note that the data type of the value that you use as an error output must match the data type of the non-error output type. In this case, you provide a numeric value to be output in case of an error because SEARCH returns an integer value. However, you could also return a blank (empty string) by using BLANK() as the second argument to IFERROR. See also MID function (DAX) REPLACE function (DAX) Text functions (DAX)
https://docs.microsoft.com/en-us/dax/search-function-dax
2019-05-19T13:06:26
CC-MAIN-2019-22
1558232254882.18
[]
docs.microsoft.com
Understanding license types and the licensed method helps you manage the licenses in a cluster. A package can have one or more of the following types of license installed in the cluster. The system license show command displays the installed license type or types for a package. A standard license is a node-locked license. It is issued for a node with a specific system serial number (also known as a controller serial number). A standard license is valid only for the node that has the matching serial number. Installing a standard, node-locked license entitles a node to the licensed functionality. For the cluster to use licensed functionality, at least one node must be licensed for the functionality. It might be out of compliance to use licensed functionality on a node that does not have an entitlement for the functionality. ONTAP releases treat a license installed prior to Data ONTAP 8.2 as a standard license. Therefore, in ONTAP releases, all nodes in the cluster automatically have the standard license for the package that the previously licensed functionality is part of. The system license show command with the -legacy yes parameter indicates such licenses. A site license is not tied to a specific system serial number. When you install a site license, all nodes in the cluster are entitled to the licensed functionality. The system license show command displays site licenses under the cluster serial number. If your cluster has a site license and you remove a node from the cluster, the node does not carry the site license with it, and it is no longer entitled to the licensed functionality. If you add a node to a cluster that has a site license, the node is automatically entitled to the functionality granted by the site license. An evaluation license is a temporary license that expires after a certain period of time (indicated by the system license show command). It enables you to try certain software functionality without purchasing an entitlement. It is a cluster-wide license, and it is not tied to a specific serial number of a node. If your cluster has an evaluation license for a package and you remove a node from the cluster, the node does not carry the evaluation license with it. It is possible to install both a cluster-wide license (the site or demo type) and a node-locked license (the license type) for a package. Therefore, an installed package can have multiple license types in the cluster. However, to the cluster, there is only one licensed method for a package. The licensed method field of the system license status show command displays the entitlement that is being used for a package. The command determines the licensed method as follows: For example:
https://docs.netapp.com/ontap-9/topic/com.netapp.doc.dot-cm-sag/GUID-FC8F5AF8-4A66-46AD-B065-80FA88E3752A.html
2019-05-19T13:08:53
CC-MAIN-2019-22
1558232254882.18
[]
docs.netapp.com
Introduction¶ MOSA is an open source software project that natively executes .NET applications within a virtual hypervisor or on bare metal hardware! The MOSA project consists of: - Compiler - a high quality, multithreaded, cross-platform, optimizing .NET compiler - Kernel - a small, micro-kernel operating system - Device Drivers Framework - a modular, device drivers framework and device drivers - Debugger - QEMU-based debugger Read our Frequently Asked Questions for more information about this project. - Block Reordering - Greedy Register Allocation Getting Started¶ Download¶ The MOSA project is available as a zip download or via git: git clone Prerequisites¶ You will also need the following prerequisites: Windows¶ Install any Visual Studio version 2018 or newer. All editions are supported including the fully-featured free Community Edition. Note: The MOSA source code repository includes Qemu virtual emulator for Windows. The CodeMaid Visual Studio Extension is strongly recommended for MOSA contributors. Linux¶ The minimum supported version of Mono is 5.16. If using the APT package manager you can use the following command to quickly set up QEMU and Mono sudo apt-get -y install mono-devel qemu Running¶ Windows¶ Double click on the “Compile.bat” script in the root directory to compile all the tools, sample kernels, and demos. Next double click on the “Launcher.bat” script, which will bring up the MOSA Launcher tool (screenshot below) that can: - Compile the operating system - Create a virtual disk image, with the compiled binary and boot loader - Launch a virtual machine instance (QEMU by default) By default, the CoolWorld operating system demo is pre-selected. Click the “Compile and Run” button to compile and launch the demo. Join the Discussion¶ Join us on Gitter chat. This is the most interactive way to connect to MOSA’s development team. License¶ MOSA is licensed under the New BSD License.
http://docs.mosa-project.org/en/latest/intro.html
2019-05-19T12:56:28
CC-MAIN-2019-22
1558232254882.18
[]
docs.mosa-project.org