content: string (0 to 557k chars)
url: string (16 to 1.78k chars)
timestamp: timestamp[ms]
dump: string (9 to 15 chars)
segment: string (13 to 17 chars)
image_urls: string (2 to 55.5k chars)
netloc: string (7 to 77 chars)
Debug Example: Filter on a specific IP Address in JSP/Servlet requests

This example guides you through the creation of a breakpoint that filters on a specific IP Address, using a JSP script. <%@ include file="/header.jsp" %> <h1>Loop</h1> <% String[] months = {"January","February","March","April","May","June", "July","August","September","October","November","December"}; for( int i=0;i<months.length;i++) { out.println("The month is: " + months[i] + "<br>"); } %> <%@ include file="/footer.jsp" %> The loop.jsp script creates a new Array of Strings that contains all the months of the year. The script prints out all the months of the year from the Array.

First Steps before the Breakpoint Configuration

- Install the jsp file into your application. If you know the source code directory, it is advisable to add that directory as a source inside FusionReactor. This can be done by accessing FusionReactor → Debug → Debugger → Click the initial add source link or Configure Sources. In the directory field of the configuration you should add the source code directory. If you do not have access to the source you will still be able to set and trigger the breakpoint.

Create a New Breakpoint

- Go to FusionReactor → Debug → Debugger and select New Breakpoint. Configure the breakpoint as below: In the Trigger Condition field we have added request.getRemoteAddr().toString().equals("IP Address"). Instead of the IP Address String you should add your local IP Address or an External IP Address. When the breakpoint has been set up and your page has not yet been executed, you should see the following: If your page has already been executed, your breakpoint will look like this:

Fire a Breakpoint

In order for the Breakpoint to be fired, you need to execute the loop.jsp script. The Trigger Condition will be met and execution of the page will be halted. If the breakpoint was fired, you should be able to see the Production Debugger icon on the top menu of FusionReactor. You can either click on the Debugger icon or click the Debugger link in the Debug Menu: you will then see the Paused Thread, together with the Timeout Countdown. See the screenshot below. To start the Production Debugger session, you need to click on the Debug Icon:

Change the IP Address to an External IP Address

In order to make the Production Debugger work for a specific IP Address, you only need to change the Trigger Condition field inside the New Breakpoint or Edit Breakpoint configuration. For example, if you want to debug a JSP script that is not on your local machine but at a different IP Address, the configuration of the breakpoint needs to be changed. In this example, we are going to use 192.168.0.1. To change the Trigger Condition to use this new IP Address, the breakpoint must be configured as follows.
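As a minimal sketch of the condition itself (with 192.168.0.1 standing in for whichever external address you want to match), the Trigger Condition is a single Java boolean expression evaluated against the current request:

    // Trigger Condition entered in the breakpoint dialog: one expression, no trailing semicolon.
    request.getRemoteAddr().toString().equals("192.168.0.1")

    // Hypothetical broader variant (not from the original article): pause for any client on the 192.168.0.x subnet.
    request.getRemoteAddr().toString().startsWith("192.168.0.")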
https://docs.fusion-reactor.com/Debugger/Debug-Example-5/
2020-08-04T00:20:57
CC-MAIN-2020-34
1596439735836.89
[array(['/attachments/245553669/245553710.png', None], dtype=object) array(['/attachments/245553669/245553704.png', None], dtype=object) array(['/attachments/245553669/245553698.png', None], dtype=object) array(['/attachments/245553669/245553692.png', None], dtype=object) array(['/attachments/245553669/245553767.png', None], dtype=object) array(['/attachments/245553669/245553686.png', None], dtype=object) array(['/attachments/245553669/245553680.png', None], dtype=object) array(['/attachments/245553669/245553674.png', None], dtype=object)]
docs.fusion-reactor.com
Tarball - Upgrade database components Follow these steps to upgrade a tarball deployment of MySQL to v5.7.26. You must update the ~/.my.cnf file to reflect the correct path for MySQL. Run the following commands to update this file. The commands below assume the database .my.cnf file is under the current user's 'home' folder. MYSQL_HOME=$(dirname $(dirname $(which mysqld))) sed -i "s;basedir.*;basedir = ${MYSQL_HOME};" ~/.my.cnf sed -i '/\[mysqld\]/a log_bin_trust_function_creators = 1\nthread_stack = 524288\n' ~/.my.cnf Restart MySQL: $MOOGSOFT_HOME/bin/utils/process_cntl mysqld restart

Note: Use the following instructions if the deployed database is MySQL Community instead of Percona. You should upgrade MySQL to 5.7.26 to address a number of bugs and security vulnerabilities. Before you start the upgrade process, you should back up your database. Download the MySQL 5.7.26 tarball to the server: curl -L -O Check whether MySQL is configured to run with --gtid-mode=ON using the following command in the MySQL CLI: show variables like 'gtid_mode'; Remember what the value is because you will need it later. Move the MySQL tarball into the desired location and extract it: tar -xf mysql-5.7.26-el7-x86_64.tar.gz Update the system PATH to point at the new MySQL folder. The command below assumes the new MySQL folder is in the same folder as the previous MySQL folder (user home). If this is not the case, you must manually update the ~/.bashrc file. sed -i 's/5.7.22/5.7.26/g' ~/.bashrc; source ~/.bashrc; Restart MySQL: $MOOGSOFT_HOME/bin/utils/process_cntl mysql restart If gtid-mode was OFF (based on the command run earlier in the upgrade), run the MySQL upgrade utility. Provide the MySQL root password when prompted, or just press Enter if no password is set. mysql_upgrade -u root -p -S $MOOGSOFT_HOME/var/lib/mysql/var/lib/mysql/mysql.sock If gtid-mode was ON, you do not need to do this step. See the MySQL documentation on restrictions on replication with GTIDs for more information. Restart MySQL to save any changes to system tables: $MOOGSOFT_HOME/bin/utils/process_cntl mysql restart See the Upgrading MySQL documentation for more information.

Upgrade Moogsoft AIOps To upgrade Moogsoft AIOps, run the following commands. If you have already run this step on the current host as part of this upgrade (for a single-host upgrade, for example), you can skip this step. tar -xf moogsoft-aiops-7.3.1.1.tgz bash moogsoft-aiops-install-7.3.1 The AIOps 7.3 versions of the config and bot files are stored in $MOOGSOFT_HOME/dist/7.3.1.1/config/ and $MOOGSOFT_HOME/dist/7.3.1.1/bots/ respectively. The config and bot files from the previous version should not be copied on top of (i.e. replace) the new versions of those files in 7.3, as they are not always forwards-compatible, and some config/bot lines need to be added for the new version to work. Identify the config files that have changed between the previously installed version and v7.3. For example: diff -rq $MOOGSOFT_HOME/dist/7.[^3]*/config $MOOGSOFT_HOME/dist/7.3.*/config | grep -i 'differ' Update files in $MOOGSOFT_HOME/config with any changes introduced in the v7.3 versions of these files. Identify the contrib files that have changed between the previously installed version and v7.3. For example: diff -rq $MOOGSOFT_HOME/dist/7.[^3]*/contrib $MOOGSOFT_HOME/dist/7.3.*/contrib | grep -i 'differ' Update files in $MOOGSOFT_HOME/contrib with any changes introduced in the v7.3 versions of these files. Identify the bot files that have changed between the previously installed version and v7.3.
For example: diff -rq $MOOGSOFT_HOME/dist/7.[^3]*/bots $MOOGSOFT_HOME/dist/7.3.*/bots | grep -i 'differ' Update files in $MOOGSOFT_HOME/bots with any changes introduced in the v7.3 versions of these files.

Upgrade the Moogsoft AIOps database schema Before upgrading the schema, check if the MySQL variable log_bin_trust_function_creators is enabled: mysql -e "show variables like '%log_bin_trust_function_creators%';" You must enter the root username and password when you run this command. Ensure the database is running (it can be checked with $MOOGSOFT_HOME/bin/utils/process_cntl), then execute the following command, replacing <MySQL-SuperUsername> with the username of your super user: bash $MOOGSOFT_HOME/bin/utils/moog_db_auto_upgrader -t 7
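A minimal bash sketch of the pre-upgrade checks described above, assuming the MySQL client is on the PATH and that the $MOOGSOFT_HOME paths match this deployment; treat it as illustrative rather than as part of the official procedure:

    #!/usr/bin/env bash
    # Pre-flight checks before running moog_db_auto_upgrader (illustrative sketch).

    # 1. GTID mode: mysql_upgrade should only be run when gtid_mode is OFF.
    GTID_MODE=$(mysql -u root -p -N -e "show variables like 'gtid_mode';" | awk '{print $2}')
    echo "gtid_mode is: ${GTID_MODE}"

    # 2. Confirm log_bin_trust_function_creators is enabled before the schema upgrade.
    mysql -u root -p -e "show variables like '%log_bin_trust_function_creators%';"

    # 3. Make sure the database process is up before invoking the schema upgrader.
    #    ('status' is assumed here; the page above only shows the 'restart' subcommand.)
    $MOOGSOFT_HOME/bin/utils/process_cntl mysql status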
https://docs.moogsoft.com/AIOps.7.3.0/tarball---upgrade-database-components.html
2020-08-03T23:22:09
CC-MAIN-2020-34
1596439735836.89
[]
docs.moogsoft.com
[−][src]Crate sendmmsg This crate provides a convenient approach for transmitting multiple messages using one system call (but only on Linux-based operating systems). Kernel calls might take relatively long time, consequently, calling the same kernel function lots of times will increase CPU load significantly. This is why I have created this library; it simply wraps the sendmmsg kernel function. Examples This example sends four messages called data portions to the example.com website using a single system call on an orginary TcpStream: #![feature(iovec)] use sendmmsg::Sendmmsg; use std::io::IoVec; use std::net::TcpStream; fn main() { // Specify all the messages you want to send let messages = &mut [ (0, IoVec::new(b"Generals gathered in their masses")), (0, IoVec::new(b"Just like witches at black masses")), (0, IoVec::new(b"Evil minds that plot destruction")), (0, IoVec::new(b"Sorcerers of death's construction")), ]; // Setup the `TcpStream` instance connected to example.com let socket = TcpStream::connect("93.184.216.34:80").unwrap(); // Finally, send all the messages above match socket.sendmmsg(messages) { Err(error) => eprintln!("An error occurred: {}!", error), Ok(packets) => println!("Packets sent: {}", packets), } }
https://docs.rs/sendmmsg/0.3.2/sendmmsg/
2020-08-03T23:36:37
CC-MAIN-2020-34
1596439735836.89
[]
docs.rs
Before using the Google Maps Platform Activities Package, you need to configure your applications using the Google Cloud Platform. There are two configuration steps: - Enable APIs - this step enables automation by granting API access to your Google applications. - Create credentials - this specifies the authentication type used to interact with your enabled APIs. Enable APIs: Follow the steps listed below to enable your APIs. - Open the Navigation menu in the top navigation bar. - Hover over APIs & Services (showing a menu of options) and select Library (opening the API Library). - From the API Library, go to the Maps section. - Click on Places API. - In the Places API page, click Enable. Create credentials: Follow the steps listed below to create credentials for your project. - From your project APIs & Services page (Google Cloud Platform > Project > APIs & Services), click Credentials in the left-hand navigation panel. - After the Credentials page opens, click Create credentials and select API Key. An API key is the simplest authentication mechanism. For more information about API Keys, see Using API Keys in the Google Cloud Documentation. Activities: (UiPathTeam.GoolgeMaps.Activities) The Package contains a set of 7 activities, which are as follows: - Google Maps Scope - Find Place - Nearby Search - Get Place Details - Get Place Photo - Place Autocomplete - Query Autocomplete
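Once the Places API is enabled and an API key has been created, the key is simply passed as a query parameter on each request. A hedged curl sketch (endpoint and parameters follow the public Places API documentation; YOUR_API_KEY is a placeholder):

    # Find Place request against the Places API, authenticated with the API key.
    curl "https://maps.googleapis.com/maps/api/place/findplacefromtext/json?input=Museum%20of%20Contemporary%20Art&inputtype=textquery&fields=name,formatted_address&key=YOUR_API_KEY"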
https://docs.uipath.com/marketplace/docs/google-maps-platform-setup
2020-08-03T23:28:10
CC-MAIN-2020-34
1596439735836.89
[]
docs.uipath.com
What is performance? From an end user's perspective, the performance of the application is defined by the response time. The response time is the time it takes the application to respond to the end user's request, measured from the end user's point of view. Performance is always one of the challenges of software platforms. Performance work is about isolating the problem(s) which may contribute to a slow user experience. In this document we pinpoint the performance issues that users of UiPath Process Mining may experience. These users include both developers (SuperAdmins) and end users. Performance issues should primarily be mitigated before they occur at the end user side. Although the solutions are implemented by technical people, end users need to be involved in making the trade-offs necessary for the solutions. Topics In general, performance issues can be related to the following topics: - Data volume: the amount of data and the level of detail; - Data loading: the way in which the data is loaded; - Server configuration: hardware and system resources; - Application design: the application itself; - Data model design; - Internet connection. Depending on the issue and the solution, the resolution may take a long time. It starts with expectation management. Each topic in this document starts with an introduction of the aspects which are related to performance. This document also contains suggestions for analyzing performance and for possible solutions.
https://docs.uipath.com/process-mining/docs/performance
2020-08-03T23:31:21
CC-MAIN-2020-34
1596439735836.89
[]
docs.uipath.com
How to Append an Item to an Array in JavaScript

In this tutorial, you will find out the solutions that JavaScript offers for appending an item to an array. Imagine you want to append a single item to an array. In this case, the push() method, provided by the array object, can help you. So you can act as follows: const animals = ['dog', 'cat', 'mouse']; animals.push('rabbit'); console.log(animals); Please take into account that push() changes your original array. For creating a new array, you should use the concat() method, like this: const animals = ['dog', 'cat', 'mouse']; const allAnimals = animals.concat('rabbit'); console.log(allAnimals); Also notice that the concat() method doesn't add an item to the array, but makes a completely new array that can be assigned to another variable or reassigned to the original one: let animals = ['dog', 'cat', 'mouse']; animals = animals.concat('rabbit'); console.log(animals); If you intend to append multiple items to an array, you can also use the push() method and call it with multiple arguments, like here: const animals = ['dog', 'cat', 'mouse']; animals.push('rabbit', 'turtle'); console.log(animals); Also, you can use the concat() method, passing a list of items separated by commas, as follows: const animals = ['dog', 'cat', 'mouse']; const allAnimals = animals.concat('rabbit', 'turtle'); console.log(allAnimals); Or you can pass an array: const animals = ['dog', 'cat', 'mouse']; const allAnimals = animals.concat(['rabbit', 'turtle']); console.log(allAnimals); Notice that this method will not mutate your original array, but will return a new one.

Describing Arrays

JavaScript arrays are a super-handy means of storing multiple values in a single variable. In other words, an array is a unique variable that can hold more than one value at the same time. Arrays are similar to objects; one similarity is that both literals can end with a trailing comma. One of the most crucial advantages of an array is that, unlike other variables, it can contain multiple elements. The elements inside an array can be of different types, such as string, boolean, object, and even other arrays. So you can create an array having a string in the first position, a number in the second, and more.
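One more detail worth keeping in mind, sketched below: push() returns the new length of the mutated array, while concat() leaves the original untouched and returns the new array.

    const animals = ['dog', 'cat', 'mouse'];

    const newLength = animals.push('rabbit');   // mutates animals, returns the new length
    const combined = animals.concat('turtle');  // returns a brand-new array

    console.log(newLength);      // 4
    console.log(animals);        // ['dog', 'cat', 'mouse', 'rabbit']
    console.log(combined);       // ['dog', 'cat', 'mouse', 'rabbit', 'turtle']
    console.log(animals.length); // still 4 - concat did not change the original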
https://www.w3docs.com/snippets/javascript/how-to-append-an-item-to-an-array-in-javascript.html
2020-08-03T23:40:10
CC-MAIN-2020-34
1596439735836.89
[]
www.w3docs.com
The Center for Rheumatology has entered Phase 3. We are happy to announce we are increasing our in office visits. Here is what you need to knowLearn More › Please be sure to wear a facemask to the office. Details on policy changes, click hereLearn More › Bringing your office visit to the safety of your home.Learn More ›
http://www.joint-docs.com/
2020-08-03T23:37:13
CC-MAIN-2020-34
1596439735836.89
[]
www.joint-docs.com
Forms are collections of fields displayed on a page that allow you to enter the information necessary to complete a specific task. When setting up or modifying your export, you can keep up with API changes in real-time and get more control by switching between application-specific and universal forms. Contents - Understand forms - Form View options - Reasons to switch from application-specific connector forms to universal connector forms Understand forms You use forms to establish your connections or map your data when building an integration on integrator.io. Forms simplify the integration process by displaying fields for all required API information you need for your integration. The export configuration form allows you to toggle between specific connector forms and generic connector forms. Form View options All export forms have a Form View drop-down menu you can use to select specific connector types. Each form view option you select displays the fields required to establish the selected connection type. Application-specific connector forms Every app and system you use integrator.io to communicate with has its own dedicated form that allows you to enter all information needed to connect, configure exports, map data, and configure imports for those apps. Application-specific forms guide you through the process of building integrations with a specific system's required and optional fields so you can optimize the effectiveness of the connector. Universal connector forms You can use universal connector forms to integrate with systems that don't yet have a dedicated form. These connectors include REST and HTTP connection types and guide you through the process of configuring exports or imports with generic protocols. These forms are dynamic in that they add, change, or remove fields based on the selections you make. For example, the fields you see on a generic form that’s using a GET method displays different fields from those you see on a form using POST. Since universal forms are not specific to any app or system, they can have more fields available than are necessary for a given system in order to account for a variety of systems that use those protocols. Reasons to switch from application-specific connector forms to universal connector forms A universal connector form can occasionally give you more control and allow you to keep up with API updates that have not yet been accounted for by an application-specific connector form. Generic forms have many fields in order to account for the more technical details of configuring the export, like configuring headers, providing paths to resources and status codes, and setting up pagination. If the app or system that you’re working with has updated their API, you can keep up with those changes in real-time in cases where the app-specific form does not yet include the expanded functionality. For example, if the API gives access to an expanded menu of resources or pagination capabilities, you can use the generic connector’s form to work with those changes in your export. Please sign in to leave a comment.
https://docs.celigo.com/hc/en-us/articles/360036152811-Using-application-specific-or-universal-connector-forms
2020-08-03T23:43:43
CC-MAIN-2020-34
1596439735836.89
[array(['/hc/article_attachments/360042845172/mceclip0.png', None], dtype=object) array(['/hc/article_attachments/360042875091/mceclip1.png', None], dtype=object) array(['/hc/article_attachments/360042875171/mceclip2.png', None], dtype=object) ]
docs.celigo.com
SharePoint 2010 – Clearing the Configuration Cache

There are times when you need to clear the SharePoint configuration cache, which is stored on each server under %SystemDrive%\ProgramData\Microsoft\SharePoint\Config\<GUID>. The overall steps remain largely the same: Stop the Timer service. To do this, follow these steps: - Click Start, point to Administrative Tools, and then click Services. - Right-click SharePoint 2010 Timer, and then click Stop. - Close the Services console. On the computer that is running Microsoft SharePoint Server 2010 and on which the Central Administration site is hosted, click Start, click Run, type explorer, and then press ENTER. In Windows Explorer, locate and then double-click the following folder: %SystemDrive%\ProgramData\Microsoft\SharePoint\Config\GUID Back up the Cache.ini file. (Make a copy of it. DO NOT DELETE THIS FILE; delete only the XML files in the next step.) Delete all the XML configuration files in the GUID folder (DO NOT DELETE THE FOLDER). Do this so that you can verify that the GUID folder's contents are rebuilt, then reset the Cache.ini file. (Basically, when you are done, the only text in the Cache.ini file should be the number 1.) On the File menu, click Exit. Start the Timer service. To do this, follow these steps: - Click Start, point to Administrative Tools, and then click Services. - Right-click SharePoint 2010 Timer, and then click Start. - Close the Services console. Check the GUID folder to make sure that the XML files are repopulating. This may take a bit of time. For the original steps for clearing out the configuration cache in SharePoint 2007, there are many articles that cover the steps; one of them is the following:
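As a convenience, the manual steps above can also be scripted. A hedged PowerShell sketch, assuming the timer service name is SPTimerV4 (the SharePoint 2010 Timer service) and that the cache lives in the GUID folder described above:

    # Sketch only: clears the SharePoint 2010 configuration cache on the local server.
    Stop-Service "SPTimerV4"

    $configPath = "$env:SystemDrive\ProgramData\Microsoft\SharePoint\Config"
    $guidFolder = Get-ChildItem $configPath |
        Where-Object { $_.PSIsContainer -and $_.Name -match '^[0-9a-fA-F-]{36}$' } |
        Select-Object -First 1

    # Keep Cache.ini; delete only the XML configuration files.
    Get-ChildItem $guidFolder.FullName -Filter "*.xml" | Remove-Item

    # Reset Cache.ini so the only text it contains is the number 1.
    Set-Content -Path (Join-Path $guidFolder.FullName "Cache.ini") -Value "1"

    Start-Service "SPTimerV4"
    # The GUID folder should repopulate with XML files after the timer service restarts.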
https://docs.microsoft.com/en-us/archive/blogs/jamesway/sharepoint-2010-clearing-the-configuration-cache
2020-08-04T00:38:48
CC-MAIN-2020-34
1596439735836.89
[]
docs.microsoft.com
Date and Time Data Types and Functions (Transact-SQL)

The following sections in this topic provide an overview of all Transact-SQL date and time data types and functions. - Date and Time Data Types - Date and Time Functions - Functions That Get System Date and Time Values - Functions That Get Date and Time Parts - Functions That Get Date and Time Values from Their Parts - Functions That Get Date and Time Difference - Functions That Modify Date and Time Values - Functions That Set or Get Session Format - Functions That Validate Date and Time Values - Date and Time–Related Topics

Date and Time data types The Transact-SQL date and time data types are listed in the following table: Note: The Transact-SQL rowversion data type is not a date or time data type. timestamp is a deprecated synonym for rowversion.

Date and Time functions The Transact-SQL date and time functions are grouped in the following sections. Higher-precision system Date and Time functions Lower-precision system Date and Time functions Functions that get Date and Time parts Functions that get Date and Time values from their parts Functions that get Date and Time difference Functions that modify Date and Time values Functions that get or set session format Functions that validate Date and Time values Date and time-related topics See also Functions Data Types (Transact-SQL)
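The function tables themselves are not reproduced here, but a few T-SQL examples illustrate the categories listed above (all standard Transact-SQL functions; DATEFROMPARTS requires SQL Server 2012 or later):

    -- System date and time values (higher and lower precision).
    SELECT SYSDATETIME() AS higher_precision, GETDATE() AS lower_precision;

    -- Get date and time parts.
    SELECT DATEPART(year, '2020-08-04') AS the_year, DATENAME(month, '2020-08-04') AS the_month;

    -- Build a value from its parts, and take a difference.
    SELECT DATEFROMPARTS(2020, 8, 4) AS built_date,
           DATEDIFF(day, '2020-01-01', '2020-08-04') AS days_elapsed;

    -- Modify a value, and validate one.
    SELECT DATEADD(month, 1, '2020-08-04') AS next_month, ISDATE('2020-13-01') AS is_valid;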
https://docs.microsoft.com/en-us/sql/t-sql/functions/date-and-time-data-types-and-functions-transact-sql
2018-03-17T06:38:48
CC-MAIN-2018-13
1521257644701.7
[array(['../../includes/media/yes.png', 'yes'], dtype=object) array(['../../includes/media/no.png', 'no'], dtype=object) array(['../../includes/media/no.png', 'no'], dtype=object) array(['../../includes/media/no.png', 'no'], dtype=object)]
docs.microsoft.com
Make sure that your laptop/phone/tablet/... has 5 GHz support. You should look for 802.11 ac support. Connect to WiFi network, Erle-Brain creates a network called "erle-< product > ". The password for the network is "holaerle". Afterwards, SSH into the board: #Under Debian ssh [email protected] #Under Snappy Ubuntu Core #password: ubuntu ssh [email protected] Through mini USB Erle-Brain supports client mode USB. Using this connection mechanism and the Ethernet-over-USB kernel module you should be able to SSH into the board. First check which interfaces you have in your computer, execute ifconfig. Connect Erle-Brain using the mini USB connector: Find the new network interface that should've been created in your OS, executing again ifconfig. Assign the following IP address: 192.168.7.1. Assuming that your new interface is eth6: sudo ifconfig eth6 192.168.7.1 Now that you are in the same subnet, just ssh into the board: #Under Debian ssh [email protected] #Under Snappy Ubuntu Core #password: ubuntu ssh [email protected]
http://docs.erlerobotics.com/brains/discontinued/erle_brain/getting_started/connecting_to_erle_brain/linux_and_macos
2018-03-17T06:14:43
CC-MAIN-2018-13
1521257644701.7
[]
docs.erlerobotics.com
How to Add Custom Fields to Posts and Pages Custom Meta fields are simply form fields added to the post editor, customizer or widget options that allow user input. The values of these fields may be submitted and retrieved in a few different ways. If you are working on a single site install and not a theme/extension, you might find it easier to work with a plugin such as Simnple Meta or Advanced Custom Fields. Developers may also use extended frameworks like CMB2 or Meta Box in your themes or extensions for more control over repeater and advanced fields. Custom Fields in Widgets See the following: Customizer Controls If you are looking to add custom fields to the WordPress Customizer, go to the Adding Customizer Sections & Controls developer guide. Custom Fields in the Editor There is no special way to add meta to the post editor from a Layers child theme or plugin – it is done the traditional WordPress way. However, you may take advantage of the Layers framework for building the form elements themselves, by extending Layers_Form_Elements via the wonderful input() helper function. There are two methods for housing your custom meta, via using a traditional procedural structure, or via a PHP class. The following tutorial follows a procedural order. From extensisons, you should use the class structure described here to put the actions into your constructor, and the functions into your main plugin file or class. Create the file To begin, create a PHP document called meta.php and save it to your child theme or plugin’s includes folder. Here is the starting structure of our file, which contains a comment describing what the file contains. You will be creating the following functions in that file, then including it in your theme or plugin’s main functions file. In this example, I’ll demonstrate how you could add a Photo Credit and Source URL field to posts to support a Magazine-style site. If you have created a custom post type, you can apply this to a different post type easily. Add the Meta Box Line 3: Setup your function. This should have a unique prefix to add_meta_box to avoid potential conflict with other themes or plugins (i.e. do not use ‘layers_child_’ as your prefix). Example: yourtheme_postype_add_meta_box Line 6: The $screens variable allows you to setup an array of post types you want your meta box associated with. The default in Layers is simply ‘post’. If your plugin is adding a custom post type, you can add that post type’s slug to the array, e.g. $screens = array('post', 'my_portfolio'); Line 9: Here we use add_meta_boxes to create the panel. The following parameters are set: Line 10: The unique ID for this meta box. We will reference this later when building the fields. Line 11: The display title of the meta box. Please note: if the meta box you are creating is intended for public release (in a theme, plugin, etc.), this string should be internationalized using _() Line 12: The name of the function that will output the contents of the meta box. Line 13: The post type where this meta box should be displayed. Since we are using a variable to define these on line 6, we set this to $screen Line 14: Specifies the section of the page where the meta box should be placed (normal, advanced, or side). 'normal' places it below the editor. Line 15: Specifies the order within the section where the meta box will be placed (high, core, default, or low). 'high' places it above the Layers Options meta panel, directly below the editor. 
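Since the listing those line numbers refer to is not reproduced here, a minimal PHP sketch of such a registration function follows (the yourtheme_ prefix and function names are hypothetical; the parameters follow the description above):

    // Hypothetical example: registering a Photo Credit meta box for posts.
    function yourtheme_post_add_meta_box() {

        // Post types the meta box should be associated with.
        $screens = array( 'post' );

        foreach ( $screens as $screen ) {
            add_meta_box(
                'yourtheme_photo_credit',          // unique ID, referenced when building the fields
                __( 'Photo Credit', 'yourtheme' ), // display title, internationalized
                'yourtheme_photo_credit_callback', // callback that outputs the meta box contents
                $screen,                           // post type, taken from $screens
                'normal',                          // section of the page (below the editor)
                'high'                             // order within the section
            );
        }
    }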
Now we need to hook this function onto add_meta_boxes: If you check your Add New Post screen, you should see the box, though you will have a blank panel or an error until the callback function is created. Build Your Form The callback we setup on line 11 of our add_meta_box function is used to output the meta field’s form elements in our meta box. Line 5: Create the callback function using the name we already defined Line 8: Use wp_nonce_field to setup a nonce for security purposes. There are only two required parameters: - Action name: This can be anything, and simply adds context to what the nonce is doing. We are outputting a meta option, so we simply reference layers_child_meta_box . This should be unique to any other names you define. - Nonce name: This is the name of the nonce hidden form field to be created. Once the form is submitted, you can access the generated nonce via $_POST[$name] . To make it simple we just append our action name to nonce e.g layers_child_meta_box_nonce Line 14-15: Here we set two variables to represent our meta field keys using get_post_meta, $credit_url and $credit_name, which will correspond to our two fields. Meta-data is handled with key/value pairs. The key is the name of the meta-data element. The value is the information that will appear in the meta-data list on each individual post that the information is associated with. $post->ID retrieves the id of the post being created/viewed/queried 'my_photo_credit' , or the second parameter, is the meta key name. This should be unique and semantic. true is used to return the value Line 16: Your form elements go last. For most form elements, you can instantiate the Layers_Form_Elements class and use the input() helper function to generate your form fields. For creating special field types such as a WYSIWYG field, we use core WP functions. Layers Form Elements For our purposes we only need two text fields. You can create text, message, icon/image select, image, file, drop-down, checkboxes and radio buttons using input(), and rich text fields using a special core function. “Fancy” fields such as color pickers, repeatable fields, etc require additional javascripting explained in separate tutorials. Line 1-2: We start with a condition that checks to ensure Layers is installed (and thus the class we will be extending exists). This is helpful for avoiding class errors if the user happens to have your extension active but Layers is not installed. Set your $form_elements value to new Layers_Form_Elements Line 4: (optional) Set your field wrapper. Using a paragraph tag with the layers-form-item class will render your field full width like the Video URL field. If you do not use a wrapper, the fields will simply sit next to one another: Line 5: (optional) add a <label> for elements that don’t take a 'label' argument. Labels should be wrapped in __(). Line 6-16: Setup your input() array. Follow this link for detailed information on each parameter. Note that we reference our meta key names in each input’s 'name' and 'id' value. We then use our output variables to set the 'value' of the fields. By using isset , we ensure data is only returned when it exists: ( isset( $credit_url ) ? $credit_url : '' ) Finally, the 'class' parameter allows us to use the existing framework styles for our fields. This is optional – you can replace our class with your own, if preferred (keep in mind admin CSS must be loaded separately from your front-end CSS). Classes for each field type are referenced in the input() link above. 
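A matching sketch of the callback (hypothetical function name; meta keys and nonce names as defined above; plain HTML inputs stand in for the Layers input() helper so the sketch stays self-contained):

    // Hypothetical example: outputting the Photo Credit fields inside the meta box.
    function yourtheme_photo_credit_callback( $post ) {

        // Nonce checked later by the save handler.
        wp_nonce_field( 'layers_child_meta_box', 'layers_child_meta_box_nonce' );

        // Current values; the third argument returns them as strings.
        $credit_name = get_post_meta( $post->ID, 'my_photo_credit', true );
        $credit_url  = get_post_meta( $post->ID, 'my_credit_url', true );

        // The tutorial builds these with Layers_Form_Elements::input(); plain inputs are shown here instead.
        printf(
            '<p class="layers-form-item"><label>%s</label><input type="text" name="my_photo_credit" id="my_photo_credit" value="%s" class="layers-text" /></p>',
            esc_html__( 'Photo Credit', 'yourtheme' ),
            esc_attr( $credit_name )
        );
        printf(
            '<p class="layers-form-item"><label>%s</label><input type="text" name="my_credit_url" id="my_credit_url" value="%s" class="layers-text" /></p>',
            esc_html__( 'Source URL', 'yourtheme' ),
            esc_attr( $credit_url )
        );
    }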
TinyMCE Fields One exception for Custom Fields vs Widgets/Customizer controls is the rich-text editor field. In widgets, we can use the input type of rte . In the admin, we cannot. Instead, we use the wp_editor function. In this example, we are still inside our layers_child_meta_box_callback function: Saving Meta Data Line 1: create your save function. Ensure this is unique, i.e. <strong>yourtheme_custom</strong>_save_meta_box_data The first part of the function checks to make sure we really want to save the meta input. These lines are based on Tom McFarlin’s user_can_save() linked in the References section below. Basically, they make sure that the nonce is valid and this isn’t an autosave, and does it in a very efficient way. Line 5: The first value should be set to your nonce name defined earlier, and the second set to your action name. The second part of the function checks to see if any data has been entered into the the two fields we created. If there is text to save, it uses the update_post_meta() function to save the text to the database. This function has 4 parameters: $post_id = The unique ID of the post being saved. Use $post_id . $meta_key = The unique meta key specifying where the input should be saved. In our example, our keys are 'my_photo_credit' and 'my_credit_url' which we defined earlier. $meta_value = The input to be saved to the database. Lines 13-17: In the code above, we set the required $post_id and meta key for each update_post_meta function. However, notice that we didn’t give it the meta value of the input directly. Instead we used sanitize_text_field function to prepare the input before placing it in the database. This isn’t a tutorial about validation and sanitization and not all input types require sanitization, but if you’re working with user input, you should never be placing unchecked user input into the database. Refer to the References section below to check if your field needs a specific sanitization function here. Now you just need to hook your save function into save_post and you’re done! Test your code by creating a new post and filling our your fields, then click Save Draft. Your input should be retained after the editor refreshes. If you enable the Custom Fields panel under Screen Options, you should also see your field keys and values: Include The File Both theme’s and plugins use require_once() to include files, however the way you set the path is different. From a plugin, add it to your main plugin file: From a child theme‘s functions.php you use get_stylesheet_directory() Displaying Meta Data Now that we’ve successfully saved our meta data, it can be displayed anywhere using get_post_meta() , the same way we did it above to be used internally. This can be done directly in a template, or inside a function you hook into a Layers action. - Line 2-3: Setup some variables to grab the value of our saved photo credit and photo credit link fields. Make sure you define the correct meta key name, and that the third argument is true to ensure the value is returned as a string and not an array. - Line 6: Condition check to make sure there is a photo credit before we bother outputting any additional HTML. This helps avoid empty elements - Line 7: Here we wrap our data in a . meta-info span. This framework style will apply the normal post meta style to these elements, and ensure customizer colors that target the meta-info class also work. This is is only important if your custom meta data will be paired with the existing post meta. 
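A short sketch of that display logic, assuming the same two meta keys and the .meta-info wrapper described above:

    // Hypothetical example: outputting the saved photo credit in a template or hooked function.
    $credit_name = get_post_meta( get_the_ID(), 'my_photo_credit', true );
    $credit_url  = get_post_meta( get_the_ID(), 'my_credit_url', true );

    if ( ! empty( $credit_name ) ) {
        echo '<span class="meta-info">';
        if ( ! empty( $credit_url ) ) {
            echo 'Photo: <a href="' . esc_url( $credit_url ) . '">' . esc_html( $credit_name ) . '</a>';
        } else {
            echo 'Photo: ' . esc_html( $credit_name );
        }
        echo '</span>';
    }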
You may apply your own containers and styling for elements used in other ways and locations. References: - Mark Jaquith’s post on nonces in WordPress - Save Custom Post Meta – Revisited, Refactored, Refined - Custom Fields on the WordPress Codex - Sanitizing User Input
http://docs.layerswp.com/how-to-add-custom-fields-to-posts-and-pages/
2018-03-17T06:24:19
CC-MAIN-2018-13
1521257644701.7
[array(['http://docs.layerswp.com/wp-content/uploads/meta-box2.jpg', 'meta-box2'], dtype=object) array(['http://docs.layerswp.com/wp-content/uploads/meta-box4.jpg', 'meta-box4'], dtype=object) array(['http://docs.layerswp.com/wp-content/uploads/mce-field.jpg', 'mce-field'], dtype=object) array(['http://docs.layerswp.com/wp-content/uploads/custom-fields-saved.jpg', 'custom-fields-saved'], dtype=object) ]
docs.layerswp.com
Types related to tiles. More... Go to the source code of this file. Types related to tiles. Definition in file tile_type.h. The different types of tiles. Each tile belongs to one type, according whatever is build on it. Definition at line 42 of file tile_type.h. Additional infos of a tile on a tropic game. The tropiczone is not modified during gameplay. It mainly affects tree growth. (desert tiles are visible though) In randomly generated maps: TROPICZONE_DESERT: Generated everywhere, if there is neither water nor mountains (TileHeight >= 4) in a certain distance from the tile. TROPICZONE_RAINFOREST: Generated everywhere, if there is no desert in a certain distance from the tile. TROPICZONE_NORMAL: Everywhere else, i.e. between desert and rainforest and on sea (if you clear the water). In scenarios: TROPICZONE_NORMAL: Default value. TROPICZONE_DESERT: Placed manually. TROPICZONE_RAINFOREST: Placed if you plant certain rainforest-trees. Definition at line 71 of file tile_type.h.
http://docs.openttd.org/tile__type_8h.html
2018-03-17T06:19:54
CC-MAIN-2018-13
1521257644701.7
[]
docs.openttd.org
Command Prompt¶ A prompt is quite common in MUDs. The prompt display useful details about your character that you are likely to want to keep tabs on at all times, such as health, magical power etc. It might also show things like in-game time, weather and so on. Many modern MUD clients (including Evennia’s own webclient) allows for identifying the prompt and have it appear in a correct location (usually just above the input line). Usually it will remain like that until it is explicitly updated. Sending a prompt¶ A prompt is sent using the prompt keyword to the msg() method on objects. The prompt will be sent without any line breaks. self.msg(prompt="HP: 5, MP: 2, SP: 8") You can combine the sending of normal text with the sending (updating of the prompt): self.msg("This is a text", prompt="This is a prompt") You can update the prompt on demand, this is normally done using OOB-tracking of the relevant Attributes (like the character’s health). You could also make sure that attacking commands update the prompt when they cause a change in health, for example. Here is a simple example of the prompt sent/updated from a command class: from evennia import Command class CmdDiagnose(Command): """ see how hurt your are Usage: diagnose [target] This will give an estimate of the target's health. Also the target's prompt will be updated. """ key = "diagnose" def func(self): if not self.args: target = self.caller else: target = self.search(self.args) if not target: return # try to get health, mana and stamina hp = target.db.hp mp = target.db.mp sp = target.db.sp if None in (hp, mp, sp): # Attributes not defined self.caller.msg("Not a valid target!") return text = "You diagnose %s as having " \ "%i health, %i mana and %i stamina." \ % (hp, mp, sp) prompt = "%i HP, %i MP, %i SP" % (hp, mp, sp) self.caller.msg(text, prompt=prompt) A prompt sent with every command¶ The prompt sent as described above uses a standard telnet instruction (the Evennia web client gets a special flag). Most MUD telnet clients will understand and allow users to catch this and keep the prompt in place until it updates. So in principle you’d not need to update the prompt every command. However, with a varying user base it can be unclear which clients are used and which skill level the users have. So sending a prompt with every command is a safe catch-all. You don’t need to manually go in and edit every command you have though. Instead you edit the base command class for your custom commands (like MuxCommand in your mygame/commands/command.py folder) and overload the at_post_cmd() hook. This hook is always called after the main func() method of the Command. from evennia import default_cmds class MuxCommand(default_cmds.MuxCommand): # ... def at_post_cmd(self): "called after self.func()." caller = self.caller prompt = "%i HP, %i MP, %i SP" % (caller.db.hp, caller.db.mp, caller.db.sp) caller.msg(prompt=prompt) Modifying default commands¶ If you want to add something small like this to Evennia’s default commands without modifying them directly the easiest way is to just wrap those with a multiple inheritance to your own base class: # in (for example) mygame/commands/mycommands.py from evennia import default_cmds # our custom MuxCommand with at_post_cmd hook from commands.command import MuxCommand # overloading the look command class CmdLook(default_cmds.CmdLook, MuxCommand): pass The result of this is that the hooks from your custom MuxCommand will be mixed into the default CmdLook through multiple inheritance. 
Next you just add this to your default command set: # in mygame/commands/default_cmdsets.py from evennia import default_cmds from commands import mycommands class CharacterCmdSet(default_cmds.CharacterCmdSet): # ... def at_cmdset_creation(self): # ... self.add(mycommands.CmdLook()) This will automatically replace the default look command in your game with your own version.
http://evennia.readthedocs.io/en/latest/Command-Prompt.html
2018-03-17T06:10:01
CC-MAIN-2018-13
1521257644701.7
[]
evennia.readthedocs.io
The official assembly instructions of Bluerobotic's BlueROV are available here. First, the main structure needs to be mounted, using the two Aluminum T-Slot Bars, four T-Slot Nuts, and four M5x12 Cross Head Plastic Screws. Insert the four screws into the Center Panel and lightly attach the T-Slot Nuts. Install all the motors. Refer to the next image to set the correct propellers. Note that the propellers can be CW or CCW. The BlueROV comes with an electronics tray designed to fit the ESCs. As we are going to use a PXFmini/Erle-Brain 2, the Secondary Tray is not necessary. Attach the Terminal Block Jumpers to the Terminal Block, leaving out a jumper in the middle. Connect the wires to the Terminal Block. The Terminal Block should be connected to the power module output. The battery can be held in place with a Velcro strip. The T200 thrusters can be safely run with up to a 5s (18.5 V) battery. The autopilot is going to be the main brain of the submarine. You can use an Erle-Brain 2 or a PXFmini with a Raspberry Pi 2 or 3. Erle-Brain 2 includes a case; the PXFmini will need an extra case for the Raspberry Pi. The case can be attached to the electronics tray with a Velcro strip. Note during the installation that the autopilot should be facing forward, because the PXFmini and Erle-Brain 2 have accelerometers that are used during the stabilization of the submarine. You will need a base case for the Raspberry Pi 2 or 3. You can find one on the Internet; you would need something like this one from Thingiverse. Now you will need to wire up the connections between the ESCs and the PWM outputs of the PXFmini. If you are using a PXFmini, a camera for the Raspberry Pi is necessary. If you are using an Erle-Brain 2 with camera, you will need to install the camera at the bottom of the electronics tray, as shown in the next image: Install the penetrators. Use super glue to fix the cables, then fill them with epoxy. See the Cable Penetrator Tutorial from Bluerobotics. The connections between the ESCs and the motors should be direct: Green <=> Red, Blue <=> Black, Yellow <=> White. The enclosure vent allows the release of trapped air when using the watertight enclosures. Once sealed, it provides a reliable high-pressure seal against water. More info here. An Ethernet cable should also be installed through a cable penetrator. Note that the RJ45 connector of the Ethernet cable should be removed in order to pass the cable through the penetrator; an RJ45 crimper will be necessary to re-terminate it.
http://docs.erlerobotics.com/erle_robots/projects/scuba/assembly
2018-03-17T06:01:43
CC-MAIN-2018-13
1521257644701.7
[]
docs.erlerobotics.com
JavaScript API Setup Our JavaScript library contains the functions required by our services. The library is loaded asynchronously. You can place the JavaScript snippet anywhere in the HTML document. Our recommendation is to paste this code snippet into your website template page so that it appears before the closing </head> section.
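The snippet itself did not survive in this copy of the page; the general shape of such an asynchronous loader is sketched below, with the script URL shown purely as a placeholder rather than the library's actual embed code:

    <script type="text/javascript">
      /* Generic async loader sketch - not the official OneAll snippet. */
      (function () {
        var js = document.createElement('script');
        js.async = true;
        js.src = '//YOUR-SUBDOMAIN.example.com/library.js'; /* placeholder URL */
        var s = document.getElementsByTagName('script')[0];
        s.parentNode.insertBefore(js, s);
      })();
    </script>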
http://docs.oneall.com/api/javascript/library/setup/
2018-03-17T06:31:46
CC-MAIN-2018-13
1521257644701.7
[]
docs.oneall.com
The os module provides dozens of functions for interacting with the operating system: >>> import os >>> os.getcwd() # Return the current working directory 'C:\\Python31'

For daily file and directory management tasks, the shutil module provides a higher level interface that is easier to use: >>> import shutil >>> shutil.copyfile('data.db', 'archive.db') >>> shutil.move('/build/executables', 'installdir')

The glob module provides a function for making file lists from directory wildcard searches: >>> import glob >>> glob.glob('*.py') ['primes.py', 'random.py', 'quote.py']

Command line arguments are stored in the sys module's argv attribute as a list; more powerful and flexible command line processing is provided by the optparse module. The sys module also has attributes for stdin, stdout, and stderr. The latter is useful for emitting warnings and error messages to make them visible even when stdout has been redirected: >>> sys.stderr.write('Warning, log file not found starting a new one\n') Warning, log file not found starting a new one The most direct way to terminate a script is to use sys.exit().

The re module provides regular expression tools for advanced string processing. For complex matching and manipulation, regular expressions offer succinct, optimized solutions: >>> import re >>> re.findall(r'\bf[a-z]*', 'which foot or hand fell fastest') ['foot', 'fell', 'fastest'] >>> re.sub(r'(\b[a-z]+) \1', r'\1', 'cat in the the hat') 'cat in the hat' When only simple capabilities are needed, string methods are preferred because they are easier to read and debug: >>> 'tea for too'.replace('too', 'two') 'tea for two'

The math module gives access to the underlying C library functions for floating point math: >>> import math >>> math.cos(math.pi / 4) 0.70710678118654757 >>> math.log(1024, 2) 10.0

The random module provides tools for making random selections: >>> import random >>> random.choice(['apple', 'pear', 'banana']) 'apple' >>> random.sample(range(100), 10) # sampling without replacement [30, 83, 16, 4, 8, 81, 41, 50, 18, 33] >>> random.random() # random float 0.17970987693706186 >>> random.randrange(6) # random integer chosen from range(6) 4 The SciPy project has many other modules for numerical computations.

There are a number of modules for accessing the internet and processing internet protocols. Two of the simplest are urllib.request for retrieving data from urls and smtplib for sending mail: >>> from urllib.request import urlopen >>> for line in urlopen(''): ... line = line.decode('utf-8') # Decoding the binary data to text. ... if 'EST' in line or 'EDT' in line: # look for Eastern Time ... print(line) <BR>Nov. 25, 09:43:32 PM EST >>> import smtplib >>> server = smtplib.SMTP('localhost') >>> server.sendmail('[email protected]', '[email protected]', ... """To: [email protected] ... From: [email protected] ... ... Beware the Ides of March. ... """) >>> server.quit() (Note that the second example needs a mailserver running on localhost.)

The datetime module supplies classes for manipulating dates and times in both simple and complex ways. While date and time arithmetic is supported, the focus of the implementation is on efficient member extraction for output formatting and manipulation. The module also supports objects that are timezone aware. >>> # dates are easily constructed and formatted >>> from datetime import date >>> now = date.today() >>> now datetime.date(2003, 12, 2) >>> now.strftime("%m-%d-%y. %d %b %Y is a %A on the %d day of %B.") '12-02-03. 02 Dec 2003 is a Tuesday on the 02 day of December.'
>>> # dates support calendar arithmetic >>> birthday = date(1964, 7, 31) >>> age = now - birthday >>> age.days 14368

Common data archiving and compression formats are directly supported by modules including: zlib, gzip, bz2, zipfile and tarfile.

Some Python users develop a deep interest in knowing the relative performance of different approaches to the same problem. Python provides a measurement tool that answers those questions immediately. For example, it may be tempting to use the tuple packing and unpacking feature instead of the traditional approach to swapping arguments. The timeit module quickly demonstrates a modest performance advantage: >>> from timeit import Timer >>> Timer('t=a; a=b; b=t', 'a=1; b=2').timeit() 0.57535828626024577 >>> Timer('a,b = b,a', 'a=1; b=2').timeit() 0.54962537085770791 In contrast to timeit's fine level of granularity, the profile and pstats modules provide tools for identifying time critical sections in larger blocks of code.

One approach for developing high quality software is to write tests for each function as it is developed and to run those tests frequently during the development process. The doctest module provides a tool for scanning a module and validating tests embedded in a program's docstrings. Test construction is as simple as cutting-and-pasting a typical call along with its results into the docstring. This improves the documentation by providing the user with an example and it allows the doctest module to make sure the code remains true to the documentation: def average(values): """Computes the arithmetic mean of a list of numbers. >>> print(average([20, 30, 70])) 40.0 """ return sum(values) / len(values) import doctest doctest.testmod() # automatically validate the embedded tests The unittest module is not as effortless as the doctest module, but it allows a more comprehensive set of tests to be maintained in a separate file:

Python has a "batteries included" philosophy. This is best seen through the sophisticated and robust capabilities of its larger packages. For example:
http://docs.python.org/release/3.1.4/tutorial/stdlib.html
2013-05-18T10:22:20
CC-MAIN-2013-20
1368696382261
[]
docs.python.org
This Attorney-Client Disengagement Letter Template clearly establishes that an attorney no longer represents the client. This letter is necessary to end the attorney-client relationship and to ensure that the attorney does not expose himself or herself to unnecessary liability and possible problems with the State Bar. This template can be modified to add specific terms to make sure that the parties' understandings are clearly set forth. This template can be used by attorneys and small business clients that want to terminate an attorney-client relationship.
http://premium.docstoc.com/docs/115859787/Attorney-and-Client-Disengagement-Letter
2013-05-18T10:25:45
CC-MAIN-2013-20
1368696382261
[]
premium.docstoc.com
JBoss.orgCommunity Documentation One of the goals of jBPM is to allow users to extend the default process constructs with domain-specific extensions that simplify development in a particular application domain. This tutorial describes how to take your first steps towards domain-specific processes. Note that you don't need to be a jBPM expert to define your own domain-specific nodes, this should be considered integration code that a normal developer with some experience in jBPM can do himself. Most process languages offer some generic action (node) construct that allows plugging in custum user actions. However, these actions are usually low-level, where the user is required to write custom code to implement the work that should be incorporated in the process. The code is also closely linked to a specific target environment, making it difficult to reuse the process in different contexts. Domain-specific languages are targeted to one particular application domain and therefore can offer constructs that are closely related to the problem the user is trying to solve. This makes the processes and easier to understand and self-documenting. We will show you how to define domain-specific work items (also called service nodes), which represent atomic units of work that need to be executed. These service nodes specify the work that should be executed in the context of a process in a declarative manner, i.e. specifying what should be executed (and not how) on a higher level (no code) and hiding implementation details. So we want service nodes that are: domain-specific declarative (what, not how) high-level (no code) customizable to the context Users can easily define their own set of domain-specific service nodes and integrate them in our process language. For example, the next figure shows an example of a process in a healthcare context. The process includes domain-specific service nodes for ordering nursing tasks (e.g. measuring blood pressure), prescribing medication and notifying care providers. Let's start by showing you how to include a simple work item for sending notifications. A work item represent an atomic unit of work in a declarative way. It is defined by a unique name and additional parameters that can be used to describe the work in more detail. Work items can also return information after they have been executed, specified as results. Our notification work item could thus be defined using a work definition with four parameters and no results: Name: "Notification" Parameters From [String] To [String] Message [String] Priority [String] All work definitions must be specified in one or more configuration files in the project classpath, where all the properties are specified as name-value pairs. Parameters and results are maps where each parameter name is also mapped to the expected data type. Note that this configuration file also includes some additional user interface information, like the icon and the display name of the work item. In our example we will use MVEL for reading in the configuration file, which allows us to do more advanced configuration files. This file must be placed in the project classpath in a directory called META-INF. Our MyWorkDefinitions.wid file looks like this:" ] ] The project directory structure could then look something like this: project/src/main/resources/META-INF/MyWorkDefinitions.wid You might now want to create your own icons to go along with your new work definition. To add these you will need .gif or .png images with a pixel size of 16x16. 
Place them in a directory outside of the META-INF directory, for example as follows: project/src/main/resources/icons/notification.gif The configuration API can be used to register work definition files for your project using the drools.workDefinitions property, which represents a list of files containing work definitions (separated usings spaces). For example, include a drools.rulebase.conf file in the META-INF directory of your project and add the following line: drools.workDefinitions = MyWorkDefinitions.wid This will replace the default domain specific node types EMAIL and LOG with the newly defined NOTIFICATION node in the process editor. Should you wish to just add a newly created node definition to the existing palette nodes, adjust the drools.workDefinitions property as follows including the default set configuration file: drools.workDefinitions = MyWorkDefinitions.conf WorkDefinitions.conf Once our work definition has been created and registered, we can start using it in our processes. The process editor contains a separate section in the palette where the different service nodes that have been defined for the project appear. Using drag and drop, a notification node can be created inside your process. The properties can be filled in using the properties view. Apart from the properties defined by for this work item, all work items also have these three properties: Parameter Mapping: Allows you map the value of a variable in the process to a parameter of the work item. This allows you to customize the work item based on the current state of the actual process instance (for example, the priority of the notification could be dependent of some process-specific information). Result Mapping: Allows you to map a result (returned once a work item has been executed) to a variable of the process. This allows you to use results in the remainder of the process. Wait for completion: By default, the process waits until the requested work item has been completed before continuing with the process. It is also possible to continue immediately after the work item has been requested (and not waiting for the results) by setting "wait for completion" to false. Here is an example that creates a domain specific node to execute Java, asking for the class and method parameters. It includes a custom java.gif icon and consists of the following files and resulting screenshot:.gif" ] ] // located in: project/src/main/resources/META-INF/drools.rulebase.conf // drools.workDefinitions = JavaNodeDefinition.conf WorkDefinitions.conf // icon for java.gif located in: // project/src/main/resources/icons/java.gif The jBPM engine contains a WorkItemManager that is responsible for executing work items whenever necessary. The WorkItemManager is responsible for delegating the work items to WorkItemHandlers that execute the work item and notify the WorkItemManager when the work item has been completed. 
For executing notification work items, a NotificationWorkItemHandler should be created (implementing the WorkItemHandler interface): package com.sample; import org.drools.runtime.process.WorkItem; import org.drools.runtime.process.WorkItemHandler; import org.drools.runtime.process.WorkItemManager; public class NotificationWorkItemHandler implements WorkItemHandler { public void executeWorkItem(WorkItem workItem, WorkItemManager manager) { // extract parameters String from = (String) workItem.getParameter("From"); String to = (String) workItem.getParameter("To"); String message = (String) workItem.getParameter("Message"); String priority = (String) workItem.getParameter("Priority"); // send email service.sendEmail(from, to, "Notification", message); // notify manager that work item has been completed manager.completeWorkItem(workItem.getId(), null); } public void abortWorkItem(WorkItem workItem, WorkItemManager manager) { // Do nothing, notifications cannot be aborted } } This WorkItemHandler sends a notification as an email and then immediate notifies the WorkItemManager that the work item has been completed. Note that not all work items can be completed directly. In cases where executing a work item takes some time, execution can continue asynchronously and the work item manager can be notified later. In these situations, it might also be possible that a work item is being aborted before it has been completed. The abort method can be used to specify how to abort such work items. WorkItemHandlers should be registered at the WorkItemManager, using the following API: ksession.getWorkItemManager().registerWorkItemHandler( "Notification", new NotificationWorkItemHandler()); Decoupling the execution of work items from the process itself has the following advantages: The process is more declarative, specifying what should be executed, not how. Changes to the environment can be implemented by adapting the work item handler. The process itself should not be changed. It is also possible to use the same process in different environments, where the work item handler is responsible for integrating with the right services. It is easy to share work item handlers across processes and projects (which would be more difficult if the code would be embedded in the process itself). Different work item handlers could be used depending on the context. For example, during testing or simulation, it might not be necessary to actually execute the work items. In this case specialized dummy work item handlers could be used during testing.
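For reference, the work definition listing referred to earlier (MyWorkDefinitions.wid) is not reproduced in this copy of the page. A minimal MVEL-style sketch consistent with the Notification definition described above; the import line is an assumption about the data type classes used, and the icon path matches the icons/notification.gif location mentioned earlier:

    import org.drools.process.core.datatype.impl.type.StringDataType;
    [
      [
        "name" : "Notification",
        "parameters" : [
          "Message"  : new StringDataType(),
          "From"     : new StringDataType(),
          "To"       : new StringDataType(),
          "Priority" : new StringDataType()
        ],
        "displayName" : "Notification",
        "icon" : "icons/notification.gif"
      ]
    ]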
http://docs.jboss.org/jbpm/v5.1/userguide/ch13.html
2013-05-18T10:55:20
CC-MAIN-2013-20
1368696382261
[]
docs.jboss.org
All public logs Combined display of all available logs of Joomla! Documentation. You can narrow down the view by selecting a log type, the username (case-sensitive), or the affected page (also case-sensitive). - 13:52, 12 May 2013 JoomlaWikiBot (Talk | contribs) marked revision 98320 of page API15:JTable/store patrolled - 17:16, 22 March 2010 Doxiki (Talk | contribs) marked revision 23254 of page API15:JTable/store patrolled
http://docs.joomla.org/index.php?title=Special:Log&page=API15%3AJTable%2Fstore
2013-05-18T10:33:06
CC-MAIN-2013-20
1368696382261
[]
docs.joomla.org
At the time of this writing Mac OS X had just been released as a Public Beta. Efforts are under way to bring MacPython to Mac OS X. The MacPython release 2.11.5.2c1 runs quite well within the ``Classic'' environment. A ``Carbon'' port of the MacPython code is being prepared for release, and several people have made a command line version available to the ``Darwin'' layer (which is accessible via Terminal.app). See About this document... for information on suggesting changes.
http://docs.python.org/release/2.1/mac/node23.html
2013-05-18T10:21:19
CC-MAIN-2013-20
1368696382261
[]
docs.python.org
The following code defines a new extension module, emb, whose single function numargs() returns the number of command-line arguments received by the embedding application:

static int numargs = 0;

/* Return the number of arguments of the application command line */
static PyObject*
emb_numargs(PyObject *self, PyObject *args)
{
    if (!PyArg_ParseTuple(args, ":numargs"))
        return NULL;
    return Py_BuildValue("i", numargs);
}

static PyMethodDef EmbMethods[] = {
    {"numargs", emb_numargs, METH_VARARGS,
     "Return the number of arguments received by the process."},
    {NULL, NULL, 0, NULL}
};

Insert the above code just above the main() function. Also, insert the following two statements directly after Py_Initialize():

numargs = argc;
Py_InitModule("emb", EmbMethods);

See About this document... for information on suggesting changes.
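Once the emb module has been initialized this way, Python code executed by the embedded interpreter can call it like any other module. For example, a script run by the embedding application (how the script is executed, e.g. via PyRun_SimpleString(), is an assumption here) could contain:

```python
import emb
print "Number of arguments", emb.numargs()
```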
http://docs.python.org/release/2.5.2/ext/extending-with-embedding.html
2013-05-18T10:12:46
CC-MAIN-2013-20
1368696382261
[]
docs.python.org
This module provides a duplicate interface to the _thread module. It is meant to be imported when the _thread module is not provided on a platform. Suggested usage is:

try:
    import _thread
except ImportError:
    import _dummy_thread as _thread

Be careful to not use this module where deadlock might occur from a thread being created that blocks waiting for another thread to be created. This often occurs with blocking I/O.
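A short runnable sketch of the fallback in practice; on platforms with real threads the function below runs concurrently, while with _dummy_thread the call simply blocks until the function returns:

```python
try:
    import _thread
except ImportError:
    import _dummy_thread as _thread

def worker(label):
    print("worker", label, "finished")

# start_new_thread() has the same signature in both modules.
_thread.start_new_thread(worker, ("demo",))
```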
http://docs.python.org/release/3.1/library/_dummy_thread.html
2013-05-18T10:22:36
CC-MAIN-2013-20
1368696382261
[]
docs.python.org
Who can see the Assistant? To restrict who can see the Assistant, first log in to elevio and click ‘Assistant settings’ in the left hand panel. To only show elevio to users who are logged in to your site, enable the setting for ‘show only to logged in users’. As long as you are passing through user information when loading elevio, the Assistant will only show in your backend.
https://docs.elevio.help/en/articles/81508-who-can-see-the-assistant
2019-12-05T17:27:21
CC-MAIN-2019-51
1575540481281.1
[]
docs.elevio.help
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Add-CWEResourceTag-ResourceARN <String>-Tag <Tag[]>-Select <String>-PassThru <SwitchParameter>-Force <SwitchParameter> TagResourceaction with a rule that already has tags. If you specify a new tag key for the rule, this tag is appended to the list of tags associated with the rule. If you specify a tag key that is already associated with the rule, the new tag value that you specify replaces the previous value for that tag. You can associate as many as 50 tags with a
https://docs.aws.amazon.com/powershell/latest/reference/items/Add-CWEResourceTag.html
2019-12-05T18:04:31
CC-MAIN-2019-51
1575540481281.1
[]
docs.aws.amazon.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Remove-DOCDBCluster-DBClusterIdentifier <String>-FinalDBSnapshotIdentifier <String>-SkipFinalSnapshot <Boolean>-Select <String>-PassThru <SwitchParameter>-Force <SwitchParameter> DBClusterIdentifier. SkipFinalSnapshotis set to false. Specifying this parameter and also setting the SkipFinalShapshotparameter to trueresults in an error. Constraints: trueis specified, no DB cluster snapshot is created. If falseis specified, a DB cluster snapshot is created before the DB cluster is deleted. If SkipFinalSnapshotis false, you must specify a FinalDBSnapshotIdentifierparameter
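A usage sketch based on the parameters above (the cluster and snapshot identifiers are placeholders):

```powershell
# Delete a cluster but keep a final snapshot.
Remove-DOCDBCluster -DBClusterIdentifier "sample-cluster" `
    -FinalDBSnapshotIdentifier "sample-cluster-final-snapshot" `
    -SkipFinalSnapshot $false

# Or delete without a final snapshot, suppressing the confirmation prompt.
Remove-DOCDBCluster -DBClusterIdentifier "sample-cluster" -SkipFinalSnapshot $true -Force
```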
https://docs.aws.amazon.com/powershell/latest/reference/items/Remove-DOCDBCluster.html
2019-12-05T17:47:30
CC-MAIN-2019-51
1575540481281.1
[]
docs.aws.amazon.com
Who Are We? We're a team from Melbourne, Australia who believe the best path to customer success is through the intelligent transfer of knowledge from company to customer. We’ve seen the problem of disjointed support and painful user experiences first hand, and we decided that something needed to be done about it. That's why we built Elevio. Looking for "What is Elevio?"
https://docs.elevio.help/en/articles/81541-who-are-we
2019-12-05T17:13:31
CC-MAIN-2019-51
1575540481281.1
[]
docs.elevio.help
Update Documentation for New Releases¶ Once a new TYPO3 release comes out, the main documentation (e.g. TYPO3 Explained, TCA Reference etc.) must be updated. Here, we describe some best practices for updating the official documentation for a new TYPO3 release. We stick to the core conventions as much as possible because that makes it easier for everyone to contribute to documentation and core. Hint These are not strict rules, but rather recommendations. If a better method is found and used, please update this page. How to Handle Deprecations and Breaking Changes¶ Important We used to follow the conventions, that deprecated features were entirely removed from the documentation as soon as they are deprecated. This no longer applies: We recommend to add some information about the deprecation where this may be helpful. This has the disadvantage that the documentation must be modified twice: once to point out the documentation, and finally to remove it. But, on the other hand, we have found that it is more user friendly to document the deprecation and the alternative to make migration easier. This mean, we do not (yet) remove the deprecated information entirely. This gives people more time to adjust to the changes. Also, deprecated features may still be used, but if the documentation were removed entirely, a search for documentation would direct everyone to a previous version where the feature is still documented without mentioning the deprecation. While we recommend for developers to read the Changelogs we should not make this a necessary requirement. It should be possible to get enough information from reading the main docs. Here are some examples, how you can point out deprecations: .. warning:: The hook shown here is deprecated since TYPO3 v9 - use a custom :ref:`PSR-15 middleware<request-handling>` instead. .. note:: Starting with TYPO3 v10 hooks and signals have been replaced by a PSR-14 based event dispatching system. The symfony expression language has been implemented for TypoScript conditions in both frontend and backend since TYPO3 9.4. see Commit Messages¶ The commit message can point out the releases to which the change should apply (as in the core commits), e.g. Releases: master, 9.5, see Commit Messages. Applying Changes to Several Releases¶ Sometimes a necessary change applies to several major versions. Example: A change in the documentation is necessary in current master (10) and also in 9.5 branch. If this is the case, it is recommended to: - apply the change to the lower version (9.5 in our example) first, and then create another PR for the higher version making necessary additional changes. This is the reverse order of what is being used in the core! - The person merging the commit should take care of merging into other branches as well (in case that is necessary). This is the same convention as in the core. - The changes can be bundled into one commit and the commit / PR can have a subject such as: [TASK] Update with changes from 9.5.3 This makes it easier to find related changes and check for which version a branch was last updated. How to Mark What State a Manual is in¶ In order to keep track of which changes have already been applied and give readers hints about how up to date a manual is, you can optionally add a “Status” to the start page ( Documentation/Index.rst). 
For example: :Status: ok (Fully reviewed for TYPO3 9.5.9 on July 22, 2019) If the manual has not been fully reviewed, but all changelogs up to 9.5.9 have been applied, you might use: :Status: needs review (All changelogs <= TYPO3 9.5.9 have been applied) See Getting Started Tutorial. Work in Progress¶ Several suggestions have been made to improve the process but these still require more work or a decision, e.g.
https://docs.typo3.org/m/typo3/docs-how-to-document/master/en-us/GeneralConventions/HowToUpdateDocs.html
2019-12-05T17:52:54
CC-MAIN-2019-51
1575540481281.1
[]
docs.typo3.org
(PHP 7 >= 7.4.0) Objects of this class are created by the factory methods FFI::cdef(), FFI::load() or FFI::scope(). Defined C variables are made available as properties of the FFI instance, and defined C functions are made available as methods of the FFI instance. Declared C types can be used to create new C data structures using FFI::new() and FFI::type(). FFI definition parsing and shared library loading may take significant time. It is not useful to do it on each HTTP request in a Web environment. However, it is possible to preload FFI definitions and libraries at PHP startup, and to instantiate FFI objects when necessary. Header files may be extended with special FFI_SCOPE defines (e.g. #define FFI_SCOPE "foo"; the default scope is "C") and then loaded by FFI::load() during preloading. This leads to the creation of a persistent binding that will be available to all the following requests through FFI::scope(). Refer to the complete PHP/FFI/preloading example for details. It is possible to preload more than one C header file into the same scope.
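A minimal sketch of the FFI::cdef() route; the library name is a Linux-specific assumption and may differ on other platforms:

```php
<?php
// Declare one C function and bind it from the shared C library.
$ffi = FFI::cdef(
    "int printf(const char *format, ...);",
    "libc.so.6"
);

// The declared function is now available as a method of the FFI instance.
$ffi->printf("Hello %s!\n", "FFI");
```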
http://docs.php.net/manual/it/class.ffi.php
2019-12-05T17:23:24
CC-MAIN-2019-51
1575540481281.1
[]
docs.php.net
How to configure nFactor authentication The primary entity used for nFactor authentication is called a login schema. A login schema specifies an authentication schema XML file that defines the manner in which the login form will be rendered. Considering the interaction that the user must have when logging in to the application, you can create a single file for multiple factors or different files for different factors. View sample XML file. Single file for multiple factors. User will be provided a single form in which to provide credentials for multiple authentication factors. Different files for different factors. User will be provided a different form for each authentication factor. Next, you must associate the XML file(s) with login schema(s). You can also specify expressions to extract the user name and the password from the login form. Tip You can configure an authentication factor to be pass-through. This means that the user is not required to provide credentials explicitly and there is no login form for that factor. The credentials are either taken from the previous factor or the user name and/or password are dynamically extracted by using the username/password expressions that are configured for that login schema. You must set the login schema to “NOSCHEMA”, instead of an XML file. Now that the login schemas are configured, you must specify the manner in which they must be invoked. A login schema can be invoked by using either a login schema policy or an authentication policy label. The decision depends on the following: - Login schema policy. - Specifies the condition on which the login form must be presented to the user. - Must be bound to an authentication virtual server. - In an authentication virtual server that has multiple login schema policies, the policy with the highest priority that evaluates to true is executed. That is, the login form associated with that policy is presented to the user. - The login schema policy is only used to present the first login form. - Authentication policy label. - Specifies a collection of authentication policies for a particular factor. Each policy label corresponds to a single factor. - Specifies the login form that must be presented to the user. - Must be bound as the next factor of an authentication policy or of another authentication policy label. - Typically, a policy label includes authentication policies for a specific authentication mechanism. However, you can also have a policy label that has authentication policies for different authentication mechanisms. To summarize, the configurations you must perform to set up nFactor authentication are as follows: - Create the authentication schema XML files. - Associate each XML file with a login schema. - Associate each login schema with a login schema policy or authentication policy label. - Bind login schema policy to an authentication virtual server. - Bind authentication policy label, as next factor, to an authentication policy.
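As a rough sketch, the CLI side of such a configuration could look like the following. The entity names (lschema_single, lpol_single, auth_vs, label_second_factor, first_factor_pol, ldap_pol) are placeholders, and exact parameter spellings can vary between releases, so verify against your appliance's command reference:

```
# Associate an authentication schema XML file with a login schema.
add authentication loginSchema lschema_single -authenticationSchema "/nsconfig/loginschema/SingleAuth.xml"

# Present that login form via a login schema policy bound to the authentication virtual server.
add authentication loginSchemaPolicy lpol_single -rule true -action lschema_single
bind authentication vserver auth_vs -policy lpol_single -priority 100 -gotoPriorityExpression END

# Group the second factor's policies in a policy label and chain it as the next factor.
add authentication policylabel label_second_factor -loginSchema lschema_single
bind authentication policylabel label_second_factor -policyName ldap_pol -priority 100
bind authentication vserver auth_vs -policy first_factor_pol -priority 100 -nextFactor label_second_factor -gotoPriorityExpression NEXT
```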
https://docs.citrix.com/en-us/citrix-adc/13/aaa-tm/multi-factor-nfactor-authentication/nfactor-authentication-configuration-basics.html
2019-12-05T17:27:51
CC-MAIN-2019-51
1575540481281.1
[]
docs.citrix.com
Backup and restore in Azure Database for MariaDB Azure Database for MariaDB automatically creates server backups and stores them in user configured locally redundant or geo-redundant storage. Backups can be used to restore your server to a point-in-time. Backup and restore are an essential part of any business continuity strategy because they protect your data from accidental corruption or deletion. Backups Azure Database for MariaDB takes full, differential, and transaction log backups. These backups allow you to restore a server to any point-in-time within your configured backup retention period. The default backup retention period is seven days. You can optionally configure it up to 35 days. All backups are encrypted using AES 256-bit encryption. Backup frequency Generally, full backups occur weekly, differential backups occur twice a day, and transaction log backups occur every five minutes. The first full backup is scheduled immediately after a server is created. The initial backup can take longer on a large restored server. The earliest point in time that a new server can be restored to is the time at which the initial full backup is complete. Backup redundancy options Azure Database for MariaDB provides the flexibility to choose between locally redundant or geo-redundant backup storage in the General Purpose and Memory Optimized tiers. When the backups are stored in geo-redundant backup storage, they are not only stored within the region in which your server is hosted, but are also replicated to a paired data center. This provides better protection and ability to restore your server in a different region in the event of a disaster. The Basic tier only offers locally redundant backup storage. Important Configuring locally redundant or geo-redundant storage for backup is only allowed during server create. Once the server is provisioned, you cannot change the backup storage redundancy option. Backup storage cost Azure Database for MariaDB provides up to 100% of your provisioned server storage as backup storage at no additional cost. Typically, this is suitable for a backup retention of seven days. Any additional backup storage used is charged in GB-month. For example, if you have provisioned a server with 250 GB, you have 250 GB of backup storage at no additional charge. Storage in excess of 250 GB is charged. For more information on backup storage cost, visit the MariaDB pricing page. Restore In Azure Database for MariaDB, a different region. The estimated time of recovery depends on several factors including the database sizes, the transaction log size, the network bandwidth, and the total number of databases recovering in the same region at the same time. The recovery time is usually less than 12 hours. Important Deleted servers cannot be restored. If you delete the server, all databases that belong to the server are also deleted and cannot be recovered.To protect server resources, post deployment, from accidental deletion or unexpected changes, administrators can leverage management locks. Point-in-time restore Independent of your backup redundancy option, you can perform a restore to any point in time within your backup retention period. A new server is created in the same Azure region as the original server. It is created with the original server's configuration for the pricing tier, compute generation, number of vCores, storage size, backup retention period, and backup redundancy option. Point-in-time restore is useful in multiple scenarios. 
For example, when a user accidentally deletes data, drops an important table or database, or if an application accidentally overwrites good data with bad data due to an application defect. You may need to wait for the next transaction log backup to be taken before you can restore to a point in time within the last five minutes. Geo-restore You can restore a server to another Azure region where the service is available if you have configured your server for geo-redundant backups. compute generation, vCore, backup retention period, and backup redundancy options. Changing pricing tier (Basic, General Purpose, or Memory Optimized) or storage size during geo-restore is not supported. Perform post-restore tasks After a restore from either recovery mechanism, you should perform the following tasks to get your users and applications back up and running: - If the new server is meant to replace the original server, redirect clients and client applications to the new server - Ensure appropriate server-level firewall rules are in place for users to connect - Ensure appropriate logins and database level permissions are in place - Configure alerts, as appropriate Next steps - To learn more about business continuity, see the business continuity overview. - To restore to a point in time using the Azure portal, see restore database to a point in time using the Azure portal. Feedback
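If you automate these operations, the Azure CLI exposes both restore paths described above; a sketch (server names, resource group, region, and timestamp are placeholders, and parameters may vary slightly by CLI version):

```
# Point-in-time restore to a new server in the same region.
az mariadb server restore \
  --resource-group myresourcegroup \
  --name mydemoserver-restored \
  --source-server mydemoserver \
  --restore-point-in-time "2019-10-23T02:17:59+00:00"

# Geo-restore to another region (requires geo-redundant backup storage).
az mariadb server georestore \
  --resource-group myresourcegroup \
  --name mydemoserver-georestored \
  --source-server mydemoserver \
  --location westus2
```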
https://docs.microsoft.com/en-gb/azure/mariadb/concepts-backup
2019-12-05T18:32:46
CC-MAIN-2019-51
1575540481281.1
[]
docs.microsoft.com
Detecting a Failed Segment Detecting synchronizing state and continues logging database changes, so the mirror can be synchronized without performing a full copy of data from the primary to the mirror. Running the gpstate utility with the -e option displays any issues with a primary or mirror segment instances. Other gpstate options that display information about all primary or mirror segment instances such as -m (mirror instance information) and -c (primary and mirror configuration information) also display information about primary and mirror issues. You can also can see the mode: s (synchronizing) or n (not synchronizing) for each segment instance, as well as the status u (up) or d (down), in the gp_segment_configuration table. The gprecoverseg utility is used to bring up a mirror that is down. By default, gprecoverseg performs an incremental recovery, placing the mirror into synchronizing. After a segment instance has been recovered, the gpstate -e command might list primary and mirror segment instances that are switched. This indicates that the system is not balanced (the primary and mirror instances are not in their originally configured roles). If a system is not balanced, there might be skew resulting from the number of active primary segment instances on segment host systems. The gp_segment_configuration table has columns role and preferred_role. These can have values of either p for primary or m for mirror. The role column shows the segment instance current role and the preferred_role shows the original role of the segment instance. In a balanced system, the role and preferred_role matches for all segment instances. When they do not match the system is not balanced. To rebalance the cluster and bring all the segments into their preferred role, run the gprecoverseg command with the -r option. Simple Failover and Recovery Example Consider a single primary-mirror segment instance pair where the primary segment has failed over to the mirror. The following table shows the segment instance preferred role, role, mode, and status from gp_segment_configuration table before beginning recovery of the failed primary segment. You can also run gpstate -e to display any issues with a primary or mirror segment instances. The segment instance roles are not in their preferred roles, and the primary is down. The mirror is up, the role is now primary, and it is not synchronizing because its mirror, the failed primary, is down. After fixing issues with the segment host and primary segment instance, you use gprecoverseg to prepare failed segment instances for recovery and initiate synchronization between the primary and mirror instances. Once gprecoverseg has completed, the segments are in the states shown in the following table where the primary-mirror segment pair is up with the primary and mirror roles reversed from their preferred roles. The gprecoverseg -r command rebalances the system by returning the segment roles to their preferred roles..
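A quick way to inspect the same information directly from the catalog; the query below uses only the columns discussed above:

```sql
-- Show current vs. preferred role, synchronization mode, and up/down status
-- for every segment instance, ordered by content ID.
SELECT content, dbid, role, preferred_role, mode, status
FROM gp_segment_configuration
ORDER BY content, dbid;
```

Rows where role differs from preferred_role indicate an unbalanced system that gprecoverseg -r can correct, as described above.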
http://docs.greenplum.org/6-1/admin_guide/highavail/topics/g-detecting-a-failed-segment.html
2019-12-05T16:44:34
CC-MAIN-2019-51
1575540481281.1
[]
docs.greenplum.org
Configure pgAdmin 4 NOTE: This section assumes that you have downloaded and installed pgAdmin 4. pgAdmin is the most popular and feature-rich platform for administration and development of PostgreSQL databases. Check the pgAdmin official page for more information. To connect to your remote PostgreSQL database server using pgAdmin 4, follow these steps: Make sure that you have your cloud server’s IP address and application credentials (instructions). Open port 5432 in the server firewall (instructions).. Connect to your cloud server using PuTTY or another SSH client (instructions). At the server console, edit the file /opt/bitnami/postgresql/data/pg_hba.conf and add the following at the end, then save the file: host all all all md5 Edit the file /opt/bitnami/postgresql/data/postgresql.conf and replace this line listen_address='127.0.0.1' with: listen_addresses = '*' Save the file. Restart the PostgreSQL server: sudo /opt/bitnami/ctlscript.sh restart postgresql Your PostgreSQL server is now configured to accept remote connections, and you can connect to it using pgAdmin 4. Follow these steps: Launch pgAdmin 4. Go to the “Dashboard” tab. In the “Quick Link” section, click “Add New Server” to add a new connection. Select the “Connection” tab in the “Create-Server” window. Then, configure the connection as follows: Enter your server’s IP address in the “Hostname/Address” field. Specify the “Port” as “5432”. Enter the name of the database in the “Database Maintenance” field. Enter your username as postgres and password (use the same password you used when previously configuring the server to accept remote connections) for the database. Click “Save” to apply the configuration. Check that the connection between pgAdmin 4 and the PostgreSQL database server is active. Navigate to the “Dashboard” tab and find the state of the server in the “Server activity” section:
https://docs.bitnami.com/oci/apps/discourse/administration/configure-pgadmin/
2019-12-05T16:57:58
CC-MAIN-2019-51
1575540481281.1
[]
docs.bitnami.com
Add-ons¶ Important Work In Progress Add-ons Category Listings¶ - About: - This section lists the add-ons categories in the same order they appear in Blender 2.81. - Each sub section contains the documentation files for the related add-ons. - Note that only add-ons released in Blender are included in this section. - Documentation might be outdated and on some pages images, videos, and links aren't added yet.
https://docs.blender.org/manual/ko/dev/addons/index.html
2019-12-05T17:43:30
CC-MAIN-2019-51
1575540481281.1
[]
docs.blender.org
Whenever you have a new table/collection in your database, you will have to create file to declare it. Here is a template example for a companies table: /forest/companies.jsmodule.exports = (sequelize, DataTypes) => {const { Sequelize } = sequelize;const Company = sequelize.define('companies', {name: {type: DataTypes.STRING,},createdAt: {type: DataTypes.DATE,},...}, {tableName: 'companies',underscored: true,schema: process.env.DATABASE_SCHEMA,});Company.associate = (models) => {};return Company;}; Fields within that model should match your table's fields as shown in next section. New relationships may be added there: Company.associate = (models) => {}; You can learn more about relationships on this dedicated page. /forest/companies.jsconst mongoose = require('mongoose');const schema = mongoose.Schema({'name': String,'createdAt': Date,...}, {timestamps: false,});module.exports = mongoose.model('companies', schema, 'companies'); Fields within that model should match your collection's fields as shown in next section. New relationships are to be added as properties: 'orders': [{ type: mongoose.Schema.Types.ObjectId, ref: 'orders' }],'customer_id': { type: mongoose.Schema.Types.ObjectId, ref: 'customers' }, You can learn more about relationships on this dedicated page. Any new field must be added manually within the corresponding model of your /forest folder. Fields are declared as follows: createdAt: {type: DataTypes.DATE,}, An exhaustive list of DataTypes can be found in Sequelize documentation. You can see how that snippet fits into your code in the model example above. Fields are declared as follows: 'createdAt': Date, An exhaustive list of SchemaTypes can be found in Mongoose documentation. You can see how that snippet fits into your code in the model example above. Validation: /models/customers.jsmodule.exports = (sequelize, DataTypes) => : /models/customers.js..: /models/customers.js...const schema = mongoose.Schema({'createdAt': Date,'email': {'type': String,'match': [/^\w+([\.-]?\w+)*@\w+([\.-]?\w+)*(\.\w{2,3})+$/, 'Invalid email']},'firstname': String,...} A better yet solution would be to rely on an external library called validator.js which provides many build-in validators: /models/customers.jsimport {: /models/customers.jsmodule.exports = (sequelize, DataTypes) => {const Customer = sequelize.define('customers', {...'firstname': {'type': DataTypes.STRING,'defaultValue': 'Marc'},...},...return Customer;}; /models/customers.js..: /models/orders.jsmodule: /models/customers.jsschema.
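Putting the validation and default-value pieces together, a Sequelize model declared in your /forest folder could look like the following sketch (the customers table and its fields are illustrative):

```javascript
// /forest/customers.js
module.exports = (sequelize, DataTypes) => {
  const Customer = sequelize.define('customers', {
    email: {
      type: DataTypes.STRING,
      validate: {
        isEmail: true, // built-in Sequelize validator
      },
    },
    firstname: {
      type: DataTypes.STRING,
      defaultValue: 'Marc',
    },
    createdAt: {
      type: DataTypes.DATE,
    },
  }, {
    tableName: 'customers',
    underscored: true,
    schema: process.env.DATABASE_SCHEMA,
  });

  Customer.associate = (models) => {};

  return Customer;
};
```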
https://docs.forestadmin.com/documentation/reference-guide/models/enrich-your-models
2019-12-05T17:17:57
CC-MAIN-2019-51
1575540481281.1
[]
docs.forestadmin.com
Find in Files and Replace in Files are the two powerful Hex Editor Neo commands that allow you to perform batch find and replace operations. They can be instructed to operate on all files in a folder or the whole folder tree. The Find in Files Window is used to enter the find and/or replace patterns and specify the list of folders to search in. You can also narrow the search by entering the mask (for example, “*.txt”) to search only in files that match the mask. Several masks may be specified (separate them with a semicolon). Below is an example that can be used to search within C/C++ source files: *.c;*.cpp;*.cxx;*.cc;*.h;*.hpp;*.hxx;*.idl All matched documents are displayed in the Find in Files Tool Window. In several operation modes, matched files are immediately opened in the editor, in others, you may open them by double-clicking on the file's item in a result list. The implementation of Find in Files and Replace in Files commands scales well, that is, it works faster on multi-core or multi-processor computers. Several dedicated threads of execution are launched on such computers and perform searching and replacing in parallel. Note that the real performance boost is achieved only if you have a fast disk system or access files over a fast network connection. Replace in Files function, operating in Replace all occurrences and save mode may potentially be harmful to your data. Hex Editor Neo automatically detects if you are trying to execute this operation and have Always create backups Option turned off. It then warns you about the possible data loses and offers several actions to continue: If you have Always create backup Option turned on, then each modified file is first backed up before modification. Both Find in Files and Replace in Files functions fully support regular expressions.
https://docs.hhdsoftware.com/hex/definitive-guide/find-and-replace/find-in-files-&-replace-in-files/overview.html
2019-12-05T17:05:14
CC-MAIN-2019-51
1575540481281.1
[]
docs.hhdsoftware.com
first download the driver for the hard disk. Obtain the driver from the manufacturer, and then save it to removable media, such as a USB flash drive. Insert the removable media in. Important After you finish this step, you cannot change the partition on which you install the operating system.... Note You cannot change the Destination Server name or the internal domain name after you finish this step....
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-essentials-sbs/gg563800(v=ws.11)?redirectedfrom=MSDN
2019-12-05T18:10:14
CC-MAIN-2019-51
1575540481281.1
[]
docs.microsoft.com
Traditionally, all ESI includes in Varnish were fetched in sequential order, one after the other as they are required for delivery. The improved ESI delivery implementation will seek out all the include fragments and issue backend fetches for all of them concurrently, which in turn significantly reduces the load times for ESI content. Parallel ESI is available in Varnish Cache Plus 4.1.4r5 and later. Parallel ESI is built into supported Varnish Cache Plus versions, and does not require any additional installation steps. The parallel fetch behavior replaces the previous sequential fetch implementation. No parameter settings are required in order to enable it.
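For context, ESI fragments are declared in the delivered markup with tags such as <esi:include src="/fragment/header"/>, and ESI processing is switched on for the container object in VCL. A minimal sketch (the URL match is an assumption for illustration only):

```
sub vcl_backend_response {
    # Parse and process ESI tags in the container pages.
    if (bereq.url == "/" || bereq.url ~ "\.html$") {
        set beresp.do_esi = true;
    }
}
```

With parallel ESI enabled, both includes in such a page are fetched concurrently rather than one after the other.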
https://docs.varnish-software.com/varnish-cache-plus/features/pesi/
2019-12-05T18:37:34
CC-MAIN-2019-51
1575540481281.1
[]
docs.varnish-software.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Write-ASScalingPolicy-AutoScalingGroupName <String>-PolicyName <String>-AdjustmentType <String>-Cooldown <Int32>-CustomizedMetricSpecification_Dimension <MetricDimension[]>-TargetTrackingConfiguration_DisableScaleIn <Boolean>-EstimatedInstanceWarmup <Int32>-MetricAggregationType <String>-CustomizedMetricSpecification_MetricName <String>-MinAdjustmentMagnitude <Int32>-MinAdjustmentStep <Int32>-CustomizedMetricSpecification_Namespace <String>-PolicyType <String>-PredefinedMetricSpecification_PredefinedMetricType <MetricType>-PredefinedMetricSpecification_ResourceLabel <String>-ScalingAdjustment <Int32>-CustomizedMetricSpecification_Statistic <MetricStatistic>-StepAdjustment <StepAdjustment[]>-TargetTrackingConfiguration_TargetValue <Double>-CustomizedMetricSpecification_Unit <String>-Select <String>-PassThru <SwitchParameter>-Force <SwitchParameter> ScalingAdjustmentparameter is an absolute number or a percentage of the current capacity. The valid values are ChangeInCapacity, ExactCapacity, and PercentChangeInCapacity.Valid only if the policy type is StepScalingor SimpleScaling. For more information, see Scaling Adjustment Types in the Amazon EC2 Auto Scaling User Guide. SimpleScaling. For more information, see Scaling Cooldowns in the Amazon EC2 Auto Scaling User Guide. StepScalingor TargetTrackingScaling. Minimum, Maximum, and Average. If the aggregation type is null, the value is treated as Average.Valid only if the policy type is StepScaling. AdjustmentTypeis PercentChangeInCapacity, the scaling policy changes the DesiredCapacityof the Auto Scaling group by at least this many instances. Otherwise, the error is ValidationError.This property replaces the MinAdjustmentStepproperty. For example, suppose that you create a step scaling policy to scale out an Auto Scaling group by 25 percent and you specify a MinAdjustmentMagnitudeof 2. If the group has 4 instances and the scaling policy is performed, 25 percent of 4 is 1. However, because you specified a MinAdjustmentMagnitudeof 2, Amazon EC2 Auto Scaling scales out the group by 2 instances.Valid only if the policy type is SimpleScalingor StepScaling. SimpleScaling, StepScaling, and TargetTrackingScaling. If the policy type is null, the value is treated as SimpleScaling.. ALBRequestCountPerTargetand there is a target group attached to the Auto Scaling group.The format is app/load-balancer-name/load-balancer-id/targetgroup/target-group-name/target-group-id, where app/load-balancer-name/load-balancer-idis the final portion of the load balancer ARN, and targetgroup/target-group-name/target-group-idis the final portion of the target group ARN. AdjustmentTypeparameter (either an absolute number or a percentage). A positive value adds to the current capacity and a negative value subtracts from the current capacity. For exact capacity, you must specify a positive value.Conditional: If you specify SimpleScalingfor the policy type, you must specify this parameter. (Not used with any other policy type.) StepScalingfor the policy type, you must specify this parameter. 
(Not used with any other policy type.)

Write-ASScalingPolicy -AutoScalingGroupName my-asg -AdjustmentType "ChangeInCapacity" -PolicyName "myScaleInPolicy" -ScalingAdjustment -1

arn:aws:autoscaling:us-west-2:123456789012:scalingPolicy:aa3836ab-5462-42c7-adab-e1d769fc24ef:autoScalingGroupName/my-asg:policyName/myScaleInPolicy

This example adds the specified policy to the specified Auto Scaling group. The specified adjustment type determines how to interpret the ScalingAdjustment parameter. With 'ChangeInCapacity', a positive value increases the capacity by the specified number of instances and a negative value decreases the capacity by the specified number of instances. AWS Tools for PowerShell: 2.x.y.z
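For comparison, the same cmdlet can create a target tracking policy; a sketch using the parameters listed above (the group name, target value, and warmup period are placeholders):

```powershell
# Keep the group's average CPU utilization around 50 percent.
Write-ASScalingPolicy -AutoScalingGroupName my-asg `
    -PolicyName "cpu50-target-tracking" `
    -PolicyType "TargetTrackingScaling" `
    -PredefinedMetricSpecification_PredefinedMetricType ASGAverageCPUUtilization `
    -TargetTrackingConfiguration_TargetValue 50.0 `
    -EstimatedInstanceWarmup 300
```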
https://docs.aws.amazon.com/powershell/latest/reference/items/Write-ASScalingPolicy.html
2019-12-05T17:02:34
CC-MAIN-2019-51
1575540481281.1
[]
docs.aws.amazon.com
# How do I add an assessment to a course? > [!Alert] Please be aware that not all functionality covered in this and linked articles may be available to you. An assessment is a multiple-choice exam or test to determine a user's comprehension of material. Assessments are added as course activities and can be mixed in among other activities. If you are interested in creating your own assessments for your courses, please submit a Support ticket at [****](). one of your assessments to your course: 1. On the **Create Course** or **Edit Course** page, click **Activities**. 1. Click **Add Assessment**. 1. In the **Choose Assessment** dialog, search for and select the assessment(s) you want to add to the course and click **OK**. Each assessment activity has the following fields you can set for more control over it: - **Duration** - sets the estimated amount of time the assessment should take. This may affect the overall duration of the class. - **Availability** - limits who can view and launch the video activity. It defaults to Everyone but you can limit it to Instructors only if the activity is intended only for instructors. - **Required for course completion** - requires a student to launch the video for the course assignment or class enrollment to be marked **Complete**. - **Available Instructor-Led** - makes the activity visible in classes and class enrollments. - **Available Self-Paced** - makes the activity visible in course assignments. - **Allow Retakes** - controls whether the student is able to launch the assessment again after completing it. - **Allow Review** - sets whether a student can review the results. ## Related Articles For more information regarding assessments, please see: - [Do I want to use a survey or an assessment?](/tms/tms-administrators/miscellaneous/use-survey-or-assessment.md) - [How do assessments work?](/tms/tms-administrators/miscellaneous/assessments.md)
https://docs.learnondemandsystems.com/tms/tms-administrators/courses-and-activities/other-activities/add-assessment.md
2019-12-05T18:23:30
CC-MAIN-2019-51
1575540481281.1
[]
docs.learnondemandsystems.com
Hybrid deployment prerequisites Summary: What your Exchange environment needs before you can set up a hybrid deployment. Before you create and configure a hybrid deployment using the Hybrid Configuration wizard, your existing on-premises Exchange organization needs to meet certain requirements. If you don't meet these requirements, you won't be able to complete the steps within the Hybrid Configuration wizard and you won't be able to configure a hybrid deployment between your on-premises Exchange organization and Exchange Online. Prerequisites for hybrid deployment The following prerequisites are required for configuring a hybrid deployment: On-premises Exchange organization: Hybrid. Exchange 2013: At least one server with the Mailbox and Client Access server roles installed. While it's possible to install the Mailbox and Client Access roles on separate servers, we strongly recommend that you install both roles on each server to provide additional reliability and improved performance. Exchange 2016 and newer: At least one server that has the Mailbox server role installed. Hybrid deployments also support Exchange servers running the Edge Transport server role. Edge Transport servers also need to be updated to the latest cumulative update or update rollup available for the version of Exchange you've installed. We strongly recommend that you deploy Edge Transport servers in a perimeter network. Mailbox and Client Access servers can't be deployed in a perimeter network. Office 365: Hybrid deployments are supported in all Office 365 plans that support Azure Active Directory synchronization. All Office 365 Business Premium, Business Essentials, Enterprise, Government, Academic and Midsize plans support hybrid deployments. Office 365 Business and Home plans don't support hybrid deployments. Connect tool to enable Active Directory synchronization with your on-premises organization. Learn more at Azure AD Connect User Sign-on options. Autodiscover DNS records: Configure the Autodiscover public DNS records for your existing SMTP domains to point to an on-premises Exchange 2010/2013 Client Access server or Exchange 2016/2019 Mailbox. Learn more at Hybrid management in Exchange.. Microsoft .NET Framework: 4.6.2 or later is required to install Hybrid Configuration Wizard.. Lync Server 2010, Lync Server 2013, or Skype for Business Server 2015 or later integrated with your on-premises telephony system or Skype for Business Online integrated with your on-premises telephony system or A traditional on-premises PBX or IP-PBX solution. Microsoft Microsoft Remote Connectivity Analyzer. Single sign-on: Single sign-on enables users to access both the on-premises and Exchange Online organizations with a single user name and password. It provides users with a familiar sign-on experience and allows administrators to easily control account policies for Exchange Online organization mailboxes by using on-premises Active Directory management tools. You have a couple of options when deploying single sign-on: password synchronization and Active Directory Federation Services. Both options are provided by Azure Active Directory Connect. Password synchronization enables almost any organization, no matter the size, to easily implement single sign-on. For this reason, and because the user experience in a hybrid deployment is significantly better with single sign-on enabled, we strongly recommend implement it. 
For very large organizations, such as those with multiple Active Directory forests that need to join the hybrid deployment, Active Directory Federation Services is required. Learn more at Single sign-on with hybrid deployments. Feedback
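For the Autodiscover prerequisite above, the public DNS zone typically carries a host or alias record that resolves to the on-premises Exchange server handling Autodiscover requests. A sketch in zone-file form (names and the documentation IP address are placeholders; your namespace design may differ):

```
; contoso.com public DNS zone (illustrative)
autodiscover.contoso.com.   3600  IN  CNAME  mail.contoso.com.
mail.contoso.com.           3600  IN  A      203.0.113.10
```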
https://docs.microsoft.com/en-gb/exchange/hybrid-deployment-prerequisites
2019-12-05T17:56:10
CC-MAIN-2019-51
1575540481281.1
[]
docs.microsoft.com
Skype for Business Online features If you are an admin, you can find detailed instructions for setting up Skype for Business Online features in Set up Skype for Business Online. Clients for Skype for Business Online Important the Skype for Business desktop and web clients, see Skype for Business Online client comparison tables. For a detailed comparison of the Skype for Business mobile clients, see the Mobile client comparison tables. To download the client for your mobile device, PC, or Mac, go to Download Skype for Business across all your devices. Skype for Business provides support for the conference room devices listed here. For additional information, work with your account team or call Office 365 support. To get a local number, you can choose your locale from the drop-down list. Instant messaging, presence, and contacts text is encrypted for enhanced security. Configure how their own Skype for Business contact card appears to other people. You can read more about instant messaging, presence, and contacts in Send an IM in Skype for Business. Skype-to-Skype audio, video, and media. See Set audio device options in Skype for Business to find out how to set audio device options in Skype for Business. Federation and public IM connectivity Skype for Business external connectivity (federation) lets Skype for Business users. Microsoft organization can see presence and communicate with users in the other organization. Federation in Office 365 is only supported between other Skype for Business environments, with appropriately configured Access Proxy or Edge servers. To learn more about Edge server configuration, see Components required for external user access in Lync Server 2013. Skype for Business Online meetings lets users connect through high quality video sessions. Both person-to-person and multiparty (three or more users) sessions are supported. Active speaker video is available only for multiparty sessions. With Skype for Business, users can easily schedule an online meeting with video or seamlessly escalate an IM session to a video call. To find out more about Skype for Business online meetings, see Start using Skype for Business for IM and online meetings. Important Multiparty Skype for Business audio and video capabilities might not be available in certain countries due to regulatory restrictions. For details, see About license restrictions. Security and archiving Microsoft Office 365 traffic (both signal and media traffic) is encrypted using the Transport Layer Security (TLS) protocol. Anyone who intercepts a communication sees only encrypted text. For example, if a user accesses Skype for Business Online IM, calls, and presentations while using a public Wi-Fi network, such as at an airport, the user's communications are encrypted to potential interception by network "sniffers." Skype for Business provides archiving of peer-to-peer instant messages, multiparty instant messages, and content upload activities in meetings. The archiving capability requires Exchange and is controlled by the. Exchange and SharePoint interoperability. Enable Outlook on the web to provide. Skype for Business Online administration and management Although Microsoft directly controls all Skype for Business 365 admin center Skype for Business admin center Windows PowerShell To see the latest Skype for Business Online Admin help topics and how-to articles, see Skype for Business Online in Office 365 - Admin Help. Audio Conferencing in Office 365). 
You only need to set up dial-in conferencing for users who plan to schedule or lead meetings. Unless the organizer has locked the meeting, anyone who has the dial-in number and conference ID can join the meeting. For details, which you can purchase Audio Conferencing, see Where can you get Audio Conferencing?. Calling Plans in Office 365 Skype for Business includes calling capabilities found on the public switched telephone devices. They also can control their calls through mute/unmute, hold/resume, call transfers, and call forwarding features, and if necessary, make emergency calls. For information about available Calling Plans, go to Calling Plans for Office 365. For more information and to set up a Calling Plan, see Which Calling Plan is right for you? Phone System in Office 365 The Phone System in Office 365 lets you use Skype for Business and either your organization's existing phone lines or the Phone System for inbound and outbound calls. With the Phone System in Office 365, your users can use Skype for Business to complete basic tasks such as placing, receiving, transferring, and muting or unmuting calls, from nearly deployment that takes advantage of the Phone System while keeping some functionality on your premises. Skype Meeting Broadcast Skype Meeting Broadcast lets Office 365 users. Note Currently, Skype Meeting Broadcast isn't available to educational or non-profit organizations. For more information, go to What is a Skype Meeting Broadcast?. The Skype Meeting Broadcast portal can be found at. Feedback
https://docs.microsoft.com/en-us/office365/servicedescriptions/skype-for-business-online-service-description/skype-for-business-online-features?redirectedfrom=MSDN
2019-12-05T16:50:34
CC-MAIN-2019-51
1575540481281.1
[]
docs.microsoft.com
ToolboxNew Products for IT Pros Greg Steen More Powerful Scripting PowerShell Analyzer There is a lot of buzz about Windows PowerShell™, and with good reason. If you haven't had a chance to play with this next generation scripting language, you really should check it out. And those of you who have upgraded to Microsoft® Exchange 2007 will quickly realize that the new Exchange Management Shell is built upon Windows PowerShell, allowing you to do everything the Exchange Management Console can but from the command line, which is great for automating repetitive tasks. Unfortunately, there can be a bit of a learning curve to get the syntax and semantics down. And here is where PowerShell Analyzer, from ShellTools, comes into play. The application is basically an IDE for writing and debugging Windows PowerShell scripts. But unlike with a traditional write-compile-test cycle, the application keeps the spirit of the admin at the keyboard, giving you real-time interactivity with a Windows PowerShell runspace. PowerShell Analyzer allows you to either enter commands line-by-line or build, edit, and run Windows PowerShell scripts in an editor in the lower part of the UI. Both options give you basic code completion, which is a great help for guiding you on the many parameters of the cmdlets. (In case you're new to Windows PowerShell, cmdlets are abstract, task-oriented, parameterized commands.) For even more help, PowerShell Analyzer provides a quick-access tab that shows you detailed descriptions, syntax, parameter explanations, and examples for each of the cmdlets. In addition, there are references for the Windows PowerShell Providers, the available built-in help files, and the cmdlet short aliases. When you run commands from Windows PowerShell, you are actually returned Microsoft .NET Framework objects rather than just the apparent text results on the screen. PowerShell Analyzer allows you to capture the properties of the returned results and pass them to a number of "visualizers." These visualizers allow you to interact with the objects—such as offering a tree-like exploration of XML data or a sortable and groupable hierarchal table view of the data—as well as generate a number of different charts to visually represent the returned dataset. Of course, not all these options are available for every command or script you run, but, when applicable, they do give you a nice set of representations of your data that you can then save or export for reuse. PowerShell Analyzer also allows you to work with multiple runspaces simultaneously, letting you easily switch between tasks. And the editor window gives you code-outlining and customizable code-highlighting styles to help make your scripts more readable. All in all, if you are going to be working with Windows PowerShell, a tool like PowerShell Analyzer will make creating, editing, and debugging your scripts, and visualizing your data much easier. Price: $129 (direct) for a single license. Figure powershell (Click the image for a larger view) Defrag Disks Auslogics Disk Defrag The more you use your hard drive, the more fragmented the files on that disk will become. Creating and deleting files, installing and removing applications, and even generating temporary disk caches and files all contribute to fragmentation. And the rate of fragmentation can increase dramatically when the free space on the drive is limited, as new files will be spread across the available "holes" in a mostly filled space. All this fragmentation will lead to slower system performance. 
Everyone knows, of course, that current versions of Windows® include built-in defragmentation tools. But if you are looking for a dedicated option or if you have recently upgraded to Windows Vista® and miss the control and UI presentation offered by earlier versions of the defragmentation tool, there's a very interesting alternative you might want to try: Disk Defrag, from Auslogics. Disk Defrag works with Windows XP, Windows 2000, Windows Server® 2003, and Windows Vista. And, as you would likely expect, it can defragment FAT16, FAT32, and NTFS volumes. The application is lightweight and installation takes just a few quick clicks. Running the application is just as easy. All you have to do is simply pick the drive you wish to defragment, click Next, and watch it go. Once it has completed running, Disk Defrag gives you a quick overview of the results, showing the total number of files, directories, fragmented files, defragmented files, and skipped files, as well as the percentage decrease in fragmentation. If you like, you can also click Disk Defrag's Display Report button to have the application generate an HTML version of the defragmentation report. This report gives you some additional useful information regarding your disk, such as a list of skipped files with their location, the number of fragments, the time elapsed for the run, and the cluster information for your disk. Price: Free. Disk Defrag is easy to install and use (Click the image for a larger view) Transfer Files FileZilla filezilla.sourceforge.net There's nothing like a good SFTP client to make your job easier—this is especially true when the client is solidly written and free, which is the case with the FileZilla client. FileZilla is an open-source project, hosted on SourceForge.net, that was started by Tim Kosse and a couple of his cohorts back in 2001 for a computer science class project. FileZilla runs on all flavors of Windows from Windows NT®4.0 to Windows Vista. The project has grown substantially over the past six years, but Kosse is still the project leader for the app. In addition to standard FTP connections, the client supports connections via SFTP with Secure Shell version 2 (SSH2), FTP over TLS, and FTP over SSL with implicit and explicit encryption, so you'll be able to connect to a variety of servers. FileZilla can accommodate local firewalls by letting you limit the local ports used as well as set a specific IP binding for non-passive transfers for both IPv4 and IPv6. The client also supports a number of proxy configurations, including SOCKS4/5, HTTP1.1, and FTP proxies. In terms of server authentication for your FTP connections, FileZilla supports anonymous, normal user name/password, and account-based configurations. Also, you can enable Kerberos Generic Security Services (GSS) support (which requires you to have Kerberos for Windows installed on your system) and create a list of servers that are GSS-enabled. Of course, you'll need a valid Kerberos v5 ticket before GSS will work for you. You can also enable support for your Ident server for connecting to servers that require it as a means of identifying your client. FileZilla easily manages a large collection of FTP sites and allows you to organize your connections into a tree-like structure. For each connection that you set up, you can also set default local and remote directories, specify the port to which you should connect, choose to bypass your configured proxy, explicitly choose active or passive transfer modes, and even set a server time zone offset. 
In addition to the saved connections, FileZilla also has a Quickconnect feature for those one time transfers so you can just type the address, user name, password, and port, and then click a button to connect. Once you have all your settings and connections configured, you can export them to XML for backup or reuse on other systems. The UI of the application, which is similar to the Windows Explorer interface, allows you to easily drag and drop files between the local and remote systems. For file transfers, you can specify a default file overwrite setting in addition to the option to limit your download and upload speeds based on a predefined set of rules or by a constant kBs speed limit. By default, FileZilla is configured to use MODE Z compressed file transfers on servers that support it, but you can adjust the compression level or turn the feature off. The client also has a transfer queue feature that lets you queue up a set of files to transfer and then import and export that transfer list for reuse. This is great for repetitive system administration tasks. Price: Free. Edit Executables PE Explorer For those of you who really like to get into the nitty-gritty of your applications, you might want to check out PE Explorer from HeavenTools. This application lets you delve into the internals of your Portable Executable (PE) files, which are used for executable binaries for Windows applications. Simply put, PE Explorer provides a UI for exploring and editing the contents of these executables. PE Explorer can open a variety of file types ranging from the common, such as EXEs and DLLs, to the less familiar types, such as DPL and CPL files. When you first choose to explore an executable, PE Explorer shows you information about the headers of the file, such as the number of code sections, the size of the image, the application subsystem, and the stack size information. Another view offers you an overview of the section headers in the executable—double-clicking a section header brings up a window gives you the ability to explore the contents of each section. Definitely not for the faint of heart, the application lets you extract, recalculate, and delete sections of your loaded PE.. PE Explorer also contains a built-in quick disassemble, which lets you look at the assembly code of your executable. And it supports the common Intel x86 instruction sets along with extensions such as MMX, 3D Now!, and SSE/2/3. The disassembler also extracts ASCII text strings from the data portion of your PE. Another feature is the Dependency Scanner, which scans all the modules that your PE file links to statically and those that are delay-loaded, and it then displays them in a hierarchal tree structure, showing where the PE reaches to. One of the more fun parts of PE Explorer, in my opinion, is the Resource Editor. This feature allows you to view, extract, replace, edit, and delete the resources of a specified executable. The UI shows you a directory-like structure of the embedded resources, such as images, sounds, dialogs, menus, XML data, HTML data, and toolbars. Not all of those resources support direct editing, but for most you can replace and edit them. This would allow you, for instance, to add your own custom branding to an app, change dialog messages, customize toolbar actions, and so on without having access to the actual source code. PE Explorer is a very handy tool for those who want to dive into executables. 
But if Resource Editor is the only feature you really want, take a look at the company's much cheaper Resource Tuner application. Price: $129 (direct) for a personal license; $199 (direct) for a business license. Figure peexplorer (Click the image for a larger view) Book Review Endpoint Security Adding a rock-solid traditional network perimeter to your internal infrastructure is an important step in protecting your assets. But this isn't a silver bullet. With the increased mobility and connectivity of today's hardware and software, there is no real perimeter on your corporate infrastructure—instead, your infrastructure extends to those remote endpoints. In fact, while you guard yourself against attack from outside your environment, it is more likely that an attack will be launched accidentally by a user who brings his laptop in from home and hops on your uncontrolled corporate network. If you're looking to get up to speed on network access control and secure the endpoints of your infrastructure, you might want to check out Mark Kadrich's book, Endpoint Security (Addison-Wesley Professional, 2007). The book starts by defining what exactly endpoints are and then gives you a taste of the many different variations that exist—Windows clients, Linux clients, embedded devices, mobile phones, and PDAs. Kadrich sees the network as a "control problem." Rejecting the typical representation of a complex network with some organic analogy, he instead describes it as something that can be delineated and visualized in a useful, manageable manner. The book delves into the basics of Network Access Control (NAC) and establishing a base level of trust. This discussion includes how to put together a secure baseline for your endpoint systems by securing and controlling your source software and build environment. An overview of tools that can help keep endpoints secure and reliable covers the familiar, such as using a firewall and antivirus software, and points to the need for proactive patch management, and, finally, more advanced tools, such as intrusion detection and prevention systems, host integrity checkers, and encryption. The second half of the book explores the details of securing the operating systems of various client types, including Microsoft Windows, Apple OS X, and Linux. Each of these chapters gives you an overview of how to perform an Initial Health Check for your endpoint system. And there are chapters dedicated to PDAs, Smartphones, and embedded devices that discuss how these devices have become a serious threat to your infrastructure and how to secure various types of communication on them. Finally, Kadrich looks at four case studies and identifies in each how the endpoints were compromised and how failure could have been mitigated. Overall, Endpoint Security is loaded with useful information for the security professional, offering a vendor-agnostic view of securing your network from the vantage of an endpoint perimeter. Price: $54.99 list. Greg Steen is a technology professional, entrepreneur, and enthusiast on the hunt for new tools and references to help make operations and development easier for IT professionals. Have a suggestion? Let him know at [email protected] © 2008 Microsoft Corporation and CMP Media, LLC. All rights reserved; reproduction in part or in whole without permission is prohibited.
https://docs.microsoft.com/en-us/previous-versions/technet-magazine/cc137786(v=msdn.10)?redirectedfrom=MSDN
2019-12-05T16:50:59
CC-MAIN-2019-51
1575540481281.1
[array(['images/cc137786.powershell.gif', 'Figure powershell'], dtype=object) array(['images/cc137786.auslogics.gif', 'Disk Defrag is easy to install and use'], dtype=object) array(['images/cc137786.peexplorer.gif', 'Figure peexplorer'], dtype=object) array(['images/cc137786.endpoint.gif', None], dtype=object)]
docs.microsoft.com
Undo Undo and redo operations are supported in two ways. Most of the support is for design mode (editing networks in TouchDesigner), with limited support in perform mode. In design mode, where you are editing networks, user interactions such as creating or deleting nodes, changing node parameters, etc., can be reverted via the Edit->Undo menu or Ctrl-Z. Ctrl-Y or Edit->Redo will redo any undos. In perform mode, where you are interacting with control panels authored in design mode, you may want to turn off undo and redo for performance reasons, although interacting with panels does not trigger undo creation. There is only one undo/redo queue because of the interdependencies in TouchDesigner. You can undo a node creation even if you have opened up the Animation Editor, or undo a keyframe change while you are in the network editor. Undo in Scripts and Textports By default, undo creation in scripts is turned off. If you want to enable undo in a script, you will have to use the undo Command to change the undo creation state. Undo states are not inherited in scripts, so each script will have to set its own state to on. Textports do inherit the global undo state, which is on by default, but the states do not extend to scripts executed in the Textport. For example, running the opparm Command in the textport will trigger a parameter change undo, but running a script in a DAT that contains the same command will not. Undo creation in the Textport can be changed with the undo Command as well. Supported Operations - Node operations: create, delete, placement, flag changes, wiring and un-wiring, renaming. - Node parameter changes via user interaction or scripting (including multi-node parameter changes via selecting nodes). - Copy and paste, including duplicating nodes and text editing. - DAT text and table editing, via DAT editors or external editors. - Keyframe animation editing. Unsupported Operations - Undo in geometry editing is pending. - Changing node selection is not undoable. Local Undo Parameter fields and Field COMPs have built-in one-off undos. You can right click in a parameter field to toggle between the current and last value changed. In a Field COMP, ctrl-z will do the same. These undos are not tied to the main undo/redo queue. Perform Mode Undo in perform mode extends only to the local undo in a Field COMP, and scripts that have their undo turned on. See Also: Perform Mode - the mode where the network editing window is not open, your performance interface window(s) are open, and you are running your application as fast as possible. See Designer Mode.
https://docs.derivative.ca/Undo
2019-02-16T06:26:14
CC-MAIN-2019-09
1550247479885.8
[]
docs.derivative.ca
Create a workbench process widget Create a workbench widget to monitor a process using multiple indicators. Before you begin Familiarize yourself with the structure of the workbench widget Decide which main and supporting indicators to include Role required: pa_admin, pa_power_user, or admin Procedure Navigate to Performance Analytics > Widgets and click New. A new widget record appears. Name the widget. In the Type field select Workbench. Right-click the form header and select Save. The Main Widget Indicators related list appears. Add a main indicator to the workbench widget. Main indicators appear on the top of the widget. The maximum number of indicators you can add is specified in com.snc.pa.widget.max_widget_indicators. The default maximum number of widget indicators is eight. Click New in the Main Widget Indicators related list. Select an Indicator. Set the Order to define where the indicator appears (from left to right). Fill in other fields, as appropriate. Table 1. Additional indicator configuration options: Breakdown and Element - A breakdown element filters the data that appears in the indicator. If you select a breakdown you must select an element. For example, if your indicator is Number of open Incidents and you select Breakdown for State and Active for Element, only scores for incidents in the active state are included in the widget. 2nd Breakdown and Element - Adds a second breakdown element that filters the data that appears in the indicator. If you select a 2nd breakdown you must select an element. For example, imagine your indicator is Number of open incidents and the first breakdown filters for active state. You then select Category for 2nd Breakdown and Software for Element. The indicator will now display only scores for open incidents that are active and in the software category. Time series - Adds the specified time period and aggregation to the widget's trend visualization. Follow element - Specifies that a breakdown element applied to the dashboard where the widget is added also applies to the indicator. If you specify a 2nd Breakdown, Follow element is ignored. Followed breakdown - Specifies that only this breakdown applies to the indicator as a Follow element. All other breakdowns applied to a dashboard where the widget has been added will be ignored. If you do not specify a Followed breakdown, all breakdowns applied to the dashboard will apply to the indicator. Label - Specifies the name of the indicator on the widget. If you do not specify a Label, the name of the indicator is used. Right-click the form header and select Save. The Supporting Widget Indicators related list appears. (Optional) Add supporting indicators. When you click a main indicator, its supporting indicators appear in the middle of the widget. You can add an unlimited number of supporting indicators. Click New in the Supporting Widget Indicators related list. Select an Indicator. Set the Order to define where the indicator appears (from left to right). Fill in other fields, as appropriate. You can configure supporting indicators the same way as main indicators. See step 5 for configuration options. Click Submit to return to the Main Indicator record. Repeat step 6 until you have added all supporting indicators.
Click Update to return to the widget record. Repeat steps 5 - 7 until you have added all indicators. (Optional) Select one of the main indicators as the Default indicator. This default indicator appears automatically when a user views the widget. If you do not specify a default indicator, the widget displays the main indicator with the lowest Order value first. Click Update to save the widget. What to do next: Review the widget to ensure that the new indicators are correct. If you have not already, add the widget to a dashboard to view it.
https://docs.servicenow.com/bundle/istanbul-performance-analytics-and-reporting/page/use/performance-analytics/task/t_CreateWorkbenchProcessWidget.html
2019-02-16T05:54:59
CC-MAIN-2019-09
1550247479885.8
[]
docs.servicenow.com
attach-session-statistics
Section: settings. Default Value: none. Valid Values: all, fired, none. Changes Take Effect: Immediately.
Enables calculating various statistics and attaching them to the interaction's user data at the end of the chat session. Possible values are: - none: Do not attach anything. - all: Attach all possible statistics (both encountered and non-encountered). - fired: Attach only statistics that were encountered during the course of the chat session.
enable-chat
Section: agg-feature.
Enables RAA aggregation of chat data into the AGT_CHAT_STATS table. To have RAA exclude chat data, remove this option from this section.
enable-bgs
Section: agg-feature.
Enables RAA aggregation of Bot Gateway Server (BGS) data into the AGT_BGS_SESSION table. To have RAA exclude BGS data, remove this option from this section.
g:topic:<topic-name>
Section: kafka-<cluster-name>. See the Genesys Info Mart configuration steps below for how this option is used.
bootstrap.servers
Section: kafka-<cluster-name>. See the Genesys Info Mart configuration steps below for how this option is used.
kafka-topic-name
Section: channel-chatbot. Default Value: chat-bots-reporting. Valid Values: Any string. Changes Take Effect: Immediately.
Specifies the name of the Kafka topic used to store the data.
kafka-zookeeper-nodes
Section: channel-chatbot. Default Value: (empty). Valid Values: Any string. Changes Take Effect: Immediately.
Specifies a set of Zookeeper nodes in the form host:port, separated by semicolons or commas. This option is used only if kafka-cluster-brokers is not specified (in other words, empty).
kafka-cluster-brokers
Section: channel-chatbot. Default Value: (empty). Valid Values: Any string. Changes Take Effect: Immediately.
Specifies a set of Kafka brokers in the form host:port, separated by semicolons or commas.
Integrating BGS with Genesys Historical Reporting
For Bot Gateway Server (BGS), historical reporting on chat bot activity supplements the chat session reporting available in eServices premise deployments that include the Genesys Reporting & Analytics offering. Chat bot reporting is supported only for chat bots that are run by BGS. This page describes the component and configuration requirements to enable historical reporting on BGS-managed chat bot activity in your deployment.
Overview: BGS reporting process
- After a BGS session is finished or when an attempt to launch a bot session is rejected, BGS produces a reporting event, which it stores in a Kafka database. For more information about the reporting event attributes, see BGS reporting event attributes, below.
- The BGS reporting event is separate from the Interaction Server reporting event that is generated at the end of the Chat Server session. The chat session reporting event includes some bot-related statistics that are processed as part of chat session reporting (see Integrating Chat Server with Genesys Historical Reporting in the Chat Server Administrator's Guide).
- On a regular schedule, Genesys Info Mart extracts the BGS data from the Kafka database and transforms it into the BGS_SESSION_FACT table, which in turn feeds a Bot Dashboard. For more information, see Chat reports in the GCXI User's Guide.
Enabling historical reporting on BGS activity
Prerequisites
The following table summarizes the minimum release requirements for the Genesys and third-party components that enable chat bot historical reporting.
Setting up historical reporting
- Ensure that your deployment has been configured as required for Genesys Info Mart to support chat session reporting. For more information, see Integrating Chat Server with Genesys Historical Reporting in the Chat Server Administrator's Guide. If you have not already done so, configure Interaction Concentrator (ICON) to store the user data KVPs listed below (see Chat Server reporting data).
- Configure BGS to report bot metrics. By default, BGS captures the minimum attributes required in the reporting event to enable historical reporting out-of-box. However, the default ESP methods do not populate all the parameters that are useful for reports. For meaningful reporting, Genesys strongly recommends that you populate the chatBot_category and chatBot_function attributes, in particular. There are two ways you can populate the category and function attributes: - Through the API (in BotCreationAttributes) during createChatBot - By specifying ChatBotCategory and ChatBotFunction in the parameters of the ESP StartBot method
- Enable the storage of BGS reporting metrics in Kafka. - Deploy Kafka version 2.0. - Configure BGS to output reporting data into Kafka by configuring the following options in the channel-chatbot configuration section: - kafka-cluster-brokers or kafka-zookeeper-nodes - kafka-topic-name. The default value is chat-bots-reporting.
- Configure Genesys Info Mart to extract the BGS reporting data from Kafka. - On the Options tab of the Genesys Info Mart application object, create a new configuration section, kafka-<cluster-name>. The <cluster-name> can be any string you use to identify the cluster, for example, kafka-1. - In the new section, add the following options: - bootstrap.servers: The value must match the value of the BGS kafka-cluster-brokers or kafka-zookeeper-nodes option (see step 3). - g:topic:<topic-name>: The <topic-name> must match the value of the BGS kafka-topic-name option, for example, g:topic:chat-bots-reporting. The value of the option must be BGS_K.
- (Optional, but recommended) Set an alarm on log message 55-20049, which identifies that a transformation job error has occurred because of a Kafka exception, such as a complete loss of connection to the cluster.
- Enable aggregation of bot-related data. (Required for GCXI reporting or other applications that use RAA aggregation.) In the [agg-feature] section on the Genesys Info Mart application object, specify the enable-bgs option. If you haven't already done so, also specify the enable-chat option.
There are two mechanisms by which Genesys Info Mart receives bot-related reporting data:
BGS application data
After a BGS session is terminated or rejected, BGS generates a reporting event for that session and stores the data in the Kafka database. There might be multiple BGS sessions within a single chat session.
JSON Example
The following is an example of a BGS reporting event serialized as a JSON file for Kafka storage. See BGS reporting event attributes for the meaning of the attributes.
{
  "cbs_endTime": "2018-05-03T08:27:42Z",
  "schema_info": { "version": 1.0 },
  "chatBot_info": { "chatBot_name": "Translator 3000", "chatBot_category": "Service", "chatBot_function": "Agent advisory" },
  "session_info": { "attr_itx_id": "itx0000081b", "attr_itx_submitted_at": "2018-05-03T08:27:33Z", "cbs_id": "0002EaCD00K80030", "attr_itx_tenant_id": 101, "attr_itx_media_type": "email", "cbs_startTime": "2018-05-03T08:27:34Z", "cbs_rejectedToStart": 0 },
  "session_stats": { "cbs_endedAbnormally": 0, "cbs_duration": 8500, "cbs_messagesSent": 15, "cbs_messagesReceived": 10, "cbs_endedBy": "CPB", "cbs_endReason": "ALL_PARTICIPANTS_LEFT", "cbs_endResult": "Success" },
  "attr_extension": { "customAttributeName": "customAttributeValue" }
}
BGS reporting event attributes
The following table describes the attributes included in the BGS reporting event.
Chat Server reporting data
When the chat session is finished, Chat Server attaches reporting statistics to the user data of the interaction in Interaction Server. The following table describes the bot-related reporting statistics that Chat Server includes in the user data if any BGS-managed chat bots participated in the chat session. The "Info Mart Database Target" column indicates the Info Mart database table and column to which the user data KVP is mapped. (For information about the rest of the chat session KVPs that Chat Server sends, see Chat Server reporting data in the Chat Server Administration Guide.)
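For quick verification outside of Genesys Info Mart, you can peek at the reporting events directly on the Kafka topic. The following is a minimal, illustrative Python sketch and not part of the Genesys product; it assumes the kafka-python package is installed, that the topic name matches your kafka-topic-name setting (chat-bots-reporting by default), and that the broker address shown here is replaced with your kafka-cluster-brokers value.

```python
from kafka import KafkaConsumer  # pip install kafka-python (assumed)
import json

consumer = KafkaConsumer(
    "chat-bots-reporting",                   # must match the BGS kafka-topic-name option
    bootstrap_servers="kafka-host:9092",     # hypothetical broker; use your kafka-cluster-brokers value
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for record in consumer:
    event = record.value
    bot = event.get("chatBot_info", {})
    stats = event.get("session_stats", {})
    # Attribute names taken from the JSON example above.
    print(bot.get("chatBot_name"), stats.get("cbs_endResult"), stats.get("cbs_endReason"))
```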
https://docs.genesys.com/Documentation/BGS/Current/BGSQS/BGSReporting
2019-02-16T05:07:54
CC-MAIN-2019-09
1550247479885.8
[]
docs.genesys.com
There are several ways you can expose your service after you deploy it on OpenShift. The following sections describe the various methods and when to use them. One way is to create a route for the service, for example: oc expose svc/frontend --hostname= (see the Minishift Quickstart section). In case the service you want to expose is not HTTP based, you can create a NodePort service. In this case, each OpenShift node will proxy that port into your service. To access this port on your Minishift VM:
https://docs.okd.io/latest/minishift/openshift/exposing-services.html
2019-02-16T05:19:08
CC-MAIN-2019-09
1550247479885.8
[]
docs.okd.io
Set up a gating approval via an approval rule You can set up a gating approval via an approval rule. Before you begin: Role required: admin Procedure From the left navigation pane, select System Policy > Approval Rules. Click New. Table 1. Approval rules: Name - Name of this rule. Table - Task table to which this rule applies. For most service catalog approvals, select Request. Note: The list shows only tables and database views that are in the same scope as the approval rule. Active - Indicator of whether the rule is active (defaults to true). Run Rule Before - Indicator of whether the rule runs before or after the request record is saved. For most approvals, select this check box. User - User who must approve this request (can be empty). Group - Group that must approve this request (can be empty). Set State - Value of the approval field on the task after this rule runs. Usually, select Requested. Condition - Condition under which the rule applies. Script - An optional server script to programmatically specify who the approver should be. For example, for the one-line script current.requested_for.manager, ServiceNow checks the requested_for reference field on the current record. It then locates the manager field on the referenced record and assigns that person as the approver. For other examples, see the Script field on approval rules provided by ServiceNow. Notes and limitations: You can have as many rules as you want on a given table. If more than one rule applies, you get more than one approver. You cannot get duplicate approvers; for example, if two rules both want Fred Luddy to approve a particular request, the system only creates one approval entry for him. By default all requests start out in a Not yet requested approval state. Approval notifications do not go out until the approval state of the request is set to Requested. You can do that manually, or you can do it in script, but the easiest way to do it is to use the Set State field to automatically set the request to Requested.
https://docs.servicenow.com/bundle/istanbul-servicenow-platform/page/administer/service-administration/task/t_SetUpAGatingApprViaApprovalRule.html
2019-02-16T05:50:09
CC-MAIN-2019-09
1550247479885.8
[]
docs.servicenow.com
A few notes, as of September 2010, on the future of Java/JDK 7, and even 8. Mark Reinhold's post on a possible roadmap for Java 7/8. Joe Darcy on the Project Coin proposals. A link providing the best overview of what to expect in each version. The Project Coin proposals retained and presented at JavaOne 2010. Latest notes from Mark Reinhold, after the JCP meeting in Germany, post JavaOne. Some concrete examples of Project Coin usage in Java 7.
http://docs.codehaus.org/exportword?pageId=178159778
2014-03-07T08:59:58
CC-MAIN-2014-10
1393999639602
[]
docs.codehaus.org
Description / Features This plugin adds basic support of the C++ language into Sonar. Current feature list: - Basic size metrics: - Files (number of) - Lines - Lines of comments - Lines of code - Lines of commented-out code - Static code checking via virtually any static analyser tool - Dynamic checking for memory management problems via Valgrind - Cyclomatic (McCabe) complexity metrics including: - Project complexity - Average function/method complexity - Average file complexity - Complexity distributions - Code coverage metrics including: - Line coverage - Branch coverage - IT line coverage - Overall coverage. - Pc-Lint: static analyser from Gimpel. - GCC, gcov, gcovr, Bullseye and Python for coverage determination. Install Python and place the gcovr script somewhere on the PATH. Installation - Via Update Center - Or copy the jar-archive into the <sonar home>/extensions/plugins/ directory - Restart the Sonar web server Quick Configuration Guide The Cxx plugin uses the following properties during analysis. See here for the ways to pass them to the plugin. Note: Report paths are always relative to the project's path. Known limitations - Some analyzers (RATS, most notably) may have issues and crash occasionally. - Valgrind is only available on a subset of UNIX platforms. Changelog Roadmap The following items are in the queue (more or less...): - Use SSLR technology - Integrate compiler warnings - Implement the dependency analysis (package tangle index metric)
http://docs.codehaus.org/pages/viewpage.action?pageId=230398232
2014-03-07T08:59:56
CC-MAIN-2014-10
1393999639602
[array(['/s/en_GB-1988229788/4727/ffd10e10ff7bc0a1d7e29b4d2225707dd7f03d0b.15/_/images/icons/wait.gif', None], dtype=object) ]
docs.codehaus.org
Newsletter Config: Paid Subscriptions - General 3. Admin Notification On Order: This is set to "Yes" by default and a notification will be sent to the administration email address when an order is placed. 4. Payment method: At present there are two payment options, PayPal and 2Checkout. PayPal is selected by default.
http://docs.tribulant.com/wordpress-mailing-list-plugin/1612
2014-03-07T08:59:55
CC-MAIN-2014-10
1393999639602
[array(['http://docs.tribulant.com/wp-content/uploads/2012-05-30_0908.png', 'Paid Subscriptions'], dtype=object) ]
docs.tribulant.com
Groovy Mixin Proposal Take the following classes. The Wombat class as defined simply defines a few things. By defining those things it makes it possible for it to mix in other things – notably Mobile and FighterType. Now, how to mix these things together... Static Mixing At compile time mixin information can be provided, in which case all instances of the class receiving the mixin have the mixed traits. This says that all instances of Wombat are Mobile. An alternative for this is to put it on the class declaration line: Which borrows the keyword/syntax from Scala. I lean towards the first because Java allows you to cast to things in the type declaration, and you specifically cannot cast to Mobile in this case. The mixin(Wombat) would be treated as an initializer block. All methods defined in Mobile would be added to Wombat. The type information is not. If a method is already defined in Wombat (such as getTurningSpeed()) the mixin method is not mixed in. Similarly, if multiple mixins are specified, the first time a method is defined is the one that is used. Instance variables do not cross declarations. That means that Wombat cannot directly access heading, it must go through getHeading(). Similarly, the mixed-in class cannot access instance variables on the class it is being mixed into. Mixins can define their own internal instance variables, only accessible to the mixed-in class itself (heading, locationX, locationY). "this" refers to the same instance whether used in the mixed-in class or the mixed class. Methods the mixed-in instance requires from the host instance can be declared abstract. It is worth considering if they can also be left undeclared and then invoked via dynamic dispatch – this is open for discussion. It is useful to document the methods the mixin requires, though, so I used the abstract declaration here. Dynamic Mixing or adds the FighterType mixin to the Wombat instance referenced by George. FighterType is only available on that particular instance, so: Singleton Method Mixin Adding a single method at runtime has a bit of syntactic sugar, as follows: Parameterized Mixing The mixin class is allowed to use a constructor which takes args, so: which would initialize the mixin state via the ctor, giving it anArg as a lone ctor arg.
http://docs.codehaus.org/pages/viewpage.action?pageId=24576157
2014-03-07T09:03:14
CC-MAIN-2014-10
1393999639602
[]
docs.codehaus.org
DNS deployment options define the deployment of Address Manager DNS services. Address Manager supports most of the options used by both BIND and Microsoft DNS. For deployment options that take an IP address, ACL name, or TSIG key as a parameter, select the type of parameter you want to add from the drop-down list presented when defining the option. - To specify an IP address or name, select IP Address or name. Type the address or name in the text field and click Add. - To specify a TSIG key, select Key. A drop-down list appears and lists the TSIG keys available in the Address Manager configuration. Select a key from the list and click Add. - To specify an ACL, select ACL. A drop-down list showing pre-defined and custom DNS ACLs available in the current Address Manager configuration appears. Select an ACL from the drop-down menu and click Add. You can set DNS Deployment options for the following Address Manager objects: - Configurations - Server Groups - Servers - DNS Views - DNS Zones - IPv4 Blocks - IPv4 Networks - IPv6 Blocks - IPv6 Networks When configuring deployment options, any options set at the server group level will override options set at the configuration level. Any deployment options set at the server level will override options set at the server group level and configuration level.
https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/DNS-deployment-options/9.4.0
2022-08-08T07:16:47
CC-MAIN-2022-33
1659882570767.11
[]
docs.bluecatnetworks.com
onos-proxy ONOS side-car proxy for various subsystems, e.g. E2T The main purpose of the sidecar proxy is to absorb the complexity of interfacing with various µONOS subsystems. This allows relatively easy implementations of the SDK in various languages, without re-implementing the complex algorithms for each language. Presently, the proxy implements only E2T service load-balancing and routing, but in future may be extended to accommodate sophisticated interactions with other parts of the µONOS platform. Deployment The proxy is intended to be deployed as a sidecar container as part of an application pod. Such deployment can be arranged explicitly by including the proxy container details in the application Helm chart, but an easier way is to include the following metadata annotation as part of the deployment.yaml file. annotations: proxy.onosproject.org/inject: "true" This annotation will be detected by the onos-app-operator via its admission hook, which will augment the deployment descriptor to include the proxy container as part of the application pod automatically. E2 Services The proxy container exposes a locally accessible port on localhost:5151 where it hosts the following services: E2 Control Service - allows issuing control requests to E2 nodes E2 Subscription Service - allows issuing subscribe and unsubscribe requests to E2 nodes The E2 proxy tracks the E2T and E2 node mastership state via onos-topo information and appropriately forwards gRPC requests to the E2T instance which is presently the master for the given target E2 node. The target E2 node ID is extracted from the E2AP request headers. The mastership information is derived from the MastershipState aspect of the E2 node topology entities and from the controls topology relations set up between the E2T and E2 node topology entities. The proxy does not manipulate the messages passed between the application and the E2T instances in any manner. SDK Versions The onos-ric-sdk-go version 0.7.30 or greater and onos-ric-sdk-py version 0.1.6 or greater expect the sidecar proxy to be deployed to work correctly.
https://docs.sd-ran.org/master/onos-proxy/README.html
2022-08-08T06:32:14
CC-MAIN-2022-33
1659882570767.11
[]
docs.sd-ran.org
Not Implemented Exception Class Definition Important Some information relates to prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here. The exception that is thrown when a requested method or operation is not implemented. public ref class NotImplementedException : Exception public ref class NotImplementedException : SystemException public class NotImplementedException : Exception public class NotImplementedException : SystemException [System.Serializable] public class NotImplementedException : SystemException [System.Serializable] [System.Runtime.InteropServices.ComVisible(true)] public class NotImplementedException : SystemException type NotImplementedException = class inherit Exception type NotImplementedException = class inherit SystemException [<System.Serializable>] type NotImplementedException = class inherit SystemException [<System.Serializable>] [<System.Runtime.InteropServices.ComVisible(true)>] type NotImplementedException = class inherit SystemException Public Class NotImplementedException Inherits Exception Public Class NotImplementedException Inherits SystemException Inheritance: Object -> Exception -> SystemException -> NotImplementedException Examples The following examples throw and then handle a NotImplementedException for a feature that is not yet developed. open System let futureFeature () = // Not developed yet. raise (NotImplementedException()) [<EntryPoint>] let main _ = try futureFeature () with :? NotImplementedException as notImp -> printfn $"{notImp.Message}" 0 Sub Main() Try FutureFeature() Catch NotImp As NotImplementedException Console.WriteLine(NotImp.Message) End Try End Sub Sub FutureFeature() ' not developed yet. Throw New NotImplementedException() End Sub Remarks Throwing the exception Handling the exception NotImplementedException and other exception types The .NET Framework also includes two other exception types, NotSupportedException and PlatformNotSupportedException, that indicate that no implementation exists for a particular member of a type.
https://docs.microsoft.com/en-us/dotnet/api/system.notimplementedexception?view=net-5.0
2022-08-08T09:06:21
CC-MAIN-2022-33
1659882570767.11
[]
docs.microsoft.com
ratio class template. The member typedef type designates the instantiation ratio<num, den>. For convenience, the <ratio> header also defines ratios for the standard SI prefixes.
https://docs.microsoft.com/en-us/cpp/standard-library/ratio?redirectedfrom=MSDN&view=msvc-170
2022-08-08T07:43:37
CC-MAIN-2022-33
1659882570767.11
[]
docs.microsoft.com
18.1.1. OpenSHMEM Wrapper Compilers oshcc, oshcxx, oshc++, oshfort, shmemcc, shmemcxx, shmemc++, shmemfort – OpenSHMEM wrapper compilers 18.1.1.1. SYNTAX oshcc [--showme | --showme:compile | --showme:link] ... oshcxx [--showme | --showme:compile | --showme:link] ... oshc++ [--showme | --showme:compile | --showme:link] ... oshfort [--showme | --showme:compile | --showme:link] ... shmemcc [--showme | --showme:compile | --showme:link] ... shmemcxx [--showme | --showme:compile | --showme:link] ... shmemc++ [--showme | --showme:compile | --showme:link] ... shmemfort [--showme | --showme:compile | --showme:link] ... 18.1.1.2. OPTIONS The options below apply to all of the wrapper compilers: --showme: This option comes in several different variants (see below). None of the variants invokes the underlying compiler; they all provide information on how the underlying compiler would have been invoked had --showme not been used. The basic --showme option outputs the command line that would be executed to compile the program. Note If a non-filename argument is passed on the command line, the --showme option will not display any additional flags. For example, both oshcc --showme and oshcc --showme my_source.c will show all the wrapper-supplied flags. But oshcc --showme -v will only show the underlying compiler name and -v. --showme:compile: Output the compiler flags that would have been supplied to the underlying compiler. --showme:link: Output the linker flags that would have been supplied to the underlying compiler. --showme:command: Outputs the underlying compiler command (which may be one or more tokens). --showme:incdirs: Outputs a space-delimited (but otherwise undecorated) list of directories that the wrapper compiler would have provided to the underlying compiler to indicate where relevant header files are located. --showme:libs: Outputs a space-delimited (but otherwise undecorated) list of library names that the wrapper compiler would have used to link an application. For example: mpi open-pal util. --showme:version: Outputs the version number of Open MPI. --showme:help: Output a brief usage help message. See the man page for your underlying compiler for other options that can be passed through oshcc. 18.1.1.3. DESCRIPTION Conceptually, the role of these commands is quite simple: transparently add relevant compiler and linker flags to the user's command line that are necessary to compile / link OpenSHMEM programs, and then invoke the underlying compiler to actually perform the command. As such, these commands are frequently referred to as "wrapper" compilers because they do not actually compile or link applications themselves; they only add in command line flags and invoke the back-end compiler. 18.1.1.4. Background Open MPI provides wrapper compilers for several languages: oshcc, shmemcc: C oshc++, oshcxx, shmemc++, shmemcxx: C++ oshfort, shmemfort: Fortran The wrapper compilers for each of the languages are identical; they can be used interchangeably. The different names are provided solely for backwards compatibility. 18.1.1.5. Fortran Notes The Fortran wrapper compiler for OpenSHMEM (oshfort and shmemfort) can compile and link OpenSHMEM applications that use any/all of the OpenSHMEM Fortran bindings. oshfort will be inoperative and will return an error on use if Fortran support was not built into the OpenSHMEM layer. 18.1.1.6. Overview oshcc and shmemcc are convenience wrappers for the underlying C compiler. Translation of an OpenSHMEM program requires the linkage of the OpenSHMEM-specific libraries which may not reside in one of the standard search directories of ld(1).
It also often requires the inclusion of header files that may also not be found in a standard location. oshcc and shmemcc pass their arguments to the underlying C compiler along with the -I, -L and -l options required by OpenSHMEM programs. The same is true for all the other language wrapper compilers. The OpenSHMEM Team strongly encourages using the wrapper compilers instead of attempting to link to the OpenSHMEM libraries manually. This allows the specific implementation of OpenSHMEM to change without forcing changes to linker directives in users' Makefiles. Indeed, the specific set of flags and libraries used by the wrapper compilers depends on how OpenSHMEM was configured and built; the values can change between different installations of the same version of OpenSHMEM. Indeed, since the wrappers are simply thin shells on top of an underlying compiler, there are very, very few compelling reasons not to use oshcc. When it is not possible to use the wrappers directly, the --showme:compile and --showme:link options should be used to determine what flags the wrappers would have used. For example: shell$ cc -c file1.c `shmemcc --showme:compile` shell$ cc -c file2.c `shmemcc --showme:compile` shell$ cc file1.o file2.o `shmemcc --showme:link` -o my_oshmem_program 18.1.1.7. NOTES 18.1.1.8. FILES The strings that the wrapper compilers insert into the command line before invoking the underlying compiler are stored in a text file created by OpenSHMEM and installed to $pkgdata/NAME-wrapper-data.txt, where: $pkgdata is typically $prefix/share/openmpi $prefix is the top installation directory of OpenSHMEM NAME is the name of the wrapper compiler (e.g., $pkgdata/shmemcc-wrapper-data.txt) It is rarely necessary to edit these files, but they can be examined to gain insight into what flags the wrappers are placing on the command line. 18.1.1.9. ENVIRONMENT VARIABLES By default, the wrappers use the compilers that were selected when OpenSHMEM was configured. These compilers were either found automatically by Open MPI's "configure" script, or were selected by the user in the CC, CXX, and FC environment variables. The underlying compiler and flags can be overridden at run time with environment variables of the form SHMEM_value. Valid value names are: CPPFLAGS: Flags added when invoking the preprocessor (C or C++) FC: Fortran compiler FCFLAGS: Fortran compiler flags
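If a build script needs those flags programmatically, one option is to capture the --showme output described above. The following is a small illustrative Python sketch, not part of the Open MPI distribution; the wrapper names and options come from this man page, while the helper function itself is hypothetical.

```python
import subprocess

def wrapper_flags(wrapper="shmemcc", mode="compile"):
    # Runs, e.g., `shmemcc --showme:compile` and returns the reported flags as a list.
    result = subprocess.run(
        [wrapper, f"--showme:{mode}"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.split()

compile_flags = wrapper_flags("shmemcc", "compile")
link_flags = wrapper_flags("shmemcc", "link")
print("compile:", compile_flags)
print("link:", link_flags)
```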
https://docs.open-mpi.org/en/v5.0.x/man-openshmem/man1/oshmem-wrapper-compiler.1.html
2022-08-08T07:36:33
CC-MAIN-2022-33
1659882570767.11
[]
docs.open-mpi.org
Visualization¶ PySwarms implements tools for visualizing the behavior of your swarm. These are built on top of matplotlib, thus rendering charts that are easy to use and highly customizable. In this example, we will demonstrate three plotting methods available on PySwarms: - plot_cost_history: for plotting the cost history of a swarm given a matrix - plot_contour: for plotting swarm trajectories of a 2D-swarm in two-dimensional space - plot_surface: for plotting swarm trajectories of a 2D-swarm in three-dimensional space [1]: # Import modules import matplotlib.pyplot as plt import numpy as np from IPython.display import Image # Import PySwarms import pyswarms as ps from pyswarms.utils.functions import single_obj as fx from pyswarms.utils.plotters import (plot_cost_history, plot_contour, plot_surface) The first step is to create an optimizer. Here, we're going to use Global-best PSO to find the minima of a sphere function. As usual, we simply create an instance of its class pyswarms.single.GlobalBestPSO by passing the required parameters that we will use. Then, we'll call the optimize() method for 100 iterations. [2]: options = {'c1':0.5, 'c2':0.3, 'w':0.9} optimizer = ps.single.GlobalBestPSO(n_particles=50, dimensions=2, options=options) cost, pos = optimizer.optimize(fx.sphere, iters=100) 2019-05-18 16:04:30,391 - pyswarms.single.global_best - INFO - Optimize for 100 iters with {'c1': 0.5, 'c2': 0.3, 'w': 0.9} pyswarms.single.global_best: 100%|██████████|100/100, best_cost=3.82e-8 2019-05-18 16:04:31,656 - pyswarms.single.global_best - INFO - Optimization finished | best cost: 3.821571688965892e-08, best pos: [ 1.68014465e-04 -9.99342611e-05] Plotting the cost history¶ To plot the cost history, we simply obtain the cost_history from the optimizer class and pass it to the plot_cost_history function. Furthermore, this method also accepts a keyword argument **kwargs similar to matplotlib. This enables us to further customize various artists and elements in the plot. In addition, we can obtain the following histories from the same class: - mean_neighbor_history: average local best history of all neighbors throughout optimization - mean_pbest_history: average personal best of the particles throughout optimization [3]: plot_cost_history(cost_history=optimizer.cost_history) plt.show() Animating swarms¶ The plotters module offers two methods to perform animation, plot_contour() and plot_surface(). As their names suggest, these methods plot the particles in a 2-D or 3-D space. Each animation method returns a matplotlib.animation.Animation class that still needs to be animated by a Writer class (thus necessitating the installation of a writer module). For the following examples, we will save the animations to files and display them inline. In such a case, we need to invoke some extra methods to do just that. Lastly, it would be nice to add meshes in our swarm to plot the sphere function. This enables us to visually recognize where the particles are with respect to our objective function. We can accomplish that using the Mesher class. [4]: from pyswarms.utils.plotters.formatters import Mesher [5]: # Initialize mesher with sphere function m = Mesher(func=fx.sphere) There are different formatters available in the pyswarms.utils.plotters.formatters module to customize your plots and visualizations. Aside from Mesher, there is a Designer class for customizing font sizes, figure sizes, etc. and an Animator class to set delays and repeats during animation.
Plotting in 2-D space¶ We can obtain the swarm’s position history using the pos_history attribute from the optimizer instance. To plot a 2D-contour, simply pass this together with the Mesher to the plot_contour() function. In addition, we can also mark the global minima of the sphere function, (0,0), to visualize the swarm’s “target”. [6]: %%capture # Make animation animation = plot_contour(pos_history=optimizer.pos_history, mesher=m, mark=(0,0)) [7]: # Enables us to view it in a Jupyter notebook animation.save('plot0.gif', writer='imagemagick', fps=10) Image(url='plot0.gif') 2019-05-18 16:04:34,422 - matplotlib.animation - INFO - Animation.save using <class 'matplotlib.animation.ImageMagickWriter'> 2019-05-18 16:04:34,425 - matplotlib.animation - INFO - MovieWriter.run: running command: ['convert', '-size', '720x576', '-depth', '8', '-delay', '10.0', '-loop', '0', 'rgba:-', 'plot0.gif'] [7]: Plotting in 3-D space¶ To plot in 3D space, we need a position-fitness matrix with shape (iterations, n_particles, 3). The first two columns indicate the x-y position of the particles, while the third column is the fitness of that given position. You need to set this up on your own, but we have provided a helper function to compute this automatically [8]: # Obtain a position-fitness matrix using the Mesher.compute_history_3d() # method. It requires a cost history obtainable from the optimizer class pos_history_3d = m.compute_history_3d(optimizer.pos_history) [9]: # Make a designer and set the x,y,z limits to (-1,1), (-1,1) and (-0.1,1) respectively from pyswarms.utils.plotters.formatters import Designer d = Designer(limits=[(-1,1), (-1,1), (-0.1,1)], label=['x-axis', 'y-axis', 'z-axis']) [10]: %%capture # Make animation animation3d = plot_surface(pos_history=pos_history_3d, # Use the cost_history we computed mesher=m, designer=d, # Customizations mark=(0,0,0)) # Mark minima [11]: animation3d.save('plot1.gif', writer='imagemagick', fps=10) Image(url='plot1.gif') 2019-05-18 16:04:57,791 - matplotlib.animation - INFO - Animation.save using <class 'matplotlib.animation.ImageMagickWriter'> 2019-05-18 16:04:57,792 - matplotlib.animation - INFO - MovieWriter.run: running command: ['convert', '-size', '720x576', '-depth', '8', '-delay', '10.0', '-loop', '0', 'rgba:-', 'plot1.gif'] [11]:
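To slow down or loop the playback, an Animator can be passed to the plotting call, and if ImageMagick is not available, any other matplotlib writer can save the returned animation. The snippet below is a hedged sketch rather than canonical usage: it assumes plot_contour accepts an animator keyword and that Animator exposes the interval and repeat attributes shown here; FFMpegWriter additionally requires ffmpeg on the PATH.

```python
from matplotlib.animation import FFMpegWriter
from pyswarms.utils.plotters import plot_contour
from pyswarms.utils.plotters.formatters import Animator

# Assumed: Animator(interval=..., repeat=...) controls frame delay (ms) and looping.
anim_opts = Animator(interval=120, repeat=False)

animation = plot_contour(pos_history=optimizer.pos_history,
                         mesher=m,            # Mesher instance from the cells above
                         animator=anim_opts,
                         mark=(0, 0))

# Alternative writer: save as MP4 via ffmpeg instead of an ImageMagick GIF.
animation.save('plot0.mp4', writer=FFMpegWriter(fps=10))
```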
https://pyswarms.readthedocs.io/en/latest/examples/tutorials/visualization.html
2022-08-08T07:22:40
CC-MAIN-2022-33
1659882570767.11
[array(['../../_images/examples_tutorials_visualization_6_0.png', '../../_images/examples_tutorials_visualization_6_0.png'], dtype=object) array(['plot0.gif', None], dtype=object) array(['plot1.gif', None], dtype=object)]
pyswarms.readthedocs.io
Service Health Class Definition Important Some information relates to prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here. Describes the health of a service as returned by GetServiceHealthAsync(ServiceHealthQueryDescription). public sealed class ServiceHealth : System.Fabric.Health.EntityHealth type ServiceHealth = class inherit EntityHealth Public NotInheritable Class ServiceHealth Inherits EntityHealth Inheritance: System.Object -> EntityHealth -> ServiceHealth
https://docs.azure.cn/zh-cn/dotnet/api/system.fabric.health.servicehealth?view=azure-dotnet
2022-08-08T06:50:14
CC-MAIN-2022-33
1659882570767.11
[]
docs.azure.cn
Manage Groups In this section, manage and create ring groups based on department, expertise or location. This is useful where you have multiple people handling customer calls. For example, you can have two groups: Sales and Support. Extensions 101, 102, and 103 are part of the Sales group. Extensions 201, 202, and 203 are members of the Support group. Groups can be configured as Sequential or Ring All (Simultaneous). Click on "Manage Groups" to see the list of existing groups in your account. Create Group: Use the form to create a new group. This will just create a group; members are added to the group later. 1. Group Name: Choose an easy to remember name. 2. Group Number: Every group in your account must be a unique number. It can be anything from 01 to 99. 3. Recording: This feature allows incoming calls to this new group to be recorded. 4. Order: Ring All: Will ring every employee extension or phone number in the group. 5. Order: Sequential: Will ring one employee at a time in a sequence. If the first representative does not answer the call, it will ring the next one. 6. Default VM: If no one in the group answers the call, the call can be forwarded to a voicemail. 7. Call Time Out: How long should the phone ring before the system decides to take the next action? Usually 15 seconds for Sequential and 30-40 seconds for Ring All. If one of the members is a cell phone, the call time out should be less than 30 seconds. Otherwise the cell phone VM will be triggered in 30 seconds and the phone system will consider it a call pickup. 8. Ring Back: The audio file or music which the caller should hear while waiting for the receiver to pick up and answer the call. It can be the default music on hold or a custom audio file. 9. Announcement: If you want to play an announcement before the call connects, use the default recording or upload your own recordings. Example: "Please hold while we connect your call." 10. Announcement File: File to be played when the announcement is enabled. 11. Whisper: Play a message to the end user who is receiving the call. "Sales", for example, will prepare the end user before the call connects. This message is only played to the end receiver, while the caller is still on music on hold. This feature can be enabled or disabled as desired. 12. Whisper File: File to be played for the whisper message before the call connects. Remember this file should only be one to two words. 13. Post Call Event: (For advanced users) The phone system allows you to create post call events. For example, email all the calls coming to the Sales group to a specific email address, or create an event in Google Analytics for any call coming to this group. Click on "Create Group" to use this feature. Add members to Group: To add members to this group, click on the group name and a new window will pop up on the right. On the top you will see the group name and group number. If you already have some members in the group, you will see the list here. You can delete or edit the list. To add a new member, click on the "Add Ext" button in the new pop-up window. A group member can be an extension or an 11-digit US phone number. Priority is useful for sequential dialing; the person or staff member on priority zero is called first, before the ones on priority 1, 2, 3, 4, or 5. Once the changes are made, click on Save Changes.
https://docs.cebodtelecom.com/new/manage-phone-system/manage-groups
2022-08-08T07:26:17
CC-MAIN-2022-33
1659882570767.11
[]
docs.cebodtelecom.com
An Act to amend 71.07 (5n) (a) 6. and 71.28 (5n) (a) 6.; and to create 71.07 (5n) (a) 6. a. and b. and 71.28 (5n) (a) 6. a. and b. of the statutes; Relating to: including crop insurance proceeds in the manufacturing and agricultural tax credit. (FE) 2019 Wisconsin Act 167 (PDF: ) 2019 Wisconsin Act 167: LC Act Memo Bill Text (PDF: ) Fiscal Estimates and Reports SB387 ROCP for Committee on Agriculture, Revenue and Financial Institutions On 10/24/2019 (PDF: ) LC Bill Hearing Materials Wisconsin Ethics Commission information 2019 Assembly Bill 430 - A - Ways and Means
https://docs.legis.wisconsin.gov/2019/proposals/sb387
2022-08-08T08:04:29
CC-MAIN-2022-33
1659882570767.11
[]
docs.legis.wisconsin.gov
LM-X License Manager version 4.9.1 includes the enhancements and fixes detailed below. The changes in this release were made primarily in response to customer feedback. For more information about how we incorporate customer feedback into our development process, see Customer-driven development. Enhancements LM-X v4.9.1 includes the following new enhancement. Fixes LM-X v4.9.1 includes the following fixes.
https://docs.x-formation.com/display/LMX/LM-X+License+Manager+v4.9.1+Release+Notes
2022-08-08T07:27:43
CC-MAIN-2022-33
1659882570767.11
[]
docs.x-formation.com
Chapter 22. Using Virtuoso with Tuxedo Abstract The current document covers linkage between the Virtuoso server and Tuxedo by ATMI (Application-to-Transaction Monitor Interface) only. This document explains how to build support binaries for Tuxedo and Virtuoso and how to write services which use Virtuoso as the resource manager. Table of Contents - 22.1. Building the Transaction Manager Server - 22.2. Configuration - 22.3. Services - 22.3.1. Introduction - 22.3.2. VQL functions - 22.3.3. Services concept - 22.3.4. OPENINFO - 22.4. Clients - 22.5. Service example
http://docs.openlinksw.com/virtuoso/ch-xa/
2021-04-10T12:24:20
CC-MAIN-2021-17
1618038056869.3
[]
docs.openlinksw.com
public class EnumSetDeserializer extends StdDeserializer<java.util.EnumSet<?>> implements ContextualDeserializer Deserializer for EnumSets. Note: casting within this class is all messed up -- just could not figure out a way to properly deal with recursive definition of "EnumSet<K extends Enum<K>, V>" Nested classes inherited: JsonDeserializer.None Methods inherited from StdDeserializer: getValueClass, getValueType, getValueType, handledType Methods inherited from JsonDeserializer: deserializeWithType, findBackReference, getDelegatee, getEmptyAccessPattern, getEmptyValue, getEmptyValue, getKnownPropertyNames, getNullAccessPattern, getNullValue, getNullValue, getObjectIdReader, replaceDelegatee, unwrappingDeserializer Methods inherited from java.lang.Object: equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait Constructor: public EnumSetDeserializer(JavaType enumType, JsonDeserializer<?> deser) Methods: public EnumSetDeserializer withDeserializer(JsonDeserializer<?> deser) public EnumSetDeserializer withResolved(JsonDeserializer<?> deser, java.lang.Boolean unwrapSingle) public boolean isCachable() Overrides isCachable in class JsonDeserializer<java.util.EnumSet<?>>. public java.util.EnumSet<?> deserialize(JsonParser p, DeserializationContext ctxt) throws java.io.IOException Parameters: p - Parser used for reading JSON content; ctxt - Context that can be used to access information about this deserialization activity. Throws: java.io.IOException public java.util.EnumSet<?> deserialize(JsonParser p, DeserializationContext ctxt, java.util.EnumSet<?> ...) throws java.io.IOException public java.lang.Object deserializeWithType(JsonParser p, DeserializationContext ctxt, TypeDeserializer typeDeserializer) throws java.io.IOException, JsonProcessingException Overrides deserializeWithType in class StdDeserializer<java.util.EnumSet<?>>. Parameters: typeDeserializer - Deserializer to use for handling type information. Throws: java.io.IOException, JsonProcessingException
https://docs.adobe.com/content/help/en/experience-manager-cloud-service-javadoc/com/fasterxml/jackson/databind/deser/std/EnumSetDeserializer.html
2021-04-10T11:55:36
CC-MAIN-2021-17
1618038056869.3
[]
docs.adobe.com
Set up dynamic DNS on Your Amazon Linux instance When you launch an EC2 instance, it is assigned a public IP address and a public DNS (Domain Name System) name that you can use to reach it from the Internet. Because there are so many hosts in the Amazon Web Services domain, these public names must be quite long for each name to remain unique. A typical Amazon EC2 public DNS name looks something like this: ec2-12-34-56-78.us-west-2.compute.amazonaws.com, where the name consists of the Amazon Web Services domain, the service (in this case, compute), the region, and a form of the public IP address. Dynamic DNS services provide custom DNS host names within their domain area that can be easy to remember and that can also be more relevant to your host's use case; some of these services are also free of charge. You can use a dynamic DNS provider with Amazon EC2 and configure the instance to update the IP address associated with a public DNS name each time the instance starts. There are many different providers to choose from, and the specific details of choosing a provider and registering a name with them are outside the scope of this guide. This information applies to Amazon Linux. For information about other distributions, see their specific documentation. To use dynamic DNS with Amazon EC2 Sign up with a dynamic DNS service provider and register a public DNS name with their service. This procedure uses the free service from noip.com/free as an example. Configure the dynamic DNS update client. After you have a dynamic DNS service provider and a public DNS name registered with their service, point the DNS name to the IP address for your instance. Many providers (including noip.com ) allow you to do this manually from your account page on their website, but many also support software update clients. . Enable the Extra Packages for Enterprise Linux (EPEL) repository to gain access to the noip2 client. Note Amazon Linux instances have the GPG keys and repository information for the EPEL repository installed by default; however, Red Hat and CentOS instances must first install the epel-releasepackage before you can enable the EPEL repository. For more information and to download the latest version of this package, see . For Amazon Linux 2: [ec2-user ~]$ sudo yum install For Amazon Linux AMI: [ec2-user ~]$ sudo yum-config-manager --enable epel Install the noippackage. [ec2-user ~]$ sudo yum install -y noip Create the configuration file. Enter the login and password information when prompted and answer the subsequent questions to configure the client. [ec2-user ~]$ sudo noip2 -C Enable the noip service. For Amazon Linux 2: [ec2-user ~]$ sudo systemctl enable noip.service For Amazon Linux AMI: [ec2-user ~]$ sudo chkconfig noip on Start the noip service. For Amazon Linux 2: [ec2-user ~]$ sudo systemctl start noip.service For Amazon Linux AMI: [ec2-user ~]$ sudo service noip start This command starts the client, which reads the configuration file ( /etc/no-ip2.conf) that you created earlier and updates the IP address for the public DNS name that you chose. Verify that the update client has set the correct IP address for your dynamic DNS name. Allow a few minutes for the DNS records to update, and then try to connect to your instance using SSH with the public DNS name that you configured in this procedure.
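Once the records have propagated, you can also script the verification step. The following is a small illustrative Python sketch, not part of the noip2 client: it compares the address your dynamic DNS name resolves to with the instance's public IPv4 address from the instance metadata service. The DDNS_NAME value is a hypothetical placeholder, and the metadata call assumes the instance still accepts IMDSv1-style (token-less) requests.

```python
import socket
import urllib.request

DDNS_NAME = "myhost.example-ddns.net"   # hypothetical: replace with the name you registered

# Public IPv4 address reported by the EC2 instance metadata service.
public_ip = urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/public-ipv4", timeout=2
).read().decode().strip()

# Address the dynamic DNS name currently resolves to.
resolved_ip = socket.gethostbyname(DDNS_NAME)

print(f"metadata: {public_ip}  dns: {resolved_ip}  match: {public_ip == resolved_ip}")
```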
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dynamic-dns.html
2021-04-10T12:43:37
CC-MAIN-2021-17
1618038056869.3
[]
docs.aws.amazon.com
Examples¶ simplegan allows users to not only train generative models with few lines of code but also provides some customizability options that let them train models on their own data or tweak the parameters of the default model. Have a look at the examples below that provide an overview of the functionality. Vanilla Autoencoder¶ In this example, we will use the mnist dataset to train a vanilla autoencoder and save the model. from simplegan.autoencoder import VanillaAutoencoder autoenc = VanillaAutoencoder() train_ds, test_ds = autoenc.load_data(use_mnist = True) autoenc.fit(train_ds, test_ds, epochs = 10, save_model = './') Using just three lines of code, we can train an Autoencoder model. DCGAN¶ In this example, let us build a DCGAN model with modified generator and discriminator architecture and train it on a custom local data directory. from simplegan.gan import DCGAN gan = DCGAN(dropout_rate = 0.5, kernel_size = (4,4), gen_channels = [128, 64, 32]) data = gan.load_data(data_dir = './', batch_size = 64, img_shape = (200, 200)) gan.fit(data, epochs = 50, gen_optimizer = 'RMSprop', disc_learning_rate = 2e-3) generated_samples = gan.generate_samples(n_samples = 5) Pix2Pix¶ In this example, let us build a Pix2Pix model with a U-Net generator and a patchGAN discriminator. We train the pix2pix network on the facades dataset. from simplegan.gan import Pix2Pix gan = Pix2Pix() train_ds, test_ds = gan.load_data(use_facades = True, batch_size = 32) gan.fit(train_ds, test_ds, epochs = 200) Have a look at the examples directory which has notebooks to help you better understand how to get started.
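To eyeball what the DCGAN above has learned, the generated batch can be rendered with plain matplotlib. This is an illustrative sketch rather than part of simplegan's API: it assumes generate_samples returns an array-like batch shaped (n_samples, height, width, channels), with a single channel for grayscale data.

```python
import numpy as np
import matplotlib.pyplot as plt

# gan is the trained DCGAN from the example above.
samples = np.asarray(gan.generate_samples(n_samples=5))

fig, axes = plt.subplots(1, len(samples), figsize=(3 * len(samples), 3))
for ax, img in zip(axes, samples):
    # squeeze() drops a trailing channel dimension of 1 so imshow can render grayscale images.
    ax.imshow(img.squeeze(), cmap="gray")
    ax.axis("off")
plt.show()
```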
https://simplegan.readthedocs.io/en/latest/getting_started/examples.html
2021-04-10T11:56:01
CC-MAIN-2021-17
1618038056869.3
[]
simplegan.readthedocs.io
Affiliate Egg is another plugin our team offers for adding affiliate products to your website. The biggest advantages of Affiliate Egg: No API access required. Extracts data directly from store websites. Can create custom parsers for almost any store. Affiliate Egg parsers can be connected as separate Content Egg plugin modules. This will allow for product price updates, price comparisons, all templates, and other Content Egg features. So our clients prefer to use the integration of those two plugins to work with shops not available in Content Egg by default. Affiliate Egg has a set list of supported stores. If you need a custom parser for a non-default store, we can create one for you. It's only $25 per parser. Each store requires a separate parser, and you can use it on any of your websites. E-mail us your list of stores. We'll check it and send you an invoice. It may take several days to create the parsers. To install them, simply copy the parser files to your server's dedicated wp-content/affegg-parsers directory. Custom parsers work exactly the same as default parsers. We provide a 6-month guarantee for any custom products. If your parser needs any corrections during this time, we'll do it for free. To connect Affiliate Egg parsers as separate Content Egg plugin modules, do the following: 1. Go to Content Egg > Affiliate Egg integration and select the stores you want to connect. 2. Go to Content Egg > Modules. You'll see a whole new block of Affiliate Egg modules. 3. Activate the new module. 4. Work with the AE modules just as you would with standard Content Egg plugin modules. We recommend that you use direct product URL searches on top of keyword search since doing so requires fewer queries to the source website. 5. To make an affiliate link from direct product links, you need to set an Affiliate link (Deeplink) for every module. You can change the affiliate link at any time and switch traffic from one affiliate network to another. Watch the video on Content Egg + Affiliate Egg integration: Affiliate Egg works as a web parser. This means that for each product search or price update, the plugin must make a separate HTTP request to the source site. For various reasons, there are sometimes anti-bot mechanisms implemented on websites. Sometimes the plugin's bot will be blocked if it makes too many requests per second/hour/day. Sometimes there is a rate limit on the number of requests per IP address. You will get a 503 or 403 error if the server's IP has been blocked. Usually, it will be unblocked after 24 hours, but, in any case, you should avoid too many requests. Here are our recommendations: Search by direct product URL instead of keyword (important point). Don't set price updates too often in the price update period setting. Don't add too many products at once. A ban can also be permanent (for example, due to a bad history of your shared IP address). In this case, there is no other way than to change the server IP or use a proxy. Not all of the stores which have affiliate programs and which we want to work with have a Product API. This is especially true for small stores. The only way is to receive data directly from the site. We can also add a custom parser for you if the store is not yet among the shops supported by our plugin. Wherever possible, use Content Egg. For example, support for Amazon is available in both plugins. But Amazon has an excellent API that works quickly and reliably, unlike a web parser. When the parsing is too intensive, your IP might be blocked on the Amazon site.
https://ce-docs.keywordrush.com/modules/affiliate-egg-integration
2021-04-10T11:40:18
CC-MAIN-2021-17
1618038056869.3
[]
ce-docs.keywordrush.com
How do I get services started for my child? Contact your school district's special education office for information about these evaluations. Your type of health insurance plan impacts how you advocate for a change in benefits, as well as how you appeal denials of coverage and to whom you file complaints if you are not satisfied with the implementation of benefits.
https://docs.autismspeaks.org/100-day-kit-school-age-children/choosing-the-right-treatment
2021-04-10T11:37:15
CC-MAIN-2021-17
1618038056869.3
[array(['http://cdn.instantmagazine.com/upload/25315/therapist_girl_and_boy_on_floor.ebbd2a239116.jpg', None], dtype=object) array(['https://assets.foleon.com/eu-west-2/uploads-7e3kk3/25315/laptop.2bf48397f8c5.jpg', None], dtype=object) ]
docs.autismspeaks.org
Implementing DataSerializable

  public void writeData( ObjectDataOutput out ) throws IOException {
    out.writeString(street);
    out.writeInt(zipCode);
    out.writeString(city);
    out.writeString(state);
  }

  public void readData( ObjectDataInput in ) throws IOException {
    street = in.readString();
    zipCode = in.readInt();
    city = in.readString();
    state = in.readString();
  }
}

Reading and Writing a DataSerializable

  public void writeData( ObjectDataOutput out ) throws IOException {
    out.writeString(firstName);
    out.writeString(lastName);
    out.writeInt(age);
    out.writeDouble(salary);
    address.writeData(out);
  }

  public void readData( ObjectDataInput in ) throws IOException {
    firstName = in.readString();
    lastName = in.readString();

IdentifiedDataSerializable: getClassId and getFactoryId Methods

IdentifiedDataSerializable extends DataSerializable and introduces the following methods:

  int getClassId();
  int getFactoryId();

IdentifiedDataSerializable uses getClassId() and getFactoryId() to identify the class and the factory that creates it.

Implementing IdentifiedDataSerializable

    surname = in.readString();
  }

  @Override
  public void writeData( ObjectDataOutput out ) throws IOException {
    out.writeString( surname );
  }

  @Override
  public int getFactoryId() {
    return EmployeeDataSerializableFactory.FACTORY_ID;
  }

  @Override
  public int getClassId() {
    return EmployeeDataSerializableFactory.EMPLOYEE_TYPE;
  }

  @Override
  public String toString() {
    return String.format( "Employee(surname=%s)", surname );
  }
}

The methods getClassId and getFactoryId return the constants defined in EmployeeDataSerializableFactory.

Registering EmployeeDataSerializableFactory

As the last step, you need to register EmployeeDataSerializableFactory declaratively (declare it in the configuration file hazelcast.xml/yaml):

hazelcast:
  serialization:
    data-serializable-factories:
      - factory-id: 1
        class-name: EmployeeDataSerializableFactory
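The snippets above register an EmployeeDataSerializableFactory but do not show the factory class itself. Below is a minimal sketch of what such a factory could look like; it assumes the Employee class shown above has a public no-argument constructor, the EMPLOYEE_TYPE value of 100 is an arbitrary placeholder, and the import packages and create(int) signature follow recent Hazelcast releases (older versions may differ, for example using writeUTF/readUTF instead of writeString/readString).

```java
import com.hazelcast.nio.serialization.DataSerializableFactory;
import com.hazelcast.nio.serialization.IdentifiedDataSerializable;

public class EmployeeDataSerializableFactory implements DataSerializableFactory {

    // Must match the factory-id registered in hazelcast.yaml above.
    public static final int FACTORY_ID = 1;

    // Unique per class within this factory; the value here is an assumption.
    public static final int EMPLOYEE_TYPE = 100;

    @Override
    public IdentifiedDataSerializable create(int typeId) {
        // Return a new, empty instance; Hazelcast then calls readData() on it.
        if (typeId == EMPLOYEE_TYPE) {
            return new Employee();
        }
        return null; // unknown type id
    }
}
```

Because the factory hands back a ready-made instance instead of Hazelcast resolving the class by name via reflection, IdentifiedDataSerializable deserialization is faster than plain DataSerializable, which is the main reason to prefer it.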
https://docs.hazelcast.com/imdg/latest-dev/serialization/implementing-dataserializable.html
2021-04-10T12:03:46
CC-MAIN-2021-17
1618038056869.3
[]
docs.hazelcast.com
Initial Setup Step by Step Guide to Setting Up Link My Books A quick overview for getting your new Link My Books account up and running 1: Create an account with Link My Books How to sign up for a Link My Books account (with or without an existing Google account) 2: Connect to your accounts Connect your Amazon and eBay sales channels plus Xero or QuickBooks 3: Choose your preferred account and tax rates 4: Review and send your first Amazon payout summaries to Xero or QuickBooks How to send settlements to Xero or QuickBooks
https://docs.linkmybooks.com/category/initial-setup/
2021-04-10T12:13:55
CC-MAIN-2021-17
1618038056869.3
[]
docs.linkmybooks.com
What will change after the migration to Office 365 services in the new German datacenter regions Tenant migrations are designed to have minimal effect on administrators and users. However, there are considerations for each workload. Please review the following sections to have a better understanding of the migration experience for the workloads. Following are the key differences between Microsoft Cloud Deutschland and Office 365 services in the new German datacenter regions. Azure Active Directory What isn't changing: Tenant initial domain (such as contoso.onmicrosoft.de) with tenant ID (GUID) and custom domains will persist after the migration. Authentication requests for resources that are migrated to Office 365 services are granted by the Office 365 services Azure authentication service ( login.microsoftonline.com). During the migration, resources that remain still in Office 365 Germany are authenticated by the existing Germany Azure service ( login.microsoftonline.de). Considerations to note: For managed domain accounts, after copying of the initial Azure Active Directory (Azure AD) tenant is complete (which is the first step of Azure AD migration to the Office 365 services Azure AD service), password changes, self-service password reset (SSPR) changes, and password resets by administrators must be done from the Office 365 service portals. Requests to update passwords from the Germany service won't succeed at this point, because the Azure AD tenant has been migrated to Office 365 services. Resets of federated domain passwords aren't affected, because these are completed in the on-premises directory. Azure sign-ins are presented in the portal where the user attempts access. Audit logs are available from only the Office 365 services endpoint after transition. Before migration through to the completion of migration, you should save sign-in and audit logs from the Microsoft Cloud Deutschland portal. Password resets, password changes, password reset by an administrator for managed organizations (that are not using Active Directory Federation Services) must be performed via the Office 365 services portal. Attempts by users who access Microsoft Cloud Deutschland portals to reset passwords will fail. General Data Protection Regulation (GDPR) Data Subject Requests (DSRs) are executed from the Office 365 services Azure admin portal for future requests. Any legacy or non-customer diagnostic data that is resident in Microsoft Cloud Deutschland is deleted at or before 30 days. Subscriptions & Licenses Office 365 and Dynamics subscriptions from Microsoft Cloud Deutschland are transitioned to the German region with the Azure AD relocation. The organization is then updated to reflect new Office 365 services subscriptions. During the brief subscription transfer process, changes to subscriptions are blocked. As the tenant is transitioned to Office 365 services, its Germany-specific subscriptions and licenses are standardized with new Office 365 services offerings. Corresponding Office 365 services subscriptions are purchased for the transferred Germany subscriptions. Users who have Germany licenses will be assigned Office 365 services licenses. Upon completion, legacy Germany subscriptions are canceled and removed from the current Office 365 services tenant. After migration of the individual workloads, additional functionality is made available through the Office 365 services (such as Microsoft Planner and Microsoft Flow) because of the new Office 365 services subscriptions. 
If appropriate for your organization, the tenant or licensing administrator can disable new service plans as you plan for change management to introduce the new services. For guidance on how to disable service plans that are assigned to users' licenses, see Disable access to Microsoft 365 services while assigning user licenses. Exchange Online Exchange resource URLs transition from the legacy Germany endpoint outlook.office.de to the Office 365 services endpoint outlook.office365.com after the migration. Your users may access their migrated mailbox by using the legacy URL until the migration completes. Customers should transition users to the new URL as soon as possible after Exchange migration begins to avoid affecting retirement of the Germany environment. The Office 365 services URLs for Outlook services become available only after Exchange migration begins. Mailboxes are migrated as a backend process. Users in your organization may be in either Microsoft Cloud Deutschland or the German region during the transition and are part of the same Exchange organization (in the same global address list). Users of the Outlook Web App who access the service by using a URL where their mailbox does not reside will see an extra authentication prompt. For example, if the user's mailbox is in the Office 365 services and the user's Outlook Web App connection uses the legacy endpoint outlook.office.de, the user will first authenticate to login.microsoftonline.de, and then to login.microsoftonline.com. When migration is complete, the user can access the new URL and will see only the single, expected sign-in request. Office Services Office Online services are accessible via office.de before and during the transition. After users' mailboxes are transitioned to the Office 365 services, users should begin to use Office 365 services URLs. As subsequent workloads migrate to Office 365 services, their interface from the office.com portal will begin to work. The most recently used (MRU) service in Office is a cutover from Microsoft Cloud Deutschland to Office 365 Global services, not a migration. Only MRU links from the Office 365 Global services side will be visible after migration from the Office.com portal. MRU links from Microsoft Cloud Deutschland aren't visible as MRU links in Office 365 Global services. In Office 365 Global services, MRU links are accessible only after the tenant migration has reached phase 9. Exchange Online Protection - Back-end Exchange Online Protection (EOP) features are copied to the new Germany region. - Office 365 Security and Compliance Center users need to transition to using the global URLs as part of the migration. Skype for Business Online Existing Skype for Business Online customers will transition to Microsoft Teams. Office 365 Video Office 365 Video is being retired on March 1, 2021, and Office 365 Video won't be supported after migration of SharePoint Online to the new German datacenter regions is completed. Content from Office 365 Video will be migrated as part of migrating SharePoint Online. However, videos in Office 365 Video won't play back in the Office 365 Video UI after the SharePoint migration. Learn more about the migration timeline in the Office 365 Video transition to Microsoft Stream (classic) overview. 
Next step Understand migration phases actions and impacts More information Getting started: - Migration from Microsoft Cloud Deutschland to Office 365 services in the new German datacenter regions - Microsoft Cloud Deutschland Migration Assistance - How to opt-in for migration Moving through the transition: - Migration phases actions and impacts - Additional pre-work - Additional information for Azure AD, devices, experiences, and AD FS. Cloud apps:
https://docs.microsoft.com/en-us/microsoft-365/enterprise/ms-cloud-germany-transition-experience?view=o365-worldwide
2021-04-10T13:19:02
CC-MAIN-2021-17
1618038056869.3
[]
docs.microsoft.com
VALGRIND
Description
The Valgrind tool suite provides a number of debugging and profiling tools that help you make your programs faster and more correct. The most popular of these tools is called Memcheck, which can detect many memory-related errors and memory leaks.
Using Valgrind
Prepare Your Program
nersc$ module load valgrind
Running Serial Programs
If you normally run your program like this:
nersc$ ./myprog arg1 arg2
Use this command line:
nersc$ valgrind --leak-check=yes ./myprog arg1 arg2
In your batch script, simply 1. load the module; 2. add "valgrind" in front of your command. For example, your srun line will be replaced by the following:
nersc$ module load valgrind
nersc$ srun -n 24 valgrind --leak-check=yes ./myprog arg1 arg2
Unrecognized Instructions
When using Valgrind to debug your code, you may occasionally encounter error messages of the form:
This page is based on the "Valgrind Quick Start Page". For more information about Valgrind, please refer to the Valgrind documentation. For questions on using Valgrind at NERSC, contact NERSC Consulting.
https://docs.nersc.gov/development/performance-debugging-tools/valgrind/
2021-04-10T11:48:00
CC-MAIN-2021-17
1618038056869.3
[]
docs.nersc.gov
Scylla in-memory tables
New in version 2018.1.7: Scylla Enterprise
Overview
In-memory tables are a new feature of ScyllaDB Enterprise that provides customers with a new solution for lowering their read latency.
Caution: Implement in-memory tables only in cases where the workload fits the proper use case for in-memory tables. See In-memory table use case.
ScyllaDB's in-memory solution uses memory-locked SSTables which write data in memory (RAM) in addition to disk. This is different from the traditional read/write Scylla scenario where the MemTable (in RAM) is flushed to an SSTable on disk. In the traditional write path, data is held temporarily in a MemTable in RAM. When the MemTable (1) is flushed to disk, the content is removed and is stored persistently on the disk. With the in-memory feature, when the MemTable is flushed (2), data is written persistently to RAM as well as to disk. The SSTable in RAM (A) is mirrored on the disk (B).
When using the in-memory feature, the real benefit is the speed at which a read is done. This is because the data is only read from RAM and not from disk. When the Scylla node boots, it loads the SSTables into a place in RAM which is reserved for the SSTables. In order to keep the RAM consumption small, you will have to compact more aggressively (in order to reduce space amplification as much as possible). In this manner, read latency is reduced and is more predictable.
In-memory table use case
In-memory tables can be used either on a new table or on an existing table. In-memory tables are suitable for workloads that are primarily read workloads where the data remains static for a long period of time. An example of this could be a table containing monthly specials. The table would be changed on a monthly basis, but is read by customers 24/7.
In-memory tables are not suitable for workloads with data that changes or grows, as this workload will fill the allocated in-memory space. Workloads such as time-series workloads (a table which records heart rates every minute, for example) are not suitable. It is very important to carefully calculate the sizes of the tables you want to keep in memory so that you do not over-allocate your resources.
Caution: If you do run out of RAM for your in-memory SSTables, an I/O error occurs and the node stops. See Recommended memory limits.
Recommended memory limits
It is recommended to keep the memory used by in-memory tables below 40% of the total amount allocated for in-memory storage, leaving just more than 60% of the space for compactions. The maximum RAM allocated for in-memory (in_memory_storage_size_mb) should leave at least 1 GB per shard for other usage, such as MemTables and cache.
Note: Operations such as repair and rebuild may temporarily use large portions of the allocated memory.
In-memory Metrics
The following Prometheus metrics can be used to calculate memory usage:
- in_memory_store_total_memory - the amount of RAM which is currently allocated to in-memory storage
- in_memory_store_used_memory - the amount of RAM allocated to in-memory storage which is currently in use
Subtracting these two values gives you the free in-memory space. Both metrics are available in the latest Scylla Enterprise dashboard of the Scylla Monitoring stack.
Enable in-memory
This procedure enables the in-memory strategy. Once enabled, it can be applied to any table. It does not change all new tables to in-memory tables.
Configure the in-memory option in the scylla.yaml file: open the scylla.yaml file (located under /etc/scylla/) on any node and locate the in_memory_storage_size_mb parameter. It is currently disabled and commented out. Remove the comment mark (#) and change this parameter to a size (in MB) which will be allocated from RAM for your tables.
Create an in-memory table
This procedure creates an in-memory table. Repeat this procedure for each in-memory table you want to make.
Confirm you have enough RAM. This is very important if this is not the first in-memory table you are creating. See Recommended memory limits and In-memory Metrics.
Run a CQL command to create a new in-memory table. Set the compaction strategy to the in-memory compaction strategy and set the in_memory property to true. For example:
CREATE TABLE keyspace1.standard1 (
    key blob PRIMARY KEY,
    "C0" blob,
    "C1" blob,
    "C2" blob,
    "C3" blob,
    "C4" blob
) WITH compression = {}
  AND read_repair_chance = '0'
  AND speculative_retry = 'ALWAYS'
  AND in_memory = 'true'
  AND compaction = { 'class' : 'InMemoryCompactionStrategy' };
Repeat for additional tables.
Change a table to an in-memory table
Use this procedure to convert an existing table to an in-memory table.
Check the size of the table you want to convert. Confirm it is smaller than the size you set when you enabled in-memory (see Enable in-memory) and fits within the Recommended memory limits.
nodetool cfstats <keyspaceName.tableName>
The "Space used" parameter is the size of the table. If you already have a table in memory, make sure to deduct that table's size from the overall in_memory_storage_size_mb allocation and check that there is enough left to add the new table.
Caution: If the table you want to add is too large, do not convert it to an in-memory table. Over-allocating the RAM creates an I/O error and stops the node. See the example below and Recommended memory limits.
Convert the table by running the ALTER CQL command: add the InMemory compaction strategy and set the in_memory property to true.
ALTER TABLE keyspace1.standard1 WITH in_memory='true' AND compaction = { 'class' : 'InMemoryCompactionStrategy' };
To convert additional tables, repeat the process. Remember that the total space for all in-memory tables cannot exceed the in_memory_storage_size_mb parameter. For example:
nodetool cfstats keyspace1.standard1
Pending Flushes: 0
SSTable count: 8
Space used (live): 7878555
Space used (total): 7878555
In this example, the table is taking up 788 MB. If your memory allocation is not at least 1580 MB, it is not recommended to convert this table.
Revert a table from RAM to disk
You can change a single table to use another strategy. On the table you want to revert, change the table properties to a different compaction strategy and set the in_memory property to false. For example:
ALTER TABLE keyspace1.standard1 WITH in_memory='false' AND compaction = { 'class' : 'LeveledCompactionStrategy' };
Memory will be returned slowly. If you want to speed up the process, restart the scylla service (systemctl restart scylla-server).
Disable in-memory
Disable in-memory only after all the tables have been reverted.
Before you begin, verify there are no in-memory tables currently in use. Run a DESCRIBE query on the keyspace(s) or table(s). For example:
DESCRIBE TABLES
If any table is listed as an in-memory table, change it using the ALTER method described in Revert a table from RAM to disk.
On each node, change the configuration in the scylla.yaml file: edit the scylla.yaml file located at /etc/scylla/scylla.yaml and change the in_memory_storage_size_mb parameter back to 0 (disabled).
https://docs.scylladb.com/using-scylla/in-memory/
2021-04-10T12:10:56
CC-MAIN-2021-17
1618038056869.3
[array(['../../_images/inMemoryDiagram.png', '../../_images/inMemoryDiagram.png'], dtype=object)]
docs.scylladb.com
Unfortunately, Amazon devices (Fire tablets and phones) run a modified version of Android that is not designed to work with third-party launchers like Smart Launcher. Because of this, Smart Launcher is not officially supported on Amazon devices. Depending on your device model, you may be able to find a workaround for this limitation; however, since we have never tested such methods, we cannot provide more details.
https://docs.smartlauncher.net/device-specific-issues/amazonfire
2021-04-10T11:51:32
CC-MAIN-2021-17
1618038056869.3
[]
docs.smartlauncher.net
The output angles for all joints are calculated from the YXZ Cardan angles derived by comparing the relative orientations of the segments proximal (parent) and distal (child) to the joint. The knee angles are calculated from the femur and the Untorsioned tibia segments, while the ankle joint angles are calculated from the Torsioned tibia and the foot segment. In the case of the feet, because they are defined in a different orientation to the tibia segments, an offset of 90 degrees is added to the flexion angle. This does not affect the Cardan angle calculation of the other angles because the flexion angle is the first in the rotation sequence. The progression angles of the feet, pelvis, thorax and head are the YXZ Cardan calculated from the rotation transformation of the subject's progression frame for the trial onto each segment orientation. The following topics provide further details. - Angle definitions - Plug-in Gait kinematic variables - Upper body angles as output from Plug-in Gait - Lower body angles as output from Plug-in Gait
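To make the YXZ Cardan convention concrete, the following is a generic sketch of how the three angles can be recovered from a relative rotation matrix R between the proximal and distal segments. It assumes the decomposition R = R_y(α) R_x(β) R_z(γ); Plug-in Gait's internal sign and axis conventions may differ, so treat this only as an illustration of the rotation order.

```latex
R = R_y(\alpha)\,R_x(\beta)\,R_z(\gamma) =
\begin{pmatrix}
c_\alpha c_\gamma + s_\alpha s_\beta s_\gamma & -c_\alpha s_\gamma + s_\alpha s_\beta c_\gamma & s_\alpha c_\beta \\
c_\beta s_\gamma & c_\beta c_\gamma & -s_\beta \\
-s_\alpha c_\gamma + c_\alpha s_\beta s_\gamma & s_\alpha s_\gamma + c_\alpha s_\beta c_\gamma & c_\alpha c_\beta
\end{pmatrix}
```

so that, away from gimbal lock (|R_{23}| = 1):

```latex
\beta = \arcsin(-R_{23}), \qquad
\alpha = \operatorname{atan2}(R_{13},\, R_{33}), \qquad
\gamma = \operatorname{atan2}(R_{21},\, R_{22})
```

where c and s abbreviate cosine and sine of the corresponding angle. Because the first rotation in the sequence wraps around the outside of the product, adding a fixed offset to the first (flexion) angle does not disturb the other two angles, which is consistent with the note above about the 90-degree foot offset.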
https://docs.vicon.com/display/Nexus29/Plug-in+Gait+output+angles
2021-04-10T11:24:39
CC-MAIN-2021-17
1618038056869.3
[]
docs.vicon.com
The Search tool locates objects in Address Manager. The search returns a list of all matching items to which you have access. To avoid the return of numerous unrelated search results, use specific search criteria and select the type of object for which you want to search.
Note: Searches performed by non-administrative users are limited by their user permissions. You only see objects to which you are granted access.
To search for objects:
- Click Advanced Search.
- From the Category drop-down menu, select an object category to limit your search to specific types of Address Manager objects. For a list of object categories and items within those categories, refer to Reference: Object Search Categories.
- From the Object drop-down menu, select a specific object item to further limit your search result. Object items appearing in this field will vary depending on the object category that you select in the Category drop-down menu.
- From the Search Type drop-down menu, select the search type:
  - Common Fields—search by matching names and IP data such as IP addresses, DHCP ranges, blocks, and networks.
  - All Fields—search all fields, including user-defined fields, through the user interface.
  - Custom Search—search objects based on combinations of various properties (system-defined and/or user-defined) that you can specify as search criteria. The search will return only the objects matching all the criteria specified.
- In the Search field, type the text for which you want to search. You can use the following wildcards in the search:
  - ^—matches the beginning of a string. For example: ^ex matches example but not text.
  - $—matches the end of a string. For example: ple$ matches example but not please.
  - *—matches zero or more characters within a string. For example: ex*t matches exit and excellent.
  Note: You cannot use the following characters in the search string:
- Click an object in the results list to display the object.
https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Performing-Advanced-Searches/8.2.0
2021-04-10T12:28:26
CC-MAIN-2021-17
1618038056869.3
[]
docs.bluecatnetworks.com
View the history of DHCPv6 leases provided by your managed servers. You can view the DHCPv6 lease history for a specific IPv6 network or for all networks within an IPv6 block. To view IPv6 DHCP lease history: - From the configuration drop-down menu, select a configuration. - Select the IP Space tab. Tabs remember the page you last worked on, so select the tab again to ensure you're on the Configuration information page. - Click the IPv6 tab. In the IPv6 Blocks section, click the FC00::/6 or the 2003::/3 address space. - Click the DHCPv6 Leases History tab. To view the DHCPv6 lease history for a specific network, select an IPv6 network. On the network page, click the DHCPv6 Leases History tab. To view the details for an address, click an address in the Active IP Address column.
https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Viewing-DHCPv6-lease-history/8.2.0
2021-04-10T10:59:44
CC-MAIN-2021-17
1618038056869.3
[]
docs.bluecatnetworks.com
Downloading the installation files
You obtain the product installation files by downloading them from the BMC Electronic Product Distribution (EPD) website. Your ability to access product pages on the EPD website is determined by the license entitlements purchased by your company. To understand which files you are entitled to download, access the license entitlements for your product. You can find information about service packs and patches under Release notes and notices.
Tip: If you sign up for Proactive Notification, you will be notified about new service packs or patches for applying the latest changes, as well as bulletins about critical fixes.
Installation files
You might also be prompted to complete the Export Compliance Form.
Where to go from here
Carefully review the system requirements for your platform and other tasks necessary for setting up the installation environment. You must perform these tasks before you launch the installation program.
- Download any Fix Packs — download the latest agent and server Fix Packs from the Electronic Product Download website. Review the readme of the Fix Packs to determine if there are any special instructions that override the documentation and when they should be applied for a new installation. The latest Fix Pack information can be viewed under Release notes and notices.
- To view the lists of prerequisite tasks for installing the product, see Preparing for installation.
- To view installation instructions, see Installing the core components of the product.
- To view upgrade instructions, see Upgrading.
After logging on, access and download the installation files from the Electronic Product Download website.
https://docs.bmc.com/docs/TMTM/81/downloading-the-installation-files-669650322.html
2021-04-10T12:20:28
CC-MAIN-2021-17
1618038056869.3
[]
docs.bmc.com
IBM Business Process Manager is a comprehensive BPM platform giving you visibility and insight to manage business processes.
https://docs.bmc.com/docs/display/Configipedia/IBM+Business+Process+Manager
2021-04-10T11:42:12
CC-MAIN-2021-17
1618038056869.3
[]
docs.bmc.com
IMAP Server
Prerequisites
This chapter describes the prerequisites needed for the plugins to run.
Centreon Plugin
Install this plugin on each needed poller:
yum install centreon-plugin-Applications-Protocol-Imap
Remote server
The remote server must have an IMAP service running and available.
Monitor your IMAP Server with SSL or TLS
What you need to configure
On your Host or Host template, please set the following macro:
https://docs.centreon.com/20.10/en/integrations/plugin-packs/procedures/applications-protocol-imap.html
2021-04-10T11:14:28
CC-MAIN-2021-17
1618038056869.3
[]
docs.centreon.com
Planning your Interana deployment
This document offers guidelines for planning an Interana deployment that meets your company's current data analytics needs, as well as scaled growth for the future.
Interana—behavioral analytics for your whole team
Interana is full stack behavioral analytics software with a web-based visual interface and a scalable distributed back-end datastore to process queries on event data. The Interana Living Dashboard lets you explore the underlying data, change parameters, and drill down to the granular details of the summary. To plan an Interana deployment that is optimized for your company, review the following topics:
High-level overview—how Interana works
Interana is full stack behavioral analytics software that allows users to explore the activity of digital services. Interana includes both its own web-based visual interface and a highly scalable distributed back-end database to store the data and process queries. Interana supports Ubuntu 14.04.x in cloud environments, as well as virtual machines or bare-metal systems. Interana enables you to ingest data in a variety of ways, including live data streams. The following image shows the flow of data into the Interana cluster, from imported files (single and batch) to live data streams from HTTP and Kafka sources. Ingested data is transformed and stored in the appropriate node (data or string), then processed in queries, with the results delivered to the requesting user.
The basics—system requirements
It is important to understand the hardware, software, and networking infrastructure requirements to properly plan your Interana deployment. Before making a resource investment, read through this entire document. The following sections explain sizing guidelines, data formats, and platform-specific information needed to accurately estimate an Interana production system for your company. An Interana cluster consists of the following nodes, which can be installed on a single server (single-node cluster) or across multiple servers (multi-node cluster).
- config node — Node from which you administer the cluster. The MySQL database (DB) is only installed on this node for storage of Interana metadata. Configure this node first.
- api node — Serves the Interana application, merges query results from data and string nodes, and then presents those results to the user. Nginx is only installed on the api node.
- ingest node — Connects to data repositories (cloud, live streaming, remote or local file system) and streams live data. Optional during installation.
The following table outlines basic software, hardware, and network requirements for an Interana deployment. Guidelines for calculating the number of each type of node for your deployment are covered in the following sections.
Production workflow—test, review, revise, and go
It is strongly recommended that you first set up a sandbox test cluster with a sampling of your data, so you can make any necessary adjustments before deploying a production environment. This will allow you to assess the quality of your data and better determine the appropriate size for a production cluster. Follow these steps:
- Review the rest of this document to gain a preliminary assessment of the needs for your production cluster.
- Install a single-node cluster in a sandbox test environment, as described in Install single-node Interana.
- Load a week's worth of data.
- Modify your data formats until you get the desired results.
- Go to the Resource Usage page: https://<cluster_ip_address>/?resourceusage
- Install your Interana production environment and add your data.
Cluster configurations—the big picture
We recommend that you initially deploy a single-node sandbox cluster on which to test sample data. The sandbox test cluster will help you to accurately estimate the necessary size and capacity of a production cluster. This section provides an overview of several typical cluster configurations:
Stacked vs. single nodes
The configuration of your cluster depends on the amount of data you have, its source, and projected growth. To validate the quality of your data and the analytics you wish to perform, we recommend you install a test sandbox single-node cluster prior to going into production. Stacked nodes are not recommended for large deployments; as your data grows, you can scale the cluster linearly to maintain your original performance.
Sizing guidelines—for today and tomorrow
Determining the appropriate Interana cluster configuration involves the following topics:
- Node configurations
- Capacity planning
- Planning the data tier
- Planning the ingest tier
- Planning the string tier
- Cluster configuration guidelines
Node configurations
The following table provides guidelines for Interana cluster node configurations.
Capacity planning
Use the following guidelines to estimate the capacity for an Interana cluster. Consider the following guidelines when planning a production Interana deployment. Given these guidelines, the following table lists some cluster sizing examples, based on the number of events stored in the cluster. If you want to store more than 80 billion events, it is recommended that you build one of the following clusters and import data first, to be able to more accurately size your larger cluster. The AWS estimated costs in the following table are for backing up data and string nodes, and do not take into account organizational discounts. They also assume the use of 1-year partial upfront reserved instances to reduce costs.
Data types and formats—consider your source
It's important to consider the source of your data, as well as the data type and how it's structured. Streaming ingest requires a special cluster configuration and installation (see Install multi-node Interana). Be aware that some data types may require transformation for optimum analytics.
Data types
Interana accepts the following data types:
- JSON—Interana's preferred data format. The JSON format is a flat set of name-value pairs (without any nesting), which is easy for Interana to parse and interpret. If you use a different format, your data must be transformed into JSON format before it can be imported into Interana (a sample event is shown at the end of this section).
- Apache log format—Log files generated using mod_log_config. It's helpful to provide the mod_log_config format string used to generate the logs, as Interana can use that same format string to ingest the logs.
- CSV—comma-separated values.
Data sources
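As a concrete illustration of the flat name-value format described above, a single event might look like the following JSON object; the field names and values are purely illustrative assumptions, not a required Interana schema (your data will have its own time, actor, and event columns):

```json
{"event_time": "2016-09-14T08:21:07Z", "user_id": "u-10293", "event_type": "video_play", "device": "ios", "duration_ms": 5400}
```

Note that there is no nesting: every value is a simple scalar keyed by a top-level name, which is what makes the format easy to parse and shard.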
https://docs.scuba.io/2/Guides/Admin_Guide/Planning_your_Interana_deployment
2021-06-12T18:43:37
CC-MAIN-2021-25
1623487586239.2
[array(['https://docs.scuba.io/@api/deki/files/1966/ClusterArchitecture_(1', None], dtype=object) array(['https://docs.scuba.io/@api/deki/files/1892/ClusterConfig_NEW-single-node.png?revision=1&size=bestfit&width=400&height=251', None], dtype=object) array(['https://docs.scuba.io/@api/deki/files/1893/ClusterConfig_two-node-cluster.png?revision=1&size=bestfit&width=400&height=317', None], dtype=object) array(['https://docs.scuba.io/@api/deki/files/1891/ClusterConfig_NEW-5-node-cluster.png?revision=1&size=bestfit&width=394&height=254', None], dtype=object) array(['https://docs.scuba.io/@api/deki/files/1890/ClusterConfig_NEW-10-node-cluster.png?revision=1&size=bestfit&width=500&height=400', None], dtype=object) ]
docs.scuba.io
@Target(value=TYPE) @Retention(value=RUNTIME) @Documented @Import(value=org.springframework.context.annotation.AspectJAutoProxyRegistrar.class)
public @interface EnableAspectJAutoProxy
Enables support for handling components marked with AspectJ's @Aspect annotation, similar to the functionality found in Spring's <aop:aspectj-autoproxy> XML element. To be used on @Configuration classes as follows:
@Configuration
@EnableAspectJAutoProxy
public class AppConfig {
    @Bean
    public FooService fooService() {
        return new FooService();
    }
    @Bean
    public MyAspect myAspect() {
        return new MyAspect();
    }
}
Where FooService is a typical POJO component and MyAspect is an @Aspect-style aspect:
public class FooService {
    // various methods
}
@Aspect
public class MyAspect {
    @Before("execution(* FooService+.*(..))")
    public void advice() {
        // advise FooService methods as appropriate
    }
}
In the scenario above, @EnableAspectJAutoProxy ensures that MyAspect will be properly processed and that FooService will be proxied, mixing in the advice that it contributes.
Note that @Aspect beans may be component-scanned like any other. Simply mark the aspect with both @Aspect and @Component:
package com.foo;

@Component
public class FooService { ... }

@Aspect
@Component
public class MyAspect { ... }
Then use the @ComponentScan annotation to pick both up:
@Configuration
@ComponentScan("com.foo")
@EnableAspectJAutoProxy
public class AppConfig {
    // no explicit @Bean definitions required
}
Note: @EnableAspectJAutoProxy applies to its local application context only, allowing for selective proxying of beans at different levels. Please redeclare @EnableAspectJAutoProxy in each individual context, e.g. the common root web application context and any separate DispatcherServlet application contexts, if you need to apply its behavior at multiple levels.
This feature requires the presence of aspectjweaver on the classpath. While that dependency is optional for spring-aop in general, it is required for @EnableAspectJAutoProxy and its underlying facilities.
proxyTargetClass
public abstract boolean proxyTargetClass
Indicate whether subclass-based (CGLIB) proxies are to be created as opposed to standard Java interface-based proxies. The default is false.
exposeProxy
public abstract boolean exposeProxy
Indicate that the proxy should be exposed by the AOP framework as a ThreadLocal for retrieval via the AopContext class. Off by default, i.e. no guarantees that AopContext access will work.
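To show how the two attributes above are typically combined, here is a minimal, self-contained sketch (class names are hypothetical, not part of the Spring API); it assumes spring-context and aspectjweaver are on the classpath. Setting exposeProxy = true lets a bean reach its own proxy through AopContext, so a self-invocation still passes through the advice, and proxyTargetClass = true makes the cast to the concrete class safe because a CGLIB subclass proxy is created.

```java
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.springframework.aop.framework.AopContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.EnableAspectJAutoProxy;

@Configuration
@EnableAspectJAutoProxy(proxyTargetClass = true, exposeProxy = true)
class OrderConfig {
    @Bean OrderService orderService() { return new OrderService(); }
    @Bean OrderAspect orderAspect() { return new OrderAspect(); }
}

class OrderService {
    public void placeOrder() {
        // A plain this.audit() call would bypass the proxy and its advice.
        // With exposeProxy = true, the current proxy can be retrieved instead:
        ((OrderService) AopContext.currentProxy()).audit();
    }
    public void audit() { System.out.println("audit executed"); }
}

@Aspect
class OrderAspect {
    @Before("execution(* audit(..))")
    public void beforeAudit() { System.out.println("advice ran before audit"); }
}

public class Demo {
    public static void main(String[] args) {
        try (AnnotationConfigApplicationContext ctx =
                 new AnnotationConfigApplicationContext(OrderConfig.class)) {
            ctx.getBean(OrderService.class).placeOrder();
        }
    }
}
```

Without exposeProxy, the AopContext.currentProxy() call would throw an IllegalStateException; without proxyTargetClass, the cast would only be safe against an interface implemented by the bean.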
https://docs.spring.io/spring-framework/docs/5.1.8.RELEASE/javadoc-api/org/springframework/context/annotation/EnableAspectJAutoProxy.html
2021-06-12T19:03:50
CC-MAIN-2021-25
1623487586239.2
[]
docs.spring.io
Trezor User Guide
This guide will show you how to use your Trezor Model T hardware wallet with Binance Chain and Binance DEX. Please follow best security practices when using any hardware wallet to store cryptocurrency. Please note that the Trezor Model One is not supported yet.
Requirements
App Installation Instructions
- Install Trezor Bridge. Please make sure that you have Trezor Bridge installed on your computer.
- Install the firmware. Please make sure your Trezor has the latest firmware installed.
- Generate and back up your address. Please make sure that you have your seed phrase backed up.
Setup/Login Instructions
- Open the Binance Chain web wallet, then go to the unlock page. You should see that Trezor is already an option for you.
- Allow access to your device.
- Export the Binance address for account #1.
- Get your address for this device.
- Confirm your address on Trezor.
Please note that Trezor only supports the Binance Chain mainnet.
1) Click on the "Balances" navigation button in the Trading Interface to view your account balances.
2) Your account balances are displayed.
How to send Binance Chain crypto assets
1) Click on the "Balances" navigation button in the Trading Interface to view your account balances.
2) Then click on the "Send" button for the asset that you would like to send.
3) Confirm to give permission.
4) Check the details of this transaction.
5) You'll see a pop-up asking you to hold the screen to sign the transaction on your Trezor.
How to trade Binance Chain crypto assets
1) Place your order on the trading page.
2) Confirm to give permission.
3) Check the details of this transaction.
4) You'll see a pop-up asking you to hold the screen to sign the transaction on your Trezor.
Reference:
https://docs.binance.org/wallets/tutorial/trezor-model-t-user-guide.html
2021-06-12T17:26:03
CC-MAIN-2021-25
1623487586239.2
[]
docs.binance.org
Welcome to the Next Tech documentation website! On the sidebar you'll see several sections: Resources: Links to our product changelog, feature request / bug report site, and product roadmap. Support: Information on how to get support as an enterprise customer or user. Account Management: How to manage your user, team, or company account. Creator: Information on how to create interactive content and embed it. Sandboxes: Information on our coding sandboxes. API: Information about our application programming interface. Technical Requirements: Details regarding the requirements for using our software. Legal: Our Terms of Service and Privacy Policy. Webinars: A video collection of webinars we have conducted. If you don't see what you're looking for, don't hesitate to send us a message!
https://docs.next.tech/
2021-06-12T17:05:39
CC-MAIN-2021-25
1623487586239.2
[]
docs.next.tech
If you've already created a team, navigate to the Teams dashboard and then click on your team name. Once inside, click "Manage team members" in the header. From there, you can invite new members to your existing team. You will only be able to invite as many people as you have paid seats for. The ability to invite will be locked once you reach your limit.
https://docs.replit.com/pro/teamManagement
2021-06-12T17:41:21
CC-MAIN-2021-25
1623487586239.2
[]
docs.replit.com
Configuring DQS on Microsoft Exchange
This page details how to use the Data Query Service (DQS) with Microsoft Exchange, configuring it to reject at the SMTP level. These instructions apply only to Exchange 2010 and above. Exchange only provides support for DNSBL lookups against the connecting IP, so that's all you can do from it; anything else needs to be delegated to external filtering software.
Conventions
Configuration
Run an Exchange PowerShell with administrator privileges and then type the following:
add-IPBlockListProvider -Name 'Spamhaus ZEN' -LookupDomain 'your_DQS_key.zen.dq.spamhaus.net' -Enabled $true -BitmaskMatch $null -IPAddressesMatch '127.0.0.2','127.0.0.3','127.0.0.4','127.0.0.9','127.0.0.10','127.0.0.11' -Priority '1' -AnyMatch $false -RejectionResponse 'Connecting IP address {0} has been blocked by Spamhaus ZEN. See {0} for further details.'
https://docs.spamhaus.com/datasets/docs/source/40-real-world-usage/MTAs/040-Exchange.html
2021-06-12T16:55:38
CC-MAIN-2021-25
1623487586239.2
[]
docs.spamhaus.com
Crate rs_password_utils
rs-password-utils
This library contains password utilities and is written in Rust. Currently it offers:
- Checking passwords against the pwned passwords list. The check implementation leverages the k-anonymity API and therefore the password is not used in the API calls in any form (not even hashed).
Usage
The utility can be used as a library, or as an executable:
Library
extern crate rs_password_utils;
use std::result::Result;
#[tokio::main]
async fn main() -> Result<(), rs_password_utils::PasswordUtilsError> {
    let is_pwned = rs_password_utils::pwned::is_pwned("test").await?;
    println!("The password is pwned: {}", is_pwned);
    Ok(())
}
Executable
Having Rust installed, you may install rs-password-utils using cargo:
From crates.io:
cargo install rs-password-utils --features executable
From gitlab:
cargo install --git --features executable
When the installation completes, issuing the following command:
rs-password-utils --help
should give you an output like:
Password utilities, written in Rust.
Usage:
rs-password-utils pwned
rs-password-utils dice [(-w <count>)]
rs-password-utils [-h]
Options:
-w --words The amount of words that should comprise the passphrase
-h --help Show this screen.
License
At your option, under:
- Apache License, Version 2.0, ()
- MIT license ()
https://docs.rs/rs-password-utils/0.1.0/rs_password_utils/
2021-06-12T13:39:33
CC-MAIN-2021-25
1623487584018.1
[]
docs.rs
Getting started with compliance The following topics introduce you to the concept of compliance management in BMC: Related topics To examine the process of using the BladeLogic Portal to run a Compliance operation that checks whether servers adhere to payment card industry (PCI) standards, see Example of checking servers for PCI compliance in the BMC BladeLogic Portal online technical documentation.
https://docs.bmc.com/docs/ServerAutomation/87/getting-started/getting-started-with-compliance
2021-06-12T13:58:19
CC-MAIN-2021-25
1623487584018.1
[]
docs.bmc.com
18.05.001 : Patch 1 for version 18.05 Note A more recent patch, 18.05.003: Patch 3 for version 18.05 is released and it is a cumulative patch. Please ignore this patch and directly apply 18.05.003: Patch 3 for version 18.05. This patch includes fixes done in the ITSM for supporting Smart IT functionality and does not impact the ITSM behavior. You can ignore this patch if you are not using Smart IT. For more information, see Smart IT Known and corrected issues.
https://docs.bmc.com/docs/itsm1805/18-05-001-patch-1-for-version-18-05-823448533.html
2021-06-12T14:34:22
CC-MAIN-2021-25
1623487584018.1
[]
docs.bmc.com
Set user name conflict message Specify the message displayed if a user name conflict is detected during login. This message is displayed if there is a local user with the same user name but a different UID than the Active Directory user logging on. When the message is displayed, the %s token in the message string is replaced with the name of the conflicting local account. The message string you define must contain exactly one %s token, and no other string replacement ( %) characters. For example: Account with conflicting name (%s) exists locally This group policy modifies the pam.account.conflict.name.mesg setting in the agent configuration file. For information about what to do when local conflicts are detected, see Set UID conflict resolution.
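For reference, the resulting entry in the agent configuration file could look like the line below. The file location shown is a common default for Centrify agents, but it is an assumption here rather than something stated on this page:

```
# In the agent configuration file (commonly /etc/centrifydc/centrifydc.conf):
pam.account.conflict.name.mesg: Account with conflicting name (%s) exists locally
```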
https://docs.centrify.com/Content/config-gp/PAMSetUserNameConflictMsg.htm
2021-06-12T14:25:05
CC-MAIN-2021-25
1623487584018.1
[]
docs.centrify.com
ROS and other packages may be configured and built using catkin. Every catkin package must include package.xml and CMakeLists.txt files in its top-level directory. Your package must contain an XML file named package.xml, as specified by REP-0140. These components are all required: <package format="2"> <name>your_package</name> <version>1.2.4</version> <description> This package adds extra features to rosawesome. </description> <maintainer email="[email protected]">Your Name</maintainer> <license>BSD</license> <buildtool_depend>catkin</buildtool_depend> </package> Substitute your name, e-mail and the actual name of your package, and please write a better description. The maintainer is who releases the package, not necessarily the original author. You should generally add one or more <author> tags, giving appropriate credit: <author>Dennis Richey</author> <author>Ken Thompson</author> Also, please provide some URL tags to help users find documentation and report problems: <url type="website"></url> <url type="repository"></url> <url type="bugtracker"></url> These are special-purpose catkin packages for grouping other packages. Users who install a metapackage binary will also get all packages directly or indirectly included in that group. Metapackages must not install any code or other files, the package.xml gets installed automatically. They can depend on other metapackages, if desired, but regular catkin packages may not. Metapackages can be used to resolve stack dependencies declared by legacy rosbuild packages not yet converted to catkin. Catkin packages should depend directly on the packages they use, not on any metapackages. A good use for metapackages is to group the major components of your robot and then provide a comprehensive grouping for your whole system. In addition to the XML elements mentioned above, a metapackage package.xml must contain this: <export> <metapackage/> <architecture_independent/> </export> In addition to the required <buildtool_depend> for catkin, metapackages list the packages in the group using <exec_depend> tags: <exec_depend>your_custom_msgs</exec_depend> <exec_depend>your_server_node</exec_depend> <exec_depend>your_utils</exec_depend> Metapackages must not include any other package.xml elements. But, a CMakeLists.txt is required, as shown below. Catkin CMakeLists.txt files mostly contain ordinary CMake commands, plus a few catkin-specific ones. They begin like this: cmake_minimum_required(VERSION 2.8.3) project(your_package) Substitute the actual name of your package in the project() command. Metapackage CMakeLists.txt files should contain only these two additional lines: find_package(catkin REQUIRED) catkin_metapackage() Regular catkin packages generally provide additional information for dependencies, building targets, installing files and running tests. They are required to use these two commands, usually with additional arguments: find_package(catkin REQUIRED COMPONENTS ...) ... catkin_package(...) Package format 2 (recommended) pages describe those tasks in detail. As you follow them, observe the usual command order:
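A typical ordering, given here only as a rough sketch of common catkin practice rather than an exhaustive list, is:

1. cmake_minimum_required() and project()
2. find_package(catkin REQUIRED COMPONENTS ...) plus any other dependencies
3. add_message_files(), add_service_files(), or add_action_files() for any interfaces the package defines
4. generate_messages() to generate language bindings for those interfaces
5. catkin_package() to declare what the package exports to its dependents
6. add_library(), add_executable(), and target_link_libraries() for build targets
7. install() rules for targets, headers, launch files, and other resources
8. test targets such as catkin_add_gtest()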
https://docs.ros.org/en/jade/api/catkin/html/howto/format2/catkin_overview.html
2021-06-12T13:49:29
CC-MAIN-2021-25
1623487584018.1
[]
docs.ros.org
Crate reproto_semver
use reproto_semver::Version;
assert!(Version::parse("1.2.3") == Ok(Version { major: 1, minor: 2, patch: 3, pre: vec!(), build: vec!(), }));
If you have multiple Versions, you can use the usual comparison operators to compare them:
use reproto_semver::Version;
assert!(Version::parse("1.2.3-alpha") != Version::parse("1.2.3-beta"));
assert!(Version::parse("1.2.3-alpha2") > Version::parse("1.2.0"));
If you explicitly need to modify a Version, SemVer also allows you to increment the major, minor, and patch numbers in accordance with the spec. Please note that in order to do this, you must use a mutable Version:
use reproto_semver::Version;
let mut chrome_release = Version::parse("41.5.5377").unwrap();
chrome_release.increment_major();
assert_eq!(Ok(chrome_release), Version::parse("42.0.0"));
Requirements
The semver crate also provides the ability to compare requirements, which are more complex comparisons. For example, creating a requirement that only matches versions greater than or equal to 1.0.0:
use reproto_semver::Version;
use reproto_semver::Range;
let r = Range:
https://docs.rs/reproto-semver/0.3.36/reproto_semver/
2021-06-12T15:03:56
CC-MAIN-2021-25
1623487584018.1
[]
docs.rs
Gecko Logging
A minimal C++ logging framework is provided for use in core Gecko code. It is enabled for all builds and is thread-safe.
Declaring a Log Module
Example Usage
Warning: A sandboxed content process cannot write to stderr or any file. The easiest way to log these processes is to disable the content sandbox by setting the preference security.sandbox.content.level to 0. On Windows, you can still see child process messages by using DOS redirection (not the MOZ_LOG_FILE variable defined below) to redirect output to a file. For example:
MOZ_LOG="CameraChild:5" mach run >& my_log_file.txt
will include debug messages from the camera's child actor that lives in a (sandboxed) content process.
Redirecting logging output to a file
Logging output can be redirected to a file by passing its path via an environment variable.
Logging Rust
Note: For Linux/macOS users, you need to use export rather than set.
Note: Sometimes it can be useful to only log child processes and ignore the parent process. In Firefox 57 and later, you can use RUST_LOG_CHILD instead of RUST_LOG to specify log settings that will only apply to child processes.
The log crate lists the available log levels. It is common for debug and trace to be disabled at compile time in release builds, so you may need a debug build if you want logs from those levels. Check the env_logger docs for more details on logging options.
https://firefox-source-docs.mozilla.org/xpcom/logging.html
2021-06-12T15:20:48
CC-MAIN-2021-25
1623487584018.1
[]
firefox-source-docs.mozilla.org
Advantages of OQL The following list describes some of the advantages of using an OQL-based querying language: - You can query on any arbitrary object - You can navigate object collections - You can invoke methods and access the behavior of objects - Data mapping is supported - You are not required to declare types. Since you do not need type definitions, you can work across multiple languages - You are not constrained by a schema
https://gemfire.docs.pivotal.io/92/geode/developing/querying_basics/oql_compared_to_sql.html
2021-06-12T14:19:49
CC-MAIN-2021-25
1623487584018.1
[]
gemfire.docs.pivotal.io
Configuring screens The different views under the Screen Configuration menu enable you to customize the layout of the ticket profiles. You can specify the fields that are required during ticket creation. You can also specify the property of all fields, and place the fields in the appropriate sections of a ticket profile. You can add custom fields to any section or subsection you want. For example, in the Incident view, the default Record section consists of Affected Service, Affected Asset, Record Summary, and Categorization subsections. You can move fields within these subsections or to another section of the Incident view, such as the Assignment section. You can move widgets from one section to another section. You can also remove a widget and add some or all of its member fields back as individual fields. You can also configure the title bar of the ticket profiles to add fields and widgets that best suit your business needs. Note Currently, you can configure the screens for the following ticket types only: Incident, Change Request, Work Order, and Task. Configuring the layout of a ticket profile You can add and move fields and widgets among sections and subsections in a ticket profile. However, you cannot add a single field or widget to more than one section or subsection. For more information, see Smart IT screens, panels, and fields. - On the Dashboard, select Configuration > Screen Configuration. - Go to the Incident View and select Record > Affected Asset. - Click X to remove the field or the widget from the Affected Asset section. - Click Save. - Go to the Incident View and select Record > Record Summary. - In the Available Fields section, select the field or the widget that was removed from the Affected Asset section and add it under the Selected Fields area. Click Save. Note Smart IT does not support printing the customized ticket profile. Updating the property of fields As an administrator, you can customize the properties of the out-of-the-box (OOTB) and custom fields to make them behave in a way that you define by using expressions for Incident, Change Request, Work Order, and Task tickets. You can define expressions only on the Universal Client (UC), and the result is implemented on both UC and mobile devices. If you do not want to add expressions, you can select Always from the menu to always apply a set property to the selected field or widget. To make a field read-only, you can select the Read Only option. For example, to make the Company field always required, you must select Always for the Required property. If you want to make the property conditional, then you must specify expressions. To do so, follow these steps: - On the Dashboard, select Configuration > Screen Configuration. - Navigate to the ticket view and select the section in which the field is available. - Double-click a field (OOTB or custom field). Options to define the field property are displayed. - Select the check box for a property (for example, Required), and then select Meet a Condition. - Build an expression. - Click Save. Adding expressions to the fields You can build expressions to dynamically change the following field properties: For more information, see Configuration details of expression. Notes For dates, use milliseconds since epoch for all expressions; for selection, use the index for custom fields and label values for out-of-the-box fields. Associating actions with the fields You can add URL and Provider actions to the fields of Incident, Change, Work Order, and Task tickets.
For more information, see Configuring actions in Smart IT and Configuring provider actions in Smart IT. Configuring widgets A widget is a set of member fields that are closely related to each other. You can disassociate an existing widget and add the fields of the widget separately. You must first remove the widget from the Selected Fields. When the widget is removed, the fields in the widget can be added separately to the ticket profile. You cannot add a widget back after removing the widget and adding the member fields. A widget appears disabled if it is removed. Scenario The Priority widget is available in the Incident, Change, Work Order, and Task views. The widget has the Priority, Impact, and Urgency fields. To break this widget, follow these steps: - In the Header section, click the x to remove the Priority widget. In the Available Fields section, the Priority, Impact, and Urgency fields are now available for selection. - You can select any or all of these fields and add them as individual fields. Click Save. Notes - The member fields become available for selection if you set the fields in View in ITSM. - If you remove a widget and add the member fields, the dependent fields should be added as a group so that you can work with the field values properly. Configuring the Google Map widget You can configure the Google Map widget from Configuration > Screen Configuration. This widget appears enabled in the Available Fields section. You can move this widget as per your requirements. The Google Map widget is driven by a Google Map API Key. Configuring the title bar of the tickets You can configure the title bar of the Incident and Change Request views. By default, the Incident ID field and the Priority widget are displayed in the title bar. You can add a maximum of 5 fields or widgets to the title bar. You can remove the Incident ID field and the Priority widget and add them to another section. Except for the Priority widget, you cannot add any other widget to the title bar of the Incident View. - On the Dashboard, select Configuration > Screen Configuration. - Go to the Incident View and click Header. - From the Available Fields, add fields and widgets to the Selected Fields area. For example, you can add the following fields and widgets: - Custom fields—Entry ID and Reported Date - A widget—Category Company Widget - Click Save. Similarly, you can configure the title bar of the change request profile. On the title bar of the change request profile, three widgets are available by default—Risk Level, Priority, and Change Class (non-editable). You must note that the Change ID is non-editable in the Change View mode. If a field is present in all required ITSM forms, the field is listed with an Add icon ("+" sign) to the left of the field label, and you can add it to Smart IT views. However, if the field is missing on any of the required ITSM forms, the field is displayed as inactive, and you cannot add it to the view. You can make a custom field required in Smart IT, even if the field is optional in Remedy ITSM, by selecting the Required property in Smart IT Screen Configuration. The field is then indicated as required in the Smart IT UI and must be completed for every new ticket. In the mid-tier UI, the field continues to be shown as optional. For standard Remedy ITSM fields that are displayed out-of-the-box in Smart IT, such as Product Categorization Tier 1 - 3, you can control whether they are required or optional in the Smart IT UI by customizing the field properties in Remedy ITSM (via BMC Developer Studio).
For example, if you make the Product Categorization Tier 1 field required in Remedy ITSM, it behaves as a required field in Smart IT. If you make a field mandatory in Remedy ITSM by using filter workflow, it is not automatically displayed as a required field in Smart IT. Be sure to run BMC Developer Studio in Best Practice Customization mode when creating customizations, to ensure that you are using overlays.

When you add fields to a view, they are displayed vertically. You can edit a custom field inline, like all fields on the default view, by clicking the edit icon. However, fields in the Assignment area are displayed on the Update Assignment panel for editing.

For the Create Incident View and the Create Change View, if you configure a field in Developer Studio, Smart IT does not reflect the configured field. You need to configure the same field in the Screen Configuration of Smart IT. For example, if you configure a field in Developer Studio as Required, you need to mark this field as Required in the Screen Configuration of Smart IT as well. If the field already appears in the Selected Fields column, you need to configure it again. Otherwise, you can remove the field from the Selected Fields column and add it back from the Available Fields to the Selected Fields column.

Note: You cannot make custom fields optional in Smart IT if they are required in Remedy ITSM.

Alert message if ITSM (System Required) fields are removed

With the screen configuration capability, if you remove an ITSM required field from the Selected Fields section of a screen view (Configuration > Screen Configuration), Smart IT asks you to confirm the action. If you confirm removal of the field, the view continues to display a red mark to remind you to add the field back to the view.

Notes:
- System-generated fields, like Entry ID, are read-only fields. If system-generated fields appear editable, as a Smart IT administrator, make the field read-only from the Screen Configuration. In the task screens, a few read-only ITSM fields cannot be updated.
- The Save button remains disabled if required fields are missing, and a '<count> missing required fields' message is shown next to the Save button. You can click this link to check the missing fields.
https://docs.bmc.com/docs/smartit1805/configuring-screens-804706953.html
Setting Object Gateway Server Properties

Properties of the ObjectGateway class specify the settings used by the Service class (see “Running an Object Gateway Server”) to access and monitor an instance of the Object Gateway Server. This chapter covers the following topics:

Using the New Object Gateway Form — The simplest way to define a set of server properties is to use the New Object Gateway form in the Management Portal, which allows you to create and store a predefined ObjectGateway object.

Defining Server Properties Programmatically — It is also possible to set the properties of an ObjectGateway object at runtime.

Object Gateway Server Versions — Different versions of the .NET Object Gateway Server executable are provided for .NET 2.0, 4.0, and 4.5, and for 32-bit and 64-bit systems.

Using the New Object Gateway Form

While it is possible to specify the settings for an Object Gateway Server session entirely in ObjectScript code, it is usually simpler to use the New Object Gateway form to create and store a persistent %Net.Remote.ObjectGateway object. The following steps summarize the configuration procedure:

In the Management Portal, go to System Administration > Configuration > Connectivity > Object Gateways. The Object Gateways page is displayed. You can use the options on this page to view an Object Gateway's log of recent activity, start or stop an Object Gateway, and create, edit, or delete an Object Gateway Server definition.

On the Object Gateways page, click Create New Gateway. The New Object Gateway form is displayed. Only the first three fields are required. The following example leaves the default values in place for all fields except Gateway Name, Port, and Log file (File path is set to the value that would be used if the field were left blank).

Fill out the form and click Save. The following properties of ObjectGateway can be set:

Name — Required. A name that you assign to this Object Gateway definition.

Server — Required. IP address or name of the machine where the Object Gateway server executable is located. The default is "127.0.0.1".

Port — Required. TCP port number for communication between the Object Gateway server and the proxy classes in InterSystems IRIS®. Several methods of class Service (see “Running an Object Gateway Server”) accept the port number as an optional parameter with a default value of 55000.

PassPhrase — If this property is checked, the Object Gateway will require a passphrase to connect.

LogFile — Full pathname of the file used to log Object Gateway messages. These messages include acknowledgment of opening and closing connections to the server, and any difficulties encountered in mapping .NET classes to InterSystems IRIS proxy classes. This optional property should only be used when debugging.

AllowedIPAddresses — IP address of the local network adapter to which the server listens. Specify 0.0.0.0 to listen on all IP adapters on the machine (including 127.0.0.1, VPNs, and others) rather than just one adapter. You may not enter more than one address; you must either specify one local IP address or listen on all of them. You must provide a value for this argument if you define the LogFile property.

FilePath — Specifies the full path of the directory where the Object Gateway server executable is located. This is used to find the target executable and assemble the command to start the Object Gateway on a local server, and is required only when you want to use an executable that is not in the default location.
If you do not specify this setting, the form will use the appropriate default executable for your system and your specified .NET version.

Exec64 — (applies only to Object Gateways on 64-bit platforms) If this property is checked, the Object Gateway server will be executed as 64-bit. Defaults to 32-bit execution.

DotNetVersion — Specifies the .NET version (2.0, 4.0, or 4.5) to be used (see “Object Gateway Server Versions”). Defaults to .NET 2.0.

Advanced settings (hidden by default):

HeartbeatInterval — Number of seconds between each communication with the Object Gateway to check whether it is active. A value of 0 disables this feature. When enabled, valid values are from 1 to 3600 seconds (1 hour). The default is 10 seconds.

HeartbeatFailureTimeout — Number of seconds to wait for a heartbeat response before deciding that the Object Gateway is in a failure state. If this value is smaller than the HeartbeatInterval property, the Object Gateway is in failure state every time the Object Gateway communication check fails. The maximum value is 86400 seconds (1 day). The default is 30 seconds.

HeartbeatFailureAction — Action to take if the Object Gateway goes into a failure state. Valid values are "R" (Restart) or "N" (None). The default action is Restart, which causes the Object Gateway to restart.

HeartbeatFailureRetry — Number of seconds to wait before retrying the HeartbeatFailureAction if the Object Gateway server goes into failure state, and stays in failure state. A value of 0 disables this feature, meaning that once there is a failure that cannot be immediately recovered, there are no attempts at automatic recovery. Valid values when enabled are from 1 to 86400 (24 hours). The default is 300 seconds (5 minutes).

InitializationTimeout — Number of seconds to wait for a response during initialization of the Object Gateway server. Valid values are from 2 to 300 (5 minutes). The default is 5 seconds.

ConnectionTimeout — Number of seconds to wait for a connection to be established with the Object Gateway server. Valid values are 2 through 300 (5 minutes). The default is 5 seconds.

After saving the form, go back to the Object Gateways page. Your new Object Gateway Server definition should now be listed there.

Defining Server Properties Programmatically

The ObjectGateway server properties can also be set programmatically. In addition to the properties defined by the New Object Gateway form (as shown in the previous section), the Type property must also be set to "2". This defines the server as a .NET Object Gateway (rather than an Object Gateway for Java).
The following example creates a server definition identical to the one generated by the New Object Gateway form described in the previous section:

// Create the object and define it as a .NET Object Gateway
set gw = ##class(%Net.Remote.ObjectGateway).%New()
set gw.Type = "2" // an Object Gateway for .NET, not Java

// Set the properties
set gw.Name = "GatewayTwo"
set gw.Server = "127.0.0.1"
set gw.Port = "55000"
set gw.PassPhrase = 1
set gw.LogFile = "C:\Temp\GatewayTwo.log"
set gw.AllowedIPAddresses = "0.0.0.0"
set gw.FilePath = "C:\Intersystems\IRIS\dev\dotnet\bin\v2.0.50727"
set gw.Exec64 = 1
set gw.DotNetVersion = "2.0"
set gw.HeartbeatInterval = "10"
set gw.HeartbeatFailureTimeout = "30"
set gw.HeartbeatFailureAction = "R"
set gw.HeartbeatFailureRetry = "300"
set gw.InitializationTimeout = "5"
set gw.ConnectionTimeout = "5"

// Save the object
set status = gw.%Save()

The call to %Save() is only necessary if you wish to add the object to persistent storage, making it available on the Object Gateways page of the Management Portal (as described in the previous section).

Object Gateway Server Versions

Different versions of the Object Gateway Server assembly are provided for .NET 2.0, 4.0, and 4.5, shipped under the InterSystems IRIS installation directory (the path that $SYSTEM.Util.InstallDirectory() returns on your system). In some applications, these gateways may be used to load unmanaged code libraries. Since a 64-bit process can only load 64-bit DLLs and a 32-bit process can only load 32-bit DLLs, both 32-bit and 64-bit assemblies are provided for each supported version of .NET. This makes it possible to create gateway applications for 64-bit Windows that can load 32-bit libraries into the gateway. The appropriate version of the .NET Framework must be installed on your system in order to use these assemblies. The InterSystems IRIS installation procedure does not install or upgrade any version of the .NET Framework.
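Returning to the programmatic example above: once such a definition has been saved, the gateway it describes is started, monitored, and stopped through the Service class discussed in “Running an Object Gateway Server”. The following is only a rough sketch of that workflow; the method names StartGateway and StopGateway (taking the definition name) and their signatures are assumptions here, so consult the Service class reference for the exact API.

    // Start the gateway described by the saved "GatewayTwo" definition
    set gwName = "GatewayTwo"
    set status = ##class(%Net.Remote.Service).StartGateway(gwName)
    if $system.Status.IsError(status) {
        do $system.Status.DisplayError(status)
        quit
    }
    // ... work with the generated .NET proxy classes here ...
    // Shut the gateway down again when finished
    set status = ##class(%Net.Remote.Service).StopGateway(gwName)
    if $system.Status.IsError(status) {
        do $system.Status.DisplayError(status)
    }

Because the definition above uses port 55000, it lines up with the default port that the Service class methods assume, so the port argument can usually be omitted.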
https://docs.intersystems.com/healthconnectlatest/csp/docbook/DocBook.UI.Page.cls?KEY=BGNT_MAKEGATEWAY
Advanced knowledge

Debugging

Print bindings

You can easily print the bindings registered in a container; in Kodein-DI 7.x the container exposes its binding tree for inspection (for example, by printing di.container.tree.bindings).

Erased parameterized generic types

When using the erased function, or when erased is used by default (either by choice on the JVM or by necessity elsewhere), you cannot represent a generic type. For example, erased<Set<String>> will yield a TypeToken representing Set<*>. To truly represent a parameterized type, Kodein provides a generic type-token function that preserves the type parameters (the original example here represented a Triple<Int, String, Int>).

Bind the same type to different factories

Yeah, when I said earlier that "you can have multiple bindings of the same type, as long as they are bound with different tags", I lied. Because each binding is actually a factory, the binding tuples are not ([BindType], [Tag]) but actually ([ContextType], [BindType], [ArgType], [Tag]) (note that providers and singletons are bound as ([BindType], Unit, [Tag])). This means that any combination of these pieces of information can be bound to its own factory, which in turn means that you can bind the same type, without tagging, to different factories.

Hack the container!
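To make the factory-tuple idea concrete, here is a minimal Kotlin sketch. The Connection class and the port/URL values are hypothetical, invented only for illustration, and the syntax assumes the classic Kodein-DI 7.x bind DSL:

import org.kodein.di.*

class Connection(val spec: String)

val di = DI {
    // Same bound type, no tag, but different argument types:
    // each (context, type, argument, tag) combination gets its own binding.
    bind<Connection>() with factory { port: Int -> Connection("localhost:$port") }
    bind<Connection>() with factory { url: String -> Connection(url) }
    bind<Connection>() with provider { Connection("default") } // a provider is a factory of Unit
}

fun main() {
    val byPort: Connection by di.instance(arg = 8080)
    val byUrl: Connection by di.instance(arg = "db.example.com:5432")
    val plain: Connection by di.instance()
    println(byPort.spec) // localhost:8080
    println(byUrl.spec)  // db.example.com:5432
    println(plain.spec)  // default
}

Retrieval selects the binding by the argument's type, so no tags are needed as long as the argument types differ.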
https://docs.kodein.org/kodein-di/7.1/core/advanced.html
Bookcover: Generating PDFs for book covers

Generating your book covers with code has a few advantages:

- If your book's page count changes, you can re-run the program and your cover will be adjusted automatically.
- You can keep your book's cover under version control and track changes more easily (useful when the book itself is also a program).
- Your book cover has access to a complete programming environment. Whether it's getting values from a SQL database or using procedurally generated fractal art (Faber Finds generative book covers, anyone?): if it can be done with code, it can be very easily placed on your book's cover.

This library/language does nothing very magical; it's just a thin layer on top of the pict and racket/draw libraries that come with Racket. What it does do is abstract away almost all of the tedious math and setup involved with using those libraries for this particular purpose. I've used it successfully on one book, so I'm reasonably sure it will work for you too.

If you're new to Racket, you would do very well to read Quick: An Introduction to Racket with Pictures first. Not only will you learn the basics of writing Racket programs, but many of the functions used in that tutorial are the same ones you'll be snapping together with the ones in this library to create your book covers.

NB: This is my first ever Racket module, and it has had very little testing outside of my own small projects. For these reasons, it should for now be considered unstable: the functions it provides may change. After I've gathered and incorporated some feedback, I will solidify things a bit and make a 1.0.0 version.

The source for this package is at.
https://docs.racket-lang.org/bookcover/index.html
How Retool works

Retool has four fundamental pieces:

- Connect your data sources, like PostgreSQL, Salesforce, Firebase, and 20+ more
- Build your queries and logic in SQL or JavaScript
- Connect your queries and logic to prebuilt components like tables, text inputs, and buttons
- Organize and connect your components into an app

Retool isn't just a front end, though – we take care of a lot of the pesky logic that internal tools tend to require, like scheduling queries, updating and writing data, and triggers. Retool apps are easy to share with your teammates and stakeholders, and we offer granular access management and audit logs to keep things secure.

Getting started resources

A few resources to help you get started:

- A 5-minute quickstart to get your app up and running
- Connecting your data to Retool
- An overview of writing queries and JavaScript in Retool
- Components / API reference
- A video tutorial: building your first app in Retool

To see what other people are building and collaborate on problems, head over to the Retool community.

Integrations and support

Retool integrates with most major data sources that you'd need to build your internal tools, from databases like PostgreSQL and MySQL, to internal REST APIs and GraphQL, as well as external APIs like Stripe, Firebase, and GitHub. Check out our Integrations Overview for more information.

Our docs won't be able to cover everything, so if you have any issues, don't hesitate to reach out to support through Intercom on the bottom right, and engage with the community to see what other Retool users are building and troubleshooting.
https://docs.retool.com/docs/whats-retool
Mapping projects

Needed for configuration of mappings for the "Issue is a task" task mapping strategy.

This configuration substep of configuration step (3) (provide mappings for the selected task mapping strategy) maps your Jira-project (or Jira Issue Filter) to the FreshBooks-project.

The Jira FreshBooks Connector plugin's main responsibility is to synchronise Jira workflow activities with the ones in FreshBooks. In order to do this, the plugin has to know what project (or set of issues, defined with a Jira Issue Filter) in Jira corresponds to what project in FreshBooks. The Jira-project (or Jira Issue Filter) to FreshBooks-project mapping is a collection of project-to-project correspondences. Each map in this mapping defines what issues are going to be sent to FreshBooks (or updated, if they already exist there). It is important to know that for every project map defined in the project mapping, the Jira FreshBooks Connector will find all the issues and, for each of them, create a task in the mapped FreshBooks-project. If a worklog is removed from an issue, then the corresponding time entry will be removed from each task in the FreshBooks-project.

To add a project map:

- Type the name of your Jira-project (or the name of a Jira Issue Filter for your Jira-project) that you want to synchronise with the FreshBooks-project into the corresponding field (Figure 18)
- Select the FreshBooks-project that the previously selected Jira-project (or Jira Issue Filter) should be synchronised with, inside the corresponding selector (combobox) (Figure 19)
- Press the Add button (Figure 20)

Figure 3 (Log in as an administrator)
Figure 4 (Go to the Administration section)
Figure 9 (Go to the Plugins menu, choose the FreshBooks Configuration menu item)
Figure 18 (Type the name of your Jira-project into the corresponding field)
Figure 19 (Select the FreshBooks-project inside the corresponding selector)
Figure 20 (Press the Add button)

After a map is added, a new line appears, giving you the ability to map other projects. You can:

- Map a single Jira-project to a single FreshBooks-project (Figures 18-20)
- Map multiple Jira-projects to a single FreshBooks-project (Figures 21-23)
- Map a single Jira-project to multiple FreshBooks-projects (Figures 24-26)

Altogether, this gives you the ability to map multiple Jira-projects to multiple FreshBooks-projects.

Figure 21 (Type the name of your Jira-project into the corresponding field)
Figure 22 (Select the FreshBooks-project inside the corresponding selector)
Figure 23 (Press the Add button)
Figure 24 (Type the name of your Jira-project into the corresponding field)
Figure 25 (Select the FreshBooks-project inside the corresponding selector)
Figure 26 (Press the Add button)

So now the Jira FreshBooks Connector plugin knows what project (or set of issues, defined with a Jira Issue Filter) in Jira corresponds to what project in FreshBooks. This is almost all the information needed to start synchronisation.

3.3. Mapping Jira Issue Filters to FreshBooks-projects

Jira Issue Filters are mapped exactly as Jira-projects are.
https://docs.rozdoum.com/display/FCFJD/Mapping+projects