content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
After installation, you might want to create a new Apache certificate. By default, CloudBees Build Acceleration generates a temporary self-signed certificate during installation. This certificate is used whenever a browser makes an HTTPS connection to the Apache server. During CloudBees Build Acceleration installation, Apache is configured to look for a private key file named $HOSTNAME.key and a certificate named $HOSTNAME.crt. These files are in $DATADIR/apache/conf/ssl.key and $DATADIR/apache/conf/ssl.crt respectively. $DATADIR is the directory where CloudBees Build Acceleration data files were installed. On Windows, these files are in C:\ECloud\i686_win32. Because the certificate is self-signed, browsers complain that it is an untrusted certificate. Most organizations will want to generate a new certificate signed by a recognized certificate authority (CA) to avoid the browser warnings. The following list summarizes the process: Generate a new certificate and private key Send the request to the CA Install the signed certificate Generating a new certificate and private key Locate opensslbinary and openssl.cnfin $DATADIR/64/bin. Copy openssl.cnfinto a temporary directory. Generate a new private key and certificate. Enter the appropriate information for your organization when prompted. The most important field is the Common Name, which is the fully qualified name of the host running the Apache server where you want the certificate. This name must match the host portion of the URL used to connect to the Cluster Manager. $ openssl req -config openssl.cnf -new -out $HOSTNAME.csrLoading 'screen' into random state - doneGenerating remain blank.-----Country Name (2 letter code) []:USState or Province Name (full name) []:CaliforniaLocality Name (eg, city) []:SunnyvaleOrganization Name (eg, company) []:CloudBeesOrganizational Unit Name (eg, section) []:Common Name (eg, your websites domain name) []:myserver.mycompany.com Please enter the following 'extra' attributes to be sent with your certificate request.A challenge password []: This information generates a new private key in privkey.pem and a signing request in $HOSTNAME.csr. If you want to use the private key without having to enter a challenge password each time the server starts, issue the following command to strip out the password: $ openssl rsa -in privkey.pem -out $HOSTNAME.keyEnter pass phrase for privkey.pem:writing RSA key This creates a PEM encoded private key file named $HOSTNAME.key without the password. Sending the request to the CA The $HOSTNAME.csr file generated in the previous section is a request for a certificate authority to sign the certificate. When you send this file to the CA, the CA verifies the information inside and sends you a signed certificate in response. The signed certificate includes the original certificate and the signature of the CA. Name the signed certificate ' $HOSTNAME.crt '. Installing the key and signed certificate Copy the two files, $HOSTNAME.keyand $HOSTNAME.crt, into $DATADIR/apache/conf/ssl.keyand $DATADIR/apache/conf/ssl.crt. Restart the Apache server. Ensure the $HOSTNAME.key file is readable only by the user running the Apache server process. Delete the contents of the temporary directory you created because this directory contains the cryptographic information used to generate the key.
https://docs.cloudbees.com/docs/cloudbees-build-acceleration/latest/configuration-guide/installing-apache-certificate
2021-10-16T00:03:15
CC-MAIN-2021-43
1634323583087.95
[]
docs.cloudbees.com
Troubleshoot error 403 Access Denied Error This article applies to: - OS: All supported operating systems - Product edition: inSync Cloud, Phoenix, and Druva Cloud Platform (DCP) Problem description When trying to access any of the DCP controls, the below error message is displayed. "403, Access Denied” “You don't have permission to access the page you are trying to view. Cause The administrators trying to access DCP controls does not have the Druva cloud administrator rights. Resolution Provide the DCP cloud administrator permissions to the user using the following steps: - Login to the DCP console. - Click the Druva logo and from the menu, click Manage Administrators. - Select the administrator and click Edit. - Change the role to Druva Cloud Administrator. - Click Save. The user's role is now updated to Druva cloud administrator and can perform the administrator tasks. See also Manage Druva Administrators
https://docs.druva.com/Knowledge_Base/Druva_Cloud_Platform_(DCP)_Console/Troubleshooting/Troubleshoot_error_403_Access_Denied_Error
2021-10-15T23:00:14
CC-MAIN-2021-43
1634323583087.95
[array(['https://docs.druva.com/@api/deki/files/51593/DCPAccessDeniedError.png?revision=1', 'DCPAccessDeniedError.png'], dtype=object) ]
docs.druva.com
Before creating a VMDK resource, register the ESXi host information. Follow the steps below: - Execute the following command on the console screen. # /opt/LifeKeeper/lkadm/subsys/scsi/vmdk/bin/esxi_register -a <ESXi host name> - When you execute the command, you will be asked for the username and password. Enter the username and password used to log in to the - - A list of registered hosts will be provided. - Register the ESXi host in the same way on all nodes in the cluster. For details of the esxi_register command, see VMDK Maintenance. Feedback Thanks for your feedback. Post your comment on this topic.
https://docs.us.sios.com/spslinux/9.5.1/en/topic/register-esxi-host
2021-10-16T00:19:03
CC-MAIN-2021-43
1634323583087.95
[]
docs.us.sios.com
WooCommerce Store Catalog PDF Download allows shop owners to attach their own ready-made PDF catalog of products for customers to download. Customers can also download page-specific PDFs or a single product in detail. This feature is useful for customers that need to do offline viewing or printing or save info for future reference. > Settings > Products > Store Catalog PDF. Custom PDF ↑ Back to top Upload a pre-made PDF catalog of your site/products and then allow customers to download it. It can be inserted anywhere you like with a shortcode. [wc-store-catalog-pdf]. Please review the shortcode section below for more information. Company Logo ↑ Back to top Upload a custom logo that will be displayed at the top of the PDF document. This logo will also serve as a link back to your site when clicked. Be sure the setting Show Header is enabled to display the logo. PDF Header ↑ Back to top Enable to show verbiage/text describing your company or other information in the header of the PDF. PDF Footer ↑ Back to top Enable to show verbiage/text, such as copyright or footnotes in the footer of the PDF. PDF Layout Format ↑ Back to top Choose which PDF layout customers will get when downloading the PDF. Grid format shows less information but displays more on a page; List format shows more information but less on a page. Download Link Label ↑ Back to top Set what the download button text will be. Save changes. Usage ↑ Back to top The Catalog PDF download button displays on product archive pages and on product single detail pages. Shortcode ↑ Back to top Using the shortcode [wc-store-catalog-pdf] to display a custom PDF download button anywhere shortcodes are allowed. Be sure you have uploaded a custom PDF in settings to use this shortcode. Optionally, you can set the download text to whatever you like with the shortcode parameter of link_label="Custom PDF Download" Customization ↑ Back to top Template Override ↑ Back to top There are default templates you can use that generate the PDF. If you would like to modify it to your needs, you can do that by copying the template file you want to modify from This way any changes you make are not overwritten when the plugin is updated. However you should take that one step further and create a child theme and put the template file in it so your changes are not lost when updating software. Learn how to create child themes at: Create Child Theme. Hooks ↑ Back to top This plugin comes with hooks you can use to manipulate for your requirements. While this section is mostly for developers, you can learn how to use hooks here Hooks API. Below is a list of hooks and a short description of what they do. - do_action( ‘wc_store_catalog_pdf_download_before_product’, $product ) – fires right before the output of the product information in all layouts. - do_action( ‘wc_store_catalog_pdf_download_after_product’, $product ) – fires right after the output of the product information in all layouts. - do_action( ‘wc_store_catalog_pdf_download_product_attr’, $product ) – fires after the product’s meta has been outputted. - apply_filters( ‘wc_store_catalog_pdf_download_orientation’, string ) – sets orientation of the generated PDF ( portrait / landscape ). - apply_filters( ‘wc_store_catalog_pdf_download_size’, string ) – sets the size of the PDF ( letter / A4 / legal ). - apply_filters( ‘wc_store_catalog_pdf_download_filename’, string ) – sets the filename that is generated. - apply_filters( ‘wc_store_catalog_pdf_download_view_only’, string ) – for custom PDF download button, this sets whether the PDF will open up for view only or download it straight away. - apply_filters( ‘wc_store_catalog_pdf_download_grid_image_size’, array ) – sets the image size for grid layout. - apply_filters( ‘wc_store_catalog_pdf_download_grid_columns’, int ) – sets the number of columns you want displayed in grid layout. - apply_filters( ‘wc_store_catalog_pdf_download_show_product_image’, html ) – outputs the product image. - apply_filters( ‘wc_store_catalog_pdf_download_show_product_title’, html ) – outputs the product title. - apply_filters( ‘wc_store_catalog_pdf_download_show_product_price’, html ) – outputs the product price. - apply_filters( ‘wc_store_catalog_pdf_download_list_image_size’, array ) – sets the image size for list layout. - apply_filters( ‘wc_store_catalog_pdf_download_description’, html ) – outputs the product description. - apply_filters( ‘wc_store_catalog_pdf_download_product_meta’, html ) – outputs the product meta. - apply_filters( ‘wc_store_catalog_pdf_download_single_image_size’, array ) – sets the image size for single layout. Languages ↑ Back to top Fully translatable, the POT is located within the plugin’s folder/languages. Place your translated MO file in Troubleshooting ↑ Back to top Get the system status by going to WooCommerce >System Status. Check the status of your server for a green “yes.” Frequently Asked Questions ↑ Back to top Does WooCommerce Store Catalog PDF Download work with variable products? ↑ Back to top Yes, variable products will show all available attributes. Does this extension work with Composite Products? ↑ Back to top This does not work with Composite Products (separate purchase) at this time. Does this generate a PDF of the entire store from the Admin dashboard to make a ready-made catalog? ↑ Back to top Not at this time. You can vote for this feature at the Ideas Forum. Questions and Feedback ↑ Back to top Have a question before you buy? Please fill out this pre-sales form. Already purchased and need some assistance? Get in touch with a Happiness Engineer via the Help Desk.
https://docs.woocommerce.com/document/woocommerce-store-catalog-pdf-download/
2021-10-15T22:31:04
CC-MAIN-2021-43
1634323583087.95
[array(['https://docs.woocommerce.com/wp-content/uploads/2015/03/storecatalogpdf-setup.png', None], dtype=object) array(['http://docs.woocommerce.com/wp-content/uploads/2015/02/store-catalog-pdf-download-button.jpg', 'store-catalog-pdf-download-button'], dtype=object) array(['http://docs.woocommerce.com/wp-content/uploads/2015/02/store-catalog-pdf-download-single-button.jpg', 'store-catalog-pdf-download-button'], dtype=object) array(['http://docs.woocommerce.com/wp-content/uploads/2015/02/store-catalog-pdf-download-custom-button.jpg', 'store-catalog-pdf-download-button'], dtype=object) ]
docs.woocommerce.com
Settings Overview for Advertising > Settings This is the Settings Overview for the Advertising > Settings page. You need to have the Advertising extension installed and activated in order to access these settings. Advertising Settings - Cart - Choose the cart to use for payments. We strongly recommend using GetPaid. - Dashboard Page - Choose the Advertisers' Dashboard page. - Tracking Link Base Slug - Specify the slug variable for tracking click links. For example: click so tracking link will be[ID]/ - Delete Data on Uninstall - If checked, all plugin data will be deleted upon uninstallation. - Paid Ads Status - Select the status to assign to an ad after it has been paid for. You can choose to automatically publish it, or save it as pending for manual review later on. - HTML Ads - Choose whether or not to allow HTML ads.
https://docs.wpgetpaid.com/article/628-settings-overview-for-advertising-settings
2021-10-16T00:43:56
CC-MAIN-2021-43
1634323583087.95
[]
docs.wpgetpaid.com
Alerts enable you to receive notifications on changes or special business cases in your data. Alerts can be used in two business cases: - To monitor the system (operational) - To indicate a business change When you define an alert, you will receive an email containing a set of reports whenever a condition you define applies, according to the schedule you specify. The email graphically displays the report charts and provides an attached CSV file containing the raw data of the reports. Defining an alert To define a new alert: - From the main menu, choose Project – Alerts. This shows a list of all previously defined alerts in the project. From here you can also update and delete existing alerts. - Click the Add + button at the top right corner. - Define your alert: - Name: Give your alert a name – it will be included in the email subject and body. - CQL: add a CQL query that will trigger the alert. The query must return a single numeric value. For example, the default query returns the number of events received during the previous day: SELECT count(*) FROM Cooladata WHERE date_range (yesterday) - Condition: define a limit value for the query response, above/below which an alert will be sent. For example, the default condition will send an alert if the query returns a result of less than 1, i.e. if no events were received during the previous day. Notice that the alert will refer to null values as 0. - Check Condition: you can check the query and condition by clicking Check Condition – the query response and trigger result will be shown (if the query is valid). - Frequency: specify the schedule for checking/sending the alert: - Daily: specify the hour at which the report should be sent (UTC) - Weekly: specify the day of the week and hour at which the report should be sent (UTC) - Monthly: specify the day of the month and hour at which the report should be sent (UTC) - CRON Expression: see for a description of CRON expression syntax. - Recipients List: select one or more CoolaData users and/or enter any email address (can also add emails not registered to CoolaData). Add your own email as a recipient to get a copy of the alert when it is sent. - Text: you can add a rich HTML text to the email body. This is not mandatory. - Reports to include: Select one or more reports from the project to include in the email body. This is not mandatory. - Click Save. - Run Now: Once saved, you can also run the alert manually by clicking Run Now – if the condition is triggered, the alert email will be sent to the list of recipients you defined. Alert Builder Another way to define alerts is to use CoolaData’s alert builder. To access the builder, first select the ‘Alerts’ tab in your settings menu. After selecting the ‘Alerts’ tab you will be transferred to the alert list, where your different alerts will be concentrated. On the top right-hand corner of the list you can find the following menu, from which you can access the alert builder: The alert builder is comprised of two sections: The top section defines the alert’s trigger settings. You may use your existing KPIs, in addition to condition of your choice to choose the alert’s trigger. In this example the alert will be triggered if the user count has increased by 10% compared to the previous day. The button ‘Check Condition’ runs a test in which the trigger is checked, when pressed message will appear next to it detailing whether the condition is met at the time of the manual check. The bottom section defines the alert’s running settings. If the condition set in the top section is met, an alert will be sent out via email. This section allows you to define the sent alert’s settings, as well as it’s checking frequency. Recipients, text and email attached reports are also defined in this section. In this example the test will run daily, and if it’s conditions are met an alert will be sent to ‘[email protected]’, with the text ‘User count increased by 10%’. In this case there will be no added report, but you may choose to add any of your predefined reports to the alert message. If you are interested in viewing the CQL code components of your query, you may select the ‘Options’ tab on the top right-hand corner of your screen, under which you will find the ‘Show CQL’ option. After naming and defining your alert you will be able to save it. Once saved an alert will run a trigger check according to the selected frequency. You may also select the ‘Manual’ frequency in order to save an alert without running it on a scheduled time. Alert History To see when an alert was last run or sent, select History from the list item menu. Here you can also see which reports were sent and the number of recipients. Click each row to see more details.
https://docs.cooladata.com/alerts/
2021-10-15T23:18:56
CC-MAIN-2021-43
1634323583087.95
[array(['https://docs.cooladata.com/wp-content/uploads/2018/06/alert-114x300.png', None], dtype=object) array(['https://docs.cooladata.com/wp-content/uploads/2018/06/alert-2-300x165.png', None], dtype=object) array(['https://docs.cooladata.com/wp-content/uploads/2018/06/alerts-3-1.png', None], dtype=object) ]
docs.cooladata.com
concat The Concat tag is a utility tag that can be used to concatenate several values together into a single variable. For example, suppose we have two variables 'first_name' and 'last_name'. <cms:set <cms:set To set a variable, 'welcome_message' to 'Hello John Doe! We welcome you!', using both the variables, we can do either this - <cms:set or use concat as follows - <cms:set Here we supply concat with all parts of the string as unnamed parameters separated by spaces (i.e. 'Hello ', first_name, ' ', last_name, and '! We welcome you!' with space between each as separator) and concat simply returns back the concatenated string. If many values are supplied to concat, the code sometimes becomes a little difficult to comprehend (as might be the case in the snippet above) because the only demarcation between the parameters is the space. In such cases, we can try naming the parameters (any arbitrary names can be used). Thus the above snippet could be written as - <cms:set Hopefully that should make the snippet more legible. One benefit of concat over the first method is that it avoids using '<cms:show />' with all variables used within the string. Another is that we can use '\n' and '\t' for inserting newline and tab characters in the string. For example - <cms:set msg = "<cms:concat 'item_name: ' pp_item_name '\n' 'item_number: ' pp_item_number '\n' 'quantity: ' pp_quantity />" /> In the snippet above, 'msg' variable is being set in response to a successful PayPal transaction and will then be emailed. Parameters Concat takes any number of unnamed parameters (either literal strings or variables) and returns back a single string containing the concatenated values. Variables This tag does not set any variables of its own.
https://docs.couchcms.com/tags-reference/concat.html
2021-10-15T23:28:32
CC-MAIN-2021-43
1634323583087.95
[]
docs.couchcms.com
Share User-ID Mappings Across Virtual Systems To share IP address-to-username mappings across virtual systems, assign a virtual system as a User-ID hub. To simplify User-ID™ source configuration when you have multiple virtual systems, configure the User-ID sources on a single virtual system to share IP address-to-username mappings with all other virtual systems on the firewall. Configuring a single virtual system as a User-ID hub simplifies user mapping by eliminating the need to configure the sources on multiple virtual systems, especially if a user’s traffic will pass through multiple virtual systems based on the resources the user is trying to access (for example, in an academic networking environment where a student will be accessing different departments whose traffic is managed by different virtual systems). To map the user, the firewall uses the mapping table on the local virtual system and applies the policy for that user. If the firewall does not find the mapping for a user on the virtual system where that user’s traffic originated, the firewall queries the hub to fetch the IP address-to-username information for that user. If the firewall locates the mapping on both the User-ID hub and the local virtual system, the firewall uses the mapping it learns locally. After you configure the User-ID hub, the virtual system can use the mapping table on the User-ID hub when it needs to identify a user for user-based policy enforcement or to display the username in a log or report but the source is not available locally. When you select a hub, the firewall retains the mappings on other virtual systems so we recommend consolidating the User-ID sources on the hub. However, if you don’t want to share mappings from a specific source, you can configure an individual virtual system to perform user mapping. - Assign the virtual system as a User-ID hub. - Selectand then select the virtual system where you consolidated your User-ID sources.DeviceVirtual Systems - On theResourcetab,Make this vsys a User-ID data huband clickYesto confirm. Then clickOK. - Consolidate your User-ID sources and migrate them to the virtual system that you want to use as a User-ID hub.This consolidates the User-ID configuration for operational simplicity. By configuring the hub to monitor servers and connect to agents that were previously monitored by other virtual systems, the hub collects the user mapping information instead of having each virtual system collect it independently. If you don’t want to share mappings from specific virtual systems, configure those mappings on a virtual system that will not be used as the hub. - Remove any sources that are unnecessary or outdated. - Identify all configurations for your Windows-based or integrated agents and any sources that send user mappings using the XML API and copy them to the virtual system you want to use as a User-ID hub.On the hub, you can configure any User-ID source that is currently configured on a virtual system. However, IP address-and-port-to-username mapping information from Terminal Server agents and group mappings are not shared between the User-ID hub and the connected virtual systems. - Specify the subnetworks that User-ID should include in or exclude from mapping. - On all other virtual systems, remove any sources that are on the User-ID hub. - Committhe changes to enable the User-ID hub and begin collecting mappings for the consolidated sources. - Confirm the User-ID hub is mapping the users. - Use theshow user ip-user-mapping allcommand to show the IP address-to-username mappings and which virtual system provides the mappings. - Use theshow user user-id-agent statisticscommand to show which virtual system is serving as the User-ID hub. Recommended For You Recommended Videos Recommended videos not found.
https://docs.paloaltonetworks.com/pan-os/10-0/pan-os-admin/user-id/deploy-user-id-in-a-large-scale-network/share-user-id-mappings-across-vsys.html
2021-10-16T01:00:22
CC-MAIN-2021-43
1634323583087.95
[]
docs.paloaltonetworks.com
Update: SentryOne Document is now SolarWinds Database Mapper (DMR). See the Database Mapper product page to learn more about features and licensing. Want to explore Database Mapper? An interactive demo environment is available without any signup requirements. Overview The Database Mapper Solutions Dashboard allows you to manage the solutions that are uploaded to Database Mapper. Select Solutions to open the Solutions Dashboard. The Solutions Dashboard displays details about the solutions within your environment. Solution Items The Solution Items page displays metadata for the selected solution item in your environment. Task History The Task History page displays the usage history for a selected Solution, or Solution item. Expand a Task to display the individual task steps and their statuses. The Task History page gives you historical insight into the tasks that run on your Database Mapper solutions and solution items. The following details are provided: Task History Actions Documentation Exports The Documentation Exports page displays details about your selected solution's exported documentation. The following details are provided: Documentation Exports Actions Use the Documentation Exports page to download any of your exported documentation versions. Documentation Export window Select the Export Documentation button for the desired solution, and then select Export Documentation to open the Export Documentation window. The following options are available:
https://docs.sentryone.com/help/sentryone-document-solutions-dashboard
2021-10-15T23:44:29
CC-MAIN-2021-43
1634323583087.95
[array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/60a530c68e121cd629ca6ff9/n/dmr-database-mapper-solutions-dashboard-2021.png', 'Database Mapper Solutions Dashboard Version 2020.128 Database Mapper Solutions Dashboard'], dtype=object) array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/60a53170ad121cdf7dff25c7/n/dmr-database-mapper-solution-items-for-solution-2021.png', 'Database Mapper Solution Items page Version 2020.128 Database Mapper Solution Items page'], dtype=object) array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/60a531d0ec161c3013fa5b33/n/dmr-database-mapper-task-history-expand-selection-2021.png', 'Database Mapper Task History expand task Version 2020.128 Database Mapper Task History expand task'], dtype=object) array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/60a5325fec161c3013fa5e1b/n/dmr-database-mapper-documentation-exports-page-2021.png', 'Database Mapper Documentation Export page Version 2020.128 Database Mapper Documentation Export page'], dtype=object) array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/60a53286ec161ce113fa5544/n/dmr-database-mapper-documentation-export-window-2021.png', 'Database Mapper Documentation Export window Version 2020.128 Database Mapper Documentation Export window'], dtype=object) ]
docs.sentryone.com
BindingNavigator The BindingNavigator control in your application will be converted to RadBindingNavigator. The following tables describe which properties and methods are removed and which are replaced with similar equivalents. Standard BindingNavigator and our RadBindingNavigator have different mapping of items' name. That's why the navigation will not work after the conversion process is finished. In order to resolve the issue, you need to change the Name properties of each item manually in the Designer file. Each name should starts with BindingNavigator's name plus item's name. For example: this.bindingNavigator1.Name = "bindingNavigator1"; We strongly recommend you to change the Name properties of each item manually after opening the form at design time. In addition, if there are other items added to the navigator, they will be converted as well, their properties, events and methods are listed here.
https://docs.telerik.com/devtools/winforms/winforms-converter/supported-controls/bindingnavigator
2021-10-15T22:31:47
CC-MAIN-2021-43
1634323583087.95
[]
docs.telerik.com
What’s New¶ The major novelties of all releases since v1.0 are as follows: New in Version 2.1.0:¶ - Ability to merge Databases using ‘+’ as a delimiter: “latest_fastlim” and “official_fastlim” are now written as “latest+fastlim”, and “official+fastlim”. - useSuperseded flag in getExpResults is marked as deprecated, as we now just put superseded results in separate database - DataSets now have an .isCombinableWith function - Alightly extended output of summary printer - Added scan summary (summary.txt) when running over multiple files - Added expandedOutput option to slha-printer - Output for efficiency-map results now reports L, L_max and L_SM - The likelihood is now maximized only for positive values of the signal strength in the computation of L_max - Pythia8 version in xsecComputer updated from 8226 to 8306 - Improved interactive plots - - - added EM results for - ATLAS-SUSY-2017-03 (EWino, WZ), ATLAS-SUSY-2018-06 (EWino, WZ), ATLAS-SUSY-2018-14 (sleptons), CMS-SUSY-14-021 (stops) - created and added THSCPM10 and THSCPM11 EMs for ATLAS-SUSY-2016-32; - - replaced some 8 TeV ATLAS conf notes with the published results - (ATLAS-CONF-2013-007 -> ATLAS-SUSY-2013-09, ATLAS-CONF-2013-061 -> ATLAS-SUSY-2013-18, ATLAS-CONF-2013-089 -> ATLAS-SUSY-2013-20) - corrected off-shell regions of some existing EM-type results (in three 13 TeV and eigth 8 TeV analyses). New in Version 2.0.0:¶ - Introduction of particle class - Introduction of model class (see Basic Input) - Input model can now be defined by an SLHA file with QNUMBERS blocks - Unified treatment of SLHA and LHE input files (see decomposer and LHE-reader) - Decomposition and Experimental Results can now handle lifetime dependent results - Added field “type” to the experimental results in the database - Added (optional) field “intermediateState” to the experimental results in the database - Inclusive branches can now describe inclusive vertices - Added possibility for analysis specific detector size - New missing topologies algorithm and output - Added “latest” and “latest_fastlim” Database abbreviations - Added support for central database server - Small bug fix in likelihood computation - Small fix due to an API change in pyhf 0.6 - Changes in output: width values added, coverage groups and others (see output description for details) - Added option for signal strength multipliers in cross section calculator - Small bug fixes in models New in Version 1.2.4:¶ - added pyhf support - pickle path bug fix - bug fix for parallel xseccomputers - Introduced the SMODELS_CACHEDIR environment variable to allow for a different location of the cached database file - fixed dataId bug in datasets New in Version 1.2.3:¶ - database updated with results from more than 20 new analyses - server for databases is now smodels.github.io, not smodels.hephy.at - small bug fix for displaced topologies - small fix in slha printer, r_expected was r_observed - Downloaded database files now stored in $HOME/.cache/smodels New in Version 1.2.2:¶ - Updated official database, added T3GQ eff maps and a few ATLAS 13 TeV results, see github database release page - Database “official” now refers to a database without fastlim results, “official_fastlim”, to the official database with fastlim - List displaced signatures in missing topologies - Improved description about lifetime reweighting in doc - Fix in cluster for asymmetric masses - Small improvements in the interactive plots tool New in Version 1.2.1:¶ - Fix in particleNames.py for non-MSSM models - Fixed the marginalize recipe - Fixed the T2bbWWoff 44 signal regions plots in ConfrontPredictions in manual New in Version 1.2.0:¶ - Decomposition and experimental results can include non-MET BSM final states (e.g. heavy stable charged particles) - Added lifetime reweighting at decomposition for meta-stable particles - Added finalState property for Elements - Introduction of inclusive simplified models - Inclusion of HSCP and R-hadron results in the database New in Version 1.1.3:¶ - Support for covariance matrices and combination of signal regions (see combineSR in parameters file) - New plotting tool added to smodelsTools (see Interactive Plots Maker) - Path to particles.py can now be specified in parameters.ini file (see model in parameters file) - Wildcards allowed when selecting analyses, datasets, txnames (see analyses, txnames and dataselector in parameters file) - Option to show individual contribution from topologies to total theory prediction (see addTxWeights in parameters file) - URLs are allowed as database paths (see path in parameters file) - Python default changed from python2 to python3 - Fixed lastUpdate bug, now giving correct date - Changes in pickling (e.g. subpickling, removing redundant zeroes) - Added fixpermissions to smodelsTools.py, for system-wide installs (see Files Permissions Fixer) - Fixed small issue with pair production of even particles - Moved the code documentation to the manual - Added option for installing within the source folder New in Version 1.1.1:¶ - C++ Interface - Support for pythia8 (see Cross Section Calculator) - improved binary database - automated SLHA and LHE file detection - Fix and improvements for missing topologies - Added SLHA-type output - Small improvements in interpolation and clustering New in Version 1.1.0:¶ - the inclusion of efficiency maps (see EM-type results) - a new and more flexible database format (see Database structure) - inclusion of likelihood and \(\chi^2\) calculation for EM-type results (see likelihood calculation) - extended information on the topology coverage - inclusion of a database broswer tool for easy access to the information stored in the database (see database browser) - the database now supports also a more efficient binary format - performance improvement for the decomposition of the input model - inclusion of new simplified results to the database (including a few 13 TeV results) - Fastlim efficiency maps can now also be used in SModelS
https://smodels.readthedocs.io/en/latest/ReleaseUpdate.html
2021-10-15T22:44:27
CC-MAIN-2021-43
1634323583087.95
[]
smodels.readthedocs.io
This guide shows you how to install CloudBees CI on modern cloud platforms on OpenShift. To perform the installation, you should be knowledgeable in OpenShift, Helm, OpenShift, be sure to review the Learn and Plan stages from Onboarding for CloudBees CI on modern cloud platforms. Using Helm to install CloudBees CI provides the following advantages: It lets you customize the CloudBees CI installation without resorting to error-prone modification of the CloudBees CI YAML files. It provides a history of changes applied to the CloudBees CI release. It provides a simpler and more robust rollback option. It provides a straightforward method of creating custom environment deployments of CloudBees CI. For example: development, staging, and production CloudBees CI environments. Things you should know before using Helm Using the Internet to download the Helm chart and the chart dependencies is the preferred method, but you can install a chart from a local archive that has been downloaded beforehand. About Helm charts A Helm chart is a package that defines a Kubernetes application and its dependencies. The chart is a combination of YAML templates for Kubernetes resources, such as pods, replica sets, deployments or ingresses. It also provides a values file that populates default configuration values for the templates. Instead of manually editing files, Helm manages the process by merging the templates files and the values into a custom YAML file. It then applies and tracks the deployment of the YAML file on the Kubernetes cluster. Working with Helm in an airgapped environment When working in an airgapped environment, the following method can be applied: Use helm fetchto download a Helm chart from the chart repository. Copy the Helm chart archive over to the airgapped environment. Use helm install [NAME] [CHART] [flags]where [CHART]refers to the Helm chart archive to install the Helm chart in the airgapped Kubernetes cluster.
https://docs.cloudbees.com/docs/cloudbees-ci/2.249.2.3/openshift-install-guide/
2021-10-15T23:02:00
CC-MAIN-2021-43
1634323583087.95
[]
docs.cloudbees.com
Clients can easily and quickly book a meeting with you using your Telloe event widget. If you haven't already setup your Telloe event widget then follow Your Telloe Event Widget. 1. Clients booking with your Telloe widget will find the widget in the bottom left of your website. 1. Clients will select their timezone from the drop down. 2. Clients will then select their desired meeting time from your availabilities. 1. Clients can then Enable/Disable recurring bookings, for more information read the third section of Making A Booking On Behalf Of Clients. 2. Clients will then select their booking type from your desired booking types, you can change your available booking types following Your Telloe Event Widget. 1. If clients wish to book as a guest without creating an account, they will be required to enter their first name, last name and email. 2. If your client already has an account with Telloe, they will be required to enter their email and password. Alternatively, clients can also login using Facebook or Google. If everything goes well, your client will receive a message confirming their booking and an email will be sent to their email address where they can add the booking to their Telloe, Google, Outlook, iCal or Yahoo calendar. Your new booking will automatically be added to your Telloe calendar or Google and Outlook calendar if they have been integrated. To integrate your Google or Outlook calendar follow Integrating Google Calendar and Outlook. You will also receive an email informing you of your new booking and all the details you need.
https://docs.telloe.com/bookings/making-a-booking-with-the-booking-widget
2021-10-16T00:39:05
CC-MAIN-2021-43
1634323583087.95
[]
docs.telloe.com
Settings Overview for Settings > Taxes > EU VAT Settings This is the settings overview for GetPaid > Settings > Taxes > EU VAT Settings page. Your Company Details - Your Company Name - The name of your company. - Your VAT Number - Specify your VAT number. Apply VAT Settings - Enable VAT Rules - Check to apply VAT to consumer sales from IP addresses within the EU, even if the billing address is outside the EU. - Prevent EU B2C Sales - You should enable this option if you are not registered for VAT in the EU. - Same Country Rule - Select how you want to handle VAT charge if sales are in the same country as the base country. - Disable VAT Fields - Disable VAT fields if GetPaid is being used for GST. - MaxMind License Key - Input the license key for the MaxMind Geolocation service. - IP Country Lookup - Select the option that should be used to determine the country from the IP address of the user. - Enable IP Country as Default - Show the country of the users IP as the default country, otherwise the site default country will be used. VIES Validation - Disable VIES VAT ID Check - Disable VAT number validation by the EU VIES system. - Disable VIES Name Check - Disable company name validation by the EU VIES system. - Disable Basic Checks - Disable basic checks for the correct format of VAT number (not recommended).
https://docs.wpgetpaid.com/article/389-settings-overview-for-settings-taxes-eu-vat-settings
2021-10-15T22:34:16
CC-MAIN-2021-43
1634323583087.95
[]
docs.wpgetpaid.com
Installation¶ This chapter reviews a couple of different ways of installing PyGYRE on your system. Pre-requisites¶ PyGYRE requires that the following Python packages are already present: If you opt to install from PyPI (below), then these pre-requisites should be taken care of automatically. Installing from PyPI¶ To install PyGYRE from the Python Package Index (PyPI), use the pip command: pip install pygyre If PyGYRE is already installed, you can upgrade to a more-recent version via pip install --upgrade pygyre Installing from Source¶ To install PyGYRE from source, download the source code and unpack it from the command line using the tar utility: tar xf pygyre-1.1.5.tar.gz Then, change into the source subdirectory of the newly created directory and run the setup.py: script: cd pygyre-1.1.5/source python setup.py install
https://pygyre.readthedocs.io/en/latest/installation.html
2021-10-15T23:27:59
CC-MAIN-2021-43
1634323583087.95
[]
pygyre.readthedocs.io
The graphical user interface. The node. Messages window¶ The Messages window is where all output from nodes ends up; be it errors, warnings, or simple notices. When a node has something to say it will add a single row to the Messages window. This summary line consists of the node’s label in the node column and a summary of the output in the details column. If you click on the arrowhead to the left of the node label the row will expand and show more details. You can right-click anywhere in the Node column and choose Clear. The details for an exception will. Data‘582‘7 (column_names = [‘TEST’, ‘CAR’, ‘PLANE’, ‘TURBINE’])..
https://sympathy-for-data.readthedocs.io/en/latest/src/gui.html
2021-10-16T00:11:58
CC-MAIN-2021-43
1634323583087.95
[]
sympathy-for-data.readthedocs.io
Modifiable layouts and scripts BaseElements ships as a set of three .fmp12 files. If you open the BaseElements files with FileMaker Pro or Pro Advanced, you have full access to modify Layouts and add your own Scripts to the BaseElements_UI file. When you modify layouts, you can add your own versions of layouts, or duplicate layouts and change the formats. If you want to, you can add your own Fields, Buttons or Tabs to any layout. As well, the BaseElements Data file is not locked for access by other tools. So you can also create separate layouts in another file that have Table Occurrences referring to Base Tables in the BE Data file. This allows you to have separate User Interfaces for your BaseElements data. Summary BaseElements main file doesn't allow editing. BaseElements_UI allows you to edit existing Layouts, add new Layouts, and add new Scripts. BaseElements_Data allows you to add new Layouts, add new Scripts and connect another file to the base Tables.
https://docs.baseelements.com/article/351-modifiable-layouts-and-scripts
2021-10-16T00:33:18
CC-MAIN-2021-43
1634323583087.95
[]
docs.baseelements.com
Connect to Ned via Ethernet on Ubuntu¶ Note - This tutorial is working from: - The version v3.0.0 of the ned_ros_stackThe version v3.0.0 of Niryo Studio If you are using a Niryo One, please refer to this tutorial. Using an ethernet cable provides the best connection to use the robot. On Windows, there is no specific task to do to setup the wired connection (magic!). But from a Ubuntu computer you might have some issues with your computer not finding the robot on the network: The Raspberry Pi inside Ned is configured with a static address (169.254.200.200), and if your Ubuntu is configured with DHCP, it might not work. Here’s what to do: Change your wired settings¶ Ned’s IP address). Once you have saved those settings, restart your wired connection and plug the Ethernet cable between your computer and Ned.
https://docs.niryo.com/applications/ned/v1.0.0/en/source/tutorials/setup_connect_ned_ethernet.html
2021-10-15T23:17:56
CC-MAIN-2021-43
1634323583087.95
[array(['../../_images/connect_ethernet_ubuntu.png', 'connect ned ethernet with ubuntu'], dtype=object) array(['../../_images/ned_ethernet_settings_ubuntu.png', 'Ethernet settings'], dtype=object) ]
docs.niryo.com
To edit your event type availabilities, first navigate to your Telloe dashboard. 1. Click Event Types. 2. To edit an existing event type's availabilities click the gear icon in the top right of the desired event. 3. Click edit. 4. To edit your availabilities on a new event type click Add Event Type. 1. Click Availability. 2. Enter your time range of available times - this should generally be your work time. 3. Enter your break time range. 4. Enable/Disable a day to make the entire day unavailable. 5. Click the blue "Save" button.
https://docs.telloe.com/bookings/set-event-type-availability
2021-10-15T22:56:42
CC-MAIN-2021-43
1634323583087.95
[]
docs.telloe.com
Date: Mon, 1 Apr 2019 12:22:46 +0200 From: Polytropon <[email protected]> To: Lowell Gilbert <[email protected]> Cc: [email protected] Subject: Re: eee-dee anyone? Message-ID: <[email protected]> In-Reply-To: <[email protected]> References: <[email protected]> <[email protected]> <[email protected]> <[email protected]> Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help On Sat, 30 Mar 2019 10:56:40 -0400, Lowell Gilbert wrote: > Polytropon <[email protected]> writes: >=20 > > I think you're confusing vi and ex here (which are the same > > executable), but ed is something different (a different program). > > But I think the reason for this confusion is that using ed > > feels like using vi's ex mode or the ex standalone program. :-) >=20 > Yes, definitely. >=20 > Because it's described by POSIX, ed(1) is with us to stay. Because it > has non-trivial differences between POSIX and BSD versions (which have > bitten me in the past), I use sed(1) regardless of whether ed would have > done the job. I suspect that is a common pattern. I think the aspect of POSIX-compliance is one of the main reasons that so many "old-fashioned" programs still exist in default installs of many UNIXes. UNIX books which cover UNIX in general, instead of concentrating on one specific Linux version, still often cover those "legacy tools". Yes, I just checked two: Wolfinger, Christine: Keine Angst vor UNIX. Ein Lehrbuch f=FCr Einsteiger. 5. Auflage. VDI-Verlag. D=FCsseldorf. 1991. ("Don't be afraid of UNIX - a textbook for first-time users") Gulbins, J=FCrgen & Obermayr, Karl: AIX UNIX. System V.4. Begriffe, Konzept= e, Kommandos. Springer-Verlag. Berlin, Heidelberg, New York. 1996. Sidenote: They also cover sh, csh, and ksh. And for further educational purposes, allow me to repeat that ed is the "termonology originator" of the grep command binary: g/re/p; g =3D global action, /re/ =3D regular expression, p =3D print; for every line matching /re/, perform the action "print the line". This is what grep does. Because it is what ed does. --=20 Polytropon Magdeburg, Germany Happy FreeBSD user since 4.0 Andra moi ennepe, Mousa, ... Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=86061+0+/usr/local/www/mailindex/archive/2019/freebsd-questions/20190407.freebsd-questions
2021-10-16T01:05:41
CC-MAIN-2021-43
1634323583087.95
[]
docs.freebsd.org
Element. Style Id Property Definition Important Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here. Gets or sets a user defined value to uniquely identify the element. public string StyleId { get; set; } member this.StyleId : string with get, set Property Value A string uniquely identifying the element. Remarks Use the StyleId property to identify individual elements in your application for identification in ui testing and in theme engines.
https://docs.microsoft.com/en-us/dotnet/api/xamarin.forms.element.styleid?view=xamarin-forms
2021-10-16T00:10:57
CC-MAIN-2021-43
1634323583087.95
[]
docs.microsoft.com
Get Sample Data for PowerPivot.
https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/gg399144(v=sql.105)
2021-10-16T01:28:37
CC-MAIN-2021-43
1634323583087.95
[]
docs.microsoft.com
CreateStreamOverRandomAccessStream function (shcore.h) Creates an IStream around a Windows Runtime IRandomAccessStream object. Syntax HRESULT CreateStreamOverRandomAccessStream( [in] IUnknown *randomAccessStream, [in] REFIID riid, [out] void **ppv ); Parameters [in] randomAccessStream The source IRandomAccessStream. [in] riid A reference to the IID of the interface to retrieve through ppv, typically IID_IStream. This object encapsulates randomAccessStream. [out] ppv CreateRandomAccessStreamOnFile CreateRandomAccessStreamOverStream
https://docs.microsoft.com/es-es/windows/win32/api/shcore/nf-shcore-createstreamoverrandomaccessstream
2021-10-16T01:10:08
CC-MAIN-2021-43
1634323583087.95
[]
docs.microsoft.com
PointCloudStatisticsCalculator Calculates statistics on point cloud components and adds the results as attributes. Typical Uses - Inspecting and analyzing point cloud features - Calculating statistics for use in further operations How does it work? The PointCloudStatisticsCalculator receives point cloud features and calculates selected statistics on each one individually. The results are output on each point cloud feature. Available statistics include: - Minimum - Maximum - Mean - Sum - Range - Standard Deviation - Median - Mode Statistics may be calculated for any or all of these point cloud components: - x - y - z - intensity - color_red - color_green - color_blue - classification - return - number_of_returns - angle - flight_line - scan_direction - point_source_id - posix_time - user_data - gps_time - gps_week - flight_line_edge - normal_x - normal_y - normal_z Statistics are stored as attributes, named <component>.<statisticname>, where <statisticname> will be one of the following: min, max, range, mean, stdev, sum, median, mode. For example, if the median was calculated for component z, then an attribute named z.median would be added to the feature. Examples In this example, we will calculate some statistics on a LiDAR point cloud. The source data has a number of components that we may use. The point cloud is routed into a PointCloudStatisticsCalculator. In the parameters dialog, we select the components of interest and choose which statistics are to be calculated for each one. The output feature has new attributes containing the requested statistics. Note that the attribute names are composed of the component name and statistic type. Usage Notes - To calculate point cloud statistics on a group of point clouds as a single entity, merge them prior to this transformer with the PointCloudCombiner transformer._6<< This table specifies which statistics to calculate for eachStatisticsCalculator on the FME Community. Examples may contain information licensed under the Open Government Licence – Vancouver and/or the Open Government Licence – Canada. Keywords: point "point cloud" cloud PointCloud LiDAR sonar
https://docs.safe.com/fme/html/FME_Desktop_Documentation/FME_Transformers/Transformers/pointcloudstatisticscalculator.htm
2021-10-15T23:13:30
CC-MAIN-2021-43
1634323583087.95
[array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/PointCloudStatisticsCalculatorExample01.png', None], dtype=object) array(['../Resources/Images/PointCloudStatisticsCalculatorExample02.png', None], dtype=object) array(['../Resources/Images/PointCloudStatisticsCalculatorExample03.png', None], dtype=object) array(['../Resources/Images/PointCloudStatisticsCalculatorExample04.png', None], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.safe.com
Planning for Fault Tolerance The goal of a fault-tolerant environment is to ensure that if a hardware component fails, FME Server remains online. The fault tolerant architecture is comprised of redundant FME Servers spread across separate servers. A third-party load balancer is required, which directs incoming traffic to one of the redundant web components. Organizations are expected to maintain the FME Server Database and FME Server System Share (a file system for hosting Repositories and Resources) on their own fault-tolerant servers. This ensures the fault-tolerant FME Server has reliable access to workspaces, repositories, resources, and other items. WARNING: We recommend installing all FME Servers on systems that are synchronized to the same time zone. If time zones differ across FME Servers, unexpected issues may arise, including: Improper timing of Schedule Initiated triggers. Inconsistent or misleading timestamps in log files (accessed from Resources). Note: In a fault tolerant installation of FME Server, the Automations triggers UDP Message Received and Email Received (SMTP) (and corresponding Notification Service UDP Publisher and SMTP Publisher) are not supported. To receive email messages, consider the Email Received (IMAP) trigger instead. To Install a Fault Tolerant System Proceed to Installing a Scalable, Fault-Tolerant FME Server.
https://docs.safe.com/fme/html/FME_Server_Documentation/AdminGuide/Planning-Fault-Tolerance.htm
2021-10-15T23:31:09
CC-MAIN-2021-43
1634323583087.95
[array(['../Resources/Images/AdminGuide/2018-Active-Active-Architecture.png', None], dtype=object) ]
docs.safe.com
To configure the vSphere Replication Cloud Service host, you must register each vSphere Replication Cloud Service appliance to your vCloud Director appliance, resource vCenter Server, and RabbitMQ. If you have more than one vCloud Director instance configured in your vCenter Server lookup service, the vSphere Replication Cloud Service VM registers to the first vCloud Director instance in the lookup service. Procedure - Configure the vSphere Replication Cloud Service Appliance. The cassandra-replication-factorargument in the following command defines the number of data replicas across the Cassandra cluster. A replication factor 4 means that there are four copies of each row, where each copy is on a different node. The replication factor must not exceed the number of nodes in the Cassandra cluster. By default, the following command uses the AMQP settings from vCloud Director. If vCloud Director is not using an SSL port for AMQP, the vcav hcs configureoperation returns an error. You can add the --amqp-port=port-numberargument to override the vCloud Director port and point the AMQP service to an SSL port. The system returns an OKmessage, after the process finishes. - Run the following command to verify that the hcsservice starts successfully.. Note:. You do not need to restart any component for the changes to take effect.
https://docs.vmware.com/en/vCloud-Availability-for-vCloud-Director/1.0.1/com.vmware.vcavcd.install.doc/GUID-BDC6FE03-65CD-4B44-8C42-B67A9BED4E3F.html
2021-10-16T00:11:54
CC-MAIN-2021-43
1634323583087.95
[]
docs.vmware.com
Date: Fri, 29 Jun 2007 09:34:08 -0500 From: Kevin Kramer <[email protected]> To: [email protected] Subject: 7-Current: turn off debugging (kqread?) Message-ID: <[email protected]> Next in thread | Raw E-Mail | Index | Archive | Help I know that debugging is turned on everywhere on 7-Current and I've read UPDATING. But what I can't find is how to turn off debugging. I've tried removing all debugging from the kernel, but it won't build. I can't find any userland debugging notes to turn it off. It is taking literally 5 minutes to login to my host and the same amount of time to run top. Processes get stuck in kqread. Maybe this is not a 7-Current debugging problem, but I need some guidance on where to go or what to look for. Any help appreciated. Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=1552550+0+/usr/local/www/mailindex/archive/2007/freebsd-questions/20070701.freebsd-questions
2021-10-16T01:13:04
CC-MAIN-2021-43
1634323583087.95
[]
docs.freebsd.org
> References: <CAHieY7S9b9F1jndpkR2Drw=GCoBxmEWRs6Ot8MRjjQFH=xmHQQ@mail.gmail.com> <[email protected]> <CAHieY7SSbO+wt68PeFLYDzAtqMnR0kJ3UakOjvLkSMzVA31LbA@mail.gmail.com> <[email protected]> ?? Thanks, -- Alejandro Imass Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=169212+0+/usr/local/www/mailindex/archive/2013/freebsd-questions/20130428.freebsd-questions
2021-10-16T01:00:51
CC-MAIN-2021-43
1634323583087.95
[]
docs.freebsd.org
Editing Subscriptions Important Note: Please be aware that the edits made to subscriptions as outlined here will NOT make any changes on the gateway level (i.e. in your PayPal Account, Stripe Dashboard, etc.). You should be using data from the respective gateway to update subscription info. In MemberPress, you can edit any subscription that appears on the MemberPress > Subscriptions page. Reasons why you would want to do this could include: - You incorrectly imported a Subscription and need to fix it. - There was an issue with the gateway connection that prevented the subscription being setup with the correct gateway ID or other data. In this case you are seeing a subscription ID that looks like mp-sub-xxxxxxxxxxx. - You would like to manually move a current subscription and the access paid, to another user (original user will still be billed unless the new user updates their payment info on that subscription through their account page). - Etc. How you go about editing the subscription depends on the subscription type. Editing Automatically Recurring Subscriptions To edit Automatically Recurring Subscriptions, please follow these basic steps: - Navigate to your WordPress Dashboard > MemberPress > Subscriptions > Recurring tab. - Find or search for the subscription you want to edit. - Under the "Subscription" column, hover over the unique Subscription ID. - Click the "Edit" link. - On the "Edit Subscription" page, edit the data you need to edit. - Save your changes by clicking the "Update" link near the bottom of the page. Editing Non-Recurring Subscriptions Please note that a Non-Recurring Subscription is really just a single isolated Transaction or payment. This means that when editing a Non-Recurring Subscription, you are simply just editing a user's Transaction. To edit Non-Recurring Subscriptions, please follow these basic steps: - Navigate to your WordPress Dashboard > MemberPress > Subscriptions > Non-Recurring tab. - Find or search for the subscription/transaction you want to edit. - Under the "Transaction" column, click on the unique transaction ID. - Under the "Transaction" column, hover over the unique transaction ID and click the "Edit" link. - On the "Edit Transaction" page, edit the data you need to edit. - Save your changes by clicking the "Update" link near the bottom of the page.
https://docs.memberpress.com/article/224-editing-subscriptions
2021-10-15T22:58:46
CC-MAIN-2021-43
1634323583087.95
[]
docs.memberpress.com
Difference between revisions of "Home screen" Latest revision as of 07:54, 8 June 2021. Available Items pane The Available Items pane on the right displays items that can be linked from your home. The list contains all entities, plus several additional elements, such as hubs or maps. Select an item and click Add to add it to the menu. Most items can be added multiple times, for example in the case of entities, each entry can point to a different view. - Section - You can add one or more section headers to keep your home screen items organized. Section headers cannot have icons, only text. The text can be also localized. - Search - Add global search to your home screen. Double-click the menu item to configure where to search and how to display the results. Images pane The Images pane on the right displays the various home screen icons available in your project. Select a home screen item and choose an icon from the Images pane to use the icon. You can even upload new icons. Toolbar functions toolbar buttons allow you to further customize the menu: - Use Move Up and Move Down to reorder menu items. - Click Properties to edit the properties of the menu item. - Click Rename to change the label for the menu item. You can also set tooltip text (displayed in the web version of Resco Mobile CRM when you hover over the item). - Click Remove to delete an item from the menu. - Click Set Startup to automatically open the selected item on application start. - Click Badge to set up notifications about new items. See Badges for more information. - Click Design to modify the looks of the menu item (size, color, fonts, etc.). The following controls are available directly on the menu item (left to right): - Arrows: Drag the item up or down or even away from the menu. - Edit: Close the home screen editor and edit the item instead. - Home: Automatically open the selected item on application start. - Delete: Delete the item from the menu. Properties of items on home You can double-click most items on the home screen to display their properties. In the case of entity views, the properties are organized into multiple tabs: - Public View tab - It displays the views that are enabled/available to users when they open the Home entity item. E.g.,. - Initial Control defines the default way of displaying records when you first open the view. - Auto Refresh allows you to define how fast should the content of view reload in online mode. -. Badges. This feature requires Resco Mobile CRM version 14.1 or later. To enable badges: - Edit an app project in Woodford. - On the home screen, select an item, then click Badge. - Enable auto-refresh and set up how often should the badge be refreshed - Optionally, define a condition if you are only interested in a subset of records. Conditions are defined using the Filter editor user interface. icons, just like any other entity. Swap to the Images pane and select or upload an icon. Change icons Each item on the home screen has an associated icon which is displayed in the app. The icons are managed on the Images pane. - Each project comes with a set of default icons available for the most common entities and other home screen items. - You can override the default icon: Select an item on the home screen, choose an icon, then click Save. - Click Add image on the Images pane to upload a custom icon and use it for the selected home screen item. - Click Remove image on the Images pane to disassociate a custom icon with the selected home screen item and return to its default icon. The icon is not actually deleted, it remains available for later use. - Once you're done updating icons, click Save. Advanced All icons, both default and custom, can be managed in the Images section of your project. - Select Design > Images from the Project menu. - Change the Directory to "Home". - You can filter the image list by typing into the search bar. The default icon names are equal to the label of the item on the home screen. The Images section of your project is an older, less recommended way of managing icons. It allows you to permanently delete icons from your project, potentially resulting in home screen items without any icon displayed in the app. It does offer some also some functions not available with the Images pane, such a bulk changes. See project images for more information. You can add a custom image as a banner to your home screen, for example, your company's logo. - Edit the home screen of your project. - Click Add banner and upload a suitable image. - Click Save and publish the project. When your app users synchronize their apps, the image is displayed above the home screen: The banner is saved as a normal project image. You can also modify it in the Images section of Woodford: go to the "Home" directory and look for a file called "image". Rename home items Home items can be renamed using localization. See how to change the title, subtitle, and icon of an entity on home screen..
https://docs.resco.net/mediawiki/index.php?title=Home_screen&diff=cur&oldid=3722
2021-10-16T00:20:15
CC-MAIN-2021-43
1634323583087.95
[]
docs.resco.net
-LOB type. In this case, the performance may be better because there is no LOB overhead. You may see some performance improvement especially when the data type is used with UDFs. You can use the maxlength together with the INLINE LENGTH options to improve space management for geospatial data by forcing smaller geometries to be stored inline as non-LOB values. This stores the geospatial data within the table row itself, rather than in a separate LOB subtable. Furthermore, by specifying a small inline length for small geometries, Vantage reserves less space in the row, so more columns can be added to the table. Additionally, non-LOB geometries work with some load utilities that do not support LOBs.
https://docs.teradata.com/r/1drvVJp2FpjyrT5V3xs4dA/ZyrYeoBF8yVfsooZ54qQJA
2021-10-16T00:54:38
CC-MAIN-2021-43
1634323583087.95
[]
docs.teradata.com
- Action Analytics - AppExpert Applications and Templates - How AppExpert application works - Download an application template - Import an application template - Verify and test application configuration - - Exporting an AppExpert Application to a Template File - Exporting a Content Switching Virtual Server Configuration to a Template File - Creating Variables in Application Templates - Uploading and Downloading Template Files - Understanding NetScaler Application Templates and Deployment Files - Deleting a Template File - - Sample Usage -! Importing an Application Template For NetScaler software version 9.3 or later, each AppExpert template has two XML files: a Template file and a Deployment file. You must import both files from your local computer to a NetScaler appliance. You can either import the template files from your computer to the AppExpert application templates directory in NetScaler appliance or upload files to a NetScaler appliance and then import them from the appliance. Note: When you import a template from an appliance, you have to provide the variable value available in the template. By default, the pre-configured value is display After you import the template files, the application-configuration and deployment information populates the target application automatically. The appliance imports all the configuration from the template files through the NITRO API. If you do not import the deployment file, the system generates an application populated with content switching virtual server configuration. For more information about the format of application templates and deployment files, see Understanding NetScaler Application Templates and Deployment Files. When you import a template, if you do not include a deployment file, you have to configure the public endpoints in the application that the system automatically generates from the template. One endpoint for HTTP and another endpoint for HTTPS. When configuring a public endpoint of type HTTPS, make sure you enable the SSL feature, bind the server certificate, and include the server-certificate and certificate-key files. For more information about configuring endpoints after you import a template, see Configuring Public Endpoints. To import AppExpert application template files to a NetScaler appliance by using the GUI: - Navigate to AppExpert > Applications. - In the details pane, click Import Template. - On the Import page, set the following parameters: - Application Name (mandatory) - Template File (mandatory) - Use deployment file - Click Continue to auto populate application-configuration and deployment information into an application. NetScaler’s video tutorials enable you to understand NetScaler features in an easy and simply way. Watch video to learn how to import an application template.
https://docs.citrix.com/en-us/netscaler/12/appexpert/appexpert-application-templates/getting-started-appexpert-applications/Importing_Template.html
2019-01-16T04:58:00
CC-MAIN-2019-04
1547583656665.34
[]
docs.citrix.com
sendgrid-rs Unofficial Rust library for the SendGrid API. This crate requires Rust 1.15 or higher as it uses a crate that has a custom derive implementation. sendgrid-rs implements all of the functionality of other supported SendGrid client libraries. To use sendgrid-rs you must first create a SendGrid account and generate an API key. To create an API key for your SendGrid account, use the account management interface or see the SendGrid API Documentation. sendgrid-rs is available on crates.io and can be included in your Cargo.toml as follows: [dependencies] sendgrid = "X.X.X" Build Dependencies This library utilises hyper and hyper-native-tls. This crate enables easy TLS setup for mac OS and Windows users. If you are on Linux, you must have OpenSSL installed. The instructions here are the most comprehensive if you have trouble. Example An example of using this library can be found in the examples directory. This example code expects to find your SendGrid API key in the process environment. In shells such as Bash or ZSH this can be set as follows: export SENDGRID_API_KEY="SG.my.api.key" Documentation Please don't hesitate to contact me at the email listed in my profile. I will try to help as quickly as I can. If you would like to contribute, contact me as well. Mentions Thanks to meehow for his contributions to improve the library. License MIT
https://docs.rs/crate/sendgrid/0.7.1
2019-01-16T04:23:30
CC-MAIN-2019-04
1547583656665.34
[]
docs.rs
Contents IT Business Management Previous Topic Next Topic Service charging ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Other Share Service charging As a service owner, you can use the Service Pricing Console to monitor the consumption of the services, the status of each statement item, and can set the pricing for the statement items. As a service charging analyst, you can analyze and research on the economic trends and conditions of the business and make necessary charging recommendations for the business service. Monitor service charges in the service pricing consoleAs a service owner use the service pricing console to generate service charge lines, view the service charge lines, and set pricing policy. Setting the pricing policy generates the rate card, which captures the set price, surcharge or discount details.Create ratecards to fix price for your business serviceCreate a ratecard that lists prices for your business service or business service components. As a service owner, you can create a ratecard for a statement item, which represents the business service that you own. The ratecard is based on the pricing policy method attached to the statement item for a fiscal period. On this page Send Feedback Previous Topic Next Topic
https://docs.servicenow.com/bundle/kingston-it-business-management/page/product/it-finance/concept/service-charging.html
2019-01-16T04:23:50
CC-MAIN-2019-04
1547583656665.34
[]
docs.servicenow.com
Using Targets and Rate Controls with State Manager Associations AWS Systems Manager enables you to create State Manager associations on a fleet of managed instances by using targets. Additionally, you can control the execution of these associations across your fleet by specifying a concurrency value and an error threshold. The concurrency value specifies how many resources are allowed to run the association simultaneously. An error threshold specifies how many association executions are allowed to fail before Systems Manager sends a command to each instance configured with that association to stop executing the association until the next scheduled execution. The concurrency and error threshold features are collectively called rate controls. Concurrency Concurrency helps to limit the impact on your fleet by allowing you to specify that only a certain number of instances can process an association at one time. You can specify either an absolute number of instances, for example 20, or a percentage of the target set of instances, for example 10%. State Manager concurrency has the following restrictions and limitations: If you choose to create an association by using targets, but you don't specify a concurrency value, then State Manager automatically enforces a maximum concurrency of 50 instances. If new instances that match the target criteria come online while an association that uses concurrency is running, then the new instances execute the association if the concurrency value is not exceeded. If the concurrency value is exceeded, then they are ignored during the current association execution interval. They will run the association during the next scheduled interval while conforming to the concurrency requirements. If you update an association that uses concurrency, and one or more instances are processing that association when it is updated, then any instance that is executing the association is allowed to be completed. Those associations that haven't started are aborted. After running associations are completed, all target instances immediately run the association again because it was updated. When the association runs again, the concurrency value is enforced. Error Thresholds An error threshold specifies how many association executions are allowed to fail before Systems Manager sends a command to each instance configured with that association to stop executing the association until the next scheduled execution. You can specify either an absolute number of errors, for example 10, or a percentage of the target set, for example 10%. If you specify an absolute number of three errors, for example, State Manager sends the stop command when the fourth error is received. If you specify 0, then State Manager sends the stop command after the first error result is returned. If you specify an error threshold of 10% for 50 associations, then State Manager sends the stop command when the sixth error is received. Associations that are already running when an error threshold is reached are allowed to be completed, but some of these associations might fail as well. If you need to ensure that there won’t be more errors than the number specified for the error threshold, then set the Concurrency value to 1 so that associations proceed one at a time. State Manager error thresholds have the following restrictions and limitations: Error thresholds are enforced for the current interval. Information about each error, including step-level details, are recorded in the association history. If you choose to create an association by using targets, but you don't specify an error threshold, then State Manager automatically enforces a threshold of 50 failures. Create an Association that Uses Targets and Rate Controls (CLI) You can create associations on tens, hundreds, or thousands of instances by using the targets parameter. The targets parameter accepts a Key,Value combination based on Amazon EC2 tags that you specified for your instances. When you run the request to create the association, the system locates and attempts to create the association on all instances that match the specified criteria. After the association is created and assigned to the instance or to a target set of instances, then State Manager immediately executes the association. Note When you create an association, you specify when the schedule runs. You must specify the schedule by using a cron or rate expression. There are many tools on the internet to help you create these expressions. For more information about cron and rate expressions, see Cron and Rate Expressions for Associations. Use the following format to create an AWS CLI command that uses targets to create a State Manager association. aws ssm create-association --targets Key=tag: TagKey,Values= TagValue--name command_document_name--compliance-severity " severity_level" --schedule " cron_or_rate_expression" --parameters (if any)--max-concurrency (Optional) a_number_of_instances_or_a_percentage_of_target_set--max-errors (Optional) a_number_of_errors_or_a_percentage_of_target_set The following example creates an association on instances tagged with "Environment,Linux". The association uses the AWS-UpdateSSMAgent document to updates SSM Agent on the targeted instances at 2:00 every Sunday morning. For compliance reporting, this association is assigned a severity level of Medium. aws ssm create-association --association-name Update_SSM_Agent_Linux --targets Key=tag:Environment,Values=Linux --name AWS-UpdateSSMAgent --compliance-severity "MEDIUM" --schedule "cron(0 0 2 ? * SUN *)" Note If you use tags to create an association on one or more target instances, and then you remove the tags from an instance, that instance no longer executes the association. The instance is dissociated from the State Manager document.
https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-state-manager-targets-and-rate-controls.html
2019-01-16T04:13:08
CC-MAIN-2019-04
1547583656665.34
[]
docs.aws.amazon.com
To publish a document after it has been uploaded in Documents Module you have to double click on the respective file to open a box. In its left side you will find „Publish” button. If you click on this button, you will be able to select the persons you give access to the uploaded file, you will have access to their name and their description. If you click on the green button, you will make the respective file public for everyone.
https://docs.kinderpedia.co/how-to-use-kinderpedia/2-documents-module/how-to-publish-a-document
2019-01-16T04:28:41
CC-MAIN-2019-04
1547583656665.34
[]
docs.kinderpedia.co
Inserting recipients Now that you have the required PayPal credentials, you can register all recipients you are going to share your store sales with. Remember that any recipient must be a user that’s registered to your website and among the PayPal recipients address list, yours must not be entered, meaning the main vendor’s. The receiver is not required to have an account PayPal Business. - Access the Receiver Settings section in the plugin settings panel. - Type in the user name of the recipient (in case of no results, make sure the user is registered in your store) - For each recipient, specify the commission rate they are entitled to in case of sales and the email address associated to his PayPal account. From now on, each of the users you entered, will receive a commission for each registered sale on your shop.
https://docs.yithemes.com/yith-paypal-adaptive-payments-for-woocommerce/settings/insert-recipients/
2019-01-16T03:59:32
CC-MAIN-2019-04
1547583656665.34
[]
docs.yithemes.com
@UML(identifier="DirectPosition", specification=ISO_19107) public interface DirectPosition extends Position DirectPositions, as data types, will often be included in larger objects (such as geometries) that have references to coordinate reference system, the getCoordinateReferenceSystem()method may returns nullif this particular DirectPositionis included in a larger object with such a reference to a coordinate reference system. In this case, the cordinate reference system is implicitly assumed to take on the value of the containing object's coordinate reference system. Note: this interface does not extends Cloneable on purpose, since DirectPosition implementations are most likely to be backed by references to internal structures of the geometry containing this position. A direct position may or may not be cloneable at implementor choice. getDirectPosition @UML(identifier="coordinateReferenceSystem", obligation=MANDATORY, specification=ISO_19107) CoordinateReferenceSystem getCoordinateReferenceSystem() nullif this particular DirectPositionis included in a larger object with such a reference to a coordinate reference system. In this case, the cordinate reference system is implicitly assumed to take on the value of the containing object's coordinate reference system. null. @UML(identifier="dimension", obligation=MANDATORY, specification=ISO_19107) int getDimension() @UML(identifier="coordinate", obligation=MANDATORY, specification=ISO_19107) double[] getCoordinate() To manipulate ordinates, the following idiom can be used:To manipulate ordinates, the following idiom can be used:final int dim = position.getDimension(); for (int i=0; i<dim; i++) { position.getOrdinate(i); // no copy overhead } There are a couple reasons for requerying a copy:There are a couple reasons for requerying a copy:position.setOrdinate(i, value); // edit in place DirectPosition), or we want to protect the array from future DirectPositionchanges. DirectPosition.getOrdinates()is garanteed to not return the backing array, then we can work directly on this array. If we don't have this garantee, then we must conservatively clone the array in every cases. Precedence is given to data integrity over getOrdinates() performance. Performance concern can be avoided with usage of getOrdinate(int). DirectPositionobject. double getOrdinate(int dimension) throws IndexOutOfBoundsException dimension- The dimension in the range 0 to dimension-1. IndexOutOfBoundsException- If the given index is negative or is equals or greater than the envelope dimension. void setOrdinate(int dimension, double value) throws IndexOutOfBoundsException, UnsupportedOperationException dimension- the dimension for the ordinate of interest. value- the ordinate value of interest. IndexOutOfBoundsException- If the given index is negative or is equals or greater than the envelope dimension. UnsupportedOperationException- if this direct position is immutable. boolean equals(Object object) objectis non-null and is an instance of DirectPosition. Double.equals(java.lang.Object). In other words, Arrays.equals(getCoordinate(), object.getCoordinate())returns true. equalsin class Object object- The object to compare with this direct position for equality. trueif the given object is equals to this direct position. int hashCode() Arrays.hashCode(getCoordinate()) + getCoordinateReferenceSystem().hashCode()where the right hand side of the addition is omitted if the coordinate reference system is null. hashCodein class Object
http://docs.geotools.org/latest/javadocs/org/opengis/geometry/DirectPosition.html
2019-01-16T03:52:50
CC-MAIN-2019-04
1547583656665.34
[]
docs.geotools.org
Source code for bitshares.amount # -*- coding: utf-8 -*- from .asset import Asset from .instance import BlockchainInstance from graphenecommon.amount import Amount as GrapheneAmount[docs]@BlockchainInstance.inject class Amount(GrapheneAmount): """ This class deals with Amounts of any asset to simplify dealing with the tuple:: (amount, asset) :param list args: Allows to deal with different representations of an amount :param float amount: Let's create an instance with a specific amount :param str asset: Let's you create an instance with a specific asset (symbol) :param bitshares.bitshares.BitShares blockchain_instance: BitShares instance :returns: All data required to represent an Amount/Asset :rtype: dict :raises ValueError: if the data provided is not recognized .. code-block:: python from peerplays.amount import Amount from peerplays.asset import Asset a = Amount("1 USD") b = Amount(1, "USD") c = Amount("20", Asset("USD")) a + b a * 2 a += b a /= 2.0 Way to obtain a proper instance: * ``args`` can be a string, e.g.: "1 USD" * ``args`` can be a dictionary containing ``amount`` and ``asset_id`` * ``args`` can be a dictionary containing ``amount`` and ``asset`` * ``args`` can be a list of a ``float`` and ``str`` (symbol) * ``args`` can be a list of a ``float`` and a :class:`bitshares.asset.Asset` * ``amount`` and ``asset`` are defined manually An instance is a dictionary and comes with the following keys: * ``amount`` (float) * ``symbol`` (str) * ``asset`` (instance of :class:`bitshares.asset.Asset`) Instances of this class can be used in regular mathematical expressions (``+-*/%``) such as: .. code-block:: python Amount("1 USD") * 2 Amount("15 GOLD") + Amount("0.5 GOLD") """
http://docs.pybitshares.com/en/latest/_modules/bitshares/amount.html
2019-01-16T03:22:26
CC-MAIN-2019-04
1547583656665.34
[]
docs.pybitshares.com
The easiest way to create Best Bets is to directly add keywords to URLs. This skips the group and display settings, which can be customized later (and are detailed below). From the "List/Edit URLs" page, enter the desired URL and click on the URL to get the details on that URL. There is a form on the page that allows keywords to be added to that URL. You can define a priority, title, description, and keywords for the URL (as detailed in the list below, under Fully Customized). The group will be listed as (Create New). This will create a default group and automatically set it to display, instantly using the Best Bet you just created. The created group ( default) can then be used to create any number of other keyword-URL associations. You can go to the "Search Settings" page to customize how the Best Bets are displayed, as detailed below.
https://docs.thunderstone.com/site/webinatorman/quick_creation.html
2019-01-16T04:27:20
CC-MAIN-2019-04
1547583656665.34
[]
docs.thunderstone.com
WebPI From Joomla! Documentation Revision as of 21:10, 27 April 2011 by Pasamio (Talk | contribs) (diff) ← Older revision | Latest revision (diff) | Newer revision → (diff) WebPI or "Web Platform Installer" is a Microsoft tool for installing web technologies on Windows. WebPI can install Joomla!, MySQL, PHP and IIS. See: [[1]] Retrieved from ‘’ Categories: IISWebPI
https://docs.joomla.org/index.php?title=WebPI&direction=prev&oldid=57112
2015-08-28T02:25:45
CC-MAIN-2015-35
1440644060173.6
[]
docs.joomla.org
Battery life varies depending on how you use your BlackBerry device. To help battery power last longer, here are some considerations: Prolong battery life by changing your device settings to dim the screen. Do any of the following: Do any of the following: Increase how long a single charge lasts by closing any apps or features that you finished using so that they aren't continuously running in the background. Some features consume more battery power than others. Close or turn off these apps when you don't use them: The camera, the BlackBerry Browser, Voice Control, GPS, and Bluetooth technology. If your device is out of a wireless coverage area, turn off the connection so that your device doesn't search for a network signal continuously. You can often gain a boost in power savings by using the latest version of the BlackBerry 10 OS. If a software update is available for your device, a notification appears in the BlackBerry Hub. Reduce power usage by keeping less data on your device. Save data to a media card instead of your device storage space. Conserve more power by turning off the flash when taking pictures.
http://docs.blackberry.com/en/smartphone_users/deliverables/61705/als1342451670263.html
2015-08-28T02:36:01
CC-MAIN-2015-35
1440644060173.6
[]
docs.blackberry.com
JString/strpospos Find position of first occurrence of a string [<! removed edit link to red link >] <! removed transcluded page call, red link never existed > Syntax strpos($str, $search, $offset=FALSE) Returns mixed Number of characters before the first match or FALSE on failure Defined in libraries/joomla/utilities/string.php Importing jimport( 'joomla.utilities.string' ); Source Body function strpos($str, $search, $offset = FALSE) { if ( $offset === FALSE ) { return utf8_strpos($str, $search); } else { return utf8_strpos($str, $search, $offset); } } [<! removed edit link to red link >] <! removed transcluded page call, red link never existed > Examples <CodeExamplesForm />
https://docs.joomla.org/index.php?title=API15:JString/strpos&oldid=98286
2015-08-28T03:18:56
CC-MAIN-2015-35
1440644060173.6
[]
docs.joomla.org
# Attack Source # Problem description This IP has been identified as a source of attacks. Usually this means that someone else is controlling your device and using it to attack others. Your device may be used to find new victims to infect, gain unauthorized access to to other devices, scan networks for vulnerabilities and try to exploit them, or cause disruption to normal Internet services. These forms of attacks are detected automatically by researchers with devices called honeypots, which simply listen for attack attempts and record the attacker IP address and attack type when an attack is detected. Sometimes also the victims of these attack attempts alert researchers that they have seen an attack from a particular IP. Our research partner Deutsche Telekom Honeypot Project operates a network of honeypots around the world as part of their Cyber Early Warning System. # Suggestions for repair First of all you need to identify the device which is sending out these attacks. Please read our instructions on locating vulnerable devices. After you find the correct device, we recommend you to reset it to its factory settings or perform a full reinstall of the operating system. If your device is compromised and.
https://docs.badrap.io/types/attacksource.html
2022-09-25T00:48:20
CC-MAIN-2022-40
1664030334332.96
[]
docs.badrap.io
This section describes how to configure Address Manager and DNS/DHCP Servers using the command-line interface of their respective Administration Consoles. The Administration Console lets you input straightforward commands to configure interface, network, and system settings on your BlueCat appliance or virtual machine. It includes tab completion on all static keywords and dynamic input values, context specific keyword help, consistent configuration operators for entering user configurations, and scripting of configuration operations. Certain settings, such as backup and database (Address Manager only) and DHCP (DNS/DHCP Server only) must still be configured using Additional Configuration mode. Use the Additional Configuration mode to modify these settings. For more details, refer to Configuring additional options.
https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Administration-Console/9.2.0
2022-09-25T01:42:29
CC-MAIN-2022-40
1664030334332.96
[]
docs.bluecatnetworks.com
The dataflow optimization is useful on a set of sequential tasks (for example, functions and/or loops), as shown in the following figure. The above figure shows a specific case of a chain of three tasks, but the communication structure can be more complex than shown, as long as there are no cycles in the task dependence graph.. Refer to Vitis-HLS-Introductory-Examples/Dataflow on Github for examples of these concepts..
https://docs.xilinx.com/r/en-US/ug1399-vitis-hls/Exploiting-Task-Level-Parallelism-Dataflow-Optimization
2022-09-25T01:41:11
CC-MAIN-2022-40
1664030334332.96
[]
docs.xilinx.com
. In a nutshell, a rollover typically works like this: - you request/create new key material - you publish the new validation key in addition to the current one. You can use the AddValidationKey.
https://identityserver4.readthedocs.io/en/docs-preview/topics/crypto.html
2022-09-25T01:07:20
CC-MAIN-2022-40
1664030334332.96
[]
identityserver4.readthedocs.io
Indices and tables¶ The main goals of this project are to: - Provide a powerful malicious file triage tool for cyber responders. - Help fill existing detection gaps for malicious office documents, which are still a very prevalent attack vector today. - Deliver a new avenue for threat intelligence, a way to group similar malicious office documents together to identify phishing campaigns and track use of specific malicious document templates. These goals are achieved through clever feature engineering and applied machine learning techniques like Random Forest and TF-IDF. Installation¶ sudo pip install mmbot That’s it! Otherwise checkout the source on this git repo. Triage office files with five lines of code Import, instantiate, predict: from mmbot import MaliciousMacroBot mmb = MaliciousMacroBot() mmb.mmb_init_model() result = mmb.mmb_predict('./your_path/your_file.xlsm', datatype='filepath') print result.iloc[0] Note: mmb_predict() returns a Pandas DataFrame. If you are unfamiliar with Pandas DataFrames, there is a helper function that can be used to convert a useful summary of the prediction result to json. Convert result from Pandas DataFrame to json print mmb.mmb_prediction_to_json(prediction) This package was designed for flexibility. The mmb_predict() function will take in single office documents as a path to the specific file, as a path to a directory and recursively analyze all files in the path and subdirectories, as a raw byte stream of a file passed to it, or as a string of already extracted vba text that a different tool already processed. Finally, all of these options can be done in bulk mode, where the input is a Pandas DataFrame. The method will decide how to handle it based on the “datatype” argument and the actual python object type passed in. More Information¶ Python 3 not fully supported. One package dependency is not working in Python 3.5 and higher, but once that is updated the rest of this project is ready to support Python 3. License¶ - Free software: MIT License - Documentation:.
https://maliciousmacrobot.readthedocs.io/en/latest/
2022-09-25T01:21:05
CC-MAIN-2022-40
1664030334332.96
[]
maliciousmacrobot.readthedocs.io
2.4. Solving Poisson’s equation in 1d¶ This example shows how to solve a 1d Poisson equation with boundary conditions. from pde import CartesianGrid, ScalarField, solve_poisson_equation grid = CartesianGrid([[0, 1]], 32, periodic=False) field = ScalarField(grid, 1) result = solve_poisson_equation(field, bc=[{"value": 0}, {"derivative": 1}]) result.plot() Total running time of the script: ( 0 minutes 0.083 seconds) Gallery generated by Sphinx-Gallery
https://py-pde.readthedocs.io/en/v0.19.1/examples_gallery/poisson_eq_1d.html
2022-09-25T02:01:58
CC-MAIN-2022-40
1664030334332.96
[array(['../_images/sphx_glr_poisson_eq_1d_001.png', 'poisson eq 1d'], dtype=object) ]
py-pde.readthedocs.io
ApplicationSet Security¶ ApplicationSet is a powerful tool, and it is crucial to understand its security implications before using it. Only admins may create/update/delete ApplicationSets¶ ApplicationSets can create Applications under arbitrary Projects. Argo CD setups often include Projects (such as the default) with high levels of permissions, often including the ability to manage the resources of Argo CD itself (like the RBAC ConfigMap). ApplicationSets can also quickly create an arbitrary number of Applications and just as quickly delete them. Finally, ApplicationSets can reveal privileged information. For example, the git generator can read Secrets in the Argo CD namespace and send them to arbitrary URLs (e.g. URL provided for the api field) as auth headers. (This functionality is intended for authorizing requests to SCM providers like GitHub, but it could be abused by a malicious user.) For these reasons, only admins may be given permission (via Kubernetes RBAC or any other mechanism) to create, update, or delete ApplicationSets. Admins must apply appropriate controls for ApplicationSets' sources of truth¶ Even if non-admins can't create ApplicationSet resources, they may be able to affect the behavior of ApplicationSets. For example, if an ApplicationSet uses a git generator, a malicious user with push access to the source git repository could generate an excessively high number of Applications, putting strain on the ApplicationSet and Application controllers. They could also cause the SCM provider's rate limiting to kick in, degrading ApplicationSet service. Templated project field¶ It's important to pay special attention to ApplicationSets where the project field is templated. A malicious user with write access to the generator's source of truth (for example, someone with push access to the git repo for a git generator) could create Applications under Projects with insufficient restrictions. A malicious user with the ability to create an Application under an unrestricted Project (like the default Project) could take control of Argo CD itself by, for example, modifying its RBAC ConfigMap. If the project field is not hard-coded in an ApplicationSet's template, then admins must control all sources of truth for the ApplicationSet's generators.
https://argo-cd.readthedocs.io/en/stable/operator-manual/applicationset/Security/
2022-09-25T02:05:36
CC-MAIN-2022-40
1664030334332.96
[]
argo-cd.readthedocs.io
Documentation for a newer release is available. View Latest Go Golang rebased to 1.10 With this update, Golang packages have been rebased to version 1.10. The new version offers performance improvements, support for new instructions, new features, and bugfixes. See for a detailed list of all changes in version 1.10. All dependent packages have been rebuilt.
https://docs.fedoraproject.org/te/fedora/f28/release-notes/developers/Development_Go/
2022-09-25T02:20:06
CC-MAIN-2022-40
1664030334332.96
[]
docs.fedoraproject.org
Reset-Brokerenabledfeaturelist¶ Resets the broker's list of enabled features. Syntax¶ Reset-BrokerEnabledFeatureList [-AdminAddress <String>] [-BearerToken <String>] [<CommonParameters>] Detailed Description¶ The Reset-BrokerLicensingConnection cmdlet resets the broker's list of enabled features. Toggling site features on or off doesn't become effective immediately. There will typically be a delay as the changes are propagated across the site based on the scheduling of refresh logic built into the controllers. After toggling features on or off you can run Reset-BrokerEnabledFeatureList to ensure that the broker can access the new list of enabled features immediately. Each broker service instance holds its list of enabled features. In order for the feature list-BrokerEnabledFeatureList Description¶ Reset the broker's list of enabled features.
https://developer-docs.citrix.com/projects/citrix-virtual-apps-desktops-sdk/en/1808/Broker/Reset-BrokerEnabledFeatureList/
2022-09-25T01:29:57
CC-MAIN-2022-40
1664030334332.96
[]
developer-docs.citrix.com
Primary Use Cases Over the past months there have been many wonderful developments on the Solana blockchain. As huge supporters of the ecosystem and having been developing with the protocol for sometime, we believe that the ability to earn yield is a growth catalyst and something that is needed on Solana. With Acumen, we hope to bring seamless yield earning to the platform, fostering more growth and innovation on the Solana blockchain. A major problem that plagues most DeFi protocols on the Ethereum blockchain are gas fees. Since Acumen is natively built on Solana, Acumen can attain the network’s sub-second finality and the average transaction fee of $0.0001, making the protocol faster and more efficient while at the same time saving money. Speed and cost is an essential component for lending protocols; if gas fees are too high it makes basic transactions unprofitable and ineffective. By leveraging Solana, we are able to remedy the problem that plagues most lending protocols on the Ethereum blockchain and make micro lending a profitable reality. Acumen protocol also has other use cases. Individuals who are bullish on a specific asset can use the protocol to earn additional returns on their investment by supplying assets in the protocol. For example, if someone was bullish on SOL, they can lend out SOL and earn interest on it. Conversely, the Acumen protocol can also be used to short tokens. For example, if an individual believes that token X will depreciate, they can borrow asset X sell it for a stablecoin, and when asset X depreciates, buy it again, repay the loan and keep the difference. Individuals can also benefit from the easy-to-use liquidation platform (covered more in-depth later on). In order to be a liquidator on most protocols, one must have an innate knowledge of smart contracts to code and launch their own bot. Acumen makes liquidation easier by allowing users to see all undercollateralized loans. From the liquidation tab, users can then choose which loans they want to liquidate. Essentially, buying their assets at a discounted rate, this will be discussed further in the documentation. Introduction Tokenomics Last modified 1yr ago Copy link
https://docs.acumen.network/primary-use-cases
2022-09-25T02:39:13
CC-MAIN-2022-40
1664030334332.96
[]
docs.acumen.network
API Overview This document describes the BigBlueButton application programming interface (API). For developers, this API enables you to - create meetings - join meetings - end meetings - insert documents - get recordings for past meetings (and delete them) - upload closed caption files for meetings To make an API call to your BigBlueButton server, your application makes HTTPS requests to the BigBlueButton server API endpoint (usually the server’s hostname followed by /bigbluebutton/api). All API calls must includeMeetingInfo - Added fields on the returned XML and deprecated parameters - getRecordings - Added meta parameter and state parameter to filter returned results Updated in 1.1: - create - Added fields on the returned XML - getMeetings - Added fields on the returned XML - getMeetingInfo - Added fields on the returned XML - getRecordings - Returns an XML block with thumbnails from the slides as well as a <participants>N</participants>element with number of participants who attend the meeting. - updateRecordings - Meta parameters can be edited Updated in 2.0: - create - Added bannerText, bannerColor, logo, muteOnStart. - getMeetings - Now returns all the fields in getMeetingInfo. - getMeetingInfo - Added <client>field to return client type (FLASH, or HTML5). Updated in 2.2: - create - Added endWhenNoModerator. - getRecordingTextTracks - Get a list of the caption/subtitle files currently available for a recording. - putRecordingTextTrack - Upload a caption or subtitle file to add it to the recording. If there is any existing track with the same values for kind and lang, it will be replaced. Updated in 2.3: - create - Renamed keepEventsto meetingKeepEvents, removed joinViaHtml5, added endWhenNoModeratorDelayInMinutes. - getDefaultConfigXML obsolete, not used in HTML5 client. - setConfigXML obsolete, not used in HTML5 client. Updated in 2.4: - getDefaultConfigXML Removed, not used in HTML5 client. - setConfigXML Removed, not used in HTML5 client. - create - Added meetingLayout, learningDashboardEnabled, learningDashboardCleanupDelayInMinutes, allowModsToEjectCameras, virtualBackgroundsDisabled, allowRequestsWithoutSession, userCameraCap. name, attendeePW, and moderatorPWmust be between 2 and 64 characters long meetingIDmust be between 2 and 256 characters long and cannot contain commas - join - Added role, excludeFromDashboard. Updated in 2.5: create - Added: meetingCameraCap, groups, disabledFeatures, meetingExpireIfNoUserJoinedInMinutes, meetingExpireWhenLastUserLeftInMinutes, preUploadedPresentationOverrideDefault; Deprecated: learningDashboardEnabled, breakoutRoomsEnabled, virtualBackgroundsDisabled. insertDocument endopoint was first introduced Updated in 2.6: - create - Added: notifyRecordingIsOn, uploadExternalUrl, uploadExternalDescription. enables 3rd-party applications to make API calls (if they have the shared secret), but not allow other people (end users) to make API calls. The BigBlueButton API calls are almost all made server-to-server. If you installed the package bbb-demo on your BigBlueButton server, you get a set of API examples, written in Java Server Pages (JSP), that demonstrate how to use the BigBlueButton API. These demos run as a web application in tomcat7. The web application makes HTTPS requests to the BigBlueButton server’s API end point. You can retrieve your BigBlueButton API parameters (API endpoint and shared secret) using the command $ bbb-conf --secret Here’s a sample return URL: Secret: ECCJZNJWLPEA3YB6Y2LTQGQD3GJZ3F93 You should not embed the shared secret within located in the /etc/bigbluebutton/bbb-web.properties file. Look for the parameter securitySalt (it’s called securitySalt due to legacy naming of the string) securitySalt=<your_salt> We’ll refer to this value as sharedSecret. When you first install BigBlueButton on a server, the packaging scripts create a random 32 character sharedSecret. You can also change the sharedSecret at anytime using the command bbb-conf --setsecret. $ sudo bbb-conf --setsecret <new_shared_secret> The following command will create a new 32 character shared secret for your server $ sudo bbb-conf --setsecret \$(openssl rand -base64 32 | sed 's/=//g' | sed 's/+//g' | sed 's/\///g') IMPORTANT: DO NOT ALLOW END USERS TO KNOW YOUR SHARED SECRET OR ELSE YOUR SECURITY WILL BE COMPROMISED. There are other configuration values in bbb-web’s configuration bigbluebutton.properties (overwritten by /etc/bigbluebutton/bbb-web.properties ) related to the lifecycle of a meeting. You don’t need to understand all of these to start using the BigBlueButton API. For most BigBlueButton servers, you can leave the default values. Usage The implementation of BigBlueButton’s security model lies in the controller ApiController.groovy. For each incoming API request, the controller computes a checksum out of the combination of the entire HTTPS + sharedSecret) - PHP - simply call sha1(string . sharedSecret) as your application can always call create before returning the join URL to the user. This way, regardless of the order in which users join, the meeting will always exist when the user tries to join (the first create call actually creates the meeting; subsequent calls to create simply return SUCCESS). The BigBlueButton server will automatically remove empty meetings that were created but have never had any users after a number of minutes specified by meetingExpireIfNoUserJoinedInMinutes defined in bbb-web’s properties. Resource URL:?[parameters]&checksum=[checksum] Parameters: Example Requests: - - - Example Response: <response> <returncode>SUCCESS</returncode> <meetingID>Test</meetingID> <internalMeetingID>640ab2bae07bedc4c163f679a746f7ab7fb5d1fa-1531155809613</internalMeetingID> <parentMeetingID>bbb-none</parentMeetingID> <attendeePW>ap</attendeePW> <moderatorPW>mp</moderatorPW> <createTime>1531155809613</createTime> <voiceBridge>70757</voiceBridge> <dialNumber>613-555-1234</dialNumber> <createDate>Mon Jul 09 17:03:29 UTC 2018</createDate> <hasUserJoined>false</hasUserJoined> <duration>0</duration> <hasBeenForciblyEnded>false</hasBeenForciblyEnded> <messageKey>duplicateWarning</messageKey> <message>This conference was already in existence and may currently be in progress.</message> </response> Pre-upload Slides You can upload slides within the create call. If you do this, the BigBlueButton server will immediately download and process the slides. You can pass the slides as a URL or embed the slides in base64 as part of the POST request. For embedding the slides, you have to send a HTTPS="" filename="report.pdf"/> <document name="sample-presentation.pdf">JVBERi0xLjQKJ.... [clipped here] ....0CiUlRU9GCg== </document> </module> </modules> When you need to provide a document using a URL, and the document URL does not contain an extension, you can use the filename parameter, such as filename=test-results.pdf to help the BigBlueButton server determine the file type (in this example it would be a PDF file). In the case more than a single document is provided, the first one will be loaded in the client, the processing of the other documents will continue in the background and they will be available for display when the user select one of them from the client. For more information about the pre-upload slides check the following link. For a complete example of the pre-upload slides check the following demos: demo7 and demo8 Upload slides from external application to a live BigBlueButton session For external applications that integrate to BigBlueButton using the insertDocument API call, uploadExternalUrl and uploadExternalDescription parameters can be used in the create API call in order to display a button and a message in the bottom of the presentation upload dialog. Clicking this button will open the URL in a new tab that shows the file picker for the external application. The user can then select files in the external application and they will be sent to the live session. End meeting callback URL You can ask the BigBlueButton server to make a callback to your application when the meeting ends. Upon receiving the callback your application could, for example, change the interface for the user to hide the ‘join’ button. To specify the callback to BigBlueButton, pass a URL using the meta-parameter meta_endCallbackUrl on the create command. When the BigBlueButton server ends the meeting, it will check if meta_endCallbackUrl is sent URL and, if so, make a HTTP GET request to the given URL. For example, to specify the callback URL as add the following parameter to the create API call: &meta_endCallbackUrl=https%3A%2F%2Fmyapp.example.com%2Fcallback%3FmeetingID%3Dtest01 (note the callback URL needs to be URLEncoded). Later, when the meeting ends, BigBlueButton will make an HTTPS GET request to this URL (HTTPS is supported and recommended) and to the URL add an additional parameter: recordingmarks=true|false. The value for recordingmarks will be true if (a) the meeting was set to be recorded ( record=true was passed on the create API call), and (b) a moderator clicked the Start/Stop Record button during the meeting (which places recording marks in the events). Given the example URL above, here’s the final callback if both (a) and (b) are true: Another param is the meetingEndedURL create param. This create param is a callback to indicate the meeting has ended. This is a duplicate of the endCallbackUrl meta param. We have this separate as we want this param to stay on the server and not propagated to client and recordings. Can be used by scalelite to be notified right away when meeting ends. The meta callback url can be used to inform third parties. Recording ready callback URL You can ask the BigBlueButton server to make a callback to your application when the recording for a meeting is ready for viewing. Upon receiving the callback your application could, for example, send the presenter an e-mail to notify them that their recording is ready. To specify the callback to BigBlueButton, pass a URL using the meta-parameter meta_bbb-recording-ready-url on the create command. Later, when the BigBlueButton server finishes processing the recording, it will check if meta_bbb-recording-ready-url is set and, if so, make a HTTP POST request to the given URL. For example, given the callback URL to pass this to BigBlueButton add the following parameter to the create API call: &meta_bbb-recording-ready-url=https%3A%2F%2Fexample.com%2Fapi%2Fv1%2Frecording_status (note the callback URL needs to be URLEncoded). Later, when the recording is ready, the BigBlueButton server will make an HTTPS POST request to this URL (https is supported and recommended). The POST request body will be in the standard application/x-www-form-urlencoded format. The body will contain one parameter, named signed_parameters. The value of this parameter is a JWT (JSON Web Tokens) encoded string. The JWT will be encoded using the “HS256” method. (i.e. the header should be { "typ": "JWT", "alg": "HS256" } ). The payload will contain a the following JSON keys: meeting_id- The value will be the meeting_id (as provided on the BigBlueButton create API call). record_id- The identifier of the specific recording to which the notification applies. This corresponds to the IDs returned in the getRecordings api, and the internalMeetingIdfield on the getMeetingInfo request. The secret used to sign the JWT message will be the shared secret of the BigBlueButton API endpoint that was used to create the original meeting. The receiving endpoint should respond with one of the following HTTP codes to indicate status, as described below. Any response body provided will be ignored, although it may be logged as part of error handling. All other HTTP response codes will be treated as transient errors. join Joins a user to the meeting specified in the meetingID parameter. Resource URL:?[parameters]&checksum=[checksum] Parameters: Example Requests: - - - Example Response: There is a XML response for this call only when the redirect parameter is set to false. You should simply redirect the user to the call URL, and they will be entered into the meeting. <response> <returncode>SUCCESS</returncode> <messageKey>successfullyJoined</messageKey> <message>You have joined successfully.</message> <meeting_id>640ab2bae07bedc4c163f679a746f7ab7fb5d1fa-1531155809613</meeting_id> <user_id>w_euxnssffnsbs</user_id> <auth_token>14mm5y3eurjw</auth_token> <session_token>ai1wqj8wb6s7rnk0</session_token> <url></url> </response> insertDocument This endpoint insert one or more documents into a running meeting via API call Resource URL:?[parameters]&checksum=[checksum] Parameters: Example Requests: You can do the request either via greenlight or curl, which I am going to demonstrate in the following paragraph. First, it is necessary to have the xml string with the batch of the wanted presentations in hand. As an example, see the xml down below: > Now you need to write the curl command which will be: curl -s -X POST "https://{your-host}/bigbluebutton/api/insertDocument?meetingID=Test&checksum=6b76e90b9a20481806a7ef513bc81ef0299609ed" --header "Content-Type: application/xml" --data '{xml}' Combining both together, we get: curl -s -X POST "https://{your-host}/bigbluebutton/api/insertDocument?meetingID=Test&checksum=6b76e90b9a20481806a7ef513bc81ef0299609ed" --header "Content-Type: application/xml" --data '>'>183f0bf3a0982a127bdb8161e0c44eb696b3e75c-1531240585189</internalMeetingID> <createTime>1531240585189</createTime> <createDate>Tue Jul 10 16:36:25 UTC 2018</createDate> <voiceBridge>70066<>1531240585239</startTime> <endTime>0</endTime> <participantCount>2</participantCount> <listenerCount>1</listenerCount> <voiceParticipantCount>1</voiceParticipantCount> <videoCount>1</videoCount> <maxUsers>20</maxUsers> <moderatorCount>1</moderatorCount> <attendees> <attendee> <userID>w_2wzzszfaptsp</userID> <fullName>stu</fullName> <role>VIEWER</role> <isPresenter>false</isPresenter> <isListeningOnly>true</isListeningOnly> <hasJoinedVoice>false</hasJoinedVoice> <hasVideo>false</hasVideo> <clientType>FLASH</clientType> </attendee> <attendee> <userID>w_eo7lxnx3vwuj</userID> <fullName>mod</fullName> <role>MODERATOR</role> <isPresenter>true</isPresenter> <isListeningOnly>false</isListeningOnly> <hasJoinedVoice>true</hasJoinedVoice> <hasVideo>true</hasVideo> <clientType>HTML5</clientType> </attendee> </attendees> <metadata /> <isBreakout>false</isBreakout> <>1</sequence> <freeJoin>false</freeJoin> </breakout> </response> getMeetings This call will return a list of all the meetings found on this server. Resource URL:[checksum] Parameters: Since BigBlueButton 0.80, it is no more required to pass any parameter for this call. Example Requests: Example Response: <response> <returncode>SUCCESS</returncode> <meetings> <meeting> <meetingName>Demo Meeting</meetingName> <meetingID>Demo Meeting</meetingID> <internalMeetingID>183f0bf3a0982a127bdb8161e0c44eb696b3e75c-1531241258036</internalMeetingID> <createTime>1531241258036</createTime> <createDate>Tue Jul 10 16:47:38 UTC 2018</createDate> <voiceBridge>70066</voiceBridge> <dialNumber>613-555-1234</dialNumber> <attendeePW>ap</attendeePW> <moderatorPW>mp</moderatorPW> <running>false</running> <duration>0</duration> <hasUserJoined>false</hasUserJoined> <recording>false</recording> <hasBeenForciblyEnded>false</hasBeenForciblyEnded> <startTime>1531241258074</startTime> <endTime>0</endTime> <participantCount>0</participantCount> <listenerCount>0</listenerCount> <voiceParticipantCount>0</voiceParticipantCount> <videoCount>0</videoCount> <maxUsers>0</maxUsers> <moderatorCount>0</moderatorCount> <attendees /> <metadata /> <isBreakout>false</isBreakout> </meeting> </meetings> </response> getRecordings Retrieves the recordings that are available for playback for a given meetingID (or set of meeting IDs). Resource URL:?[parameters]&checksum=[checksum] Parameters: Example Requests: - - - - - - - Example Response: Here the getRecordings API call returned back two recordings for the meetingID c637ba21adcd0191f48f5c4bf23fab0f96ed5c18. Each recording had two formats: podcast and presentation. <response> <returncode>SUCCESS</returncode> <recordings> <recording> <recordID>ffbfc4cc24428694e8b53a4e144f414052431693-1530718721124<30718721124</internalMeetingID> <name>Fred's Room</name> <isBreakout>false</isBreakout> <published>true</published> <state>published</state> <startTime>1530718721124</startTime> <endTime>1530718810456</endTime> <participants>3</participants> <metadata> <isBreakout>false</isBreakout> <meetingName>Fred's Room</meetingName> <gl-listed>false</gl-listed> <meetingId>c637ba21adcd0191f48f5c4bf23fab0f96ed5c18</meetingId> </metadata> <playback> <format> <type>podcast</type> <url></url> <processingTime>0</processingTime> <length>0</length> </format> <format> <type>presentation</type> <url></url> <processingTime>7177</processingTime> <length>0</length> <preview> <images> <image alt="Welcome to" height="136" width="176"></image> <image alt="(this slide left blank for use as a whiteboard)" height="136" width="176"></image> <image alt="(this slide left blank for use as a whiteboard)" height="136" width="176"></image> </images> </preview> </format> </playback> </recording> <recording> <recordID>ffbfc4cc24428694e8b53a4e144f414052431693-1530278898111<30278898111</internalMeetingID> <name>Fred's Room</name> <isBreakout>false</isBreakout> <published>true</published> <state>published</state> <startTime>1530278898111</startTime> <endTime>1530281194326</endTime> <participants>7</participants> <metadata> <meetingName>Fred's Room</meetingName> <isBreakout>false</isBreakout> <gl-listed>true</gl-listed> <meetingId>c637ba21adcd0191f48f5c4bf23fab0f96ed5c18</meetingId> </metadata> <playback> <format> <type>podcast</type> <url></url> <processingTime>0</processingTime> <length>33</length> </format> <format> <type>presentation</type> <url></url> <processingTime>139458</processingTime> <length>33</length> <preview> <images> <image width="176" height="136" alt="Welcome to"></image> <image width="176" height="136" alt="(this slide left blank for use as a whiteboard)"></image> <image width="176" height="136" alt="(this slide left blank for use as a whiteboard)"></image> </images> </preview> </format> </playback> </recording> </recordings> <RecordingTextTracks Get a list of the caption/subtitle files currently available for a recording. It will include information about the captions (language, etc.), as well as a download link. This may be useful to retrieve live or automatically transcribed subtitles from a recording for manual editing. Resource URL: GET?[parameters]&checksum=[checksum] Parameters: Example Response: An example response looks like the following: { "response": { "returncode": "SUCCESS", "tracks": [ { "href": "", "kind": "subtitles", "label": "English", "lang": "en-US", "source": "upload" }, { "href": "", "kind": "subtitles", "label": "Brazil", "lang": "pt-BR", "source": "upload" } ] } } The track object has the following attributes: - kind Indicates the intended use of the text track. The value will be one of the following strings: subtitles - captions The meaning of these values is defined by the HTML5 video element, see the MDN docs for details. Note that the HTML5 specification defines additional values which are not currently used here, but may be added at a later date. - lang - The language of the text track, as a language tag. See RFC 5646 for details on the format, and the Language subtag lookup for assistance using them. It will usually consist of a 2 or 3 letter language code in lowercase, optionally followed by a dash and a 2-3 letter geographic region code (country code) in uppercase. - label - A human-readable label for the text track. This is the string displayed in the subtitle selection list during recording playback. - source - Indicates where the track came from. The value will be one of the following strings: - live - A caption track derived from live captioning performed in a BigBlueButton. - automatic - A caption track generated automatically via computer voice recognition. - upload - A caption track uploaded by a 3rd party. - href - A link to download this text track file. The format will always be WebVTT (text/vtt mime type), which is similar to the SRT format. The timing of the track will match the current recording playback video and audio files. Note that if the recording is edited (adjusting in/out markers), tracks from live or automatic sources will be re-created with the new timing. Uploaded tracks will be edited, but this may result in data loss if sections of the recording are removed during edits. - Errors - In addition to the standard BigBlueButton checksum error, this API call can return the following errors in when returncode is FAILED: - missingParameter - A required parameter is missing. - noRecordings - No recording was found matching the provided recording ID. putRecordingTextTrack Upload a caption or subtitle file to add it to the recording. If there is any existing track with the same values for kind and lang, it will be replaced. Note that this api requires using a POST request. The parameters listed as GET parameters must be included in the request URI, and the actual uploaded file must be included in the body of the request in the multipart/form-data format. Note that the standard BigBlueButton checksum algorithm must be performed on the GET parameters, but that the body of the request (the subtitle file) is not checksummed. This design is such that a web application could generate a form with a signed url, and display it in the browser with a file upload selection box. When the user submits the form, it will upload the track directly to the recording api. The API may be used programmatically as well, of course. This API is asynchronous. It can take several minutes for the uploaded file to be incorporated into the published recording, and if an uploaded file contains unrecoverable errors, it may never appear. Resource URL: Parameters: - POST Body - If the request has a body, the Content-Type header must specify multipart/form-data. The following parameters may be encoded in the post body. - file - (Type Binary Data, Optional) Contains the uploaded subtitle or caption file. If this parameter is missing, or if the POST request has no body, then any existing text track matching the kind and lang specified will be deleted. If known, the uploading application should set the Content-Typeto a value appropriate to the file format. If Content-Type is unset, or does not match a known subtitle format, the uploaded file will be probed to automatically detect the type. Multiple types of subtitles are accepted for upload, but they will be converted to the WebVTT format for display. The size of the request is limited (TODO: determine the limit maybe 8MB?) The following types of subtitle files are accepted: - SRT (SubRip Text), including basic formatting. - SRT does not have a standard mime type, but application/x-subrip is accepted. - SSA or ASS (Sub Station Alpha, Advanced Sub Station). Most formatting will be discarded, but basic inline styles (bold, italic, etc.) may be preserved. - SSA/ASS does not have a standard mime type. - WebVTT. Uploaded WebVTT files will be used as-is, but note that browser support varies, so including REGION or STYLE blocks is not recommended. The WebVTT mime type is text/vtt. Errors In addition to the standard BigBlueButton checksum error, this API call can return the following errors in - missingParameter - A required parameter is missing. - noRecordings - No recording was found matching the provided recording ID. - invalidKind - The kind parameter is not set to a permitted value. - invalidLang - The lang parameter is not a well-formed language tag. The uploaded text track is not validated during upload. If it is invalid, it will be ignored and the existing subtitles will not be replaced. Success { "response": { "messageKey": "upload_text_track_success", "message": "Text track uploaded successfully", "recordId": "baz", "returncode": "SUCCESS" } } Failed { "response": { "messageKey": "upload_text_track_failed", "message": "Text track upload failed.", "recordId": "baz", "returncode": "SUCCESS" } } Or { "response": { "message": "Empty uploaded text track.", "messageKey": "empty_uploaded_text_track", "returncode": "FAILED" } } Missing parameter error { "response": { "messageKey": "paramError", "message": "Missing param checksum.", "returncode": "FAILED" } } API Sample Code BigBlueButton provides API Sample Codes so you can integrated easily with your application. Feel free to contribute and post your implementation of the API in other language code in the bigbluebutton-dev mailing list. PHP There is stable against a local BigBlueButton server. If you’re developing new API calls or adding parameters on API calls, you can still use the API Mate to test them. Just scroll the page down or type “custom” in the parameter filter and you’ll see the inputs where you can add custom API calls or custom parameters. New API calls will appear in the list of API links and new parameters will be added to all the API links.: - - -
https://docs.bigbluebutton.org/dev/api.html
2022-09-25T01:52:39
CC-MAIN-2022-40
1664030334332.96
[]
docs.bigbluebutton.org
EuroLinux 8.5 Release Notes. Dynamic programming languages, Web and Database servers New versions of the following components are now available: - Ruby 3.0 - module rubystream 3.0. - nginx 1.20 - module nginxstream 1.20 - Nodejs 16 - module nodejsstream 16 Compiler Toolset The following compiler toolsets have been updated in EL 8.5: - GCC Toolset 11 - software collection gcc-toolset-11 - LLVM Toolset 12.0.1 - module llvm-toolsetstream rhel - Rust Toolset 1.54.0 - module rust-toolsetstream rhel - Go Toolset 1.16.7 - module go-toolsetstream rhel Security Most of the security features are connected to rebased/updated versions of the packages. The most crucial security changes include: - rsyslog - now support OpenSSL - OpenSCAP framework - added new profiles and multiple changes and enhancements - cryptopolicies were updated - these system-wide cryptographic policies from this version support different policies for different backends (scoped policies) High Availability and Resilient Storage - cmirror was updated to version 2.03.12 - corosync was updated to version 3.1.5 - pacemaker was updated to version 2.1.0 - resource-agents-paf package was added to HA an RS repositories General updates and improvements Among the others, the following packages were updated and enhanced: - NetworkManager was updated to 1.32.10 from 1.30 - OpenIMP was updated to 2.0.31 from 2.0.29 - acel was updated to 3.1 from 2.8 - bluez was updated to 5.56 from 5.52 - chrony was updated to 4.1 from 3.5 - cockpit was updated to version 251.1 from 238.2 - crypto-policies was updated to 20210617 from 20210209 - dnf was updated to 4.7.0 from 4.4.2 - elfutils were updated to 0.185 from 0.182 - freeipmi was updated to 1.6.8 from 1.6.6 - gnutls was updated to 3.6.16 from 3.6.14 - ibacm was updated to 35.0 from 32.0 - libmodulemd updated to version 2.13.0 from 2.9.4 - libreswan was updated to version 4.4 from 4.3 - main GCC was updated to 8.5.0 from 8.4.1 - opencryptoki updated to version 3.16.0 from 3.15.1 - rsyslog was updated to version 8.2102.0 from 8.1911.0 - sssd was updated to version 2.5.2 from 2.4.0 New packages Among the others, the following packages were added to this release: - adwaita-qt5 - ansible-collection-microsoft-sql - ansible-collection-redhat-rhel_mgmt - ansible-freeipa-tests - ansible-pcp - compact-hwloc1 - coreos-installer - dotnet-build-reference-packages - dotnet-sdk-3.1-source-built-artifacts - dotnet-sdk-5.0-source-built-artifacts - dotnet5.0-build-reference-packages - eth-tools - flatpak (i686) included in PowerTools - java-17-openjdk* - libadwaita-qt5 - libcap-ng-python3 - libcomps-devel is now included in PowerTools - libvoikko-devel - is now included in PowerTools - lpsolve (i686) is now included in PowerTools - mobile-broadband-provider-info-devel is now included in PowerTools - modulemd-tools - pcm - python3-cloud-what - python3-libstoragemgmt - python3-pillow (i686) is now included in PowerTools - python3-pyghmi - qt5-qtserialbus-devel is now included in PowerTools - resource-agents-paf (resilient storage) - rsyslog-openssl - samba-vfs-iouring - sblim-gather - sblim-gather-provider is now included in PowerTools - sevctl - stratisd-dracut - tesseract - tss2 - udftools - unicode-ucd-unihan - xapian-core and its development packages are now included How to update from beta The beta repository has an updated package el-release, that contains production repositories. Upgrading from EuroLinux 8.5 beta to 8 - initial-setup - ipa - libreport - the distribution but are not included in upstream repositories. Right now, this repository includes more than 2100 packages for each buildroot. Multiple batteries were updated for this release there including: - Bootstraps for rust-toolset - Bootstraps for go-toolset Gaia build system Gaia build system was updated in all interested parties' environments. We also changed the rebuild policy. None of our customers was interested in using RHEL as a buildroot for their own Enterprise Linux forks. It means that EuroLinux must be released faster, but at the very same time, we can exclusively focus on providing EuroLinux in the early stages and thus release it faster. We also decided that the next version will be released asynchronously with more batch compilation. Synchronization was especially problematic in this release as there was a lot of holidays in Poland. Other notable changes - EuroLinux 8.5 is the first version that can use baseos-all, appstream-alland powertools-all, high-availability-all, resilient-storage-allrepositories that contain all packages produced during the build process. However, these packages are not supported by upstream or EuroLinux. They are intended for developers to build they own solutions. - EuroLinux reverted the previous changes that made DockerHub the default container image registry. We observed other RHEL clones and decided that keeping default might be more suitable for users. Additional resources - Download EuroLinux ISO - EuroLinux Public Request for Change and Bug Tracker - A roadmap for the project can be found in press notes available on our company blog. EuroLinux Roadmap For Q4 2021. - Red Hat 8.5 Release Notes parts of our release notes are loosely based on this document.
https://docs.euro-linux.com/release-notes/8.5/
2022-09-25T02:01:49
CC-MAIN-2022-40
1664030334332.96
[]
docs.euro-linux.com
Start with a blank Excel workbook: Setting the Freeze Panes You will start off by setting the Freeze Panes at the correct location using jFreezePanes() detailed here Step 1: To start, type “=jFreezePanes” in cell F10: Click on the function editor (fx): Step 2: There are two formula arguments for jFreezePanes(), FreezePanesCell and AnchorViewCell. AnchorViewCell specifies the very top row that will be visible when the panes are frozen. The cells above AnchorViewCell will be hidden when the panes are frozen. The cells between AnchorViewCell and FreezePanesCell is the block that is frozen at the top of the sheet as you scroll down the sheet. Set FreezePanesCell = A26 and AnchorViewCell = A18: Now that you have our freeze panes set up, you can start with formatting the spreadsheet. INTERJECT uses the hidden area of the frozen pane to define INTERJECT report functions and to set up the formatting of the report. Formatting the Behind the Scenes Section You will start by setting up the titles of the sections that hold the different report formulas. This formatting is standard across all INTERJECT reports. Step 1: Start by selecting row 1 and coloring it dark blue (#1F4E78). This is the color that INTERJECT uses for titles of report definition sections. 1. First, click on the “1” that denotes row 1 to highlight the entire row. 2. Click the paint bucket to fill the color. 3. Choose the darkest blue (#1F4E78). For this report, you will need 5 different titled sections. Now that you have the color selected in your paint bucket click on every other row and then click on the paint bucket until you have 5 dark blue rows: Now, name the title sections. You will need names: “Column Definitions,” “Formatting Range,” “Report Formulas,” “Hidden Parameters and Notes,” and “Report Area Below.” You will enter “Column Definitions” and make it white and bold as follows: Now enter the names “Formatting Range,” “Report Formulas,” “Hidden Parameters and Notes,” and “Report Area Below” in the next 4 title rows. Do not worry about the formatting of these 4 for now. Next, use the format painter to copy the formatting of the first title to the remaining 4: As you may have noticed, the jFreezePanes() is out of place. And our hidden freeze panes sections goes all the way down to row 17, so the space where our titles are laid out should occupy all of this space. Let’s insert some more empty rows under our titles to put more space for formula definitions. Copy two empty rows from somewhere in the sheet: Paste them above row 2 by right clicking on row 2: Next, copy and paste 2 more rows under each title so that your report looks like this: You can now see that the size of our report definitions area matches the size was set for jFreezePanes(), ending at row 17. Let’s move our jFreezePanes() definition back to cell F10. Cut and paste cell F18 to cell F10: Now let’s add the standard light blue color to the titled sections: 1. Select the 3 rows under Column Definitions. 2. Click the paint bucket. 3. Select the lightest blue color (#DDEBF7). Repeat this step for the three other report definition areas so that your report looks as follows: Now format the report area. You will start by putting a report title in cell B19 “Customer Orders:” Next, name the report filters for this report. The report filters act as a way to specify which data is being pulled into the report from the data portal by specifying a set of characters that the pulled in data must contain. In cells B21, B22 and B23, respectively, type in: “Company Name:”, “Contact Name:”, and “Customer ID:” Then, resize column A to be smaller, and extend column B by a bit: Next, color the input fields for the report filters. Apply the lightest orange color () to cells C21, C22 and C23: Expand column C a little bit to give the user more space for their input: Make the spreadsheet look better by removing the gridlines in Excel. Go to the Files tab in Excel: Go to Options: 1. Go to the Advanced tab. 2. Scroll down until you see “Display options for this worksheet”. 3. Uncheck the “Show gridlines” checkbox. Name the current worksheet “CustomerOrderHistory” and delete any other worksheets you have in the workbook: Adding ReportRange() to the Report Step 1: Add our first INTERJECT report formula to the report. You will start with ReportRange(). ReportRange() is a report formula used to PULL data into a defined range of a report from the Data Portal. ReportRange() can be used with formatting to format the data returned from the Data Portal into the spreadsheet. Read more about ReportRange() here. Type “=ReportRange()” in cell C10 then click on the function builder icon. As you can see, DataPortal is the first parameter that will must provide ReportRange() so that it knows where to pull in the data from. Type “NorthwindCustomerOrders_MyName” into the DataPortal parameter box for now. You will now switch to configuring an INTERJECT Data Connection, and a Data Portal that you can pull from using ReportRange(). Setting Up the First Data Connection In order to continue our work from here, you need to set up the back-end Data Portal that ReportRange() will be using. For now, you will pause working on the front-end Excel report to configure the Data Portal and Data Connection that ReportRange() will use in our report. You will start with the Data Connection. INTERJECT Data Connections enable users to connect to a database in order to pull data out of that database based on criteria specified in stored procedures which are set up with Data Portals. An overview of Data Connections and Data Portals can be found here. Step 1: Logging in Start by navigating to the INTERJECT portal site (here) and logging in. Step 2: Create the connection: Create a new data connection by clicking the New Connection button. Name you connection NorthwindDB_MyName (substitute for your name) and give it a quick description. Select ”Database” from the dropdown list for Connection Type. For the connection string, you will need to have your own sample Northwind database to use. You can download a Northwind sample database from Microsoft here. Substitute in your server and database name in italicized parts of the following sample connection string: ”Server=MyServerAddress;Database=MyDatabase;Trusted_Connection=True;” Once you have your connection string entered, press Save to continue. Setting Up the First Data Portal Step 1: Create the Data Portal Now, you will create the Data Portal that allows us to actually pull data from the Data Connection that you made above. Data Portals are provided as a way to connect to specific stored procedures within the Data Connection to an existing database. It is a finer-grain level of control, and connects to a single stored procedure on the database you connect to through the provided Data Connection. You can have multiple Data Portals connected to one Data Connection, but not vice-versa. For more, see the website portal documentation. Navigate again to the portal site and choose Data Portals. Create a new data portal. Start by naming your Data Portal ”NorthwindCustomerOrders_YourName” (substitute in your name) and giving it a description. For the Connection, you will use the Data Connection you created in the last section, ”NorthwindDB_YourName”. It should appear in the dropdown list when clicked Now you will specify the stored procedure that this data portal will be referencing. You will write the stored procedure itself shortly. Name your stored procedure ”[demo].[northwind_customer_orders_myname]”. For the Category, enter Demo and for the Command Type, choose Stored Procedure Name from the dropdown list. Make sure Data Portal Status is set to Enabled and Is Custom Command? is set to No, then save the new Data Portal: Setting up ReportRange() with the Data Portal Now, you have a Data Connection to a database, and a Data Portal which specifies a stored procedure to provide data to it; but you need to write the stored procedure in order to actually get anything back from our ReportRange() call in the report. In order to show how the front-end Excel interface ties into the writing of the back-end stored procedure, let’s start by going back to the report and figuring out what data you want to display to the user. Step 1: Go back to the report, click in cell C10 and open the function builder. Enter 2:4 into the ColDefRange to tell ReportRange() that all of its column definitions can be found in this range of rows. You can read more about ColDefRange here. Now, you can specify the columns that you want to get back from our Data Portal via ReportRange in the Column Definitions section of our report. Starting with row 2, type CustomerID into cell B2, CompanyName into cell C2, ContactName into cell E2, OrderID into cell F2, OrderDate into cell G2, OrderAmount into cell H2, Freight into cell I2, TotalAmount into cell J2. In row 3, you just need ShipVia in cell C3 and ShippedDate in cell E3. Now, let’s add the other parameters. Open the function arguments for ReportRange() again. ReportRange() works by inserting the result set returned from the Data Portal in between two or more rows. These rows are specified by the TargetDataRange argument. Input 27:28 for our TargetDataRange. The Formatting Range is the part of the report definitions section that specifies how final output will be formatted when returned to the end user. Our formatting range occupies rows 6:8, so input 6:8 in FormatRange. The Parameters parameter specifies which cells will be the “filter” cells whose values are sent to the Data Portal to filter results to the user’s specifications. The Param() function (read more here) is used here to capture the cells. Type Param(C21,C22,C23) into Parameters. As a best practice, we recommend you set UseEntireRow to TRUE and PutFieldNamesAtTop to FALSE Now that you know which pieces of data you need in our report, you can design the stored procedure. Using a SQL editor like SQL Server Management Studio, copy and paste in the following code: code Here is the SELECT statement in the code. The columns returned from the SELECT statement are the ones that populate into the report. select code Setting up ReportDefaults() The ReportDefaults() function is used to capture values from one or a set of cells (or an independently specified value) and send the value/s to another cell or set of cells. Its execution is triggered based on another action/event happening in the report (for example a save or clear action. It is commonly used to clear the values in the filter list after results have been pulled in and then cleared, which is how it will be used herein. Read more about ReportDefaults() here.
https://docs.gointerject.com/wGetStarted/L-Dev-Building-a-Report-from-Scratch.html
2022-09-25T03:01:56
CC-MAIN-2022-40
1664030334332.96
[array(['/images/L-Dev-Report_from_Scratch/01.png', None], dtype=object) array(['/images/L-Dev-Report_from_Scratch/02.png', None], dtype=object) array(['/images/L-Dev-Report_from_Scratch/03.png', None], dtype=object) array(['/images/L-Dev-Report_from_Scratch/04.png', None], dtype=object) array(['/images/L-Dev-Report_from_Scratch/05.png', None], dtype=object) array(['/images/L-Dev-Report_from_Scratch/06.png', None], dtype=object) array(['/images/L-Dev-Report_from_Scratch/07.png', None], dtype=object) array(['/images/L-Dev-Report_from_Scratch/08.png', None], dtype=object) array(['/images/L-Dev-Report_from_Scratch/09.png', None], dtype=object) array(['/images/L-Dev-Report_from_Scratch/21.png', None], dtype=object) array(['/images/L-Dev-Report_from_Scratch/24.png', None], dtype=object) array(['/images/L-Dev-Report_from_Scratch/25.png', None], dtype=object) array(['/images/L-Dev-Report_from_Scratch/26.png', None], dtype=object) array(['/images/L-Dev-Report_from_Scratch/27.png', None], dtype=object) array(['/images/L-Dev-Report_from_Scratch/33.png', None], dtype=object) array(['/images/L-Dev-Report_from_Scratch/34.png', None], dtype=object) array(['/images/L-Dev-Report_from_Scratch/35.png', None], dtype=object) array(['/images/L-Dev-Report_from_Scratch/36.png', None], dtype=object) array(['/images/L-Dev-Report_from_Scratch/37.png', None], dtype=object) array(['/images/L-Dev-Report_from_Scratch/38.png', None], dtype=object) array(['/images/L-Dev-Report_from_Scratch/39.png', None], dtype=object) array(['/images/L-Dev-Report_from_Scratch/40.png', None], dtype=object)]
docs.gointerject.com
Group logs by fields using log aggregation 🔗 Aggregations group related data by one field and then perform a statistical calculation on other fields. Aggregating log records helps you visualize problems by showing averages, sums, and other statistics for related logs. For example, suppose that you’re browsing the Logs table to learn more about the performance of your services. If you’re concerned about the response time of each service, you can group log records by service URL and calculate average response time using an aggregation. This aggregation helps you identify services that are responding slowly. After you identify services with poor response time, you can drill down in the log records for the service to understand the problems in more detail. Aggregate log records 🔗 To perform an aggregation, follow these steps: Find the aggregations control bar. Log Observer Connect has no default aggregation. Log Observer defaults to Group by: severity. This default corresponds to the following aggregation controls settings: COUNT All(*) Group by: severity To change the field to group by, type the field name in the Group by text box and press Enter. The aggregations control bar also has these features: When you click in the text box, Log Observer displays a drop-down list containing all the fields available in the log records. The text box does auto-search. To find a field, start typing its name. To select a field in the list, click its name. When searching for a field to group by, you can only view 50 fields at a time. Continue typing to see a more and more specific list of fields to choose from. To change the calculation you want to apply to each group, follow these steps: Select the type of statistic from the calculation control. For example, to calculate a mean value, select. Choose the field for the statistic by typing its name in the calculation field control text box. The text box does auto-search, so start typing to find matching field names. To perform the aggregation, click Apply. Whenever you use a field for grouping or calculation, the results shown in the Timeline histogram and Logs table include only logs containing that field. Logs are implicitly filtered by the field you group by, ensuring that calculations are not impacted by logs that do not contain the field you used. Example 1: Identify problems by aggregating severity by service name 🔗 One way you can discover potential problems is to find services that are generating a high number of severe errors. To find these services, group log records by service name and count all the records. Services with problems appear as groups with many records that have a severity value of ERROR. To apply this aggregation, follow these steps: Using the calculation control, set the calculation type by selecting COUNT. Using the calculation field control, set the calculation field to All(*). Using the Group by text box, set the field to group by to service.name. Click Apply. The Timeline histogram displays a count of logs by all your services as stacked columns, in which each severity value has a different color. The histogram legend identifies the color of each severity. Example 2: Identify problems by aggregating response time by request path 🔗 Longer than expected service response might indicate a problem with the service or other part of the host on which it runs. To identify services that are responding more slowly than expected, group log events by http.req.path, a field that uniquely identifies each service. For each group, calculate the mean of the response time field http.resp.took_ms. To apply this aggregation, follow these steps: Using the calculation control, set calculation type to AVG. Using the calculation field control, set the field to http.resp.took_ms Using the Group by text box, set the field to group by to http.req.path. Click Apply. The Timeline histogram displays the average response time for each service.
https://docs.signalfx.com/en/latest/logs/aggregations.html
2022-09-25T02:51:46
CC-MAIN-2022-40
1664030334332.96
[]
docs.signalfx.com
Supported browsers for Splunk Observability Cloud 🔗 Splunk Observability Cloud works as expected when using the latest and next-to-latest official releases of the following browsers: Chrome (recommended) Safari An official release is a stable release and doesn’t include alpha, beta, or release candidate releases. For example, if the latest official major release of a supported browser is 10 and the next-to-latest release is 9, Observability Cloud works as expected when used with releases 10 and 9, including their official minor point releases.
https://docs.signalfx.com/en/latest/references/supported-browsers.html
2022-09-25T01:44:02
CC-MAIN-2022-40
1664030334332.96
[]
docs.signalfx.com
Nodes FAQs Frequently asked questions about Orka nodes. Quick navigation Are Orka nodes regular macOS machines? | How do I log on to my nodes directly? Are Orka nodes regular macOS machines? Orka nodes are genuine Apple physical hosts which provide computational resources for your worklooads. However, Orka nodes are not regular macOS machines. While they provide genuine Apple hardware, a host OS is running on top. As a result, you cannot control your Orka nodes as you would control a regular macOS machine. You can manage your nodes only through the management operations available in the Orka tools. How do I log on to my nodes directly? You don't have direct log-on access to your nodes. You cannot use VNC, SSH, or Apple Screen Sharing to log on to your nodes directly. Updated about 2 years ago
https://orkadocs.macstadium.com/docs/nodes-faqs
2022-09-25T01:37:05
CC-MAIN-2022-40
1664030334332.96
[]
orkadocs.macstadium.com
The holiday season is usually the strongest in eCommerce and you get extraordinary numbers of new customers. That's a great opportunity and you should be fighting to keep those customers as regulars. The really nice thing about holiday shoppers is that they usually shop for gifts and therefore your products are given to other people. This opens the door to not one, but two potential customers at a time. What can you do to re-engage holiday buyers after the holidays? send them product- and usage-related content via email (they will share it with the recipient of the gift to improve their experience) ask for feedback (same logic - the gift giver will forward it to the receiver; you can include an email field to collect the receiver's address) offer complimentary products based on the products bought, offer new season items when the time comes judging by the items purchased, send holiday offers on other occasions - Mother's Day, Father's Day, Valentine's Day, etc. How to do that using Metrilo? From the main menu, go to Revenue Breakdown. From the time period dropdown menu (top right corner), select the period you believe to have brought the most holiday shoppers, e. g. 15th Nov - 15th Dec. (While there are people who start Christmas shopping in August, choose a shorter period for more accurate results.) In the breakdown by new and returning customer below, click the number of new customers to get the full list. Apply a global "Holiday shopper" tag to the whole list (the Tag button is at the top). Go to the Customer Database tab and filter customers by that tag. Then, add a filter by the number of orders equal to 1. Those would be your one-time holiday shoppers. Now, send them an email to invite them back to your store. Note: If you have a special holiday product category that you know will give you all holiday shoppers, then you can simply filter the people who shopped it and be done: In Customer Database, use the filter by Product categories they interacted with - choose your holiday category and "ordered" to be sure they shopped that category.
http://docs.metrilo.com/en/articles/725963-reactivate-holiday-shoppers-after-the-holidays
2022-09-25T03:00:24
CC-MAIN-2022-40
1664030334332.96
[]
docs.metrilo.com
Introduction¶. Installation¶ Using pip (root access required): pip install array_split or local user install (no root access required): pip install --user array_split or local user install from latest github source: pip install --user git+git://github.com/array-split/array_split.git#egg=array_split Testing¶ Run tests (unit-tests and doctest module docstring tests) using: python -m array_split.tests or, from the source tree, run: python setup.py test Travis CI at: and AppVeyor at: Documentation¶ Latest sphinx generated documentation is at: and at github gh-pages: Sphinx documentation can be built from the source: python setup.py build_sphinx with the HTML generated in docs/_build/html. Latest source code¶ Source at github: Bug Reports¶ To search for bugs or report them, please use the bug tracker at: Contributing¶ License information¶ See the file LICENSE.txt for terms & conditions, for usage and a DISCLAIMER OF ALL WARRANTIES.
https://array-split.readthedocs.io/en/0.5.2/about/index.html
2022-09-25T00:51:12
CC-MAIN-2022-40
1664030334332.96
[]
array-split.readthedocs.io
What's new in the 8x8 Contact Center 9.15 release? This release addresses bugs, stability improvements, and added features to provide an improved customer experience for 8x8 Contact Center. The following bugs have been fixed in this release: Looking for new features? See our upcoming feature release page for the latest information.
https://docs.8x8.com/8x8WebHelp/VCC/release-notes/Content/9-15-release/what-is-new.htm
2022-09-25T01:31:50
CC-MAIN-2022-40
1664030334332.96
[]
docs.8x8.com
Features Description of features in App On this page: Prefill The App App can validate the datamodel based on the datamodel itself and based on custom app logic defined by the application developer. Data Calculation The App can calculate data in the datamodels based on custom applogic defined by the application developer. Instansiation Hooks The App can perform instansiations checks based on custom app logic defined by the application developer. Policy Enforcment The App has built in Policy Enforcments Points at the different API endpoints to make sure that the user / system is authorized to perform operation on the app / data. This includes checks on - Roles - Rights - Authentication level Formset In the future the App will support multiple datamodels. PDF of data The App will be able to present at PDF of the data.
https://docs.altinn.studio/technology/solutions/altinn-apps/app/features/
2022-09-25T02:26:08
CC-MAIN-2022-40
1664030334332.96
[]
docs.altinn.studio
Control CenterEstimated reading time: 18 minutes Overview Purpose: This tool controls who has ability to save to Budgets and when the data will flow from the Budget Module into Interject. It also controls the Projections and Capital lock levels. Report Location Filter Options: - District(s) - Optional. Blank defaults to show all districts you have rights to. It can be individual district, district range, or groupings - Year Month - Optional. Blank defaults to the current Year-Month. Must be in YYYY-MM format Accessing Templates from Control Center Control Center pulls in data showing who last saved to the Budget and Projections Templates, when they saved, and where they saved from. The file location is clickable, and if you have access to the folder location you can open it up. If a file has not been saved to a given Budget or Projections Template, or you want to make a fresh one, you can drill on either section to create a new Template. Lock Levels and District Position Assignment The Control Center manages the lock levels for the Budget, Capital, and Projections modules. The possible lock levels are: Ops, A/C, Dist, Div, Reg, or Corp. Your level of access is determined by your position in the District Position Assignment (DPA). This can be found in the toolbox at the following location: In the screenshot below, notice the lock level for Budget is Dist Level. IF your permissions are high enough, you can change the Lock Level for Budgets. In this case, since it is locked at Dist Level, you need to be at Dist level or higher in the DPA. If you have a high enough level, the Lock Level drop-down will show all levels up to your yours, plus one more to send it up once you are done. The screenshot below is for a Dist level user who can lock the Budget up to the Div Level. If a Dist level user locks the Budget up to Div level (usually done once Budget is ready for review), they will be locked out. If more changes are needed, they will need to work with a supervisor to unlock it back down to Dist level. Contract Center Lock Status The Contract Center in Toolbox is locked based on the Budget Lock Level for the District in Control Center. If the Budget Lock Level is Reg Level or higher, Contract Center will be locked and you will not be able to edit. Budget Review Dates The Budget Review Dates are used to version the budget amounts from Target Center 2.0 as they are synced into Interject. The four Budget Review Date options are: Corp Review Thru, Reg Review Thru, Reg Cutoff, and Corp Cutoff. When a budget amount is synced to Interject, it will fall into one of these four Review Date Buckets. If an amount is saved before or at 7/1/2019 5:00 PM, that amount will be placed in the Corp Review Thru Bucket. If a budget amount is saved after that time but before or at 7/31/2019 5:00 PM, it will be placed in the Corp Cutoff Bucket. This organization is repeated for the two remaining buckets. AFTER CORP CUTOFF If a budget amount is saved AFTER Corp Cutoff, it will NOT be synced to Interject. It will remain only in Target Center 2.0. Do I have rights to change the Review Dates? If you have Reg Level access to a district, you can update the Reg Review Thru and Reg Cutoff dates. If you have Corp Level access to a district, you can update all of the Review Dates. Review Dates Example To see how this works in real time, we’ll save an amount to a Budget Template. For this example, we’ll save $100 to an account on 10/1/2019 3:30 PM. According to our Control Center Budget Review Dates (see earlier screenshot), 10/1/2019 3:30 PM is after Reg Cutoff but before Corp Cutoff. This means that the $100 will be placed in the Corp Cutoff Bucket. We can double check this in two places: The Budget Change Query Tool and the Budget Book. In Budget Change Query Tool , we look on the first tab ReviewDateSummary, pulling on the same District and Year. The Budget Change Query Tool pulls in amounts saved to TC 2.0, and here we see the amount in the correct bucket. Next, look at the Budget Book, which shows the amounts saved to Interject. First, on the Summary tab we set Versions to Corp Cutoff. We also set the In Districts and Budget Year to correspond with what we are using. Now go to the 4_BudSumByMonth tab and pull. Since we’re pulling on the Corp Cutoff version, we see the $100 go into the correct account grouping. Going back to the Summary tab, we change the Versions filter to Reg Cutoff. Back on the 4_BudSumByMonth tab, pull and notice that the $100 is gone. This is because the amount is NOT in the Reg Cutoff Bucket. Now, review dates will change throughout the budget season as Region/Corporate complete their reviews. For example, here Region has gone through a series of reviews and updated Reg Review Thru and Reg Cutoff to the values below. The save we completed at 10/1/2019 3:30 PM now falls into the Reg Cutoff Bucket. When this review date is updated, the amounts in Interject will be updated to move this amount automatically. We’ll confirm using the same tools we did in previous steps. In the Budget Change Query Tool we see the $100 has moved to the Reg Cutoff column. Also notice that the $100 is still in the Corp Cutoff column, because Corp Cutoff shows ALL the amounts leading up to the Corp Cutoff date, which is 11/1/2019 5:00 PM in this case. With this in mind, check the Budget Book for both Reg Cutoff and Corp Cutoff versions. We should see the $100 in both, because it fits in the Reg Cutoff Bucket and happens before Corp Cutoff. To make sure, update the Versions on the Summary tab to Reg Review Thru and pull again. The $100 will be gone, because the save was after Reg Review Thru. AFTER CORP CUTOFF Now if an amount is saved AFTER the Corp Cutoff date, it will NOT be synced to Interject. Take the following example: we saved at 11/2/2019 5:00 PM (after the Corp Cutoff date used in earlier example). Let’s check the two tools we used before to see how this shows up. Notice that the $5,000 is in the After Corp Cutoff column. It is also NOT included in the Change From Corp Cutoff column, because that includes only data BEFORE Corp Cutoff. In the Budget Book tool, we pull for the highest Versions setting, Corp Cutoff, to show EVERYTHING in Interject. Notice that the $5,000 is absent because it is not yet synced to Interject. Once the Corp Cutoff date is extended beyond 11/1/2019 5:00 PM, this amount will be synced to Interject. BOD and Versioning Once BOD is applied and versioning is enabled, the Review Dates no longer take effect. Budget amounts are not synced to Interject based on when they were saved but rather if their ChangeIDs are versioned by Corporate. Budget Cutoff Sync Dates The Control Center also controls when PI and Depr accounts get synced to Interject. When PI accounts are updated in the Contract Center, or Depr accounts in Capital, they will be synced through to Interject automatically as long as they are saved BEFORE their associated cutoff sync dates. If saved after, they will not be synced over. Once the cutoff sync dates are updated to a datetime after the updated account save, the unsynced changes will sync. Do I have rights to change the Cutoff Sync Dates? Only Corporate Admins have rights to update these dates. Drills There are many drills availabe on this tool, depending on where you drill from. Create New Projection Template opens a new Projection Template . Create New Budget Template opens a new Budget Template . Drilling on the Review Date columns gives the following drill options: - Drill to Change History - Summary opens the Summary tab on the Budget Change Query Tool . - Drill to Change History - Detail opens the Detail tab on the Budget Change Query Tool . - Drill to Review Date Amounts opens the ReviewDateSummar tab on the Budget Change Query Tool . Drilling on the Capital columns gives the following drill options: - Capital Report: Summary opens the Summary tab on the Capital Input Tool . - Capital Report: Detail opens the Detail tab on the Capital Input Tool . Common Save Errors You are either trying to lock a district to a level you do not have permissions for, or the district itself is already locked above your level The Review Dates cannot equal each other. The later of the review dates has been automatically offset by 1 second from the former by just one second to not create any sync conflicts. You need a valid Year Month, format YYYY-MM The district(s) you have manually added to the Control Center are either inactive or non-financial. You cannot save them, or they should not be handled in Control Center The Lock Level you tried saving up is not valid. Please use one of the drop-down options pulled into the tool The timestamp you tried saving to the Review Date is not a valid format. It needs to be MM/DD/YY HH:MM (AM/PM) You are below the level required to change the associated Review Dates
https://docs.gointerject.com/bApps/InterjectTraining/Budget/ControlCenter.html
2022-09-25T02:33:49
CC-MAIN-2022-40
1664030334332.96
[array(['/images/WCNTraining/Budget/ControlCenter_ReportLibrary.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_FullView.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_BudgetFileLink.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_BudgetFileOpened.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_DrillNewTemplates.png', None], dtype=object) array(['/images/WCNTraining/Capital/CapitalInput_DPANavigation.png', None], dtype=object) array(['/images/WCNTraining/Capital/CapitalInput_DPAWindow.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_CurrentLockLevel.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_LockLevelOptions.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_LockedOut.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_ContractCenterUnlocked.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_ContractCenterLocked.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_ReviewDates.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_BudTemplateSave01.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_BudChangeQueryReviewDates01.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_BudgetBookCorpCutoffSetting.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_BudgetBookCorpCutoffAmount01.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_BudgetBookRegCutoffSetting.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_BudgetBookRegCutoffAmount02.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_ReviewDates02.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_BudChangeQueryReviewDates02.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_BudgetBookRegAndCorpCutoffAmounts.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_BudgetBookRegReviewThruSetting.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_BudgetBookRegReviewThruAmounts.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_BudTemplateSave02.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_BudChangeQueryReviewDates03.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_BudgetBookCorpCutoffAmount02.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_CutoffSyncDates.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_ProjDrill.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_BudDrill.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_ReviewDateDrills.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_CapitalDrills.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_Errors_LockLevelTooHigh.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_Errors_OneSecondOffsetReviewDates.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_Errors_YearMonth.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_Errors_InvalidDistricts.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_Errors_InvalidLockLevel.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_Errors_InvalidReviewDate.png', None], dtype=object) array(['/images/WCNTraining/Budget/ControlCenter_Errors_LowLevelReviewDates.png', None], dtype=object) ]
docs.gointerject.com
Advanced Hide Row/Section LabEstimated reading time: 6 minutes Overview In this example of the ReportHideRowsOrColumns() function, we will hide an entire section of a report based on the condition that the section is empty. You would typically use this in a report when data is pulled in with zero values. By hiding the zero value rows, and the entire section when all the rows within it are zero vale, the reporting area will be more usable. For this lab, find the Interject Inventory Demo in the Interject Demo folder within the Report Library . Once open, you’ll use the InvByCategory_WithDetail tab. Hiding Rows Step 1: Start by using the Quick Tools button in the ribbon menu and selecting **Freeze/Unfreeze Panes. Step 2: Insert a row above row above row 6, so that there are 2 blank rows, and then expand the collapsed columns by clicking the plus sign in the upper left. Step 3: Click into cell C4 and open the Function Wizard by clicking fx. Then scroll down in the Report Variable section of the wizard until you see UseTopSpacerRow, and type True into the empty field. Step 4: In cell B5, type “For Backward Compatibility:”. Then, in cell C5 enter =ReportCalc() or double click it from the formula menu. Once it’s entered, open the Function Wizard and input the following and click OK: - OnPullSaveOrBoth: Pull - OnClearRunOrBoth: Both - SheetOrWorkbook: Sheet - Disabled: Leave blank Step 5: In cell B6 type “Hide/Show Section:”. In cell C6 type =ReportHideRowOrColumn() or begin typing and double click the formula from the menu. Next, open the Function Wizard by clicking fx, enter the following information and click OK: - OnPullSaveOrBoth: Pull - OnClearRunOrBoth: Both - RowOrColumnRange: C19:C56 - Disabled: Leave blank Step 6: In cell A20 enter =Rows(A19:A56). Step 7: In cell C8, enter =C7, to reference the cell above. Step 8: Now, in cell C19 of the report area, write the formula =IF(AND(I22=0,$A$20<>38),”Hide”,”Show”). You will write this same formula in C23, C27, and so on, but you will change the cell reference I22 to the subtotal cell corresponding to each section. For example, notice in the second screenshot below that the subtotal cell for “Condiments” is I26. You can do this by copying the orginal formula, then highlighting the cell reference in the formula window (item 1 in the second screenshot) and clicking into the corresponding subtotal cell (item 2 in the second screenshot). Step 9: In column C on each row with a category name, enter a reference to the cell directly above. For example, in cell C20 type =C19; in cell C24 type =C23. Do this for all categories of the report. Step 10: Now, type the same reference in the cells below each of the ones in which you just entered it. For example, in C21 type =C19; in C45 type C43. Step 11: To test that the function and formulas are working correctly, click into cell B20, which contains the category name “Beverages” and add and “X”. Now freeze the panes using the Quick Tools menu, hide the leftmost columns, and pull the report. You should see that the empty category comes in collapsed, the other category detail is expanded, and the original “Beverages” category comes in at the bottom as “Items Not Included.”
https://docs.gointerject.com/wGetStarted/L-Create-AdvancedHideRowsSections.html
2022-09-25T02:30:23
CC-MAIN-2022-40
1664030334332.96
[array(['/images/L-Create-AdvancedHideRow/AdvanceRowHide1.png', None], dtype=object) array(['/images/L-Create-AdvancedHideRow/AdvanceRowHide2.png', None], dtype=object) array(['/images/L-Create-AdvancedHideRow/AdvanceRowHide3.png', None], dtype=object) array(['/images/L-Create-AdvancedHideRow/AdvanceRowHide4.png', None], dtype=object) array(['/images/L-Create-AdvancedHideRow/AdvanceRowHide5.png', None], dtype=object) array(['/images/L-Create-AdvancedHideRow/AdvanceRowHide6.png', None], dtype=object) array(['/images/L-Create-AdvancedHideRow/AdvanceRowHide7.png', None], dtype=object) array(['/images/L-Create-AdvancedHideRow/AdvanceRowHide8.png', None], dtype=object) array(['/images/L-Create-AdvancedHideRow/AdvanceRowHide9.png', None], dtype=object) array(['/images/L-Create-AdvancedHideRow/AdvanceRowHide10.png', None], dtype=object) array(['/images/L-Create-AdvancedHideRow/AdvanceRowHide12.png', None], dtype=object) array(['/images/L-Create-AdvancedHideRow/AdvanceRowHide13.png', None], dtype=object) array(['/images/L-Create-AdvancedHideRow/AdvanceRowHide14.png', None], dtype=object) ]
docs.gointerject.com
ImageInfo() Returns a structure that contains information about the image, such as height, width, color model, size, and filename. Requires Extension: Image extension ImageInfo( image=any ); Examples img = imageNew("",100,100,"rgb","yellow"); dump(img); dump(ImageInfo(img)); See also - Image manipulation - image.info() - Search Issue Tracker - Search Lucee Test Cases (good for further, detailed examples)
https://docs.lucee.org/reference/functions/imageinfo.html
2022-09-25T02:56:43
CC-MAIN-2022-40
1664030334332.96
[]
docs.lucee.org
This topic provides an overview of the code review functionality available across Parasoft Test products. Sections include: Code Review Overview Parasoft’s Code Review module is designed to make peer reviews more practical and productive by automating preparation, notification, and tracking. It automatically identifies updated code, matches it with designated reviewers, and tracks the progress of each review item until closure. This allows teams to establish a bulletproof review process that ensures the designated items are reviewed and all identified issues are resolved. Parasoft provides built-in support for the following typical code review flows: -. Post-Commit Post-commit code reviews are for teams who want to review code after code is committed to source control. A Code Review Test Configuration is typically scheduled to run automatically on a regular basis (for example, every 24 hours). It scans the designated source control repository to identify code that requires review, then sends this information to Team Server, which then distributes it to the designated reviewer. In this scenario, the code authors do not need to perform any special actions to have their code reviewed; simply committing it to source control is sufficient. After the Code Review Test Configuration runs, the designated reviewers are automatically notified about the required reviews. In a pre-commit process, a code review package is created for code that needs to be accepted before it is committed to source control. In this type of process, the author should always see that packages are finally accepted. At that point, he can commit the files and then close the package. Some teams who submit code for review via a pre-commit process also like to perform a post-commit nightly scan to: - Generate emails notifying authors and reviewers about their assigned code review tasks. - Identify any code changes that were committed to source control, but were not submitted for review using the pre-commit procedure.. For example:
https://docs.parasoft.com/display/CPPDESKV1032/Code+Review+Introduction
2022-09-25T02:33:33
CC-MAIN-2022-40
1664030334332.96
[]
docs.parasoft.com
SSO: App Access and Launch Talkdesk® provides users with direct access to all apps installed from Talkdesk AppConnect™ within their instance of Talkdesk. To further streamline this experience, you are required to provide Single Sign-On (SSO) via Talkdesk to your hosted web apps. Figure 2 - App Access and Launch. SSO Fail If the SSO. Initiating SSO SSO is accomplished using the OAuth 2.0 Authorization Code grant type.. As part of the app installation process, some information you receive from the Events API app.installed event regarding the installing account, includes: - Authorization URL. - Tokens URL. - Client ID. - Installation ID (for the partner app). Talkdesk recommends storing the information above to map the installation ID from this SSO request to the correct client ID, authorization URL and token URL (required to perform Talkdesk SSO via Authorization Code). Automated Authentication When users are redirected to the standalone URL you provided, you must initiate the SSO process immediately. No additional clicks (i.e. "Login with Talkdesk") must be required from the user. Updated 4 months ago
https://docs.talkdesk.com/docs/single-sign-on
2022-09-25T02:28:25
CC-MAIN-2022-40
1664030334332.96
[array(['https://files.readme.io/8190dcc-Figure_4_-_App_Access_and_Launch.png', 'Figure 4 - App Access and Launch.png 1924'], dtype=object) array(['https://files.readme.io/8190dcc-Figure_4_-_App_Access_and_Launch.png', 'Click to close... 1924'], dtype=object) ]
docs.talkdesk.com
Update to the latest version of the product program to take advantage of product enhancements. These are the options available on this screen. Operating System: Select to update operating system components. Smart Protection Server: Select to update the product server program file. Widget Components: Select to update widgets. Enable scheduled updates: Select to update program files daily at a specified time or weekly. Download only: Select to download updates and receive a prompt to update program files. Update automatically after download: Select to apply all updates to the product after download regardless of whether a restart or reboot is required. Do not automatically update programs that require a restart or reboot: Select to download all updates and only install programs that do not require a restart or reboot. Upload: Click to upload and update a program file for Smart Protection Server. Browse: Click to locate a program package. Save and Update Now: Click to apply settings and perform an update immediately. There are three ways to update the program file: scheduled updates, manual updates, and by uploading the component.
https://docs.trendmicro.com/en-us/enterprise/smart-protection-server-33p7/using-smart-protecti_001/updates/program-file-updates.aspx
2022-09-25T01:03:03
CC-MAIN-2022-40
1664030334332.96
[]
docs.trendmicro.com
DescribeFleetLocationAttributes Retrieves information on a fleet's remote locations, including life-cycle status and any suspended fleet activity. This operation can be used in the following ways: To get data for specific locations, provide a fleet identifier and a list of locations. Location data is returned in the order that it is requested. To get data for all locations, provide a fleet identifier only. Location data is returned in no particular order. When requesting attributes for multiple locations, use the pagination parameters to retrieve results as a set of sequential pages. If successful, a LocationAttributes object is returned for each requested location. If the fleet does not have a requested location, no information is returned. This operation does not return the home Region. To get information on a fleet's home Region, call DescribeFleetAttributes.", "Limit": number, "Locations": [ " string" ], "NextToken": " string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. In the following list, the required parameters are described first. - FleetId A unique identifier for the fleet to retrieve remote locations for. You can use either the fleet ID or ARN value. Type: String Pattern: ^fleet-\S+|^arn:.*:fleet\/fleet-\S+ Required: Yes - Limit The maximum number of results to return. Use this parameter with NextTokento get results as a set of sequential pages. This limit is not currently enforced. Type: Integer Valid Range: Minimum value of 1. Required: No - Locations A list of fleet locations to retrieve information for. Specify locations in the form of an Amazon Region code, such as us-west-2. Type: Array of strings Array Members: Minimum number of 1 item. Maximum number of 100 items. Response Syntax { "FleetArn": "string", "FleetId": "string", "LocationAttributes": [ { "LocationState": { "Location": "string", "Status": "string" }, "StoppedActions": [ "string" ], "UpdateStatus": "string" } ], "NextToken": were requested for. Type: String Pattern: ^fleet-\S+|^arn:.*:fleet\/fleet-\S+ - LocationAttributes Location-specific information on the requested fleet's remote locations. Type: Array of LocationAttributes - UnsupportedRegionException The requested operation is not supported in the Region specified. HTTP Status Code: 400 Examples Retrieve remote fleet locations This example retrieves information on all remote locations for a fleet. The requested fleet's home Region is us-west-2. It can deploy instances in the following Amazon Regions: us-west-2, us-west-1, and ca-central-1. In this example, auto-scaling has been suspended in ca-central-1, and there is a fleet update that has not yet been completed for that location. Sample Request { "FleetId": "fleet-2222bbbb-33cc-44dd-55ee-6666ffff77aa" } Sample Request { "FleetArn": "arn:aws:gamelift:us-west-2::fleet/fleet-2222bbbb-33cc-44dd-55ee-6666ffff77aa", "FleetId": "fleet-2222bbbb-33cc-44dd-55ee-6666ffff77aa", "LocationAttributes": [ { "LocationState": { "Location": "us-west-1", "Status": "ACTIVE" } }, { "LocationState": { "Location": "ca-central-1", "Status": "ACTIVE" }, "StoppedActions": [ "AUTO_SCALING" ], "UpdateStatus": "PENDING_UPDATE" } ], "NextToken": "string" } See Also For more information about using this API in one of the language-specific Amazon SDKs, see the following:
https://docs.amazonaws.cn/gamelift/latest/apireference/API_DescribeFleetLocationAttributes.html
2022-09-25T02:06:33
CC-MAIN-2022-40
1664030334332.96
[]
docs.amazonaws.cn
This section will show you how to set up the Ameto client library, upload an asset to Ameto, create a processing pipeline, trigger a job, and retrieve the job result. TIP You need an API key to try out the examples. If you haven't done it already, register a free account and create an API key. Using Maven or Gradle (requires Java 8 or higher): // Initialize the client library Ameto ameto = new Ameto("", "<your API token>"); // Create a pipeline consisting of a single step that returns an optimized JPEG image Pipeline pipeline = ameto.addPipeline("convertToJpeg") .format(Pipeline.Format.Jpeg) .build(); Path image = Paths.get("<path/to/image.png>"); Asset asset = ameto.add(image); // Retrieve an asset from the file ProcessedAsset result = pipeline.push(asset); // Trigger a job Path outputPath = Paths.get("<path/to/result.jpeg>"); Files.copy(result.getEssence(), outputPath); // Write the result to your disk Ameto is built on three fundamental concepts: assets, pipelines, and jobs. Assets are arbitrary binary data that are valuable to you. Your company logo is an asset, for example. Once uploaded, an asset will be archived until it is explicitly deleted. A pipeline is a sequence of processing steps. A "create thumbnail" pipeline, for instance, might resize and sharpen an image. Pipelines accept assets and create processed assets. All pipeline steps are non-destructive, meaning they do not modify the asset. A new asset is created instead. Once an asset is "pushed" into a pipeline, a job is triggered that performs the processing steps defined by the pipeline. The example code on the right does exaclty that.
https://docs.ameto.de/tutorials/
2022-09-25T02:39:54
CC-MAIN-2022-40
1664030334332.96
[]
docs.ameto.de
# Open RDP service # Problem description Our data source has detected in your network an open and unprotected RDP (Remote Desktop Protocol) desktop-sharing service, which anyone can access from the Internet. RDP is a common protocol used to share your computer desktop with someone else. It is commonly used to allow family members to administer relatives' computers remotely, or to allow IT support to access and service your computer from somewhere else. While there is a valid use for RDP for remote administration, having your computer open for anyone from the Internet is likely not what you want. Often RDP is enabled on work computers when they are being used inside the office network. When you move the computer to a home network, as a result of a misconfiguration the RDP service may be left open, and visible to the whole Internet. Having the RDP RDP service from it. Search for instructions from the Internet with the keywords disable rdp and include your operating system version to further refine the search results, e.g. disable rdp windows 10. See Disable Remote Desktop from Windows 10 (opens new window) for step-by-step instructions for Windows PCs. If the RDP service is on intentionally and you want to keep it that way, at least block access to the service from the Internet at your firewall or home router. If the service is needed for work, ask your IT support to configure the service in a secure way.
https://docs.badrap.io/types/rdp.html
2022-09-25T01:26:54
CC-MAIN-2022-40
1664030334332.96
[]
docs.badrap.io
17.162. ábra - Applying example for the Image Gradient filter Original image Filter „Image Gradient” applied with default options. Filter „Image Gradient” applied with Direction option. This filter detects edges in one or two gradient directions. Magnitude is default: it combines both directions. Direction: only one direction is used. Both is like Magnitude. 17.164. ábra - Output mode examples Original image, with abrupt luminosity change Option Magnitude is selected Option Direction is selected. In result, black is no edge detected, white is edge detected. The result of this filter can be larger than the original image. With the default Adjust option, the layer will be automatically resized as necessary when the filter is applied. With the Clip option the result will be clipped to the layer boundary.
https://docs.gimp.org/2.10/hu/gimp-filter-image-gradient.html
2022-09-25T02:33:47
CC-MAIN-2022-40
1664030334332.96
[]
docs.gimp.org
Set up and administer Splunk Observability Cloud 🔗 One of the first steps in getting started with Observability Cloud is setting up your organization. In Observability Cloud, an organization, or account, is the highest-level security grouping. For example, data within an organization cannot be accessed by other organizations and their users. To set up your organization, create and carry out a plan that addresses key aspects of your organization as covered below. Many of these tasks require administrator permissions and some tasks might need to be performed on an ongoing administrative basis beyond the initial setup. Here are key aspects of your Observability Cloud organization to plan for and set up: Set up authentication that follows your security protocols Invite administrators to help with the setup process For information, see Create and manage users in Splunk Observability Cloud. Create access tokens to authenticate API calls and data ingestion For information, see Create and manage organization access tokens using Splunk Observability Cloud. Create and configure teams to ensure that correct groups of users have easy access to relevant dashboards and alerts For information, see Create and manage teams in Splunk Observability Cloud. Invite users For information, see Create and manage users in Splunk Observability Cloud. Integrate with notification services to enable team workflows and communication channels For information, see Send alert notifications to third-party services using Splunk Observability Cloud. Create global data links For information, see Link metadata to related resources using global data links in Splunk Observability Cloud. Understand your subscription usage For information about APM subscription usage, see Monitor Splunk APM subscription usage. For information about Infrastructure Monitoring subscription usage, see Monitor Splunk Infrastructure Monitoring subscription usage. For information about usage metrics for Observability Cloud, see View organization metrics for Splunk Observability Cloud.
https://docs.signalfx.com/en/latest/admin/admin.html
2022-09-25T01:36:42
CC-MAIN-2022-40
1664030334332.96
[]
docs.signalfx.com
equipment ] Deletes a dataset and associated artifacts. The operation will check to see if any inference scheduler or data ingestion job is currently using the dataset, and if there isn't, the dataset, its metadata, and any associated data stored in S3 will be deleted. This does not affect any models that used this dataset for training and evaluation, but does prevent it from being used in the future. See also: AWS API Documentation delete-dataset --dataset>] --dataset-name (string) The name of the dataset.
https://docs.aws.amazon.com/cli/latest/reference/lookoutequipment/delete-dataset.html
2022-09-25T01:54:13
CC-MAIN-2022-40
1664030334332.96
[]
docs.aws.amazon.com
Prokka Descriptiom Prokka is a software tool to annotate bacterial, archaeal and viral genomes. License Free to use and open source under GNU GPLv3. Available Prokka 1.14.6 is Available in Puhti. Usage In Puhti, Prokka should be executed as a batch job. An interactive batch job for testing Prokka can be started with command: sinteractive -i -m 8G Prokka is installed to Puhti as a bioconda environment called prokka. In addition to Prokka, this environment contains also Roary pan genome pipeline. To use it, you should activate Prokka environment with commands: export PROJAPPL=/projappl/your_project_name module load bioconda source activate prokka prokka. By default Prokka tries to use 8 coputing cores, but in this interactive batch job case, you have just one core available. Because of that you should always define the number of cores that Prokka will use with option -cpus. For example: prokka --cpus 1 contigs.fasta Larger analysis should be executed as a batch job utilizing several cores. Sample batch job script (batch_job_file.bash) below. #!/bin/bash -l #SBATCH --job-name=prokka #SBATCH --output=output_%j.txt #SBATCH --error=errors_%j.txt #SBATCH --time=24:00:00 #SBATCH --ntasks=1 #SBATCH --nodes=1 #SBATCH --cpus-per-task=8 #SBATCH --mem=16000 #SBATCH --account=your_project_name # #set up prokka export PROJAPPL=/projappl/your_project_name module load bioconda source activate prokka #Run prokka prokka --cpus $SLURM_CPUS_PER_TASK --outdir results_case1 --prefix mygenome contigs_case1.fa In the batch job example above one Prokka task (--ntasks 1) is executed. The job reserves 8 core (--cpus-per-task=8 ) with total of 16 GB of memory (--mem=16000). The maximum duration of the job is twelve hours (--time 24 See the Puhti user guide for more information about running batch jobs.
https://docs.csc.fi/apps/prokka/
2022-09-25T01:41:47
CC-MAIN-2022-40
1664030334332.96
[]
docs.csc.fi
How to add docker hub credentials to a project Since 2nd November 2020, docker hub has imposed a rate limit for image pulls. For Rahti this means a limit of 200 pulls every 6 hours. This limit can be easily reached and it prevents new applications to be deployed if the image is in docker hub. The error looks like this: Pulling image "docker.io/centos/python-38-centos7@sha256:da83741689a8d7fe1548fefe7e001c45bcc56a08bc03fd3b29a5636163ca0353" ... pulling image error : toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: The solution involves using both the Web UI and the client: First, you need a docker hub account. It can be a free account. In this case you will still have rate limits, but only the pulls you have done using your credential will be taken into account for the rate limit. Paid accounts have no limit. - You will need a TOKEN, go to and create a token. You will be able to see when the token was last used. Also you can create several tokens, and use them in different projects, increasing security. Secondly, navigate to the Web UI and open your project. On the left navigation, select Resources -> Secrets. On upper right, select "Create Secret" button, and on the secret creation dialogue, set: - Secret Type = "Image Secret" - Secret Name = give it a clear name, this will be used later - Authentication Type = "Image Registry Credentials" - Image Registry Server Address = "docker.io" - Username = your docker user name - Password = your docker token Note: Leave "Link secret to a service account." empty, we'll do this on command line. Verify values are correct and select "Create". Next we'll go to command line. Log in and use following commands to link credentials to service accounts: $ oc -n <project-name> secrets link builder <secret-name> --for=pull $ oc -n <project-name> secrets link deployer <secret-name> --for=pull $ oc -n <project-name> secrets link default <secret-name> --for=pull Note: Substitute Troubleshooting If the error persists, you may check two things: From you will be able to see when the token was last used. Please check that if the time there matches the last time it should have been used. Check that the links between the secret and the service accounts are there: $ oc -n <project-name> describe sa builder $ oc -n <project-name> describe sa deployer $ oc -n <project-name> describe sa default Note: Substitute
https://docs.csc.fi/cloud/rahti/tutorials/docker_hub_login/
2022-09-25T01:36:17
CC-MAIN-2022-40
1664030334332.96
[array(['/cloud/rahti/img/create_docker_hub_secret.png', 'create secret'], dtype=object) ]
docs.csc.fi
Caveats fcl-dev-wallet support Flowser currently supports fcl-dev-wallet integration only for "custom projects", where flow emulator is managed (started/stopped) by the flowser itself. We recommend that you do not run flow emulator by yourself and instead create a custom emulator configuration through the flowser app. If you do want to run the emulator by yourself (from a shell with flow emulator command), please leave a comment or a "thumbs up" on this issue.
https://docs.flowser.dev/getting-started/caveats/
2022-09-25T01:21:30
CC-MAIN-2022-40
1664030334332.96
[]
docs.flowser.dev
Release Version 1.96 Infrastructure GCP as Cloud Provider for Hevo US Hevo is now available on Google Cloud Platform (GCP) to allow users to choose GCP as their cloud provider for replication and processing data. This feature has currently been implemented for the US (N. Virginia ) region. It is expected to benefit customers who were/are using GCP-native Sources and or Destinations such as Google BigQuery and MySQL and Snowflake on GCP, with the exception of the Databricks Destination. Key benefits for such customers include: Reduced latency as computing also takes place in GCP. Reduced cost as no more cross-cloud data transfer will be needed. Existing customers who want to move the Pipelines to GCP can reach out to Hevo Support to request account creation on GCP, and then re-create their Pipelines in the US-GCP region. Pipelines Draft Pipelines In a major step to allow users to configure the Hevo Pipelines according to their convenience and improve collaboration, provided an option for saving incomplete Pipelines in Draft status. This ensures that users do not lose their work if they exit the Hevo half-way through Pipeline creation. It also makes it easier for team members to collaborate and help with the Pipeline configuration. Users or their team member(s) can continue the Pipeline creation from where they left off. At that time, they are redirected to the Source configuration page to reauthorize or re-renter the API key as it may have expired. Hevo also saves the Destination in Draft status if they have entered at least some of its configuration settings. However, it would not be visible in the Destinations List View and must be configured as part of the draft Pipeline only. Read Draft Pipelines. Sources Azure Blob Storage as a Source Provided integration with Azure Blob Storage as a Source for creating Pipelines. Read Azure Blob Storage. User Experience Easier Object Selection for Stripe Enhanced the UI of the Select the Objects you want to replicate and Pipeline Overview page for the Stripe Source, to highlight the objects for which the Stripe API key does not have permissions for ingesting data. Hevo now displays a warning icon along with the permission required to access the object. Read Object Selection. Re-categorization of Source Integrations The Source integrations have been re-categorized in the Hevo website based on either the business purpose they serve or the way they are implemented or used. Sources that serve multiple business purposes are present in more than one category. Accordingly, the Hevo Sources documentation pages have also been reorganized. Read Types of Sources for the new Source classifications. Note: Refresh your bookmarks to access the latest documentation. Fixes and Improvements Sources Support for RedoLog Ingestion mode for Oracle Database 19c and higher Documentation Updates The following pages have been created, enhanced, or removed in Releases 1.96: Account Management Destinations Getting Started Authentication for Google Workspace Applications (New) Authentication for GCP-hosted Services (New) Selecting your Hevo Region User Roles (Deleted) User Roles and Workspaces (New) Using Google Account Authentication Introduction Pipelines Sources - Azure Blob Storage (New) - - - - - - - - - - Revision History Refer to the following table for the list of key updates made to this page:
https://docs.hevodata.com/release-notes/v1.96/
2022-09-25T02:25:20
CC-MAIN-2022-40
1664030334332.96
[]
docs.hevodata.com
WooCommerce On This Page WooCommerce is an open-source, e-commerce plugin for WordPress. It is designed for small to large-sized online merchants using WordPress. It uses a MySQL database. The details associated with the content in WooCommerce are stored as Events in the MySQL database. For example, orders, payments, and shipping. WooCommerce provides some default tables and also allows you to create custom tables. You can replicate the Events data from your WooCommerce account to a Destination database or data warehouse using Hevo Pipelines. Note: You must provide the MySQL database details to configure the WooCommerce Source. Prerequisites Database credentials of the MySQL database used by WooCommerce. Read more about creating a database for WooCommerce. Hevo’s IP addresses are whitelisted. SELECT privileges are granted to the database user. Perform the following steps to configure WooCommerce via MySQL as the Source in your Pipeline. You must provide details of the MySQL database to configure the WooCommerce database user: GRANT SELECT ON *.* TO 'username'; # to grant privileges to a user with <username> as username Specify MySQL Connection Settings: Specify the following settings in the Configure your WooCommerce WooCommerce is a WordPress plugin and uses many of the WordPress tables. See the WordPress Database Description document to understand the structure of WordPress. The following is the list of tables (objects) that are created by default at the Destination when you run the Pipeline. In addition to the default tables, you can ingest any custom tables that you have created. Note: Each table name is prefixed with your <_WP Database="" Table="" Prefix="">_, for example, _wp__. Learn more about WooCommerce database. Limitations - The WordPress SQL table prefix should be less than 20 characters. Otherwise, WooCommerce is unable to create the tables properly and can lead to problems in loading the data. See Also Revision History Refer to the following table for the list of key updates made to this page:
https://docs.hevodata.com/sources/prod-analytics/woocommerce/
2022-09-25T02:23:34
CC-MAIN-2022-40
1664030334332.96
[array(['https://res.cloudinary.com/hevo/image/upload/v1643024556/hevo-docs/WooCommerce/configure_your_woocommerce_source_tcyii9.png', 'Configure WooCommerce via MySQL as a Source'], dtype=object) ]
docs.hevodata.com
QuotedValueList() Returns a quoted list of all the values, for a given column within the query, delimited by the value given. this function is deprecated, use instead the function queryColumnData Status: deprecated QuotedValueList( query_column=queryColumn, delimiter=string ); Examples There are currently no examples for this function. See also - Search Issue Tracker - Search Lucee Test Cases (good for further, detailed examples)
https://docs.lucee.org/reference/functions/quotedvaluelist.html
2022-09-25T02:45:13
CC-MAIN-2022-40
1664030334332.96
[]
docs.lucee.org
etcd certificates are signed by the etcd-signer; they come from a certificate authority (CA) that is generated by the bootstrap process. The CA certificates are valid for 10 years. The peer, client, and server certificates are valid for three years. etcd certificates are used for encrypted communication between etcd member peers, as well as encrypted client traffic. The following certificates are generated and used by etcd and other processes that communicate with etcd: Peer certificates: Used for communication between etcd members. Client certificates: Used for encrypted server-client communication. Client certificates are currently used by the API server only, and no other service should connect to etcd directly except for the proxy. Client secrets ( etcd-client, etcd-metric-client, etcd-metric-signer, and etcd-signer) are added to the openshift-config, openshift-monitoring, and openshift-kube-apiserver namespaces. Server certificates: Used by the etcd server for authenticating client requests. Metric certificates: All metric consumers connect to proxy with metric-client certificates.
https://docs.okd.io/4.8/security/certificate_types_descriptions/etcd-certificates.html
2022-09-25T02:35:08
CC-MAIN-2022-40
1664030334332.96
[]
docs.okd.io
This section describes a collection of uniform styles throughout the manual. The conventions used in this manual are as follows: The GUI convention styles are intended to mimic the appearance of the GUI. In general, the objective is to use the non-hover appearance, so a user can visually scan the GUI to find something that looks like the instruction in the manual. A shadow indicates a clickable GUI component. The manual also includes styles related to text, keyboard commands and coding to indicate different entities, such as classes, or methods. They don’t correspond to any actual appearance. Lines of code are indicated by a fixed-width font PROJCS["NAD_1927_Albers", GEOGCS["GCS_North_American_1927", GUI sequences and small amounts of text can be formatted inline: Click File QGIS ‣ Quit to close QGIS. This indicates that on Linux, Unix and Windows platforms, click the File menu option first, then Quit from the dropdown menu, while on Macintosh OSX platforms, click the QGIS menu option first, then Quit from the dropdown menu..
https://docs.qgis.org/2.0/fi/docs/user_manual/preamble/conventions.html
2022-09-25T01:48:37
CC-MAIN-2022-40
1664030334332.96
[]
docs.qgis.org
If all the parcels in the response do not fit in one response buffer, the database first sends one buffer-full of the response and later, as directed by CLI, sends the remaining response one buffer-full at a time. A buffer-full of parcels refers to the number of whole parcels that can fit in the buffer. Parcels do not span buffers. As a rule of thumb, the larger the response buffer, the more efficient the transmission of the response from the database.
https://docs.teradata.com/r/Teradata-Call-Level-Interface-Version-2-Reference-for-Workstation-Attached-Systems/October-2021/Response-Sequences/The-Response-Buffer/Sending-Buffer-fulls
2022-09-25T01:59:31
CC-MAIN-2022-40
1664030334332.96
[]
docs.teradata.com
Basic configuration synchronization This section sets up configuration synchronization with the scenario shown in the following figure. In this example: - The master system is R1. - The standby system is R2. The remote system is to be accessed by using the default username and password: vyatta. - Firewall configuration is to be synchronized. To configure R1 for configuration synchronization in this way, perform the following steps in configuration mode.
https://docs.vyatta.com/en/supported-platforms/vrouter/configuration-vrouter/high-availability/configuration-synchronization/configuration-synchronization-examples/basic-configuration-synchronization
2022-09-25T02:06:33
CC-MAIN-2022-40
1664030334332.96
[]
docs.vyatta.com
Zivver Office Plugin I received an update notification Why am I receiving an update notification saying: “A new version is available!”? This feature helps you to stay up to date. The notification tells you that a newly released update has been published for the Zivver Office Plugin, and that you can install it. What improvements were added to the newest version of the Zivver Office Plugin? For the latest version one can find the release notes here. It lists the fixes and improvements. Do I have to install the update right away? It is advised to install the update as fast as possible. However, the notification can be ignored if it comes up at an inconvenient time and one wishes to install the update later on. You will be prompted to update again when you start up Outlook. I don’t have the required rights to install the update. In that case it is likely that your system administrator limited your rights, in order to ensure that installations and updates are performed under the guidance of the system administrator. Please contact your system administrator in that situation. I tried to install the update, but now Zivver does not work anymore. What should I do? It might help to uninstall or remove the currently installed Zivver Office Plugin in the Windows overview of installed applications. After removing the Plugin, one can install the new version by performing the steps in this installation manual. If you experience problems while performing these steps, please contact Zivver Support via [email protected].
https://docs.zivver.com/en/user/officeplugin/references/update-notification.html
2022-09-25T01:14:49
CC-MAIN-2022-40
1664030334332.96
[]
docs.zivver.com
This page shows how to install and configure the Screens players on your devices. Deploying Screens Note For information about concepts and terminology of Screens, see Screens Concepts. For information about Screens authoring capabilities, see Authoring Screens. There are three different players for iOS, OS X and Android. To install the Screens player on your iPad: Install the AEMScreensPlayer ipa with iTunes. Open Settings > General > Profiles & Device Management. Under Enterprise App, you should see "Adobe Systems Inc.", open it. Tap Trust "Adobe Systems Inc". To configure the player: Open the Settings for AEMScreensPlayer. Change the URL field to contain the IP of your machine. Note The Device Id, User and Password will be filled by the registration process. The resolution field allows you to set the resolution you would like to simulate on the iPad. Note If you leave the Resolution field empty, it will use the native resolution of the device. To install and configure the Screens player on your mac: Double-click on the DMG file and move AEMScreensPlayer to the Application folder. Start the application. Press Ctrl+Cmd+F to exit fullscreen mode. Click on AEMScreensPlayer and then Preferences. Enter the URL of your AEM server. Note The Device Id, User and Password will be filled by the registration process. Restart the player for the new preferences to be applied. To install and configure the Screens player on your Android device: Install the application and tap DONE. Find the AEM Settings widget and drag and drop it to the home screen. Open the settings widget from the home screen and configure it accordingly. Note The device id, User and Password will be filled by the registration process. The devices registration process is done on 2 separate machines: - The actual device to be registered, for example your Signage Display - The AEM server that is used to register your device On your device, start the AEM Screens Player. The registration UI is shown. In AEM, navigate to the Devices folder of your project. Tap/click the Device Manager button in the action bar. Tap/click the Device Registration button on the top right. Select the required device (same as step 1) and tap/click Register Device. In AEM, wait for the device to send its registration code. In your device, check the Registration Code. If the Registration Code is the same on both machines, tap/click Validate button in AEM. Set the desired name for the device, and select Register button. Note If you don't set a name, it will use its unique identifier as name. Tap/click Finish to complete the registration process . Note The Go Back button would allow you to register a new device. The Assign Display button would let you directly add the device to a display. In your device, the registration UI should be updated to show the device identifier. In AEM, the device should be added to the list and remain unassigned. To learn how to assign it, see Device Assignment. By submitting your feedback, you accept the Adobe Terms of Use. Thank you for submitting your feedback. Any questions?
https://docs.adobe.com/docs/en/aem/6-2/deploy/screens.html
2017-02-19T18:58:59
CC-MAIN-2017-09
1487501170249.75
[array(['/content/docs/en/aem/6-2/deploy/screens/_jcr_content/contentbody/image.img.png/1462287556587.png', 'file'], dtype=object) array(['/content/docs/en/aem/6-2/deploy/screens/_jcr_content/contentbody/image_931371664.img.png/1462287571658.png', 'file'], dtype=object) ]
docs.adobe.com
Subgroup Discovery (RapidMiner Studio Core) SynopsisThis operator performs an exhaustive subgroup discovery. The goal of subgroup discovery is to find rules describing subsets of the population that are sufficiently large and statistically unusual. Description This operator discovers subgroups (or induces a rule set) by generating hypotheses exhaustively. Generation is done by stepwise refining the empty hypothesis (which contains no literals). The loop for this task hence iterates over the depth of the search space, i.e. the number of literals of the generated hypotheses. The maximum depth of the search can be specified by the max depth parameter. Furthermore the search space can be pruned by specifying a minimum coverage (by the min coverage parameter) of the hypothesis or by using only a given amount of hypotheses which have the highest coverage. From the hypotheses, rules are derived according to the user's preference. This operator allows the derivation of positive rules and negative rules separately or the combination by deriving both rules or only the one which is the most probable due to the examples covered by the hypothesis (hence: the actual prediction for that subset). This behavior can be controlled by the rule generation parameter. All generated rules are evaluated on the ExampleSet by a user specified utility function (which is specified by the utility function parameter) and stored in the final rule set if: - They exceed a minimum utility threshold (which is specified by the min utility parameter) or - They are among the k best rules (where k is specified by the k best rules parameter). The problem of subgroup discovery has been defined as follows: Given a population of individuals and a property of those individuals we are interested in finding population subgroups that are statistically most interesting, e.g. are as large as possible and have the most unusual statistical (distributional) characteristics with respect to the property of interest. In subgroup discovery, rules have the form Class >- Cond, where the property of interest for subgroup discovery is the class value Class which appears in the rule consequent, and the rule antecedent Cond is a conjunction of features (attribute-value pairs) selected from the features describing the training instances. As rules are induced from labeled training instances (labeled positive if the property of interest holds, and negative otherwise), the process of subgroup discovery is targeted at uncovering properties of a selected target population of individuals with the given property of interest. In this sense, subgroup discovery is a form of supervised learning. However, in many respects subgroup discovery is a form of descriptive induction as the task is to uncover individual interesting patterns in data. Rule learning is most frequently used in the context of classification rule learning and association rule learning. While classification rule learning is an approach to predictive induction (or supervised learning), aimed at constructing a set of rules to be used for classification and/or prediction, association rule learning is a form of descriptive induction (non- classification induction or unsupervised learning), aimed at the discovery of individual rules which define interesting patterns in data. Let us emphasize the difference between subgroup discovery (as a task at the intersection of predictive and descriptive induction) and classification rule learning (as a form of predictive induction). The goal of standard rule learning is to generate models, one for each class, consisting of rule sets describing class characteristics in terms of properties occurring in the descriptions of training examples. In contrast, subgroup discovery aims at discovering individual rules or 'patterns' of interest, which must be represented in explicit symbolic form and which must be relatively simple in order to be recognized as actionable by potential users. Moreover, standard classification rule learning algorithms cannot appropriately address the task of subgroup discovery as they use the covering algorithm for rule set construction which hinders the applicability of classification rule induction approaches in subgroup discovery. Subgroup discovery is usually seen as different from classification, as it addresses different goals (discovery of interesting population subgroups instead of maximizing classification accuracy of the induced rule set). Input training set (Data Table) This input port expects an ExampleSet. It is the output of the Generate Nominal Data operator in the attached Example Process. The output of other operators can also be used as input. Output model (Rule Set) The Rule Set - modeThis parameter specifies the discovery mode. - minimum_utility: If this option is selected the rules are stored in the final rule set if they exceed the minimum utility threshold specified by the min utility parameter - k_best_rules: If this option is selected the rules are stored in the final rule set if they are among the k best rules (where k is specified by the k best rules parameter). - utility_functionThis parameter specifies the desired utility function. Range: selection - min_utilityThis parameter specifies the minimum utility. This parameter is useful when the mode parameter is set to 'minimum utility'. The rules are stored in the final rule set if they exceed the minimum utility threshold specified by this parameter. Range: real - k_best_rulesThis parameter specifies the number of required best rules. This parameter is useful when the mode parameter is set to 'k best rules'. The rules are stored in the final rule set if they are among the k best rules where k is specified by this parameter. Range: integer - rule_generationThis parameter determines which rules should be generated. This operator allows the derivation of positive rules and negative rules separately or the combination by deriving both rules or only the one which is the most probable due to the examples covered by the hypothesis (hence: the actual prediction for that subset). Range: selection - max_depthThis parameter specifies the maximum depth of breadth-first search. The loop for this task iterates over the depth of the search space, i.e. the number of literals of the generated hypotheses. The maximum depth of the search can be specified by this parameter Range: integer - min_coverageThis parameter specifies the minimum coverage. Only the rules which exceed this coverage threshold are considered. Range: real - max_cacheThis parameter bounds the number of rules which are evaluated (only the most supported rules are used). Range: integer Tutorial Processes Introduction to the Subgroup Discovery operator The Generate Nominal Data operator is used for generating an ExampleSet. The ExampleSet has two binominal attributes with 100 examples. The Subgroup Discovery operator is applied on this ExampleSet with default values of all parameters. The mode parameter is set to 'k best rules' and the k best rules parameter is set to 10. Moreover the utility function parameter is set to 'WRAcc'. Thus the Rule Set will be composed of 10 best rules where rules are evaluated by the WRAcc function. The resultant Rule Set can be seen in the Results Workspace. You can see that there are 10 rules and they are sorted in order of their WRAcc values.
http://docs.rapidminer.com/studio/operators/modeling/predictive/rules/subgroup_discovery.html
2017-02-19T18:41:01
CC-MAIN-2017-09
1487501170249.75
[]
docs.rapidminer.com
This document contains information for an outdated version and may not be maintained any more. If some of your projects still use this version, consider upgrading as soon as possible.. [Zend_Date API](). This means all formats are defined in [ISO date format](), not PHP's built-in date(). class: URLSegmentFilter::$default_use_transliterator = false DateField::set_default_config() and TimeField::set_default_config(). If no 'locale' default is set on the field, i18n::get_locale() will be used. Important: Form fields in the CMS are automatically configured according to the profile settings for the logged-in user ( Member->Locale, Member->DateFormat and Member->TimeFormat). This means that in most cases, fields created through DataObject::getCMSFields() The preferred template syntax has changed somewhat since version 2.x. %> Collecting text To collect all the text in code and template files we have just to visit: http://<mysite>/dev/tasks/i18nTextCollectorTask: http://<mysite>/dev/tasks/i18nTextCollectorTask/?module=cms You'll need to install PHPUnit to run the text collector (see testing-guide).). The format of language definitions has changed significantly in since version 2.x. i18n in javascript works with mostly the same assumption as its PHP-equivalent. Requirements Add the i18n library requirement to your code. Requirements::javascript(FRAMEWORK, ]()
https://docs.silverstripe.org/en/3.0/topics/i18n
2017-02-19T18:42:28
CC-MAIN-2017-09
1487501170249.75
[]
docs.silverstripe.org
No art submitted yet Consequences of the Talking Board: Part III Ashley’s Transformation The almost omnipotent Southern moon snuck lunar rays through the blinds of the small apartment living room. In the next room, Ashley pushed around the piles of junk mail littered across her kitchen table as she searched for car keys. Having returned from a visit to her parent’s house that afternoon, she had thoughtlessly tossed her key chain on the table earlier that day. She had plans that evening to meet Mabel, Savannah, and a few other friends at the VFW for another small town Saturday night that would entail cheap beer, snobbish remarks about the more unscrupulous Rennisville women, and dances with the local guys. “Damn it!” the young women curtly said to herself, frustrated that she had once against failed for another week to bother cleaning the table, which acted as a temporary filing station. The keys had obviously slipped underneath the unneeded parcels. All she needed were those keys to get going. She was already dressed for a night on the town. Wearing the standard young country girl uniform, Ashley, along with a medium wash jean skirt had put on a red flannel shirt with a white tank top underneath. But no country girl outfit was complete without a pair of cowboy boots, which Ashley wore dutifully. Not that her feet had ever been on a horse or tractor, much less did any backbreaking work indicative of the uniform, but she also never actually knew anything about Waylon Jennings or George Jones beyond the shameless name-dropping written by the latest Nashville bro-country songwriters, either. Regardless of any country poseur pretensions, Ashley convincingly pulled it off with short, sandy blonde hair and faint freckles. The sound of shuffling papers was broken by the electronic buzz of a cell phone. Placing her purse on the table, Ashley pulled out her phone and saw Mabel’s name and number appear on the screen. Last minute phone calls before a night out were not unusual. “Hey girl, I’m just about to leave once I find my keys.” “Ashley…” Her friend’s voice shook. “Mabel, is everything okay?” Ashley was not the smartest girl around, but she had a keen sense of emotional intelligence. “Ashley…you’re not going to believe this.” Mabel was attempting to hold back some sort of anxiety on the phone. “Okay…” Concern filled Ashley voice. “You know Uume? He’s…he’s real.” ”Uume?” Ashely had no idea what Mabel was talking about. “Yeah, from the Ouija board…” ”Oh…” Ashley found the previous night’s events disturbing, but she had all but forgotten them. It was just a fun little game that people play. She had read some stories about séances and Ouija sessions going horribly wrong, but she had always figured those were just made up stories. “Uume is real and he followed me home last night.” By now, it was apparent that Mabel was almost unsuccessfully attempting to hold back tears. “Mabel, calm down!” It was a sincere attempt to assuage the panicked girl on the other end of the phone. Mabel continued, growing somewhat hysterical. “Uume is real and he followed me. He caught me in the field up by Savannah’s. He stripped me! Ashley, he stripped me naked.” The emphasis was on he as if Mabel were afraid to utter the demon’s name any more than needed to convey a point. “Mabel!” It was all Ashley could conjure up, trying to get her friend to slow down. “He then used some sort of magic on me…and he turned me into a…” Despite not being able to see the other person on the phone, Ashley knew tears were rolling down Mabel’s face at this point. What ever Mabel was getting at, actually saying it was difficult. “He turned me into a giant pe…” – and then radio silence on the other end. Ashley pulled the phone away, looking down at the screen to see the call had been dropped. She pushed the screen to redial Mabel. “User Unavailable” came up in bold, white letters. A sinking dread fell over the Ashley. She flipped to the text messaging option and began typing. “MABEL I THINK WE GOT DISCONNECTED PLEASE TELL ME THAT YOU ARE ALRIGHT. I AM COMING OVER RI…” Before she could finish her message, the phone slipped from her hands like a bar of soap, shooting across the room and landing on the kitchen counter. Ashley yelped. Things were starting to become weird. Mabel just called, hysterically claiming that the demon they had supposedly spoken to the night before was in fact real, and now Ashley’s phone just leapt from her hands. Ashely rationalized to herself that she was not in fact gripping the phone very tightly. Perhaps the stars just lined up in that moment and she accidentally sent the device flying in her mad typing rush to Mabel. The frightened young woman stared at the counter. Still, a part of her remained afraid to grab the phone on the other side. Placing her palms down in the air, Ashley explained to herself, “I’ll find my keys, grab my phone, and get over to Mabel’s.” Before she leaned over to start rooting again for her keys, a guttural and hellish voice rumbled through the doorway. It penetrated Ashley’s entire body, sending chills down her core and causing her to jump, almost knocking into the refrigerator that stood behind. “matiti yako, kitako yako, uke wako ni wangu!” The demon needed no further introduction. The horrid truth was clear to Ashley: Uume was real. Worst of all: He was inside her home. “lakini kwanza i itachukua punda wako kwa ajili ya mgodi” Ashley did not know what the virility spirit had just said, but his demonic words were foreboding in tone. With the power of an invisible unruly beast, the white wooden chair flung itself from under the table in front of Ashley, landing upturned on the other side of the room. “AHHHHHHH!!!!!!!” Ashley roared. Reacting to the very deliberate thrashing of the chair, she leaned back against the refrigerator, bending slightly at her stomach. In that moment, she was a damsel in distress – feminine prey for an over-androgenized virility demon, whose only immediate interest was to sexually corrupt her in anyway he saw fit. In the context of that situation, her physical attributes would seem all the more amplified to the poised eye of an audience. In appearance, she was the final girl in every horror film who you wanted to see succumb to the perversions of the monster but by whom you are always let down. But Ashley’s situation was not that of a typical late night viewing, and Uume had no use for cliché tropes. An unseen force latched around Ashley’s wrists and dragged her forward. She fell halfway across the kitchen table, knocking envelopes and papers all over. The palms of her hands and meaty chest slapped against the wooden surface. The wind was briefly knocked from Ashley’s lungs when her stomach slammed into the edge of the table. Ashely tried to stand up, but an unflinching invisible weight kept her head and torso pinned down, bending her over the edge of the board in a state of submission. A swooshing sound seemed to envelope the air all around the hot little quarry. It started off subtle and then grew louder, like a baby sonic boom. The sound came to a cresendo, ending in a loud, physical ‘whap’ on Ashley’s rear end. She let out a scream that was partially due to pain and partially due to humiliation. Uume had just spanked her like an insolent child. The cutting of air filled the room again, landing another firm smack upon her ass. She found no relief from the tight denim that stretched over her toned butt that barely shook from the spirit’s discipline. A third slap came, and then a fourth and a fifth. “No, please stop, “Ashley cried out, trying to spit pieces of hair that had become unkempt and lodged in her mouth. But this plea only seemed to further enrage the lustful demon. He spanked her unprotected ass so hard that the entire table slid forward. Before she could recover, a seventh crack caused Ashley to lunge forward, pushing the makeshift spank bench up against the counter. His victim now jammed with no place to go, Uume laid a barrage of floggings upon Ashley’s ass. As the spankings continued, chilling sensations ran across Ashley’s crotch. The chill became warmer, as if some type of soothing cream were being applied to her twat. The sensation started across her vaginal lips before boring itself further inside. Her face flushed red with shame and heat. Some of the weight coming down upon her back subsided, causing her to lean upwards. Each subsequent spank made her hips pitch forward, making it look as though Ashley were humping the table. But any relief was really just Uume toying with Ashley’s defenseless ass. He wanted to give her the sense that there was an escape, that she was able to avoid the fates of Mabel and Savannah. Ashley would get hers, make no mistake about it. The demon let go and Ashely was able to break free. She leapt away from the table, trying to not trip over herself. She gave the table and floor one last look, hoping to see her car keys – no luck. She sprinted from the kitchen into the darkened living room, out the door and onto the sidewalk of her small apartment complex. Frantically, Ashley ran down the row of units, banging on doors. “Please, please someone be home,” she whimpered. With every knock, she kept moving, looking back to hope that someone would open a door. All of the lights were turned off in the units, but she couldn’t think of anything else at the moment. Unfortunately, the five or so apartments were all rented by young twenty-somethings much like Ashley herself. Most – if not all – were probably out on the town, which is what she would be doing if it weren’t for the monstrous assailant in her apartment. Ashley banged on the last apartment door. No one answered. She turned around, looking left and right. Her complex was out of the way and situated down a gravel road. It was quintessential country living – at least as far as apartment living went. The nearest house was at least a half a mile away. Making a split decision, she wasn’t going back to the apartment to grab her keys. She would have to hoof it out of there. Before giving the damsel a chance to run away, the misty form of Uume clouded behind Ashley, ensnaring her in its grasp and yanking her back against the door of the vacant apartment. The girl screamed until her throat was raw, but her cries only fell upon the uncaring deafness of the summer breeze. A hazy form, somewhat resembling a clawed hand, pushed up against her stomach, partially untucking Ashley’s flannel shirt. She felt the icy touch shoot through her, goose pimples rising. Uume grabbed one of his prey’s boobs and began to squeeze, forcefully massaging it. A second translucent hand reached up and began giving similar attention to the other tit. Ashley looked down, seeing each side of her bust moving up and down while twisting under the demon’s unholy mauling. Uume spoke into Ashley’s ear, “ni uhalifu kwamba wewe kujificha matiti hizi.” She didn’t understand the demon’s ancient Swahili language, but the now more serpentine voice that contrasted to the booming voice he spoke with earlier while inside her apartment caused the hair to stand up on the back of her neck. The demon’s pawing momentarily stopped and the mist began tugging on either side of her flannel shirt. A plastic button popped from its thread just below the neckline, dropping to the sidewalk. The steamy force worked its way down Ashley’s top, pulling the buttons free, until there was nothing left to keep the shirt open. The force ceremoniously pulled her shirt open and yanked it down off of her shoulders. The tank top underneath revealed just how much of a banging body Ashley really had. She was clearly the living embodiment of country girl spank-material. At first stunned for a moment after losing her shirt, Ashley saw an opportunity to break away from Uume’s hold. She still feeling some weighty pressure against her body, with all of the power she could muster, she was able to pull from his otherworldly grip. Ashley ran through the tiny parking lot that was empty, save for her own car that she could not even get into. Tripping along at first due to boot heels, she found a specific useful stride by stomping down with the whole of her sole that allowed her to maintain a steady gait while not losing balance. Ashley knew that she would probably run quicker in bare feet, but getting as far away from the grabby unseen hands of the demon took precedent. She didn’t know how far he would take his molestations nor did she wish to find out. The clop, clop of boot on pavement changed to the soft thud of hastened steps on grass. Ashley tore through the rolling meadow outside of her complex. She figured it was the quickest route to the main road. Daring never to look back, she ran into the forestry area that separated her small community from the back roads of Rennisville. Jumping and leaping through brier patches and across impeding stones like a nascent acrobat performing an improvised parkour routine, Ashley made sure that no move was wasted, no decision would let Uume catch up with her. Only one foot barely touched Leeman’s ditch; she sprinted up over the path leading to Mackey’s old farm. The old, rickety fence stood in her way. Ashley slowed down enough to grab onto the rotting wood and lift herself up. A queasy feeling took over the young woman’s stomach as she felt herself losing balance. Her feet slipped and she felt herself fall forward from the tallest beam. Intuitively, Ashley bent her arms up to avoid spraining her wrists, and she embraced for the hard impact of the cold ground. There was no time to assess the pain. Her backside still stung from the humiliating spanking she had earlier incurred anyway. “Did I lose him?” she thought. It was no matter; Ashley still pushed forward, trying to reach a semblance of civilization. The Southern moon lit the way across the field. Ashley felt herself lose balance a few times. Two or three times, she sensed her foot sinking into a gopher hole, but she quickly recovered and kept moving. In the frantic attempt to put as much distance as possible between herself and the uncleansed spirit, worry began to consume Ashley as she realized she had run so far that she no longer knew where she was. “Oh shit!…Oh shit!…Oh shit!” she said to herself with every deep breath of air. Coming to a stop with a hand upon a tree, Ashley bent over panting. Mustering up enough courage, she looked towards the absent path from which she had just come – no sign of him. Again, it was him. Ashley dare not think of Uume’s name, let alone speak it. At least fifteen minutes had passed since she ran from the parking lot. She thought surely she had lost the demon by now. But he…he just became present in her kitchen. Was the demon able to materialize at will? If that were the case, Ashley realized that she might not be particularly safe, though she seemed to be for now. “I can’t be alone tonight,” she reasoned. Her breathing became steady and shallower. Looking up at the moon, she thought, “As soon as I catch my breath, I’ll set out of here. I’ll find Mabel and Savannah and get this figured out.” The trees rustled in the Alabama breeze. It would have been a serene night if it weren’t for the disturbing haunting that the three women had apparently invited upon themselves the night before. The whole scenario seemed to give the evening air an eerie feeling. Ashley’s initial comfort that she had gotten away from Uume’s perverse touch was short-lived. She jilted her head around, feeling as though she were not alone. An unpleasant and chilly breeze shot across the foolish girl’s chest. She knew it was him. “Go away!” she screamed. The same breeze danced across her butt, almost mocking her with its demonically playful grope. “Leave! Me! Alone!” Her protests were completely ineffectual. In fact, they only helped to whet Uume’s twisted appetite. Orange smoke rose from the ground before the tremulous young lady. It flickered and spit like flames from a bonfire. Sulfuric smells filled the senses, and the entire area lit up in a glowing hue. The swirling vapors took on a vaguely humanoid shape with two burning eyes. Uume appeared, towering like Chernabog over Ashley. Ashley shook her head in fear and disbelief. There was nowhere to run. He would find her, wherever she went, and he would not stop until she suffered for her childish games from the night before. There was a penalty for such trespasses, and Uume took payment in a woman’s sexual being. He decided her kinky, mortifying fate in seconds. His laugh thundered. The smoke dissipated into long spires that whisked all around the stacked little babe. Up, down, and through they shot. Inhuman shrieks emanated from them, swelling and falling in volume. Ashley could hear what sounded like voices, shouting what she assumed were filthy insults at her: “hebu uke wako kukua” “uume itakuwa kutomba mwili wako” “kuja kwa ajili yangu” “matiti yako jazwa” “unahitaji kuwa juu ya mikono yako na magoti msichana” And then in a flash, the onslaught stopped. The spires disappeared, and Ashley was left standing alone next to the old tree. Uume’s voice roared one last time: “angalia chuchu yako kukua kwa muda mrefu na ngumu.” The sensations came on slowly, two of them, starting each as a prickle no larger than a pinpoint. They escalated, energized with tense chill, centering at the end of each of Ashley’s teardrop shaped boobs. She moaned, her syrupy Southern accent unmistakable in the disinclined coo from her parting lips. “You bastard!” she yelled to the wind. The provocation stirred ever so stronger in her bosom. First, a small nub at the front of each breast sprung up, causing the smooth surface of her non-ribbed tank top to become convex at the points of her tits. She stomped about, trying to pull her mind from the oven that was stoking in her chest. The telltale sign of female arousal only intensified. She reached up and rolled one of the swollen nipples between her thumb and index finger. The sensitivity was unbearable; as soon as Ashley pulled her hand away as if she had touched a hot stove, a stretching sound could be heard. Her mammalian ducts expanded and the soft, pink, suckable part of her began to harden and elongate. Two narrow, rounded marks grew over an inch long underneath her top. Ashley unmistakably felt tightness under her arms from the shifting of her clothes as a result of the pointy development. Her nipples stuck out like two fondle-ready headlights. The change had made her a something of a freak show. Ashley imagined herself chained with her hands behind her back in some carnival tent, while passersby pulled and twisted her new oversized tit-sausages. The thought was utterly repugnant. Sensitivity from each erect nipple shot down, forming a Y of sexualized energy between her legs. Ashley leaned against the tree, Uume’s unwelcome aphrodisiac magick taking hold, griping her precious pearl of a pussy in its bestial, clenching claw. Under the control of a virility spirit, her womanhood was not a flower to worship with boyish awe. Instead, it was a tool of pure domination, attached to her own body to cruelly use against her. Ashley didn’t close her eyes in ecstasy. They were wide open in disbelief and immoral shame. Wetness would be an understatement of description. Ashley felt like she had a swamp under her skirt. The smell of the woman’s own juices waifed upwards towards her nose; she had never actually ever smelled herself before, but now, she could almost taste the pungent fragrance of her meat purse. Two smokey rings materialized and hooked onto her obscenely large nipples, pulling her forward off of the tree by her tits. Her nipples were pulled and abused in every way possible by Uume’s touch. Each motion was intended to reduce her to a cumming cowgirl slut. And like that, her dam broke. Ashley’s discharge was steamy and heavy. It poured from under her skirt in big creamy dabs that greasily slid down her legs. Each serving of girl juice splashed into an ever-growing puddle on the earthen ground. Unreality set in. Ashley realized that she had never came in such a way before. Actually, she had never seen or heard of a woman coming in such a way. But there she was, churning out spunky blasts of splooge down her trembling legs like the subject of some strange medical study on sexual paraphilia. The molestation of her nipples subsided. Ashley grunted, swearing that her chest was stuck in an unrelenting vice that only ramped up her sexual agitation. The sensitivity contained in her suckables seemed to augment and then disperse throughout her entire chest. It was not a tactile tingle of arousal. The entire sensation had a drenched, naughty feel to it – a feeling that she had never really before experienced. Throwing her hands up, Ashley yelped. Her chest barreled out. Each boob grew in tandem with the other, pushing out against the inside of her shirt. It pulled up, showing more of her stomach, that now had a sticky looking glisten to it – as though her sweat had some creamy property to it. Her tits expanded in every physically possible direction – forward, up, and out – tearing the seams under her arms and ripping the shirt down its front. The unintentional v-slit widened, exposing her ivory-colored balcony bra. She reached up and began unintentionally massaging one of the meaty sacks. It’s not that she wanted to feel herself up; it’s that her mind almost commanded her to woman-handle her tits. She actually felt her chest pulling apart her brassiere from its expanding force. Ashley cried out, “I can’t help myself!” The tits tore her bra where it held strong or just simply pushed away at the locations at which it easily slid under he ginomorous mass. The growth stopping, Ashley’s tits had become two giant woman-udders hanging off of her. Their mass had expanded past the frame of her shoulders. The throb in Ashley’s pussy offered no relief. It wanted the girl on her hands and knees to drive home the point that she was no longer under control of her own sexual release. No, instead agency subjugated itself to the warm heat and hard clit between Ashlely’s two slippery inner-thighs. The weight of her overgrown tits, coupled with the overpowering orgasms, caused the young lady to fall forward. Her palms and knees fell into the soft ground. She could feel the facsimile of quicksand underneath her body – the mix of dirt with her own procreative secretions. The fall almost seemed calculated, done in such a way to put her in the most perfect objectifying position to unleash the remaining touch of Uume’s ancient magicks. Her tits flopped; her back cracked. For the first time ever, she felt like a slut – a feeling she despised. Heat stoked inside Ashley’s anus, spilling out in waves of animalistic arousal that blanketed the circumference of her ass cheeks. While letting out a moan that she would even be ashamed to utter alone in bed at night, Ashley’s wide but firm ass lifted up and pushed out. The immediately recognizable sound of growth accompanied the stretching of flesh. Her rear expanded ever larger, pushing from out of the bottom of her skirt, showing the crotch of her stained, ivory panties to any eyes that would be cast from behind the two ripe pieces of ham. The massive buttocks shook with fatty oomph; Ashley looked almost laughable with two massive jugs on one end and a whale of an ass on the other, connected by the otherwise hot body of a young babe. Ashley had been warped into an exaggerated personification of sex. A series of throat spasms rocked Ashley. Need seeped up along her esophagus like thick, warm milk. Ashley sputtered before a translucent white glop oozed from her lips. In any other situation at any other time, she would have reached up and panicked, but the vibration of a full body orgasm was beginning to take hold. “Something terrible is happening,” she shouted in her mind; yet she could not get herself to move. Droplets of cloudy ooze gathered at the tips of her sausage-y nipples, accumulating into teardrops before splashing upon the ground. Excitement consumed her massive breasts, but the sensations went beyond the mere jolts that came with fondling or sucking; it felt like her tits were actually being fucked. “But how was that possible,” she thought to herself. She writhed and moaned, while shots of girly jizz began erupting even faster and harder from her overworked pussy. Ashley’s hand reached down between her legs. Her skirt now wrapped around her waist like a belt gave her easy access, and she began to abuse her twat most furiously. She felt her rectum load up and then release eject from itself a sticky torrent. In that instant, she began cumming from her asshole, two streams flowing from out underneath either side of knotted panties. Ashley tried to pull herself forward with the other hand and stand. She couldn’t let Uume degrade her in such a way. But she found getting to her feet almost impossible. The moment she began to put her own weight down, she toppled forward, her left foot slipping from inside of its boot. Ashley, still seemingly cumming from almost every orifice, looked back at her foot. The sock didn’t appear to be gripping the appendage tight. Instead, it looked to be almost drooping over the bottom of her leg. The pit of Ashley’s stomach felt empty. What was happening now? Something was certainly not right. She grabbed the end of the white sock and clamped on her foot. It felt mushy! Pulling the sock off, Ashley was met with the ghastly image of her foot…melting. It looked like the fetishistic centerpiece of a Dali painting. The toes were soft and had a semi-liquid appearance, while teardrop shaped droplets hung from the sole. Despite the horrible transfiguration taking place before her eyes, Ashley felt no pain in her foot. Instead she felt nothing in it but intense, burning need. If there were a such thing as a foot orgasm, she was indeed experiencing it. Her mind now erratic and twisted with both horror and horniness, Ashley attempted to crawl forward, trying to escape the sinister curse that had befallen her. Slapping her hand down, Ashley’s palm simply sunk into the ground, a pool of creamy substance collecting around the disappearing appendage. Beads of sensual fluids ran down her arm like drops of preseminal perspiration. Rows of ropey lines reached between her legs and the ground, like pieces of glue stuck between two separating sheets of paper. The non-ripped part of her shirt darkened before turning slimey with the cum-ifying transmutation of her tummy that was transpiring just beneath the fabric. The rest of Ashley’s arm submerged into the growing lake where her hand originally had been. A buttery pool leached from underneath her body, gathering in a vaguely human shape on the forestry floor. Ashley felt her insides liquefying only to be ejaculated from her spasming pussy. With her other quickly melting arm, she twisted onto her back. While turning, her skirt and panties easily slid from her slippery hips and piled at the bottom of egs that were fast sinking into a velvety cum pond. Torrents of periurethral discharge flowed like little avalanches down the highest parts of Ashley’s giant boobs. However, her rack seemed to sink down quicker than the rest of her into the giant wad of cum that was once her ventral side – two massive flesh mountains falling into the sea. Cum that was once Ashley’s body was strewn throughout her hair. She looked like a sex doll bathed in milk. Her swollen ass sank down next. While parts of her legs and arms remained, they simply stood as independent and unattached lengths in a borderless mass of orgasmic fluids. The look of uninhibited release was practically frozen on Ashley’s face as it succumbed to the pool. Cum poured back down into her mouth and hair melted into a semen-like substance. Visions of shapes and distinct colors gave way to the filtered moonlight as her eyes became covered with a cum blanket. Ashley’s face disappeared under the setting cream. Hips and a still squirting vagina was the only distinct body construct that remained. Ashley’s pussy continued to pump, while the hips lowered into the rest of her. She felt every unrelenting orgasm, every contraction in her loins. With one final spurt, her twat erupted in a fountain of jizz that then consumed it. Ashley had been both literally and figuratively turned into a giant puddle of cum. Every fiber of her being was morphed and changed into the physical manifestation of a wanton orgasm. Uume had used her own pussy against her. Being the most clever virility demon, he even circumvented the most basic tenet of psychology and made dualism a reality: Even though Ashley did not necessary have a brain, her mind was perfectly intact. She felt nothing but a nightlong orgasm. She could contemplate her existence – the truth that she was not a young woman in that moment, she was a puddle of sticky jizz, and Uume had made her into that sad form. The pool of cum that was Ashley lied there for the rest of that hot, Alabama night – just a big goopy puddle with a bunch of ruined clothes strewn within it. *** Ashley snapped up, as if she had awoken from the dead. Blood instantly rushing to her head, she rubbed the sleep from her eyes. The rays of the Southern sun were beating down upon her naked chest. “Di…did that really happen last night,” she thought to herself. Ashley inspected herself. Relief came over her. Ashley’s breasts and butt were normal size. Feeling slightly silly, she pinched herself, making sure that the flesh on her arm was indeed still real and not gooey discharge. She found her skirt and slid it back on. The rear seam had torn slightly, as though it were distended over something that was a size for which it was not made. She pulled on her ripped shirt and held it closed over her breasts with one hand. This was going to be one humiliating walk of shame back home. Upon arriving at the apartment complex, she was relieved that all of her neighbors were apparently still sleeping. Pushing the ajar apartment door the rest of the way open, she saw the evidence from the previous night – the overturned chair, the table up against the counter. The truth stared back at her. She had been spanked, molested, swollen in the most sexual of ways, and then reduced to a puddle of cum. Ashley reached down, finding her car keys on the floor. She looked over and saw the cell phone that was ripped from her hands the night before. Picking it up, she pressed “Return Call.” The phone rang twice. “Ashley! Oh thank heavens! You’re alright.” Ashley tried her damnedest not to break down. “Mabel, Uume…he’s real, and he was here last night.” There was a slight hesitation in Mabel’s voice. “We really need to talk…”
http://docs-lab.com/submissions/1321/consequences-of-the-talking-board-part-iii-ashleys-transformation
2017-02-19T18:41:24
CC-MAIN-2017-09
1487501170249.75
[]
docs-lab.com
public class TransactionTimedOutException extends TransactionException Thrown by Spring's local transaction strategies if the deadline for a transaction has been reached when an operation is attempted, according to the timeout specified for the given transaction. Beyond such checks before each transactional operation, Spring's local transaction strategies will also pass appropriate timeout values to resource operations (for example to JDBC Statements, letting the JDBC driver respect the timeout). Such operations will usually throw native resource exceptions (for example, JDBC SQLExceptions) if their operation timeout has been exceeded, to be converted to Spring's DataAccessException in the respective DAO (which might use Spring's JdbcTemplate, for example). In a JTA environment, it is up to the JTA transaction coordinator to apply transaction timeouts. Usually, the corresponding JTA-aware connection pool will perform timeout checks and throw corresponding native resource exceptions (for example, JDBC SQLExceptions). ResourceHolderSupport.getTimeToLiveInMillis(), Statement.setQueryTimeout(int), SQLException, TransactionTimedOutException(String msg) msg- the detail message public TransactionTimedOutException(String msg, Throwable cause) msg- the detail message cause- the root cause from the transaction API in use
http://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/transaction/TransactionTimedOutException.html
2017-02-19T19:00:22
CC-MAIN-2017-09
1487501170249.75
[]
docs.spring.io
Groovy is a powerful tool. Like other powerful tools (think of a chainsaw) it requires a certain amount of user expertise and attention. Otherwise, results may be fatal. Following code fragments are allowed in Groovy but usually result in unintended behaviour or incomprehensible problems. Simply don't do this and spare yourself some of the frustration that new and unskilled Groovy programmers sometimes experience. 1. Accessing an object's type like a property Using .class instead of .getClass() is ok - as long as you know exactly what kind of object you have. But then you don't need that at all. Otherwise, you run in the risk of getting null or something else, but not the class of the object. 2.. (One measure is to put constant expressions in comparisons always before the equals operator.) 5. Specifying the wrong return type when overriding a method (Groovy version < 1.5.1 only) Such a method may be considered as overloaded (a different method with the same name) and not be called as expected..
http://docs.codehaus.org/pages/viewpage.action?pageId=233051235
2014-04-16T16:33:46
CC-MAIN-2014-15
1397609524259.30
[]
docs.codehaus.org
User Guide Local Navigation smartphone memory or on a media card. For information about the maximum size of files that you can send and receive, contact your administrator. Next topic: Send a file Previous topic: Add an emoticon Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/smartphone_users/deliverables/40182/About_files_Stime_377614_11.jsp
2014-04-16T16:44:29
CC-MAIN-2014-15
1397609524259.30
[]
docs.blackberry.com
Help Center Local Navigation I cannot type a passkey on a Bluetooth enabled device If you cannot type a passkey on a Bluetooth® enabled device, the passkey might already be defined. On your BlackBerry® device, in the Enter passkey for <device name> field, try typing 0000. Previous topic: Troubleshooting: Bluetooth technology Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/smartphone_users/deliverables/14928/I_cannot_type_a_passkey_on_Bluetooth_device_26049_11.jsp
2014-04-16T17:00:21
CC-MAIN-2014-15
1397609524259.30
[]
docs.blackberry.com
The, and all are updated with Java 5 support such as generics and varargs. NamedParameterJdbcTemplate wraps a JdbcTemplate to provide named parameters instead of the traditional JDBC "?" placeholders. This approach provides better documentation and ease of use when you have multiple parameters for an SQL statement. SimpleJdbcTemplate combines the most frequently used operations of JdbcTemplate and NamedParameterJdbcTemplate.cTemplate class and the related SimpleJdbcInsert and SimpleJdbcCall classes. Another subpackage named org.springframework.jdbc.core.namedparam contains the NamedParameterJdbcTemplate class and the related support classes. See Section 13.2, “Using the JDBC core classes to control basic JDBC processing and error handling”, Section 13.4, “JDBC batch operations”, and Section 13.5, in-memory database instances using Java database engines such as HSQL and H2. See Section 13.3, “Controlling database connections” and Section 13.8, “Embedded database support” The org.springframework.jdbc.object package contains classes that represent RDBMS queries, updates, and stored procedures as thread safe, reusable objects. See Section 13.6, “Modeling JDBC operations as Java objects”.This approach is modeled by JDO, although of course objects returned by queries are Section 13.2.4, “SQLExceptionTranslator”.). This section provides some examples of JdbcTemplate class usage. These examples are not an exhaustive list of all of the functionality exposed by the JdbcTemplate; see the attendant Javadocs for that. Here is a simple query for getting the number of rows in a relation: int rowCount = this.jdbcTemplate.queryForInt("select count(*) from t_actor"); A simple query using a bind variable: int countOfActorsNamedJoe = this.jdbcTemplate.queryForInt( "select count(*) from t_actor where first = ? where id = ?", "Banjo", 5276L); this.jdbcTemplate.update( "delete from actor where id = ?", Long.valueOf(actorId)); @Components namedParameters = Collections.singletonMap("first_name", firstName); return this.namedParameterJdbcTemplate.queryForInt(sql, namedParameters); } is a source of named parameter values to a SqlParameterSource Section 13.2.1.2, “JdbcTemplate best practices” for guidelines on using the NamedParameterJdbcTemplate class in the context of an application. The SimpleJdbcTemplate class wraps the classic JdbcTemplate and leverages Java 5 language features such as varargs and autoboxing. The value-add of the SimpleJdbcTemplate class in the area of syntactic-sugar is best illustrated with a before-and-after example. The next code snippet shows data access code that uses the classic JdbcTemplate, followed by a code snippet that does the same job with Object[] {specialty, age}, mapper); } Here is the same method, with the SimpleJdbcTemplate. // use of varargs since the parameter values now come // after the RowMapper parameter return this.simpleJdbcTemplate.queryForObject(sql, mapper, specialty, age); } See Section 13.2.1.2, “JdbcTemplate best practices” for guidelines on how to use the SimpleJdbcTemplate class in the context of an application. acual database you are using are used. The SQLErrorCodeSQLExceptionTranslator applies matching rules in the following sequence: Any custom translation implemented by a subclass. Normally the provided concrete SQLErrorCodeSQLExceptionTranslator is used so this rule does not apply. It only applies if you have actually provided a subclass implementation. Any custom implementation of the SQLExceptionTranslator interface that is provided as the customSqlExceptionTranslator property of the SQLErrorCodes class. The list of instances of the CustomSQLErrorCodesTranslation class, provided for the customTranslations property of the SQLErrorCodes class, are searched for a match. Error code matching is applied. Use the fallback translator. SQLExceptionSubclassTranslatoroplate;))"); } } Some query methods return a single value. To retrieve a count or a specific value from one row, use queryForInt(..), queryForLong(..) or"/> The DataSourceUtils class is a convenient and powerful helper class that provides static methods to obtain connections from JNDI and close connections if necessary. It supports thread-bound connections with, for example, DataSourceTransactionManager.. is an AbstractDataSource base class for Spring's abstract DataSource implementations that implements code that is common to all DataSource implementations. You extend the AbstractDataSource class if you are writing your own DataSource implementation.! Sometimes you need to access vendor specific JDBC methods that differ from the standard JDBC API. This can be problematic if Javadocs for more details. Most JDBC drivers provide improved performance if you batch multiple calls to the same prepared statement. By grouping updates into batches you limit the number of round trips to the database. This section covers batch processing using both the JdbcTemplate and the SimpleJ) { int[] updateCounts =(); } } ); return updateCounts; } // ...(final List<Actor> actors) { SqlParameterSource[] batch = SqlParameterSourceUtils.createBatch(actors.toArray()); int[] updateCounts = namedParameterJdbcTemplate.batchUpdate( "update t_actor set first_name = :firstName, last_name = :lastName where id = :id", batch); return updateCounts; } // ...); } int[] updateCounts = jdbcTemplate.batchUpdate( "update t_actor set first_name = ?, last_name = ? where id = ?", batch); return updateCounts; } // ... additional methods } All of the above batch update methods return an int array containing the number of affected rows for each batch entry. This count is reported by the JDBC driver. If the count is not available, the JDBC driver returns a -2 value.) { Collection<Object[]> batch = new ArrayList<Object[]>(); for (Actor actor : actors) { Object[] values = new Object[] { actor.getFirstName(), actor.getLastName(), actor.getId()}; batch.add(values); } the batch size provided for all batches except for the last one that might be less, depending on the total number of updat objects provided. The update count for each update stament is the one reported by the JDBC driver. If the count is not available, the JDBC driver returns a -2 value. executeReturning executeReturningKeyHolder method. You can limit the columns for an insert by specifying a list of column names with CaseInsensitiveMap from the Jakarta Commons project. To do the latter, you create your own JdbcTemplate and set the setResultsMapCaseInsensitive property to true. Then you pass this customized JdbcTemplate instance into the constructor of your SimpleJdbcCall. You must include the commons-collections.jar. To define a parameter for the SimpleJdbc classes and also for the RDBMS operations classes, covered in Section 13.6, = ?"); super I nOut parameters, parameters that provide an in value to the procedure and that also return a value. For i oracle.jdbc org.springframework.jdbc.core.RowMapper; import java.sql.ResultSet; import java.sql.SQLException; import com.foo.domain.Title; superclass' untyped execute(Map parameters) method (which has protected access); for example: import oracle.jdbc.OracleTypes; import org.springframework.jdbc.core.SqlOutParameter; import org.springframework.jdbc.core.SqlParameter; import org.springframework.jdbc.object.StoredProcedure; import javax.sql.DataSource; import java.sql.Types; import java.util.Date;); } } Common problems with parameters and data values exist in the different approaches provided by the Spring Framework JDBC.emplate take an additional parameter in the form of an int array. This array is used to indicate the SQL type of the coresponding parameter using constant values from the java.sql.Types class. Provide one entry for each parameter. You can use the SqlParameterValue class classes BeanPropertySqlParameterSource or MapSqlParameterSource. They both have methods for registering the SQL type for any of the named parameter values. You can store images, other binary objects, and large chunks of text. These large object are called BLOB for binary data and CLOB class, through the getLobCreator method, used for creating new LOB objects to be inserted. The LobCreator/LobHandler provides the following support for LOB inputCallbac k. It implements one method, setValues. This method provides a LobCreator that you use to set the values for the LOB columns in your SQL insert statement. For this example we assume that there is a variable, lobHandle r,, you use a JdbcTemplate with the same instance variable l ob SimpleJ. final TestItem - new TestItem(123L, "A test item", new SimpleDateFormat("yyyy-M-d").parse("2010-12-31"););; } }; The org.springframework.jdbc.datasource.embedded package provides support for embedded Java database engines. Support for HSQL, H2, and Derby is provided natively. You can also use an extensible API to plug in new embedded database types and DataSource implementations. An embedded database is useful during the development phase of a project because of its lightweight nature. Benefits include ease of configuration, quick startup time, testability, and the ability to rapidly evolve SQL during development.. The EmbeddedDatabaseBuilder class provides a fluent API for constructing an embedded database programmatically. Use this when you need to create an embedded database instance in a standalone environment, such as a data access object unit test: EmbeddedDatabaseBuilder builder = new EmbeddedDatabaseBuilder(); EmbeddedDatabase db = builder.setType(H2).addScript("my-schema.sql").addScript("my-test-data.sql").build(); // do stuff against the db (EmbeddedDatabase extends javax.sql.DataSource) db.shutdown() Spring JDBC embedded database support can be extended in two ways: Implement EmbeddedDatabaseConfigurer to support a new embedded database type, such as Apache Derby. Implement DataSourceFactory to support a new DataSource implementation, such as a connection pool, to manage embedded database connections. You are encouraged to contribute back extensions to the Spring community at jira.springframework.org.. Spring also supports Apache Derby 10.5 and above. To enable Derby, set the type attribute of the embedded-database tag to Derby. If using the builder API, call the setType(EmbeddedDatabaseType) method with EmbeddedDatabaseType.Derby. Embedded databases provide a lightweight way to test data access code. The following is a data access unit test template that uses an embedded database: public class DataAccessUnitTestTemplate { private EmbeddedDatabase db; @Before public void setUp() { // creates a HSQL in-memory db populated from default scripts classpath:schema.sql and classpath:test-data.sql db = new EmbeddedDatabaseBuilder().addDefaultScripts().build(); } @Test public void testDataAccess() { JdbcTemplate template = new JdbcTemplate(db); template.query(...); } @After public void tearDown() { db.shutdown(); } } runs the two scripts specified against the database: the first script is a schema creation, and the second is a test data set insert. The script locations can also be patterns with wildcards in the usual ant style used for resources in Spring (e.g. classpath*:/com/foo/**/sql/*-data.sql). If a pattern is used the scripts are executed in lexical order of their URL or filename. The default behaviour of the database initializer is to unconditionally execute the scripts provided. This will not always be what you want, for instance if running against an existing database that already has test data in it. The likelihood of accidentally deleting data is reduced by the commonest pattern (as shown above) that creates the tables first and then inserts the data - the first step will fail if the tables already exist. However, to get more control over the creation and deletion of existing data, the XML namespace provides a couple more options. The first is flag to switch the initialization on and off. This can be set according to the environment (e.g. to pull a boolean value from system properties or an environment bean), e.g. , e.g. <jdbc:initialize-database <jdbc:script </jdbc:initialize-database> In this example we are saying we expect that sometimes the scripts will be run against an empty dtabase drops, followed by a set of CREATE statements. The ignore-failures option can be set to NONE (the default), DROPS (ignore failed drops) or ALL (ignore all failures). If you need more control than you get from the XML namespace, you can simply use the DataSourceInitializer directly, and define it as a component in your application. data source instance and runs the scripts provided in its initialization callback (c.f. init-method in an XML bean definition or up data from the database on application startup. To get round this issue you two options: change your cache initialization strategy to a later phase, or ensure that the database initializer is initialized first. The first option might be easy if the application is in your control, and not otherwise. Some suggestions for how to implement this are Make the cache initialize lazily on first usage, which improves application startup time Have your cache or a separate component that initializes the cache implement Lifecycle or SmartLifecycle. When the application context starts up a SmartLifecycle can be automatically started if its autoStartup flag is set, and a Lifecycle can be started manually by calling ConfigurableApplicationContext.start() on the enclosing context. Use a Spring ApplicationEvent or similar custom observer mechanism to trigger the cache initialization. ContextRefreshedEvent is always published by the context when it is ready for use (after all beans have been initialized), so that is often a useful hook (this is how the SmartLifecycle works by default). The second option can also be easy. Some suggestions on how to implement this are Rely on Spring BeanFactory default behaviour, which is that beans are initialized in registration order. You can easily arrange that by adopting the common practice of a set of <import/> elements that order your application modules, and ensure that the database and database initialization are listed first Separate the datasource and the business components that use it and control their startup order by putting them in separate ApplicationContext instances (e.g. parent has the datasource and child has the business components). This structure is common in Spring web applications, but can be more generally applied. Use a modular runtime like SpringSource dm Server and separate the data source and the components that depend on it. E.g. specify the bundle start up order as datasource -> initializer -> business components.
http://docs.spring.io/spring/docs/3.1.2.RELEASE/spring-framework-reference/html/jdbc.html
2014-04-16T17:17:47
CC-MAIN-2014-15
1397609524259.30
[]
docs.spring.io
The BottomCommandArea Module is used to set the parameters of the Bottom Command Area. The bottom command area is a region at the bottom of the screen designed to hold RhoElements controls such as the SIP button or Zoom button to separate them from the rest of the user application. Items listed in this section indicate parameters, or attributes which can be set. Images can be specified as local to the device or on an HTTP / FTP server, just specify the required protocol as part of your URL (file://\, HTTP:// and FTP://). Image will be scaled to the size of the command area. JPEG and GIF images are only supported on WM devices. Both CE and WM support BMP files. All controls are designed to be shown on top of RhoElements. If you require to switch to an application other than RhoElements you should minimize RhoElements to ensure the buttons do not remain shown. (Not applicable to Enterprise Tablet) When the screen orientation changes, either using the ScreenOrientation tag or by rotating a device with hardware support, the command areas will automatically move and resize to fit the new layout. However the buttons themselves are not moved and in some cases this may result in them being off the screen or not in the expected position. If so they must be moved manually by detecting the ScreenOrientationEvent. The following example shows the BottomCommandArea, sets the height to 100 pixels and background color to red. <META HTTP- <META HTTP- <META HTTP- The following example shows the BottomCommandArea, sets the height to 100 pixels and displays image bca.gif on it (resizing the image if necessary). <META HTTP-
http://docs.rhomobile.com/en/2.2.0/rhoelements/bottomcommandarea
2014-04-16T16:25:23
CC-MAIN-2014-15
1397609524259.30
[]
docs.rhomobile.com
User Guide Local Navigation Set the default country code and area code - From the Home screen, press the Send key. - Press the Menu key. - Click Options. - Click Smart Dialing. - Set the Country Code and Area Code fields. - If necessary, set the Local Country Code and International Dialing Digits fields. - In the National Number Length field, set the default length for phone numbers in your country. - Press the Menu key. - Click Save. Next topic: Set options for dialing extensions Previous topic: About smart dialing Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/smartphone_users/deliverables/11298/Set_the_default_country_code_and_area_code_47_504533_11.jsp
2014-04-16T16:27:40
CC-MAIN-2014-15
1397609524259.30
[]
docs.blackberry.com
How to Upgrade Note SONAR-5037 - Remove the plugin "Upgrade" and "Uninstall" buttons on the "System Updates" tab because it cannot work ( Open) ): - Read the the plugins with compatible versions if necessary, see step #1) -.
http://docs.codehaus.org/pages/viewpage.action?pageId=237371747
2014-04-16T16:11:34
CC-MAIN-2014-15
1397609524259.30
[]
docs.codehaus.org
Search results Search results are represented by the SearchResult class, an instance of which is returned by the Search method of the Index class. The Search method of the IndexRepository class also returns an instance of the SearchResult class. The SearchResult class contains the following members: - The DocumentCount property returns the number of documents found. - The OccurrenceCount property returns the total number of occurrences found. - The Truncated property returns a value indicating that the result is truncated due to limits specified in the search options. - The Warnings property returns a warnings describing the result, for example, a warning about the presence of stop word in a search query. - The NextChunkSearchToken property returns a chunk search token to search for the next chunk. For details on search by chunks, see the Search by chunks page. - The StartTime property returns the start time of the search. - The EndTime property returns the end time of the search. - The SearchDuration property returns the search duration. - The GetFoundDocument method returns the found document by index. - The GetEnumerator method returns an enumerator that iterates through the collection of the documents found. The found document is represented by an instance of a FoundDocument class. The FoundDocument class contains the following members: - The DocumentInfo property returns the document info object containing the file path, the file type, the format family, and the inner document path for items of container documents. - The Relevance property returns the relevance of the document in the search result. - The OccurrenceCount property returns the number of occurrences found in the document. - The FoundFields property returns the found document fields. - The Terms property returns the found terms. The value is evaluated each time the property is accessed based on the data for each document field found. - The TermSequences property returns the found term sequences. - The Serialize method serializes the current found document instance to a byte array. - The Deserialize method deserializes an instance of found document from a byte array. The found document field is represented by an instance of a FoundDocumentField class. The FoundDocumentField class contains the following members: - The FieldName property returns the field name. - The OccurrenceCount property returns the number of occurrences found. - The Terms property returns the terms found. - The TermsOccurrences property returns the occurrences of the found terms. - The TermSequences property returns the term sequences found. - The TermSequencesOccurrences property returns the occurrences of the found term sequences. - The Serialize method serializes the current found document field instance to a byte array. - The Deserialize method deserializes an instance of found document field from a byte array. The following example shows how to print information on the documents found in the console. C# string indexFolder = @"c:\MyIndex\"; string documentFolder = @"c:\MyDocuments\"; // Creating an index Index index = new Index(indexFolder); // Indexing documents from the specified folder index.Add(documentFolder); // Creating search options SearchOptions options = new SearchOptions(); options.FuzzySearch.Enabled = true; // Enabling the fuzzy search options.FuzzySearch.FuzzyAlgorithm = new TableDiscreteFunction(3); // Setting the maximum number of differences to 3 // Search for documents containing the word 'Einstein' or the phrase 'Theory of Relativity' SearchResult result = index.Search("Einstein OR \"Theory of Relativity\"", options); // Printing the result Console.WriteLine("Documents: " + result.DocumentCount); Console.WriteLine("Total occurrences: " + result.OccurrenceCount); for (int i = 0; i < result.DocumentCount; i++) { FoundDocument document = result.GetFoundDocument(i); Console.WriteLine("\tDocument: " + document.DocumentInfo.FilePath); Console.WriteLine("\tOccurrences: " + document.OccurrenceCount); for (int j = 0; j < document.FoundFields.Length; j++) { FoundDocumentField field = document.FoundFields[j]; Console.WriteLine("\t\tField: " + field.FieldName); Console.WriteLine("\t\tOccurrences: " + document.OccurrenceCount); // Printing found terms if (field.Terms != null) { for (int k = 0; k < field.Terms.Length; k++) { Console.WriteLine("\t\t\t" + field.Terms[k].PadRight(20) + field.TermsOccurrences[k]); } } // Printing found phrases if (field.TermSequences != null) { for (int k = 0; k < field.TermSequences.Length; k++) { string sequence = string.Join(" ", field.TermSequences[k]); Console.WriteLine("\t\t\t" + sequence.PadRight(30) + field.TermSequencesOccurrences.
https://docs.groupdocs.com/search/net/search-results/
2021-04-10T22:07:43
CC-MAIN-2021-17
1618038059348.9
[]
docs.groupdocs.com
WP Simple Pay offers a settings area to control how Customers can manage their Subscriptions. These settings can be found in Simple Pay Pro → Settings → Customers → Subscription Management Note: Subscription capabilities are included with a Plus or higher license. Note: WP Simple Pay does not store any Customer records on-site, nor does it create WordPress user accounts for any purchases. Customers referenced in this article refer to Stripe Customer records. WP Simple Pay’s cart-free and account-free nature means there is no “my account” area to retrieve and manage past purchases. To ensure your customers can access their previous purchases ensure you have enabled the “Upcoming Invoice” email and utilized the {update-payment-method-url} template tag which provides Customers with a unique URL that can be used to manage the Subscription. Subscription Management Control how Customers can manage their Subscription. - None When a Customer returns to their Payment Receipt only the original receipt details will be available. The Subscription is only managed by the site owner. - On-Site Allow Customers to update their Subscription’s Payment Method through one of the Payment Form’s available methods. - Stripe Customer Portal Manage the subscription through Stripe’s hosted Customer Portal. The Customer Portal can be configured to allow the Payment Method to be updated and the Subscription Plan to be managed. Cancel Subscriptions If using the “On-Site” method for updating Subscriptions you can choose to give Customers the ability to cancel their own Subscription (this setting is managed in Stripe if using the Customer Portal). - Cancel immediately Immediately cancels the Subscription which cannot be reactivated. - Cancel at end of billing period Cancels the Subscription at the end of the current billing period. Before the billing period ends the Subscription can be reactivated for continued billing. Allowing Subscription Management for Subscriptions Created Prior to Version 3.7.0 Subscriptions that were purchased prior to WP Simple Pay 3.7.0, or created manually in Stripe, will not receive Upcoming Invoice email reminders. These Subscriptions will not contain the necessary identifying information WP Simple Pay uses to determine the validity of a Subscription. In order to send emails for these Subscriptions the Subscription must contain the following pieces of Metadata: simpay_form_id simpay_subscription_key When viewing the Subscription in the Stripe dashboard, click “Edit” in the “Metadata” section: Add the aforementioned pieces of metadata. simpay_form_id should match the ID of the payment form the Subscription originated from. simpay_subscription_key can be any arbitrary unique string. simpay_subscription_key is used by WP Simple Pay to create a URL to the Payment Confirmation page containing the Update Payment Method form.
https://docs.wpsimplepay.com/articles/customers/
2021-04-10T22:56:55
CC-MAIN-2021-17
1618038059348.9
[]
docs.wpsimplepay.com