content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
Managing Global Block Lists
If you globally block an email address or domain, then mail from that email address or domain to any account will be blocked before it reaches a user's inbox. No spam checking is performed on blocked entries, however, block list testing is performed after a message is received, so front line tests such as RBL, SPF or greylisting may still apply.
Go to Filter Rules > Global Block List to manage block list entries.
Adding a Global Block List Entry
Go to Filter Rules > Global Block List > Blocked Email Addresses to add an email address to the global block list, or go to Filter Rules > Global Block List > Blocked Domains to add a domain.
Click Add... and the Add window displays.
Enter the Sender Email: in the form of [email protected] or Sender Domain: in the form of example.com.
For a domain entry, check Include Subdomains: for subdomains to also be blocked.
Enter any optional comments in the Comments: field.
Click Save.
Deleting a Global Block List Entry
To delete an individual email address or domain, click the delete
icon in the Options column to the right of the listing. To delete multiple entries at once, check the box
to the left of the listings you want to delete.
Click Delete… under Blocked Email Addresses or Blocked Domains.
Importing Global Block List Entries
Create a single text file containing the entries to import. Both email addresses and domains can be imported together from the same text file. The file must have one email address or domain (preceded by the '@' sign) per line. For example:
[email protected]
@example.com
Note
All lines starting with a '#' or ';' are treated as comments and ignored in an import file.
Click Import… and select the text file to import. Click Open.
Email addresses will be imported to the Blocked Email Addresses and domains will be imported to the Blocked Domains. | https://docs.titanhq.com/en/9006-managing-global-block-lists.html | 2021-09-16T18:18:38 | CC-MAIN-2021-39 | 1631780053717.37 | [array(['image/uuid-c2329093-57cb-45e3-9f37-4b7ca0edc9b2.jpg',
'STG-global_blocklist.jpg'], dtype=object) ] | docs.titanhq.com |
Editing page layouts
The structure of every portal page template is determined by a page layout. Page layouts consist of layout code and web part zones that specify regions where designers can place web parts. Page layouts allow you to define the basic layout and design of your website.
There are two general types of page layouts:
- Custom - used only by one specific page template.
- Shared - stored as separate objects that you can assign to any number of page templates. Modifying a shared layout affects all templates that use it.
Editing layouts
To edit the layout of a page:
- Open the Pages application.
- Select the page in the content tree.
- Switch to the Design tab.
- Right-click the green template header and click Edit layout in the menu.
- Modify the layout code as required.
Note: when removing web part zones from a layout, make sure you remove all the web parts in the zone first.
The Layout type selector allows you to choose between two types of layout code:
Previewing layouts
You can preview page layouts by clicking Preview in the header of their editing dialog. You can then write the layout code side-by-side with a preview of how the changes affect the live site version of the page.
See also: Previewing design changes
Removing web part zones from page layouts
Removing layouts
Example - Layout code
Page layouts are composed of standard HTML elements, which means you have full control over how the system renders the page. You can choose between table and CSS‑based layouts.
The following sample page layout uses a table to define a two-column structure:
<table> <tr> <td> <cms:CMSWebPartZone </td> <td> <cms:CMSWebPartZone </td> </tr> </table>
The following layout code defines the same two-column structure, but using DIV elements and CSS styles:
<div style="width: 100%;"> <div style="width: 50%; float: left;"> <cms:CMSWebPartZone </div> <div style="width: 50%; float: right;"> <cms:CMSWebPartZone </div> </div>
Adding CSS styles to layouts
Page layouts allow you to directly define any CSS classes used within the layout code.
Requirement: Enable the Allow CSS from components setting in Settings -> System -> Performance.
- Click Add CSS styles below the page layout's code. The CSS styles editor appears.
- Enter the definitions of the required CSS classes.
- Click Save.
All pages that use the layout automatically load the specified styles (in addition to the website or page‑specific stylesheet).
See also: Adding CSS to page components
Creating conditional layouts
When editing the code of ASCX page layouts, you can Insert Conditional layout elements. This allows you to create flexible layouts that display content based on certain criteria. The page layout renders the content between the CMSConditionalLayout tags only if the conditions specified by the properties are fulfilled.
For example:
<div class="padding"> <cms:CMSConditionalLayout <cms:CMSWebPartZone </cms:CMSConditionalLayout> <cms:CMSConditionalLayout <cms:CMSWebPartZone </cms:CMSConditionalLayout> <cms:CMSConditionalLayout <cms:CMSWebPartZone </cms:CMSConditionalLayout> </div>
This sample layout displays one of three possible web part zones based on the roles of the user viewing the page. Gold partners see the content of the zGold zone, Silver partners see the zSilver zone and all other users see zDefault.
You can configure the following properties for conditional layouts:
Creating pages with shared layouts
When creating a new page, you can select the Create a blank page with layout option and choose from a number of predefined page layouts.
If you leave the Copy this layout to my page template option at the bottom of the selection dialog checked, the system creates a custom copy of the layout specifically for the page template. Otherwise the template uses the shared layout directly. If you disable the option and then modify the layout code, the changes affect all pages with templates that use the shared page layout. Leave the option enabled unless you wish to create pages with a shared layout that can be edited in one place.
Managing shared page layouts
You can manage the pre-defined (shared) page layouts in the Page layouts application. When editing a layout on the General tab, you can modify its code and also configure the following properties:
On the Page templates tab, you can check which templates currently use the given layout. Templates with a custom page layout are not included here, even if they were created as a copy based on the currently edited shared layout.
Using layout web parts
You can alternatively define the layout of page templates by adding special web parts designed for this purpose — Layout web parts.
This approach allows you to set up the structure of page templates and add web part zones without writing or editing the page layout code. Simply create a page containing a single zone, add a layout web part, and then configure the required layout via the web part's properties dialog or even directly on the Design tab. | https://docs.xperience.io/k9/developing-websites/developing-websites-using-the-portal-engine/editing-page-layouts | 2021-09-16T18:40:33 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.xperience.io |
Date: Tue, 07 Feb 2012 15:21:58 +0100 From: Damien Fleuriot <[email protected]> To: [email protected] Subject: Re: on hammer's, security, and centrifuges... Message-ID: <[email protected]> In-Reply-To: <[email protected]> References: <CAE7N2ke-eEg3QqD3OfD_AJ6Yx78wwhOiApwVYsDQXhxU14JgAQ@mail.gmail.com> <[email protected]>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help
On 2/7/12 2:29 PM, Steve Bertrand wrote: > On 2012.02.07 07:03, Henry Olyer wrote: > >> Look, I'm going to use FreeBSD as long as both it and I am around, it's >> just the best choice for me, for my user's. But we need to improve >> security. > > I'm very happy with the security and stability of FreeBSD, and praise > the sec team and contributors to make it so. > > I've run literally hundreds of FreeBSD boxes, mostly in a busy ISP > environment since 4.3, and never have been hacked after normal system > protections are in place. > >> For now, until I remake my laptop, I'm going to disable the ath0 >> wireless. >> >> How? What's the best method to make certain that my wireless chip is >> turned off? > > Comment out the configuration lines for the ath interface in rc.conf, or > to remove it completely, recompile the kernel after removing 'device ath'. > Also, make sure to NOT build the kernel module for ath...
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=460610+0+archive/2012/freebsd-questions/20120212.freebsd-questions | 2021-09-16T20:10:00 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.freebsd.org |
Rocket Docs
What is Ramifi?
And how does it fight inflation?
Ramifi is a project whose aim is to take on the role of money in the new decentralized economy being built. There have been many clever attempts, with each growing progressively more sophisticated than the last from USDT to DAI to Ampleforth. USDT gave us an easy off ramp to escape the volatility inherent to the crypto markets. DAI did the same without the need to trust that a 3rd party had the reserves to make good on its debts. AMPL took it a step further without the need for over collateralization of assets for it to be produced.
Stablecoins are all aiming to become a decentralized store of value, yet they all fall short with being pegged to the US Dollar, whose value is continuously decreasing. The USD is a global medium of exchange recognized across the globe and is a denomination that people understand, an understandable choice for stablecoins.
The solution then is not to try to create a new medium of exchange, but rather to continue using it while implementing a built in hedge that ignores any further increases in USDs supply, and the resulting loss of purchasing power it inherits.
This can be done by simply taking a "snapshot" of what the products and goods we use today cost, and then adjusting our stable coins relative USD value via supply constriction to ensure it continues to have that purchasing power. Put simply, equal and opposite deflation to counter act USD inflation.
Inflation is taxation without legislation.
Milton Friedman | https://docs.ramifi.org/ | 2021-09-16T18:45:52 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.ramifi.org |
Hide Menu
Hide the menu list field on the left.
Change View
There are options to switch between the Desktop-Mobile-Tablet views to the Client screen. Automatically selects according to the input device.
It redirects to the form defined as the landing page for the user.
Tasks
It shows users tasks from BPM and Messages info.
Exit
User log out with this button.
| https://docs.xpoda.com/hc/en-us/articles/360011677919-Toolbar | 2021-09-16T19:10:17 | CC-MAIN-2021-39 | 1631780053717.37 | [array(['/hc/article_attachments/360008565360/xpoda888.png',
'xpoda888.png'], dtype=object)
array(['/hc/article_attachments/360008565340/xpoda555.png',
'xpoda555.png'], dtype=object)
array(['/hc/article_attachments/360008565380/xpoda6666.png',
'xpoda6666.png'], dtype=object)
array(['/hc/article_attachments/360008565320/xpoda222.png',
'xpoda222.png'], dtype=object)
array(['/hc/article_attachments/360008565300/xpoda333.png',
'xpoda333.png'], dtype=object) ] | docs.xpoda.com |
This is the type of authority that does not allow any changes to be made to the objects on the form screen. The users to whom this type of authorization applies will also be defined in this section.
Properties
Fields: All fields within the form are listed and multiple selections can be made from the list.
Users Type: Selected (Users selected in the Users list) – Unselected (Users not selected from the Users list) – All Users (All users registered to the system)
Users: All registered users are listed and multiple selections can be made from the list. | https://docs.xpoda.com/hc/en-us/articles/360011678659-It-can-not-change-fields | 2021-09-16T18:49:45 | CC-MAIN-2021-39 | 1631780053717.37 | [array(['/hc/article_attachments/360008573059/y2.jpg', 'y2.jpg'],
dtype=object) ] | docs.xpoda.com |
Apple
Configuring Apple as a Social Provider
In this section, we will show you how to provide an option to login with Apple login, on your cidaas Login page.
When you configure apple as a social provider in cidaas, you will get a new option called Login with Apple in login page of your cidaas application and Signup with Apple in the registration page of your cidaas application.
Overall process would be
- On Apple Developer console : Sign in to Apple Developer account
- On Apple Developer console : Create an App ID
- On Apple Developer console : Create a Services ID
- On Apple Developer console : Create a Private Key for Client Authentication
- On Apple Developer console : Generate client secret
- On cidaas admin portal : Add Apple app id and client secret in cidaas application
- On cidaas admin portal : Select appropriate applications for which you want to enable Apple as a social provider
We'll guide you through the process — it's pretty easy.
If you are in this section, we assume you already have an active apple developer account, if not create a new account, before proceeding.
1. Login to your Apple developer account.
Create an App ID
In this section, you'll find steps to register a new identifier in the Apple developer portal to create app id and how to enable "sign in with apple" option for the app id.
1. Click on Certificates, Identifiers and Profiles option.
2. From the sidebar, choose Identifiers then click on Add button, as shown below.
3. Choose App IDs and click on Continue.
4. In the next screen, enter a description and Bundle ID for the App ID.
Then you need to scroll down through the list of capabilities and check the box next to Sign In with Apple.
Then click on Continue.
5. Review your app id setup and then click on Register button.
Create a Services ID
In this section, you'll find steps to create service id and how to enable "sign in with apple" option for the service id and how to define the domain in which your app is running on and the redirect URLs used during the login flow.
1. From the sidebar, choose Identifiers then click on Add button, as shown below.
2. Choose Services IDs and click on Continue.
3. In the next screen, enter a description and Identifier for the Service ID.
Make sure to check the Sign In with Apple checkbox. Click on the Configure button next to Sign In with Apple.
4. In the Web Authentication Configuration screen that appears, choose your App id in the Primary App ID dropdown. And also enter the domain name of your app and enter the redirect URL for your app as well. Click on Next.
5. Review your service id details and then click on Save.
Create a Private Key
In this section, you'll find steps to create private key by configuring your recently created app id.
1. From the sidebar, choose Keys then click on Add button, as shown below.
2. Give suitable name for your key and check the Sign In with Apple checkbox. Then click on the Configure button.
3. Select your primary App ID you created earlier and click on Save.
4. In the next step, review your key details and click on Register button.
5. Make note of the Key ID which is required to generate client secret. Click on Download button to download your private key.
Generate client secret
Now you need to generate client secret from the private key obtained.
1. To generate client secret, use the following node js script
const fs = require("fs"); const jwt = require("jsonwebtoken"); const privateKey = fs.readFileSync("AuthKey_2AKZJ3L7T5.p8").toString(); //your downloaded private key path const jwtToken = jwt.sign({}, privateKey, { algorithm: "ES256", expiresIn: "150d", audience: "", subject: "de.cidaas.testservice", //your service id issuer: "**********", //your 10-character Team ID which you obtained during app id creation header: { alg: "ES256", kid: "2AKZJ3L7T5" //your Key ID which you obtained during private key creation } }); console.log("secret:", jwtToken, "\n");
If you run this script, you will get your client secret, as shown below. Make note of this client secret.
secret: eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCIsImtpZCI6IjlWUlVCQTRaNDgifQ.eyJpYXQiOjE1ODg2NzY1NjIsImV4cCI6MTYwMTYzNjU2MiwiYXVkIjoiaHR0cHM6Ly9hcHBsZWlkLmFwcGxlLmNvbSIsImlzcyI6IkJXTU03MlE1TTYiLCJzdWIiOiJkZS5jaWRhYXMudGVzdC1jZGMtcHJvZC1zZXJ2aWNlIn0.XPtxASA__aRBvz1rUfokMVbyZi_OVYQKQy9zyFrtmtNLzkzzvFmJbdQ09x5B4l9K9TOYP8HSWVBuQRNtn5Xc0Q
Configure social provider setup in cidaas
In this section, you'll find steps on how to add apple client id and apple client secret in your cidaas application and how to choose client applications for which you want to enable Apple as a social provider.
1. Now, navigate to cidaas Admin dashboard -> Settings -> Login Providers -> Social Providers
2. Click on edit icon corresponding to the Apple app from the list.
3. Enter the Service ID which you obtained from Apple as Client ID and enter the Client Secret which you obtained by extracting your apple private key. You can also enable/disable Apple option in cidaas admin portal as well as user portal as per your requirement. Click on Save button.
4. Under Configure Clients for Apple section, you find a list of various applications created in your cidaas account. Select appropriate applications for which you want to enable Apple as a social provider.
5. After mapping all the required clients, click on Save button.
| https://docs.cidaas.de/configuration-settings/social-providers/appleID.html | 2021-02-25T04:54:15 | CC-MAIN-2021-10 | 1614178350717.8 | [array(['../../assets/login_screen_with_social_providers.png', None],
dtype=object)
array(['../../assets/developer_portal_apple.png', None], dtype=object)
array(['../../assets/click_on_identifiers_menu_apple.png', None],
dtype=object)
array(['../../assets/add_identifier_apple.png', None], dtype=object)
array(['../../assets/app_ids_apple.png', None], dtype=object)
array(['../../assets/enter_bundle_id_apple.png', None], dtype=object)
array(['../../assets/sign_in_check_apple.png', None], dtype=object)
array(['../../assets/continue_app_id_apple.png', None], dtype=object)
array(['../../assets/team_id_apple.png', None], dtype=object)
array(['../../assets/add_identifier_apple.png', None], dtype=object)
array(['../../assets/select_service_id_apple.png', None], dtype=object)
array(['../../assets/continue_service_apple.png', None], dtype=object)
array(['../../assets/config_service_apple.png', None], dtype=object)
array(['../../assets/wac_service_config_apple.png', None], dtype=object)
array(['../../assets/save_service_id_apple.png', None], dtype=object)
array(['../../assets/add_key_apple.png', None], dtype=object)
array(['../../assets/key_name_apple.png', None], dtype=object)
array(['../../assets/select_app_id_key_apple.png', None], dtype=object)
array(['../../assets/register_key_apple.png', None], dtype=object)
array(['../../assets/download_pvt_key_apple.png', None], dtype=object)
array(['../../assets/social_provider_details.png', None], dtype=object)
array(['../../assets/social_provider_apple.png', None], dtype=object)
array(['../../assets/app_details_apple.png', None], dtype=object)
array(['../../assets/config_clients_apple.png', None], dtype=object)
array(['../../assets/agenda.png', None], dtype=object)
array(['../../assets/broken-link.png', None], dtype=object)
array(['../../assets/broken-link.png', None], dtype=object)] | docs.cidaas.de |
DataViewBase.PrintDirect() Method
Prints the grid using the default printer.
Namespace: DevExpress.Xpf.Grid
Assembly: DevExpress.Xpf.Grid.v19.1.Core.dll
Declaration
Remarks
Use the PrintDirect method without parameters, to immediately send the grid to a default printer, and the PrintDirect method with the queue parameter, to send the grid to a specific printer. Also, you can use the DataViewBase.Print method to print the grid using custom printing settings, specified by an end-user via the Print dialog.
To display the Print Preview of the grid, use the DataViewBase.ShowPrintPreview and DataViewBase.ShowPrintPreviewDialog methods. To export the grid, use the appropriate ExportTo~ method (e.g. DataViewBase.ExportToHtml, DataViewBase.ExportToPdf, etc.)
NOTE
The grid can be previewed, printed and exported only if the DXPrinting Library is available. | https://docs.devexpress.com/WPF/DevExpress.Xpf.Grid.DataViewBase.PrintDirect?v=19.1 | 2021-02-25T05:30:13 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.devexpress.com |
>>.
This command does not work for future dates.
Syntax
| gentimes start=<timestamp> [end=<timestamp>] [increment=<increment>]
Required arguments
- start
- Syntax: start=<timestamp>
- Description: Specify as start time.
- <timestamp>
- Syntax: MM/DD/YYYY[:HH:MM:SS] | <int>
- Description: Indicate the timeframe, for example: 10/1/2017 for October 1, 2017, 4/1/2017:12:34:56 for April 1, 2017.
Examples! | https://docs.splunk.com/Documentation/Splunk/8.0.2/SearchReference/Gentimes | 2021-02-25T06:02:04 | CC-MAIN-2021-10 | 1614178350717.8 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
package
Implements a service worker for Angular apps. Adding a service worker to an Angular app is one of the steps for turning it into a Progressive Web App (also known as a PWA).
At its simplest, a service worker is a script that runs in the web browser and manages caching for an application. Service workers function as a network proxy. They intercept all outgoing HTTP requests made by the application and can choose how to respond to them.
To set up the Angular service worker in your project, use the CLI
add command.
ng add @angular/pwa
The command configures your app to use service workers by adding the service-worker package and generating the necessary support files.
For more usage information, see the Service Workers guide.
© 2010–2020 Google, Inc.
Licensed under the Creative Commons Attribution License 4.0. | https://docs.w3cub.com/angular~10/api/service-worker | 2021-02-25T04:38:27 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.w3cub.com |
method
Laguerre.mapparms(self)[source]. | https://docs.w3cub.com/numpy~1.17/generated/numpy.polynomial.laguerre.laguerre.mapparms | 2021-02-25T05:37:56 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.w3cub.com |
ClustrixDB breaks each representation (primary key + table or other index) into smaller, more manageable segments called “slices”, each of which is assigned a portion of the representation’s rows. There are multiple copies of each slice, called replicas.
To modify the number of slices for an existing table or index, follow this syntax:
The following global variables impact ClustrixDB slicing.
The default slice size of 8 GB is optimal for most use cases. If a very large table results in more slices than available cores, performance might be impacted and increasing the max slice size may be recommended.
Tables and indexes should have a minimum number of slices equal to the number of nodes, with ALLNODES tables being an exception. Use the following query to identify tables that contain fewer slices than the current number of nodes:
During normal operation, relations are resliced on demand, however it can be advantageous to pre-slice tables for which large data growth is anticipated. Creating or altering a representation to have a slice count commensurate with the expected size will allow the cluster to add data to the representation at maximum speed as slice splitting will be unnecessary. For additional information, see Loading Data onto ClustrixDB.
Use the following equation to determine the optimal number of slices for a table: (expected table size + 10%) / rebalancer_split_threshold_kb)
The Rebalancer automatically splits a table slice or index slice when it reaches the threshold set by the global rebalancer_split_threshold_kb. However, some tables might benefit from more slices (for increased parallelism) or fewer slices (to reduce overhead of slices on the system). | https://docs.clustrix.com/display/CLXDOC/SLICES | 2021-02-25T05:45:44 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.clustrix.com |
All you need is a Kubernetes cluster and a git repo. The git repo contains manifests (as YAML files) describing what should run in the cluster. Flux imposes some requirements on these files.
Installing Flux¶
Here are the instructions to install Flux on your own cluster.
If you are using Helm, we have a separate section about this. | https://docs.fluxcd.io/en/1.21.2/get-started/ | 2021-02-25T05:05:27 | CC-MAIN-2021-10 | 1614178350717.8 | [] | docs.fluxcd.io |
Information of Selling Profiles
Before sending your product to eBay site need to define rules that performed to Prestashop Product to convert it to eBay Product.
Such rules in terms of.
| https://docs.salest.io/article/19-information-of-selling-profiles | 2021-02-25T05:11:05 | CC-MAIN-2021-10 | 1614178350717.8 | [array(['https://involic.com/images/prestabay-manual/prestashop-ebay-module-selling-profiles-57.png',
'PrestaShop ebay module — Selling Profiles PrestaShop ebay module — Selling Profiles'],
dtype=object) ] | docs.salest.io |
Applies To: Windows Server 2016, Windows Server 2012 R2, Windows Server 2012
When you are an administrator in the account partner organization in an Active Directory Federation Services (AD FS) deployment and you have a deployment goal to provide single-sign-on (SSO) access for employees on the corporate network to your hosted resources:
Employees who are logged on to an Active Directory forest in the corporate network can use SSO to access multiple applications or services in the perimeter network in your own organization. These applications and services are secured by AD FS.
For example, Fabrikam may want corporate network employees to have federated access to Web-based applications that are hosted in the perimeter network for Fabrikam.
Remote employees who are logged on to an Active Directory domain can obtain AD FS tokens from the federation server in your organization to gain federated access to AD FS-secured Web-based applications or services that also reside in your organization.
Information in the Active Directory attribute store can be populated into the employees' AD FS tokens.
The following components are required for this deployment goal:
Active Directory Domain Services (AD DS): AD DS contains the employees' user accounts that are used to generate AD FS tokens. Information, such as group memberships and attributes, is populated into AD FS tokens as group claims and custom claims.
Note
You can also use Lightweight Directory Access Protocol (LDAP) or Structured Query Language (SQL) to contain the identities for AD FS token generation.
Corporate DNS: This implementation of Domain Name System (DNS) contains a simple host (A) resource record so that intranet clients can locate the account federation server. This implementation of DNS may also host other DNS records that are required in the corporate network. For more information, see Name Resolution Requirements for Federation Servers.
Account partner federation server: This federation server is joined to a domain in the account partner forest. It authenticates employee user accounts and generates AD FS tokens. The client computer for the employee performs Windows Integrated Authentication against this federation server to generate an AD FS token. For more information, see Review the Role of the Federation Server in the Account Partner.
The account partner federation server can authenticate the following users:
Employees with user accounts in this domain
Employees with user accounts anywhere in this forest
Employees with user accounts anywhere in forests that are trusted by this forest (through a two-way Windows trust)
Employee: An employee accesses a Web-based service (through an application) or a Web-based application (through a supported Web browser) while he or she is logged on to the corporate network. The employee's client computer on the corporate network communicates directly with the federation server for authentication.
After reviewing the information in the linked topics, you can begin deploying this goal by following the steps in Checklist: Implementing a Federated Web SSO Design.
The following illustration shows each of the required components for this AD FS deployment goal.
See Also
AD FS Design Guide in Windows Server 2012 | https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/design/provide-your-active-directory-users-access-to-your-claims-aware-applications-and-services | 2017-06-22T18:51:04 | CC-MAIN-2017-26 | 1498128319688.9 | [array(['media/31394ea8-fecb-4372-ac3f-cc3cf566ffc9.gif',
'access to your claims'], dtype=object) ] | docs.microsoft.com |
JDocumentFeed::render
The "API17" namespace is an archived namespace. This page contains information for a Joomla! version which is no longer supported. It exists only as a historical reference, it will not be improved and its content may be incomplete and/or contain broken links.
JDocumentFeed::render
Description
Render the document.
public function render ( $cache=false $params=array )
- Returns The rendered data
- Defined on line 176 of libraries/joomla/document/feed/feed.php
See also
JDocumentFeed::render source code on BitBucket
Class JDocumentFeed
Subpackage Document
- Other versions of JDocumentFeed::render
User contributed notes
Code Examples
Advertisement | https://docs.joomla.org/API17:JDocumentFeed::render | 2017-06-22T18:34:51 | CC-MAIN-2017-26 | 1498128319688.9 | [] | docs.joomla.org |
Merging Pull Requests
Challenges of integrating a Pull Request
When a Pull Request has been reviewed and is ready to be merged it's usually marked with a GG comment meaning Good to Go. At this point any dev with commit rights should be able to merge it in the main repository. However we might need to rewrite the changes to keep the repository clean:
- By updating the commit message if it needs to be shortened or improved. For instance if the patch is related to a work item, the commit message should look like this:
\#12345: Short message A longer message than can span multiple lines and describe the reasoning behind the change. Work Item: 12345
- By rebasing the changes on the top of the branch to keep a more readable changelog.
- By squashing the changes into a single commit.
Doing so can be tedious depending on your level of knowledge on git. To help with this a specific git alias can be used.
GIT Alias to Merge changes from github
- In your user's profile folder (e.g.,
c:\users\sebros) open the file
.gitconfig.
- Anywhere in the file (at the end for instance) add a new alias like this:
[alias] accept-pr = "!f(){ git checkout -b PR $1 && git pull $2 $3 && git rebase $1 && author=`git log -n 1 --pretty='format:%an <%ae>'` && git reset $1 && git checkout $1 && git commit -a -m \"$4\" && git commit --amend --author=\"$author\" --no-edit && git branch -D PR; };f"
- If the
[alias]section already exists, keep it.
This
accept-pr command is now accessible from the git console and will apply all the necessary steps.
Warning
TODO: Orchard has moved to github, the following needs to be updated.
$1: The branch to apply the PR to, e.g.,
1.8.x,
1.x
$2: The url of the remote PR, e.g.,
$3: The branch to pull from the remote PR, e.g.,
issues/20311
$4: The commit message for the squashed commit, e.g.,
$'#1234: Short \n Long \n Work Item: 1234'
The parameters
$2 and
$3 can be found in the modal dialog which appears when clicking on the Accept link of the pull request page on codeplex.
For instance it will show up a line like this:
git pull issues/20797, where
-
$2 is
-
$3 is
issues/20797
Usage
git accept-pr 1.8.x issues/20797 $'#20797: Fixing taxonomy handler exception\n\nWork Item: 20797'
If this command results with an error, it's probably because the PR is too old and there are some merge conflicts. In this case reopen the PR by requesting the user to rebase his changes on the targeted branch or to merge the conflicts, clean you local changes, then try again. If at this point you don't know what you doing or you have a doubt, please contact another committer for help.
Finally, push the commits, and mark the PR as accepted. | http://docs.orchardproject.net/en/latest/Documentation/Merging-Pull-Requests/ | 2017-06-22T18:18:36 | CC-MAIN-2017-26 | 1498128319688.9 | [] | docs.orchardproject.net |
VMR Configuration Defaults
Unlike a Solace messaging appliance, a VMR starts with a basic configuration that enables many of the common services. The basic configuration can be modified as required.
Note: When the VMR is reloaded (using the CLI
reload command, for example), the current configuration is preserved. However, the Solace CLI command
enable> reload default-config will restore the VMR to its initial basic configuration.
The basic configuration enables most common services on the VMR. In particular, the following is included in the basic configuration:
- Message spool is configured and enabled. The maximum spool usage is set to 1500 MB.
- Message VPN named
defaultis enabled with no client authentication.
- The client username named
defaultin the Message VPN named
defaultis enabled. The client username named
defaulthas all settings enabled.
- All features are unlocked and do not require a product key.
- All services are enabled. The table below lists the default port numbers that are used for those services.
For information on how to modify the router configuration with the Solace CLI, refer to the sections provided in the Router Configuration category of the Solace customer documentation.
Note:
- TLS/SSL services will not become operational until a server certificate is installed.
- In cloud environments (AWS), ports are typically blocked by the default security group. To access these ports, you have to allow access in the security group. | http://docs.solace.com/Solace-VMR-Set-Up/VMR-Configuration-Defaults.htm | 2017-06-22T18:18:11 | CC-MAIN-2017-26 | 1498128319688.9 | [] | docs.solace.com |
- Storage >
- Storage Engines >
- WiredTiger Storage Engine >
- Change Standalone to WiredTiger
Change Standalone to WiredTiger¶
New in version 3.0: The WiredTiger storage engine is available.
Changed in version 3.2: WiredTiger is the new default storage engine for MongoDB.
This tutorial gives an overview of changing the storage engine of a standalone MongoDB instance to WiredTiger.
Considerations¶
This tutorial uses the
mongodump and
mongorestore
utilities to export and import data. Ensure that these MongoDB package
components are installed and updated on your system. In addition, make
sure you have sufficient drive space available for the
mongodump export file and the data files of your new
mongod instance running with WiredTiger...
mongod --storageEngine wiredTiger --dbpath <newWiredTigerDBPath>
You can also specify the options in a configuration file. To specify the storage engine, use
the
storage.engine setting.
Upload the exported data using
mongorestore.¶
mongorestore <exportDataDestination>
Specify additional options as appropriate. See
mongorestore for available options. | https://docs.mongodb.com/v3.4/tutorial/change-standalone-wiredtiger/ | 2017-06-22T18:27:30 | CC-MAIN-2017-26 | 1498128319688.9 | [] | docs.mongodb.com |
SkyWise™ Tiles API v2 (Beta)¶
The SkyWise™ Tiles API is an HTTP interface to WDT’s Weather as a Service® platform for interactive mapping applications. This document provides details about the API and its endpoints.
General Information¶
Endpoints¶
- Products
- Forecasts
- Frames
- Map Tiles
- Styles
- Datapoints | http://docs.api.wdtinc.com/skywise-tiles/en/latest/ | 2017-06-22T18:34:42 | CC-MAIN-2017-26 | 1498128319688.9 | [] | docs.api.wdtinc.com |
SublimeGit Documentation¶
SublimeGit is a full-featured Git plugin for Sublime Text 2. It has been developed to be easy to get started with. If you’re used to Git and dealing with Sublime Text packages, you can probably just install SublimeGit, and get right to work.
If installing Sublime Text packages, or using Git, is new to you, the Quickstart is a great place to start. It will get you set up, so you can go on to the tutorial.
Note
This documentation assumes some familiarity with Git. If you are not familiar with Git, be sure to check out the More Information section which contain links to a couple of resources for learning Git.
Contents¶
- Quickstart
- Tutorial
- Commands Reference
- Plugins
- Keyboard Shortcuts
- Customizations
- Troubleshooting
- More Information | http://sublimegit.readthedocs.io/en/latest/ | 2017-06-22T18:15:55 | CC-MAIN-2017-26 | 1498128319688.9 | [] | sublimegit.readthedocs.io |
You define the columns in a mining structure when you create the mining structure, by choosing columns of external data and then specifying how the data is to be used for modeling. Therefore, mining structure columns are more than copies of data from a data source: they define how the data from the source is to be used by the mining model. You can assign properties that determine how the data is discretized, properties that describe how the data values are distributed
Mining structure columns are designed to be flexible and extensible, because each algorithm that you use to build a mining model may use different columns from the structure to interpret the data. Rather than have one set of data for each model, you can use a single mining structure and use the columns in it to customize the data for each model.
Defining Mining Structure Columns
The basic data types and content types that define structure columns are derived from the data source that you use to create the structure. You can change these settings within the mining structure, and you can also set modeling flags and set the distribution for continuous columns.
The definition of a mining structure column must contain the following information:
ID: The unique name of the column, often the same as the name. This cannot be changed after you create the mining structure, whereas the name can be changed.
Name: A name or alias for the column.
Content: An enumeration that describes whether the data is discrete or continuous.
Type: An enumeration that indicates the general data type.
Distribution: An enumeration that describes the expected distribution of values. A distribution is included if the column is continuous.
Modeling flags: An enumeration that indicates how to handle missing values and so forth. Modeling flags can also be defined on the mining model, but the model flags are different than the flags used on structure columns.
Bindings: Properties that specify the source data.
Third-party algorithms may also include custom properties that can be defined on the mining structure column.
For more information about the data mining structure and the data mining model, see Mining Structures (Analysis Services - Data Mining).
Related Content
See the following topics for more information about how to define and use mining structure columns.
See Also
Mining Structures (Analysis Services - Data Mining)
Mining Model Columns | https://docs.microsoft.com/ko-kr/sql/analysis-services/data-mining/mining-structure-columns | 2017-06-22T19:23:42 | CC-MAIN-2017-26 | 1498128319688.9 | [] | docs.microsoft.com |
Workspace¶
Description
This package provides a Workspace container that can be used as a project space, team space or community space.
Note
Copied from the docs of the previously stand-alone ploneintranet.workspace. The contents might need some adjustment to recent developments.
Introduction¶
At its core, it’s a Dexterity Container with collective.workspace behavior applied.
On top of that, it provides a policy abstraction and user interface that enables intuitive management of security settings without having to access, and understand, the sharing tab in Plone. The sharing tab and functionality is retained as “advanced” settings to enable per-user exceptions to the default security settings.
Summary¶
This package provides a ‘workspace’ container and content workflow working in conjunction to provide flexible levels of content access in a Plone site.
It aims to provide a flexible team/community workspace solution, allowing.
Basic Usage¶.
Security Design¶
We spent some care in devising a terminology to precisely express the reasoning for all of this. Think of it as a domain-specific language. These terms will be capitalized below.
Personas¶
- Guest: a site user who is not a Participant in the Workspace.
- Participant: a site user with local permissions in the Workspace.
- Workspace Admin: manages users and the Workspace.
- Site Admin: manages users and permissions on the Plone site.
The Policies¶
Three realms of access are controlled via a single ‘policies’ tab on the workspace container:
External visibility¶
Who can see the workspace and its content?
- Secret
- Workspace and content are only visible to workspace members
- Private
- Workspace is visible to non-members
- ‘Published’ Workspace content only visible to workspace members
- Open
- Workspace is visible to non-members
- ‘Published’ Workspace content visible to all members and non-members
“Non-members” refers to users who have an account in the system but are not a member of this specific workspace. In no case is any workspace content exposed to anonymous users.
We take care to ensure that no content in a workspace can be viewable outside of the workspace, if the workspace itself does not allow that. I.e. unless a workspace is “Open” you can never view any of the content in that workspace if you’re not a member of that workspace.
Join policy¶
Who can join / add users to a workspace?
- Admin-managed
- Only workspace administrators can add users
- Team-managed
- All existing workspace members can add users
- Self-Managed
- Any user can self-join the workspace
Participation policy¶
What can members of the workspace do?
- Consumers
- Members can read all published content
- Producers
- Members can create new content, and submit for review
- Publishers
- Members can create, edit and publish their own content (but not the content of others)
- Moderators
- Members can create, edit and publish their own content and content created by others.
Note
Unless the policy is set to Moderators, Members will only see the content created by others if it has been published.
Policy Scenarios¶
These policies are designed to be combined in ways that produce sensible policy scenarios. Some example use cases might be:
- Open + Self-managed + Publishers = Community/Wiki
- Open + Admin-managed + Consumers = Business Division/Department
- Private + Team-managed + Publishers = Team
Integrators can easily create additional combinations to target scenarios like a HR area or a secret project. Such pre-packaged policy combinations may be exposed to users in the form of custom content types that “under the hood” are all just ploneintranet.workspace.
Participant Exceptions¶
While this is all very nice and powerful, there will always be a need to make exceptions. These can be made by linking to the existing sharing tab as ‘advanced policy configuration’ and setting per-user rights there.
It then makes sense to also have an audit viewlet that shows you which Participants have security settings that do not conform to the default policy configuration.
Consistency¶
We’ve audited the settings architecture described above for possible inconsistent settings. These should be caught by some logic in the configuration policy view.
- A Secret Workspace cannot be Self-managed
Technical Architecture and Implementation¶
Like a delicious wedding cake, the security settings are stacked in a layered architecture. This makes it possible to have a simplified configuration management interface frontent and at the same time have a performant and extremely fine-grained security mechanism in the back-end.
- Permissions are the basic building block of Plone’s security. For example: Add Content, Reply to Discussion.
- Roles are combinations of Permissions that make sense as a group. For example: Reader = View Content + View Folder Contents.
- Groups map Roles to users. For example: All users in group Readers get role Reader.
- Meta Groups map Personas to Groups. For example: All Participants are in the group Publisher. These Meta Groups are implemented as dynamic groups per workspace, see below.
Placeful Workflow¶
Note
The workflow on a workspace itself is ploneintranet_workspace_workflow. The workflow on the content within a workspace is ploneintranet_workflow
The implementation uses Plone’s placeful workflow policies to implement all of the above. Reasons for using a placeful workflow are:
- We’re introducing new roles like TeamMember and TeamManager which only apply in the context of this workspace
- We need to block role acquisition and then re-assign basic roles (like Reader, Editor, ...)
- We use the blocked re-assigned basic roles as building blocks for our dynamic Meta Groups (Consumer, Publisher, ...)
We cannot block the acquisition of the global dynamic Member group even when using placeful workflow. Instead we create a new dynamic group TeamMember on install and use that, not Member, to assign local permissions.
In addition to creating the new dynamic group and enabling the dynamic groups PAS plugin, the installer also applies the placeful workflow to all content types in the site, in order to replace the default sitewide workflow in the context of workspaces. As an implication, if you add additional content types to the site after installing ploneintranet.workspace, you’ll have to re-run the ploneintranet.network:default generic setup handler.
There’s some details and intricacies here that are worth highlighting.
First of all, why have a group Readers when you can directly map a user to the role Reader? Doing a local role assignment for a user in the context of a Workspace requires a costly reindex of the Workspace and recursively of all content contained in that Workspace. Assigning role Reader to the group Readers makes this reindex a one-time event. After that, users can be added to the group Readers without requiring a reindex.
As a consequence, a Workspace has local groups for Reader, Contributor, Reviewer and Editor. Additionally, a workspace has a local Meta-Group for Participants. Each of these local groups are of course created separately for each Workspace.
Why have a Meta-Group Participants when you can directly assign users to the groups Reader, Contributor etc? This brings 2 benefits:
- The group Participants manages the default policy for the Workspace. All exceptions to the default policy are made as assignments of users to other local groups via the advanced sharing facility. That way you can keep track of exceptions.
Suppose you did not do this and assigned users directly to local groups. Say the you’d want to add users to Readers + Contributors by default. Then you’d make an exception for Barney the Boss by adding him to Reviewers + Editors as well. If you then change the default policy to Readers + Contributors + Reviewers + Editors you’d have to add all others to those groups as well. If then you change your mind and want to revert the default policy back to only Readers + Contributors, you’d have no way to know that you’d need to demote all uses except Barney the Boss - you would demote Barney as well. Not good.
- Secondly, having a separate Meta-Group Participants allows you to add extra permissions and roles that are not implied by the normal group assignments.
Specifically, in an Open Workspace Guests have the Reader role by virtue of acquiring the global Readers group. Since the Readers group is acquired, we cannot redefine it’s permissions locally. However we want to grant Participants at minimal Consumer permissions, which in addition to Reader include various social interactivity permissions like Add Discussion Item, Create Plonesocial StatusUpdate etc.
Implementation notes¶
- The Participation policies are built on dynamic PAS group/role plugins from collective.workspace
- New ‘self-publisher’ role allows users to publish their own content, but not the content of others (something that cannot be achieved with existing contributor/editor/reviewer roles). This is achieved using a borg.localrole adapter | http://docs.quaive.net/development/components/workspace.html | 2017-06-22T18:31:36 | CC-MAIN-2017-26 | 1498128319688.9 | [] | docs.quaive.net |
Recently Viewed Topics
Detecting Encrypted and Interactive Sessions
PVS, PVS identifies these sessions for the given port and IP protocol. It then lists the detected interactive or encrypted session as vulnerabilities.
PVS has a variety of plugins to recognize telnet, Secure Shell (SSH), Secure Socket Layer (SSL), and other protocols. Combined with the detection of the interactive and encryption algorithms, PVS may log multiple forms of identification for the detected sessions.
For example, PVS may recognize not only an SSH service running on a high port as an encrypted session, but also recognize the version of SSH and determine any vulnerabilities associated with it. | https://docs.tenable.com/pvs/4_4/Content/DetectingEncryptedAndInteractiveSessions.htm | 2017-06-22T18:37:31 | CC-MAIN-2017-26 | 1498128319688.9 | [] | docs.tenable.com |
11. Association Updates: Owning Side and Inverse Side¶.
11.1. Bidirectional Associations¶.
11.2. Important concepts¶
“Owning side” and “inverse side” are technical concepts of the ORM technology, not concepts of your domain model. What you consider as the owning side in your domain model can be different from what the owning side is for Doctrine. These are unrelated. | https://doctrine-orm.readthedocs.io/en/latest/reference/unitofwork-associations.html | 2017-06-22T18:17:08 | CC-MAIN-2017-26 | 1498128319688.9 | [] | doctrine-orm.readthedocs.io |
How to create an Aliexpress Affiliate account
STEP 1:
Click on the Sign In button and follow the instructions (complete the required fields).
STEP 2:
Access the following address:
Log in with your Aliexpress previously created account.
STEP 3:
Create a new tracking ID:
After creation, copy and paste the Tracking ID in the Plugin Config.
STEP 4:
Go to Ad Center-> Api Setting
Here you will find the API KEY and Digital signature. Copy and paste them in the Plugin Config area.
That should be all. | http://docs.aa-team.com/wooexpress-woocommerce-aliexpress-affiliates/documentation/how-to-create-an-aliexpress-affiliate-account/ | 2017-06-22T18:30:24 | CC-MAIN-2017-26 | 1498128319688.9 | [array(['http://docs.aa-team.com/wp-content/uploads/2014/12/aliexpress-new-acc-1024x464.png',
'aliexpress-new-acc'], dtype=object)
array(['http://docs.aa-team.com/wp-content/uploads/2014/12/create-tracking-id.png',
'create-tracking-id'], dtype=object)
array(['http://docs.aa-team.com/wp-content/uploads/2014/12/API-settings-1024x608.png',
'API-settings'], dtype=object) ] | docs.aa-team.com |
Our uptime monitoring system has been configured to avoid avoid false positives as much as possible, but in certain cases there’s simply nothing we can do on our side to avoid it.
There can be several case scenarios for these uptime monitoring false positives, we’ll begin with the simpler ones and keep the more complex ones for the end:
1. if you’re monitor regular ping, then your server’s firewall may be blocking multiple simultaneous requests, which can generate false positives, you should take a look at this documentation article for further info and a solution on how to prevent this:
2. if may be possible that your server’s firewall is simply blocking some/all of our monitoring location IPs, in which case you should remove the blocks or just whitelist our IPs, you can find a full list of these IPs here:
3. when adding an uptime monitor, in advanced options, you’ll find a setting called “Number Of Failed Locations”, which basically sets the number of different locations that will need to fail before your uptime monitor is marked as offline.
This means that if (for instance) you’re monitoring from 6 different locations, and 4 of them fail, then your uptime monitor will be marked as offline and you will be notified. In such cases you could go and manually check your website or server and see it as being online, but the fact of the matter is that the target may not be accessible from allover the world, it may be available from just a few locations, which is why 4 of your 6 monitored locations failed.
4. another cause could be the downtimes which last for just 10-30 seconds, these can be very small downtimes, but big enough to trigger a notification in our system. And considering the downtime has been so small, by the time you check your website or server it will already be back online, which would make you think you were falsely notified. In these cases it’s a good idea to check the error messages in our downtime notification, it should give you a better idea of what has happened that triggered the alert.
What should I do if I’ve gone through every step described in this guide and I’m still having issues?
We encourage all of our users to open a support ticket if they encounter any such issues, preferably while the issue is still in progress, because our techs are online 24/7 and debugging an issue while in progress will result in much better results than debugging the issue just from our logs, once it has passed. | https://docs.hetrixtools.com/why-does-my-uptime-monitor-shows-as-offline-while-its-actually-not/ | 2017-06-22T18:32:31 | CC-MAIN-2017-26 | 1498128319688.9 | [] | docs.hetrixtools.com |
This page will explain the different features of the Report Editor tab.
When all analysis steps have been completed, a report can be generated using the Report Editor. The reports are organized by section: Setup, Network, Forensics, and Code. By default, all findings, screenshots, notes, and output have a checkmark next to them. Any of these items can be omitted from the report by removing the checkbox.
Re-organizing items inside a section is also possible by “dragging and dropping” the specific item to its desired position.
To select the section where you would liketo edit/re-arrange findings, outputs and other results, use the drop-down menu at the top of the Report Editor:
You can also choose to omit specific items from the report by unchecking their associated checkbox. In the example below, we have removed the “Note about Login Process”:
Once you are done re-organizing the items in all of the different sections, you can generate your customized report by clicking on the “Generate Report” button:
Once the reported is generated, you have the option in the preview window to do a Print Preview, Save it as PDF, export it to a XML, JSON or HTML format, or open your report in a web browser window. | https://docs.nowsecure.com/lab-workstation/step-by-step-guide/report/ | 2017-03-23T06:11:54 | CC-MAIN-2017-13 | 1490218186780.20 | [] | docs.nowsecure.com |
Purpose Quick Start Guide.
If you want some in-depth information on each of the steps, you are in the right place. Both the guides will get you to a working Gluster cluster, so it depends on you how much time you want to spend. The Quick Start Guide.
After you deploy Gluster by following these steps, we recommend that you read the Gluster Admin Guide to learn how to administer Gluster and how to select a volume type that fits your needs. Also, be sure to enlist the help of the Gluster community via the IRC.
What is Gluster
Gluster is a distributed scale out filesystem that allows rapid provisioning of additional storage based on your storage consumption needs. It incorporates automatic failover as a primary feature. All of this is accomplished without a centralized metadata server.
What is Gluster without making me learn an extra glossary of terminology?
- Gluster is an easy way to provision your own storage backend NAS using almost any hardware you choose.
- You can add as much as you want to start with, and if you need more later, adding more takes just a few steps.
- You can configure failover automatically, so that if a server goes down, you don’t lose access to the data. No manual steps are required for failover. When you fix the server that failed and bring it back online, you don’t have to do anything to get the data back except wait. In the mean time, the most current copy of your data keeps getting served from the node that was still running.
- You can build a clustered filesystem in a matter of minutes…it is trivially easy for basic setups
- It takes advantage of what we refer to as “commodity hardware”, which means, we run on just about any hardware you can think of, from that stack of decomm’s and gigabit switches in the corner no one can figure out what to do with (how many license servers do you really need, after all?), to that dream array you were speccing out online. Don’t worry, I won’t tell your boss.
- It takes advantage of commodity software too. No need to mess with kernels or fine tune the OS to a tee. We run on top of most unix filesystems, with XFS and ext4 being the most popular choices. We do have some recommendations for more heavily utilized arrays, but these are simple to implement and you probably have some of these configured already anyway.
- Gluster data can be accessed from just about anywhere – You can use traditional NFS, SMB/CIFS for Windows clients, or our own native GlusterFS (a few additional packages are needed on the client machines for this, but as you will see, they are quite small).
- There are even more advanced features than this, but for now we will focus on the basics.
- It’s not just a toy. Gluster is enterprise ready, and commercial support is available if you need it. It is used in some of the most taxing environments like media serving, natural resource exploration, medical imaging, and even as a filesystem for Big Data.
Is Gluster going to work for me and what I need it to do?
Most likely, yes. People use Gluster for all sorts of things. You are encouraged to ask around in our IRC channel or Q&A forums to see if anyone has tried something similar. That being said, there are a few places where Gluster is going to need more consideration than others. - Accessing Gluster from SMB/CIFS is often going to be slow by most people’s standards. If you only moderate access by users, then it most likely won’t be an issue for you. On the other hand, adding enough Gluster servers into the mix, some people have seen better performance with us than other solutions due to the scale out nature of the technology - Gluster does not support so called “structured data”, meaning live, SQL databases. Of course, using Gluster to backup and restore the database would be fine - Gluster is traditionally better when using file sizes at of least 16KB (with a sweet spot around 128KB or so).
What is the cost and complexity required to set up cluster?
Question: How many billions of dollars is it going to cost to setup a cluster? Don’t I need redundant networking, super fast SSD’s, technology from Alpha Centauri delivered by men in black, etc…?
I have never seen anyone spend even close to a billion, unless they got the rust proof coating on the servers. You don’t seem like the type that would get bamboozled like that, so have no fear. For purpose of this tutorial, if your laptop can run two VM’s with 1GB of memory each, you can get started testing and the only thing you are going to pay for is coffee (assuming the coffee shop doesn’t make you pay them back for the electricity to power your laptop).
If you want to test on bare metal, since Gluster is built with commodity hardware in mind, and because there is no centralized meta-data server, a very simple cluster can be deployed with two basic servers (2 CPU’s, 4GB of RAM each, 1 Gigabit network). This is sufficient to have a nice file share or a place to put some nightly backups. Gluster is deployed successfully on all kinds of disks, from the lowliest 5200 RPM SATA to mightiest 1.21 gigawatt SSD’s. The more performance you need, the more consideration you will want to put into how much hardware to buy, but the great thing about Gluster is that you can start small, and add on as your needs grow.
OK, but if I add servers on later, don’t they have to be exactly the same?
In a perfect world, sure. Having the hardware be the same means less troubleshooting when the fires start popping up. But plenty of people deploy Gluster on mix and match hardware, and successfully. | http://gluster.readthedocs.io/en/latest/Install-Guide/Overview/ | 2017-03-23T06:09:37 | CC-MAIN-2017-13 | 1490218186780.20 | [] | gluster.readthedocs.io |
Ric timed are described in other documents [1,2,3]. The next paragraphs are a brief overview of how the time daemon works. This document is mainly concerned with the administrative and technical issues of running timed at a particular site.
A master time daemon measures the time differences between the clock of the machine on which it is running and those of all other machines. The master computes the network time as the average of the times provided by nonfaulty clocks.[note master network is a network on which the submaster acts as a master. An ignored network is any other network which already has a valid master. The submaster tries periodically to become master on an ignored network, but gives up immediately if a master already exists.
While the synchronization algorithm is quite general, the election one, requiring a broadcast mechanism, puts constraints on the kind of network on which time daemons can run. The time daemon will only work on networks with broadcast characteristics of its machine, a slave can be prevented from becoming the master. Therefore, a subset of machines must be designated). possible for submaster time daemon A to be a slave on network X and the master on network Y, while submaster time daemon B is a slave on network Y and the master on network X. This loop of master time daemons will not function properly.
In order to start the time daemon on a given machine, the following lines should be added to the local daemons section in the file /etc/rc.local:
if [ -f /etc/timed ]; then /etc/timed flags & echo -n ' timed' >/dev/console fi
In any case, they must appear after the network is configured via ifconfig(8).
Also, the file /etc/services should contain the following line:
timed 525/udp timeserver
The flags are:
Timedc(8) is used to control the operation of the time daemon. It may be used to:
See the manual page on timed(8) and timedc(8) for more detailed information.
The date(1) command can be used to set the network date. In order to set the time on a single machine, the -n flag can be given to date(1). | https://docs.freebsd.org/44doc/smm/11.timedop/paper.html | 2017-09-19T20:33:16 | CC-MAIN-2017-39 | 1505818686034.31 | [] | docs.freebsd.org |
16.4. CVE-2012-5641: Information disclosure via unescaped backslashes in URLs on Windows¶
16.4.1. Description¶.
16.4.2. Mitigation¶
Upgrade to a supported CouchDB release that includes this fix, such as:
All listed releases have included a specific fix for the MochiWeb component.
16.4.3. Work-Around¶
Users may simply exclude any file-based web serving components directly
within their configuration file, typically in local.ini. On a default
CouchDB installation, this requires amending the
httpd_global_handlers/favicon.ico and
httpd_global_handlers/.
16.4.4. Acknowledgement¶
The issue was found and reported by Sriram Melkote to the upstream MochiWeb project. | http://docs.couchdb.org/en/latest/cve/2012-5641.html | 2017-09-19T20:35:49 | CC-MAIN-2017-39 | 1505818686034.31 | [] | docs.couchdb.org |
The 'Messages' panel gives access to units database. Here you can view messages received from units (coordinates, parameters, speed, etc.) as well as SMS messages received from units, commands sent to units and events registered in units history. Besides, data messages can be exported to a number of formats.
To open the 'Messages' panel, choose a corresponding name in the top panel or click on the necessary item in the main menu customizer. The workspace of the panel can be divided into four sections:
The sectors can be resized (the left ones — in width, the right ones — both in width and height). To do this, click on the border between them with the left mouse button and, while holding it, move the border to the right/left or up/down. At the same time, if less than 10% of the map is left while expanding the lower sector, the map automatically collapses. To return it, press on the line under the top panel.
There is a specially developed app to work with messages — Messages Manager. | http://docs.wialon.com/en/hosting/user/msg/msg | 2017-09-19T20:48:42 | CC-MAIN-2017-39 | 1505818686034.31 | [] | docs.wialon.com |
Send Docs Feedback
Note: Most user interface tasks can be performed in Edge Classic or the New Edge experience. For an overview, getting started topics, and release notes specific to the New Edge experience, see the docs.?) | http://ja.docs.apigee.com/api-baas/content/retrieving-assets-1 | 2017-09-19T20:34:03 | CC-MAIN-2017-39 | 1505818686034.31 | [] | ja.docs.apigee.com |
setFlexAdSize want to be flexible in using different sizes per your own requirements, such as flexible ad sizes only for certain refreshes, please use the following
AdCallParams API.
//Create an instance of AdCallParams AdCallParams adCallParams = new AdCallParams(); //Set the adsize using the pre-defined list adCallParams.setFlexAdSize(OXMAdSize.BANNER_320x50_300x250); OR //Set a comma-delimited string of desired ad sizes: adCallParams.setFlexAdSize("320x50,200x400");
The supported ad sizes are as follows:
BANNER_320x50 = "320x50"; BANNER_300x250 = "300x250"; BANNER_320x50_300x250 = "320x50, 300x250"; INTERSTITIAL_320x480 = "320x480"; INTERSTITIAL_300x250 = "300x250"; INTERSTITIAL_480x320 = "480x320"; INTERSTITIAL_768x1024 = "768x1024"; INTERSTITIAL_1024x768 = "1024x768"; //Flexible ad sizes for portrait, phone INTERSTITIAL_320x480_300x250 = "320x480, 300x250"; //Flexible ad sizes for landscape, phone INTERSTITIAL_480x320_300x250 = "480x320, 300x250"; //Flexible ad sizes for portrait, tablet INTERSTITIAL_768x1024_320x480_300x250 = "768x1024, 320x480, 300x250"; //Flexible ad sizes for landscape, tablet INTERSTITIAL_1024x768_480x320_300x250 = "1024x768, 480x320, 300x250";
If you do not set these values programmatically, then the values set in the UI will be used for bid requests.
Please see
setDimensions() in
OXMBannerCustomEvent and
OXMInterstitialCustomEvent on how to pass these values to MoPub for display.
adResponse = new AdResponse.Builder() .setDimensions(width, height) ... | https://docs.openx.com/Content/developers/bidder-apps-android/bidder-apps-Android-flex-ads.html | 2017-09-19T20:39:27 | CC-MAIN-2017-39 | 1505818686034.31 | [] | docs.openx.com |
- Synchronizer
Synchronizer is the server used to deliver Virtual Machines (VMs) to DesktopPlayer clients. It manages:). | http://docs.citrix.com/en-us/desktopplayer/synchronizer.html | 2017-07-20T14:23:42 | CC-MAIN-2017-30 | 1500549423222.65 | [] | docs.citrix.com |
Use VMware Ondisk Metadata Analyser (VOMA) when you experience problems with your VMFS datastore and need to check metadata consistency of VMFS or logical volume backing the VMFS volume.
The following examples show circumstances in which you might need to perform a metadata check:
You experience SAN outages.
After you rebuild RAID or perform a disk replacement.
You see metadata errors in the vmkernel.log file.
You are unable to access files on the VMFS datastore that are not in use by any other host.
Results
To check metadata consistency, run VOMA from the CLI of an ESXi host version 5.1 or later. VOMA can check both the logical volume and the VMFS for metadata inconsistencies. You can use VOMA on VMFS3 and VMFS5 datastores. VOMA runs in a read-only mode and serves only to identify problems. VOMA does not fix errors that it detects. Consult VMware Support to resolve errors reported by VOMA.
Follow these guidelines when you use the VOMA tool:
Make sure that the VMFS datastore you analyze does not span multiple extents. You can run VOMA only against a single-extent datastore.
Power off any virtual machines that are running or migrate them to a different datastore.
Follow these steps when you use the VOMA tool to check VMFS metadata consistency.
Obtain the name and partition number of the device that backs the VMFS datastore that you703:3
The output lists possible errors. For example, the following output indicates that the heartbeat address is invalid.
XXXXXXXXXXXXXXXXXXXXXXX Phase 2: Checking VMFS heartbeat region ON-DISK ERROR: Invalid HB address Phase 3: Checking all file descriptors. Phase 4: Checking pathname and connectivity. Phase 5: Checking resource reference counts. Total Errors Found: 1
The VOMA tool uses the following options. | https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.storage.doc/GUID-6F991DB5-9AF0-4F9F-809C-B82D3EED7DAF.html | 2017-07-20T14:48:22 | CC-MAIN-2017-30 | 1500549423222.65 | [] | docs.vmware.com |
When you upgrade hosts, you must understand and follow the best practices process for a successful upgrade.
For a successful ESXi upgrade, follow these best practices:
Make sure that you understand the ESXi upgrade process, the effect of that process on your existing deployment, and the preparation required for the upgrade.
If your vSphere system includes VMware solutions or plug-ins, make sure they are compatible with the vCenter Server version that you are upgrading to. See the VMware Product Interoperability Matrix at.
Read Upgrade Options for ESXi 6.0 Upgrade Options for ESXi 6.0.
Make sure that the system hardware complies with ESXi requirements. See Upgrade.
Depending on the upgrade option you choose, you might need to migrate or power off all virtual machines on the host. See the instructions for your upgrade method.
After the upgrade, test the system to ensure that the upgrade completed successfully.
Apply a host's licenses. See Applying Licenses After Upgrading to ESXi 6. | https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.upgrade.doc/GUID-712F3F65-A2C8-4B5C-8E99-0C935CAA8C9A.html | 2017-07-20T14:48:55 | CC-MAIN-2017-30 | 1500549423222.65 | [] | docs.vmware.com |
Regardless of the deployment option and inventory hierarchy that you select, you have to set up your network before you can start configuration. To set the foundation for the vCenter HA network, you add a port group to each ESXi host, and add a virtual NIC to the vCenter Server Appliance that later becomes the Active node.
Before you begin
The vCenter Server Appliance that later becomes the Active node, is deployed.
You can access and have privileges to modify that vCenter Server Appliance and the ESXi host on which it runs.
During network setup, you need static IP addresses for the management network. The management and cluster network addresses must be IPv4 or IPv6. They cannot be mixed
About this task
After configuration is complete, the vCenter HA cluster has two networks, the management network on the first virtual NIC and the vCenter HA network on the second virtual NIC.
Management network
The management network serves client requests (public IP). The management network IP addresses must be static.
vCenter HA network
The vCenter HA network connects the Active, Passive, and Witness nodes and replicates the appliance state. It also monitors heartbeats.
The vCenter HA network IP addresses for the Active, Passive, and Witness nodes must be static.
The vCenter HA network must be on a different subnet than the management network. The three nodes can be on the same subnet or on different subnets.
Network latency between the Active, Passive, and Witness nodes must be less than 10 milliseconds.
You must not add a default gateway entry for the cluster network.
Procedure
- Log in to the management vCenter Server and find the ESXi host on which the Active node is running.
- Add a port group to the ESXi host.
This port group can be on an existing virtual switch or, for improved network isolation, you can create a new virtual switch. It must be on a different subnet than the management network on Eth0.
- If your environment includes the recommended three ESXi hosts, add the port group to each of the hosts.
What to do next
What you do next depends on the type of configuration you select.
With a Basic configuration, the wizard creates the vCenter HA virtual NIC on each clone and sets up the vCenter HA network. When configuration completes, the vCenter HA network is available for replication and heartbeat traffic.
With an Advanced configuration.
You have to first create and configure a second NIC on the Active node. See Create and Configure a Second NIC on the vCenter Server Appliance.
When you perform the configuration, the wizard prompts for the IP addresses for the Passive and Witness nodes.
The wizard prompts you to clone the Active node. As part of the clone process, you perform additional network configuration.
See Configure vCenter HA With the Advanced Option. | https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.avail.doc/GUID-9B176C8A-4EEE-4A28-A3C1-24656D6402CF.html | 2017-07-20T14:49:06 | CC-MAIN-2017-30 | 1500549423222.65 | [] | docs.vmware.com |
The health alert list is all the generated alerts that are configured to affect the health of your environment and require immediate attention. You use the health alert list to evaluate, prioritize, and immediately begin resolving the problems.
How Health Alerts Work
All the health.
Where You Find Health Alerts
In the left pane, select.
Health Alerts Options Health Alerts data grid provides a list of generated alerts that you use to resolve problems in your environment. | https://docs.vmware.com/en/vRealize-Operations-Manager/6.5/com.vmware.vcom.core.doc/GUID-41EE6436-F113-47E6-965B-B407A00DCB2D.html | 2017-07-20T14:49:26 | CC-MAIN-2017-30 | 1500549423222.65 | [] | docs.vmware.com |
Migrating a WordPress CodebaseMigrating a WordPress Codebase
This guide covers how to migrate a typical WordPress project):
aws-rekognition
delegated-oauth
elasticpress
extended-cpts
gaussholder
hm-gtm
hm-redirects
meta-tags
msm-sitemap
query-monitor
smart-media
stream
two-factor
workflows
wp-seo
Assuming your project uses Chassis for local development, we’ll be removing the local Chassis install, and installing the Altis module. If you have a setup script (such as
.bin/setup.sh) you should remove any Chassis setup / installation steps.
Once you have cleaned out Chassis, install the
altis/local-chassis composer package as a dev dependency.
composer require --dev altis/local-chassis
Once completed, install and start your local server with
composer chassis init and then
composer chassis start. You should now be able to navigate to to see the site, where "my-project" is your project directory name.
Chassis alternativeChassis alternative
We also recommend installing the new docker based local environment. This environment has a few extra developer tools such as Kibana and avoids issues where Chassis and extension versions can get out of sync across your team's machines.
To install the docker environment run:
composer require --dev altis/local-server
To start the docker server run
composer local-server start. You should now be able to see the site at where "my-project" is the project directory name.. Altis is always configured to be a WordPress multisite, as such any sites that are not installed as multisite already, will need converting via the
multisite-convert WP CLI command.
As always, be sure to test the migration and deployment in
development or
staging environments before rolling out to production. | https://docs.altis-dxp.com/v5/guides/migrating-from-wordpress/ | 2022-09-25T04:29:07 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.altis-dxp.com |
Iaas
VMRecovery Point. Recovery Point Time Property
Definition
Important
Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Gets or sets time at which this backup copy was created.
[Newtonsoft.Json.JsonProperty(PropertyName="recoveryPointTime")] public Nullable<DateTime> RecoveryPointTime { get; set; }
member this.RecoveryPointTime : Nullable<DateTime> with get, set
Public Property RecoveryPointTime As Nullable(Of DateTime)
Property Value
- System.Nullable<System.DateTime>
- Attributes
- Newtonsoft.Json.JsonPropertyAttribute | https://docs.azure.cn/zh-cn/dotnet/api/microsoft.azure.management.recoveryservices.backup.models.iaasvmrecoverypoint.recoverypointtime?view=azure-dotnet | 2022-09-25T04:58:39 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.azure.cn |
Metadata Management
A Couchbase Cluster’s metadata describes its configuration. Some categories of metadata are maintained by means of a consensus protocol; others by means of gossip replication.
Understanding Couchbase-Server Metadata-Management
In Couchbase Server version 7.0 and later, metadata is managed by means of Chronicle; which is a consensus-based system, based on the Raft algorithm. Chronicle manages:
The node-list for the cluster.
The status of each node.
The service-map for the cluster, which indicates on which nodes particular services have been installed.
Bucket definitions, including the placement of scopes and collections.
The vBucket maps for the cluster.
The process whereby this metadata is maintained is described below, in Consensus-Based Metadata-Management.
Additional cluster-metadata is handled by Couchbase Server’s legacy metadata-management system, which is based on gossip-replication. This legacy-managed metadata includes:
Compaction settings.
Security settings.
XDCR replications.
Settings and metadata for all services.
Per-node configuration settings.
Changes to the legacy-managed metadata are asynchronously replicated across the nodes with eventual consistency, by means of gossip replication. This means that when a metadata change occurs on any node, that node attempts to replicate the change to all other nodes. Additionally, each node periodically pulls the configuration of some other, randomly selected node; in order to update its own configuration in cases where it has somehow missed an earlier change-notification. By these means, configuration generally spreads rapidly and reliably through the cluster.
When a cluster-wide activity such as rebalance or failover needs to occur, this is performed by the cluster’s orchestrator (its master node). To do so, the orchestrator obtains a lease on metadata changes; to ensure that no topology-related metadata changes can be made by any other node, while the cluster-wide activity is in process.
Consensus-Based Metadata-Management
Starting with version 7.0, Couchbase Server provides consensus-based metadata management, by means of Chronicle; which is a methodology based on the Raft algorithm. Chronicle is:
Couchbase-Server administrators are not required to be familiar with the internal workings of Chronicle. However, for informational purposes, a sketch of how Chronicle works is provided below. See the Raft specification for a complete account of the algorithm on which Chronicle is based.
Logs, Leaders, and Followers
Each node maintains a log, into which metadata change-commands are saved. Across the cluster, logs are maintained with consistency. At intervals, to control log-size and ensure availability, on each node, compaction is performed, and snapshots are taken.
Each node is considered to be in the role of either leader or follower; with at most one node the leader at any time. Only the leader can transmit metadata change-commands. The role of leader may periodically be exchanged between nodes, by the process described below.
Leadership and Election
The leader is responsible for communicating with clients: this includes providing the current topology of the cluster, and receiving requests for topology change. The leader distributes clients' requests to the followers, as change-commands, which are to be appended to the log-instance on each node.
In the event of the leader becoming non-communicative (say, due to network failure), a follower can advertise itself as a candidate for leadership. Followers, receiving the communication, respond with votes. Every vote received by the candidate, including its own, constitutes one node’s support for the candidacy. If a majority of nodes are supportive, a new term is started, with the elected node as leader.
Nodes are given the right to vote only when fully integrated into the cluster. Nodes not fully integrated (as is the case, for example, during the process of their addition) are considered replicas. Replicas participate in the exchange and commitment of information (see immediately below), but do not vote. Once a node’s addition is complete, the node is able to vote.
Data Exchange and Commitment
When the leader transmits a change-command to followers, each follower appends the command to its own instance of the log. Once a majority of nodes confirm that they have done so, the leader performs a commit; and informs the other nodes. Once informed, the other nodes also commit.
Committing means updating the Replicated State Machine, according to the command. This is described below.
Following commitment, the leader returns the execution-result to the client.
Network Failures and Consequent Inconsistencies
When network failures prevent change-commands from reaching a majority of nodes, the information is appended to the logs of those nodes that have received it; but is not committed. Whenever a leader commits a new log entry, it also commits all preceding uncommitted entries in its log; and each commitment is applied, across the cluster, to all instances of the Replicated State Machine.
Replicated State Machine
The Replicated State Machine is a key-value store, resident on each node, that contains the cluster’s topographical data. Clients that require such data receive it from the leader’s key-value store. The key-value pairs in the store are generated and updated based on change-commands appended to the replicated log. A change-command may:
Add a key-value pair.
Update the value of a key-value pair.
Update the value of a key-value pair, based on logical constraints.
Update the values of key-value pairs transactionally.
Each key-value pair has a revision number that corresponds to the sequence of change-commands applied to the store.
Quorum Failure
A quorum failure occurs when half or more of the nodes in the cluster cannot be contacted. In this situation, commitment is prohibited; until either the communication problem is remedied, or the non-communicative nodes are failed over unsafely. In consequence, prior to remedy or failover, for the duration of the quorum failure:
Buckets, scopes, and collections can neither be created nor dropped.
Nodes cannot be added, joined, failed over safely, or removed.
See Performing an Unsafe Failover, for information on failing over nodes in response to a quorum failure. | https://docs.couchbase.com/server/current/learn/clusters-and-availability/metadata-management.html | 2022-09-25T05:26:18 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.couchbase.com |
Documentation
Documentation
Rara Magazine Documentation
Quick Links:
Translate and Get a Premium theme for FREE
Getting Started
Introduction
WordPress Requirements and Checklists
Theme Installation & Activation
How to Install & Activate Rara Magazine WordPress Theme?
Recommended Plugins
What are the Recommended Image Sizes?
How to Import Demo Content?
Configure Header & Footer Section
How to configure Site Logo/ Name & Tagline to your website?
How to Configure Header Settings?
How to Create & Edit Navigation Menu?
How to Add Footer Widgets?
Appearance Settings
How to Configure Website Colors?
How to Configure Background Image?
How to Configure Category Color Settings?
How to Configure Color Scheme Settings?
Homepage Settings
How to Setup a Static Front Page?
How to Configure Featured Post Section?
How to Configure Top News Section?
How to Configure Category Section?
Pages Settings
How to Configure Blog Page Settings?
General Settings
How to Configure AD Settings?
How to Configure Social Settings? Configure Footer Settings? | https://docs.rarathemes.com/docs/rara-magazine/ | 2022-09-25T04:29:15 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.rarathemes.com |
Documentation
Documentation
The Schema Pro Documentation
Quick Links:
Translate and Get a Premium theme for FREE
Getting Started
Introduction
WordPress Requirements and Checklists
Theme Installation and Activation
How to Install & Activate The Schema Pro WordPress Theme?
How to Activate Rara Theme License?
What are the Recommended Image Sizes?
How to Import Demo Content?
Recommended Plugins
Configure Header and Footer Section
How to configure Site Logo/ Name & Tagline to your website?
How to make Header Sticky while scrolling down the page?
How to Create & Edit Navigation Menu?
How to Add Footer Widgets?
How to Configure Newsletter on Banner Section?
Appearance Settings
How to Change Website Colors?
How to Change Website Background?
How to Configure General Sidebar Layouts?
How to configure social sharing on your website?
How to change Home page Layouts?
How to Configure Typography Settings?
How to change Blog page Layouts?
How to Configure Sidebar Settings?
Homepage Settings
How to Set up the Front/Landing/Home Page and Blog Page?
How to Configure Banner Section?
How to Configure About Section?
How to Configure Client Section?
How to Configure Services Section?
How to Configure Blog Section?
How to Set Up Newsletter in Your Website?
How to Sort Homepage Section?
General Settings
How to configure Posts & Pages Settings?
How to configure Misc settings?
How to Configure Pagination Settings?
How to Configure Header Settings?
How to Configure Social Media Settings?
How to create a Post/Page?
How to Enable Elementor Page Builder?
How to Configure Newsletters on Footer?
How to Reset Customizer?
SEO and Performance Settings
How to Add Google Analytics to your website?
How to Optimize Your Website Performance?
How to Configure SEO Settings of your website?
FAQs
How to add Custom Codes?
How to change the Logo Size of your website?
Why is the Customizer not showing up?
How to Update Theme & Plugins?
Why are my website images not loading?
How can I translate my site to my local language?
How to Add Footer Copyright Information? | https://docs.rarathemes.com/docs/the-schema-pro/ | 2022-09-25T04:33:50 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.rarathemes.com |
Structure
They have three elements:
The «start joint» named root or head.
The «body» itself.
And the «end joint» named tip or tail.
With the default armature in Edit Mode, you can select the root and the tip, and move them as you do with mesh vertices.
Both root and tip (the «joints») define the bone by their respective position.
They also have a radius property, only useful for the envelope deformation method (see below).
Roll
Activating the Axes checkbox will show local axes for each bone’s tip. The Y axis is always aligned along the bone, oriented from root to tip, this is the «roll» axis of the bones.
Bones Influence
A bone in Envelope visualization, in Edit Mode.
Basically, a bone controls a geometry when vertices «follow» the bone. This is like how the muscles and skin of your finger follow your finger-bone when you move a finger.
To do this, you have to define the strength of influences a bone has on and
the root’s radius and the tip’s radius.
Our armature in Envelope visualization, in Pose Mode.
All these influence parameters are further detailed in the skinning pages. | https://docs.blender.org/manual/nb/dev/animation/armatures/bones/structure.html | 2022-09-25T05:46:54 | CC-MAIN-2022-40 | 1664030334514.38 | [array(['../../../_images/animation_armatures_bones_structure_bones-elements.png',
'../../../_images/animation_armatures_bones_structure_bones-elements.png'],
dtype=object)
array(['../../../_images/animation_armatures_bones_structure_envelope-edit-mode.png',
'../../../_images/animation_armatures_bones_structure_envelope-edit-mode.png'],
dtype=object)
array(['../../../_images/animation_armatures_bones_structure_envelope-pose-mode.png',
'../../../_images/animation_armatures_bones_structure_envelope-pose-mode.png'],
dtype=object) ] | docs.blender.org |
Azure Active Directory documentation: API guide, Authentication
Before you begin, verify that you have set up an OAuth 2.0 connection to Azure. You must register the app to retrieve your client ID, tenant ID, and client secret. Use the following steps to register your app in Azure:
- Log in to the Azure portal.
- Click Add > App registration.
- Name the app and click Register at the bottom of the page.
- Click API Permissions in the left navigation menu.
- Click Add a permission.
- Click Azure Storage on the Request API permissions page.
- Check the user_impersonation checkbox in the Permissions section at the bottom of the page, and click Add permissions.
- Click App roles in the left navigation menu, then click Create app role.
- Create a new app role named Storage Blob Data Contributor (or select it if it has already been created). Verify that the app role has Applications selected in the Allowed member types section, and a Value of
Task.Write.
- If you encounter an authorization permission mismatch, add the Storage Queue Data Contributor permission.
- When setting permissions, the Contributor permission is required, but if you only want to use only read access you can check the Leave files on server checkbox on the export. This connector does not support files with filenames that contain a forwardslash (/) or a backslash (\), and only supports the first 5000 files in the Container.
A. Set up an Azure Blob Storage connection
Start establishing a connection to Azure Blob Storage in either of the following ways:
- From the Resources menu, select Connections. Then, click + Create connection at the top right.
– or –
- While working in a new or existing integration, you can add an application to a flow by clicking a source or destination. In the resulting Application list, select Azure Blob Storage.
B. Describe the Azure Blob Storage connection
Edit the General settings specific to your account and this connection resource.
Azure Blob Storage account information
At this point, you’re presented with a series of options for providing Azure Blob Storage authentication.
Storage account name (required): Enter the name of the Azure storage account which contains the data you want to access with this connection.
Tenant ID (required): Specify the tenant ID that identifies the Azure Active Directory tenant used for authentication. Log in to Microsoft Azure and click the Overview page for the app you created, and us the value displayed in the Directory (tenant) ID field.
iClient (required): Select the iClient pair that stores the client ID and client secret provided to you by Microsoft Azure. To add an iClient and configure your credentials, click the plus (+) button. Click the edit (
) button to modify a selected iClient. Be sure to give the iClient a recognizable name for use in any other connections.
Please sign in to leave a comment. | https://docs.celigo.com/hc/en-us/articles/4405704367771 | 2022-09-25T05:44:46 | CC-MAIN-2022-40 | 1664030334514.38 | [array(['/hc/article_attachments/4405756177563/AzAddAppReg.png',
'AzAddAppReg.png'], dtype=object)
array(['/hc/article_attachments/4405751164571/APIPermissions.png',
'APIPermissions.png'], dtype=object)
array(['/hc/article_attachments/4405751213595/AddAPerm.png',
'AddAPerm.png'], dtype=object)
array(['/hc/article_attachments/4405751263515/AZStorage.png',
'AZStorage.png'], dtype=object)
array(['/hc/article_attachments/4405751318043/user_imp.png',
'user_imp.png'], dtype=object)
array(['/hc/article_attachments/4405751406491/AZAppRoles.png',
'AZAppRoles.png'], dtype=object)
array(['/hc/article_attachments/4409614662427/SotrageBlobDataContr.png',
'SotrageBlobDataContr.png'], dtype=object)
array(['/hc/article_attachments/4405723692443/AZBlob1.png', 'AZBlob1.png'],
dtype=object)
array(['/hc/article_attachments/4405723812891/AZBlob2.png', 'AZBlob2.png'],
dtype=object) ] | docs.celigo.com |
.
CCPClient.initApp(appId: APP_ID)
Note:
CCPClient.initAppshould be called only once across the entire application. The
application:didFinishLaunchingWithOptions:method in your application is typically a good place to initialize..
CCPClient.connect(uid: USER_ID) { user, error in if error == nil { //You are connected to ChatCamp backend now. } }
Connecting with Access Token
A more secure way is to create user via ChatCamp API with an access token. This access token would then be required while connecting to ChatCamp in iOS iOS application and then use it while connecting to ChatCamp.
CCPClient.connect(uid: USER_ID, accessToken: ACCESS_TOKEN) { user, error in if error == nil { //You are connected to ChatCamp backend now. } }
Disconnecting from Chat
A user needs to disconnect from ChatCamp Cloud if they do not wish to receive chat related data. It can be used when the user is logging out of your app.
CCPClient.disconnect() { (error) in //You are disconnected to ChatCamp backend now. }
Updated less than a minute ago | https://docs.chatcamp.io/docs/ios-chat-authentication | 2022-09-25T04:06:11 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.chatcamp.io |
Open Estimated reading: 1 minute 97 views Open activity is used to open an existing pdf file. *Pdf nameGiven the reference name of the PDF file which is going to be open.*Pdf file nameThe reference name of opened PDF file.PasswordUsed if there is a password in the PDF file. Otherwise, no need to be used.Note: * Fields selected with are required, others are optional. Pdf name example You can use the Pdf name as shown in the example. robustaPdf Pdf file name example You can use the Pdf file name as shown in the example. C:\Robusta\robusta.pdf | https://docs.robusta.ai/docs/documentation-2021-12/robusta-rpa-components/pdf/open-pdf/ | 2022-09-25T04:28:04 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.robusta.ai |
Process Management¶
In this chapter you will learn how to work with processes.
Objectives : In this chapter, future Linux administrators will learn how to:
Recognize the
PID and
PPID of a process;
View and search for processes;
Manage processes.
process, linux
Knowledge:
Complexity:
Temps de lecture : 20 minutes
Generalities¶
An operating system consists of processes. These processes are executed in a specific order and are related to each other. There are two categories of processes, those focused on the user environment and those focused on the hardware environment.
When a program runs, the system will create a process by placing the program data and code in memory and creating a runtime stack. A process is therefore an instance of a program with an associated processor environment (ordinal counter, registers, etc...) and memory environment.
Each process has:
- a PID : Process IDentifier, a unique process identifier;
- a PPID : Parent Process IDentifier, unique identifier of parent process.
By successive filiations, the
init process is the father of all processes.
- A process is always created by a parent process;
- A parent process can have multiple child processes.
There is a parent/child relationship between processes. A child process is the result of the parent process calling the fork() primitive and duplicating its own code to create a child. The PID of the child is returned to the parent process so that it can talk to it. Each child has its parent's identifier, the PPID.
The PID number represents the process at the time of execution. When the process finishes, the number is available again for another process. Running the same command several times will produce a different PID each time.!!! abstract Note Processes are not to be confused with threads. Each process has its own memory context (resources and address space), while threads from the same process share this same context.
Viewing processes¶
The
ps command displays the status of running processes.
ps [-e] [-f] [-u login]
Example:
# ps -fu root
Some additional options:
Without an option specified, the
ps command only displays processes running from the current terminal.
The result is displayed in columns:
# ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 0 Jan01 ? 00:00/03 /sbin/init
The behaviour of the control can be fully customized:
# ps -e --format "%P %p %c %n" --sort ppid --headers PPID PID COMMAND NI 0 1 systemd 0 0 2 kthreadd 0 1 516 systemd-journal 0 1 538 systemd-udevd 0 1 598 lvmetad 0 1 643 auditd -4 1 668 rtkit-daemon 1 1 670 sssd 0
Types of processes¶
The user process:
- is started from a terminal associated with a user;
- accesses resources via requests or daemons.
The system process (demon):
- is started by the system;
- is not associated with any terminal, and is owned by a system user (often
root);
- is loaded at boot time, resides in memory, and is waiting for a call;
- is usually identified by the letter
dassociated with the process name.
System processes are therefore called daemons (Disk And Execution MONitor).
Permissions and rights¶
When a command is executed, the user's credentials are passed to the created process.
By default, the actual
UID and
GID (of the process) are therefore identical to the actual
UID and
GID (the
UID and
GID of the user who executed the command).
When a
SUID (and/or
SGID) is set on a command, the actual
UID (and/or
GID) becomes that of the owner (and/or owner group) of the command and no longer that of the user or user group that issued the command. Effective and real UIDs are therefore different.
Each time a file is accessed, the system checks the rights of the process according to its effective identifiers.
Process management¶
A process cannot be run indefinitely, as this would be to the detriment of other running processes and would prevent multitasking.
The total processing time available is therefore divided into small ranges, and each process (with a priority) accesses the processor in a sequenced manner. The process will take several states during its life among the states:
- ready: waiting for the availability of the process;
- in execution: accesses the processor;
- suspended: waiting for an I/O (input/output);
- stopped: waiting for a signal from another process;
- zombie: request for destruction;
- dead: the father of the process kills his son.
The end of process sequencing is as follows:
- Closing of the open files;
- Release of the used memory;
- Sending a signal to the parent and child processes.
When a parent process dies, its children are said to be orphans. They are then adopted by the
init process which will destroy them.
The priority of a process¶
The processor works in time sharing with each process occupying a quantity of processor time.
The processes are classified by priority whose value varies from -20 (the highest priority) to +19 (the lowest priority).
The default priority of a process is 0.
Modes of operation¶
Processes can run in two ways:
- synchronous: the user loses access to the shell during command execution. The command prompt reappears at the end of the process execution.
- asynchronous: the process is processed in the background. The command prompt is displayed again immediately.
The constraints of the asynchronous mode:
- the command or script must not wait for keyboard input;
- the command or script must not return any result on the screen;
- quitting the shell ends the process.
Process management controls¶
kill command¶
The
kill command sends a stop signal to a process.
kill [-signal] PID
Example:
$ kill -9 1664
Signals are the means of communication between processes. The
kill command sends a signal to a process.
!!! abstract Tip The complete list of signals taken into account by the
kill command is available by typing the command :
$ man 7 signal
nohup command¶
nohup allows the launching of a process independently of a connection.
nohup command
Example:
$ nohup myprogram.sh 0</dev/null &
nohup ignores the
SIGHUP signal sent when a user logs out.
!!! abstract Note "Question"
nohup handles standard output and error, but not standard input, hence the redirection of this input to
/dev/null.
[CTRL] + [Z]¶
By pressing the CTRL + Z keys simultaneously, the synchronous process is temporarily suspended. Access to the prompt is restored after displaying the number of the process that has just been suspended.
& instruction¶
The
& statement executes the command asynchronously (the command is then called job) and displays the number of job. Access to the prompt is then returned.
Example:
$ time ls -lR / > list.ls 2> /dev/null & [1] 15430 $
The job number is obtained during background processing and is displayed in square brackets, followed by the
PID number.
fg and
bg commands¶
The
fg command puts the process in the foreground:
$ time ls -lR / > list.ls 2>/dev/null & $ fg 1 time ls -lR / > list.ls 2/dev/null
while the command
bg places it in the background:
[CTRL]+[Z] ^Z [1]+ Stopped $ bg 1 [1] 15430 $
Whether it was put in the background when it was created with the
& argument or later with the CTRL +Z keys, a process can be brought back to the foreground with the
fg command and its job number.
jobs command¶
The
jobs command displays the list of processes running in the background and specifies their job number.
Example:
$ jobs [1]- Running sleep 1000 [2]+ Running find / > arbo.txt
The columns represent:
- job number;
- the order in which the processes run
- a
+: this process is the next process to run by default with
fgor
bg;
- a
-: this process is the next process to take the
+;
- Running (running process) or Stopped (suspended process).
- the command
nice and
renice commands¶
The command
nice allows the execution of a command by specifying its priority.
nice priority command
Example:
$ nice -n+15 find / -name "file"
Unlike
root, a standard user can only reduce the priority of a process. Only values between +0 and +19 will be accepted.
!!! abstract Tip This last limitation can be lifted on a per-user or per-group basis by modifying the
/etc/security/limits.conf file.
The
renice command allows you to change the priority of a running process.
renice priority [-g GID] [-p PID] [-u UID]
Example:
$ renice +15 -p 1664
-g|
GIDof the process owner group. | |
-p|
PIDof the process. | |
-u|
UIDof the process owner. |
The
renice command acts on processes already running. It is therefore possible to change the priority of a specific process, but also of several processes belonging to a user or a group.
!!! abstract Tip The
pidof command, coupled with the
xargs command (see the Advanced Commands course), allows a new priority to be applied in a single command:
$ pidof sleep | xargs renice 20
top command¶
The
top command displays the processes and their resource consumption.
$ top PID USER PR NI ... %CPU %MEM TIME+ COMMAND 2514 root 20 0 15 5.5 0:01.14 top
The
top command allows control of the processes in real time and in interactive mode.
pgrep and
pkill commands¶
The
pgrep command searches the running processes for a process name and displays the PID matching the selection criteria on the standard output.
The
pkill command will send the specified signal (by default SIGTERM) to each process.
pgrep process pkill [-signal] process
Examples:
- Get the process number from
sshd:
$ pgrep -u root sshd
- Kill all
tomcatprocesses:
$ pkill tomcat | https://docs.rockylinux.org/fr/books/admin_guide/08-process/ | 2022-09-25T04:22:31 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.rockylinux.org |
Edit Files & Folders Details
Rename Files or Folders
In order to rename a file or folder there are a few rule to keep in mind:
The file cannot be checked out by another user
You must have full access to the folder or file
The file or folder cannot be the same name of a file or folder that is already existing in the same location.
The file or folder cannot contain a restricted character.
To rename a file or folder
Open your project in Trimble Connect for Browser.
Navigate to the Explorer page.
Select a file or folder you want to rename.
The detail panel will open on the right side.
In the detail panel, click the Edit button shown next to the file or folder name.
The panel will change to Edit Mode.
Type in the new name.
Click the Save button.
Edit File Attributes
Projects owned by accounts with a file attribute template will have file attributes that can be edited by all project users. File attributes are defined on the account level. Project ownership is determined by the license that has been applied to a project.
Learn more about project ownership ›
Prerequisites
These instructions assume that you have already have a project in Trimble Connect for Browser which has file attributes applied to it.
To edit file attributes
Open your project in Trimble Connect for Browser.
Navigate to the Explorer page.
Select the file you want to edit.
The detail panel opens on the right side.
On the detail panel, click the Edit button shown next to the file name.
The panel will change to Edit Mode.
Edit the file attributes.
Click the Save button. | https://docs.browser.connect.trimble.com/files-folders/edit-files-folders-details | 2022-09-25T03:57:45 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.browser.connect.trimble.com |
Using the toolbar
The following operations are available in the Graph Browser toolbar menu:
File settings
Add: Opens a dialog with the following options:
Open a saved graph: Select from the existing saved graphs.
Add data from a dashboard: Select a dashboard from the list to add it to the graph.
Add entity identifiers: Select a type of entity identifier to add as nodes in the graph.
Save: Save the current graph.
To image: Save the current graph to an image in PNG format.
To IBM i2: Save the current graph with all of the lenses applied and download it as a file that is compatible with IBM i2 Analyst’s Notebook. You must configure the i2 export feature in order to use it. For more information, see Exporting a graph to IBM i2 Analyst’s Notebook.
Undo: By default, the Graph Browser saves the last five states. With this function you can go back one step at a time, until there are no more steps available. To configure the number of steps, see Advanced Graph Browser settings.
Redo: Revert an undo.
Layout settings
Standard: In this layout setting, the selected nodes are presented in a linear format, keeping link lengths consistent and preserving the relative position of the nodes.
Hierarchy: In this layout setting, nodes are displayed top-down, according to their connections. This setting requires at least one node to be selected. Selected nodes are placed at the top of the hierarchy.
Radial: Arranges nodes in concentric circles.
Advanced: This menu button offers a range of layout options:
Organic: Arranges nodes in a fan-like pattern, with the larger components in the center. This option is useful for big data sets, because it prioritizes system performance.
Sequential: Arranges nodes in a tree-like, hierarchical structure, to minimize the number of crossed links.
Lens: Arranges nodes in a circle-like grid with connected nodes next to each other.
Structural: Places nodes that are structurally similar together in the network.
Tweak: Adjusts node positions in a force-directed, arrow-like layout.
Fit: Fits all of the nodes onto the canvas.
Selection settings
Group: Select multiple nodes and click Group to work with them as a collection.
Ungroup: If nodes have been grouped together, you can split them apart so they can be worked with individually.
Removal settings
Crop: Removes every element that is not selected.
Delete: Deletes the currently selected nodes. Delete all clears the graph completely.
Action settings
Expand: Expands the currently selected nodes, so that you can view their relations to other entities.
Node position settings: You can change the default policy to move all nodes on expansion, and choose to either move only specific nodes or to fix the position of all nodes by default.
Expand by relation: Perform an expansion that is determined by the relations that are selected in the Expansion tab in the sidebar.
Filter: Adds a filter based on the nodes that you select in the graph. This allows you to:
Do your investigation on the graph, select the nodes that you are interested in, activate a filter, pin it, and go back to the related dashboard to get more detailed information about those entities.
If you have other visualizations in the same dashboard, it will let you have more information on the selected nodes. For example, if the current dashboard is associated with a companies entity table, you can do your investigation in the graph, activate the filter, select some vertices and get the visualizations to show information on the selected vertices.
For more information, see Filtering data.
View settings
Map: Select this button to switch the Map mode on or off. The Map mode will move the nodes geographically on an interactive map. You must set up a script to configure the geographic properties of the nodes and configure the Graph Browser to include the script type Add geo-locations for map visualization.
Heatmap: You can switch the heatmap on or off.
Time: Select this button to switch the Timebar mode on or off. This setting displays a time bar at the bottom of the screen that enables time-based filtering of nodes. You must set up a script to configure the time property of the nodes and configure the Graph Browser to include the script type Add time fields.
Show nodes without time field:
Time filter: Select or deselect the node types or dashboards that you want to be displayed in the time bar.
Options: The following additional options are available:
Highlight connected nodes: Makes the connections between nodes more visible.
Invert relations: Shows the inverse of the current relation. | https://docs.siren.solutions/siren-platform-user-guide/12.1/graph-browser/t_using_toolbar.html | 2022-09-25T04:45:12 | CC-MAIN-2022-40 | 1664030334514.38 | [array(['_images/graph-browser-toolbar-1.png',
'File settings in the toolbar'], dtype=object)
array(['_images/graph-browser-toolbar-2.png',
'Layout settings in the toolbar'], dtype=object)
array(['_images/graph-browser-toolbar-3.png',
'Selection settings in the toolbar'], dtype=object)
array(['_images/graph-browser-toolbar-4.png',
'Removal settings in the toolbar'], dtype=object)
array(['_images/graph-browser-toolbar-5.png',
'Action settings in the toolbar'], dtype=object)
array(['_images/graph-browser-toolbar-6.png',
'View settings in the toolbar'], dtype=object)] | docs.siren.solutions |
Need help fixing your forum?
0 Members and 2 Guests are viewing this board.
Replies: 0
Views: 664
Replies: 6
Views: 2,021
Replies: 0
Views: 2,639
Replies: 0
Views: 301
Replies: 0
Views: 187
Replies: 6
Views: 1,317
Replies: 3
Views: 915
Replies: 2
Views: 727
Replies: 15
Views: 1,597
Replies: 25
Views: 2,137
Replies: 7
Views: 1,080
Replies: 9
Views: 932
Replies: 3
Views: 723
Replies: 12
Views: 1,236
Replies: 6
Views: 1,367
Replies: 4
Views: 1,168
Replies: 6
Views: 1,320
Replies: 3
Views: 907
Replies: 5
Views: 1,801
Replies: 11
Views: 2,883
Poll
Moved Topic
Locked Topic
Sticky Topic
Topic you are watching
Page created in 0.076 seconds with 12 queries. | https://www.docskillz.com/docs/index.php?PHPSESSID=9a73a91bc8a903bf264d0b3d6f514af3&board=35.0;wap2 | 2022-09-25T05:13:32 | CC-MAIN-2022-40 | 1664030334514.38 | [] | www.docskillz.com |
By default Swift Performance will bypass caching for POST requests and all GET requests where the query string is not empty.
However there are some special situations, where cache shouldn’t be bypassed, even if there is a dynamic parameter in the GET request.
For example every visit from Facebook, or Google Ads will contain a dynamic parameter (fbclid, and gclid), however the server should serve the cached version of the page.
The following GET parameters are ignored by default:
utm_source, utm_campaign, utm_medium, utm_expid, utm_term, utm_content, fb_action_ids, fb_action_types, fb_source, fbclid, _ga, gclid, age-verified | https://docs.swiftperformance.io/knowledgebase/ignore-get-params/ | 2022-09-25T05:04:48 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.swiftperformance.io |
Upgrading DCE¶
Upgrading to latest version 2.0 is simple, when you are at least on TYPO3 8.7 and DCE 1.5.
DCE provides some upgrade wizards in install tool of TYPO3, which pop up when necessary.
With composer¶
Just change your requirements section to
"t3/dce": "^2.7"
and perform
composer update.
Then go to TYPO3 Install Tool and check (and perform) the upgrade wizards and database compare!
Without composer¶
Because DCE 2.0 changed namespaces an update may occure error messages, like:
Fatal error: Class 'T3\Dce\ViewHelpers\ArrayGetIndexViewHelper' not found
When you already have this error, you can simply delete the
typo3conf/autoload folder. TYPO3 will recreate it.
To avoid this error before it happens, perform these steps:
Uninstall DCE in extension manager
Perform update (manual upload or TER update)
Reinstall DCE
Go to install tool and perform the upgrade wizards and database compare | https://docs.typo3.org/p/t3/dce/main/en-us/AdministratorManual/Upgrading.html | 2022-09-25T04:40:07 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.typo3.org |
Planning and Prerequisites
Overview
AppsAnywhere is an app store-style platform that gives your students and staff easy access to the software they need, on and off-campus, including BYOD.
Multiple delivery methods are supported, so you can define the best way to deliver individual apps to your end-users, based on the device they are using and any app license restrictions.
This section of our documentation is designed to provide all the details you might need when preparing to deploy AppsAnywhere.
Deployment
During new customer on-boarding, the AppsAnywhere team will discuss your specific requirements and provide a tailored AppsAnywhere Deployment Guide.
A Technical Call will then be arranged to answer any questions. Support is available from a dedicated consultant throughout your initial deployment.
To avoid additional effort, please do not deploy any servers or infrastructure until after you have received your Deployment Guide and discussed it with your Technical Consultant.
Infrastructure Components
A typical AppsAnywhere infrastructure will include servers and services as shown in Infrastructure Diagrams .
All AppsAnywhere and Cloudpaging infrastructure can be provisioned in the AppsAnywhere hosted solution.
Parallels RAS must be hosted on customer domain joined devices. See Parallels Remote Application Server (RAS) deployment documentation for more information.
AppsAnywhere
AppsAnywhere provide AppsAnywhere servers as a virtual appliance, ready to import to your chosen hypervisor.
End-users will connect to your AppsAnywhere Portal to access and launch applications.
These servers also provide the Admin Portal where your app and system administrators can configure settings and deploy applications.
Please see the Server Requirements section for more details.
Network and Load Balancing
End-users will need to be able to connect to your AppsAnywhere Portal both on and off-site, from any location.
Unless you are configuring a single test instance, you will likely require multiple AppsAnywhere Servers for load balancing, fault tolerance, and to facilitate future upgrades.
Please see the Connectivity Requirements section for more details.
Directory
A connection to Active Directory (LDAPS) is required to verify which apps users can access, and as the initial login authentication method.
Once an LDAPS connection has been configured, additional authentication and SSO methods can also be supported.
Please see the Directory Requirements section for more details.
Database
A Microsoft SQL database is required by AppsAnywhere to store configuration details for the applications and delivery methods, as well as usage records.
Please see the Database Requirements section for more details.
Test Machine
A domain joined machine is required for the AppsAnywhere team to test AppsAnywhere, in particular that single sign on functionality is operational.
This can also be used as a management machine for Remote Access for AppsAnywhere Support. | https://docs.appsanywhere.com/appsanywhere/2.12/planning-and-prerequisites | 2022-09-25T06:00:05 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.appsanywhere.com |
User Authentication
Access {sgw} securely to sync from cloud to edge
This content explains how to implement user authentication in sync gateway
Related Security topics: TLS Certificate Authentication | Sync Function | Import filter | Read Access | Write Access
Introduction._1<<
Given a user that has already been created, a new session for that user can be created on the Admin POST /\{db}/_session endpoint.
$ cookie’s expiration time. The endpoint’s API reference contains more information about how the expiration time is automatically extended according to the user session activity..
Unresolved directive in authentication-users.adoc - include::{root-partials}blocklinks-cbl.adoc[]
/\{tkn.
Note: This example is based on a deployment using legacy configuration.
curl --location --request PUT '' \ --header 'accept: application/json' \ --header 'Content-Type: application/json' \ --data-raw '{ oidc: { providers: { google_implicit: { issuer:, client_id:yourclientid-uso.apps.googleusercontent.com, register:true (1) }, }, } }'
Here is a sample sync gateway config file, configured to use the Implicit Flow.
{ "databases": { "default": { "name": "dbname", "bucket": "default", "oidc": { "providers": { "google_implicit": { "issuer":"", "client_id":"yourclientid-uso.apps.googleusercontent.com", "register":true (1) }, }, } } } }
/\{tkn. | https://docs.couchbase.com/sync-gateway/current/authentication-users.html | 2022-09-25T05:34:06 | CC-MAIN-2022-40 | 1664030334514.38 | [array(['_images/static-auth-provider.png', 'static auth provider'],
dtype=object)
array(['_images/custom-auth-flow.png', 'custom auth flow'], dtype=object)
array(['_images/client-auth.png', 'client auth'], dtype=object)] | docs.couchbase.com |
Device Managerand delete the existing drivers if they are already installed. Then plug the flight controller into the USB port. You'll see a message that the new device has been discovered:
Device Managerand set the correct driver. When you open
Device Manageryou'll see SmartAP as an unknown device.
SET DEVMGR_SHOW_NONPRESENT_DEVICES=1
devmgmt.mscinto the open device manager
View>
Show hidden devices
Update Device Driver.
Install this driver software anyway
Device Manageryou'll see that the driver is now installed successfully: | https://docs.sky-drones.com/avionics/legacy-autopilots/drivers | 2022-09-25T04:11:43 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.sky-drones.com |
The Stingray Modelling Interface¶
Stingray provides a custom-built fitting interface, built on top of scipy and emcee as well as a set of general functions and classes that allow the user to perform standard model fitting tasks on Fourier products, but also enable users to implement their own models and classes based on this framework.
Below, we show on some examples how this interface can be used. | https://docs.stingray.science/modeling.html | 2022-09-25T04:00:04 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.stingray.science |
Configuration¶
NOTE that no static files will be created when you are logged into the backend in the same browser session as you are doing the frontend requests with (unless you Ctrl Shift Reload).
Some browsers keep the be_typo_user cookie even when you have logged out of the backend. The cookie will only disappear when you completely close the browser session. Some browsers even store session cookies between stopping and restarting a browser. In such a case you will need to remove the cookies by hand and then restart the browser.
I recommend to use two browsers when testing this extension. One for browsing the frontend and one for configuring the extension in TYPO3 (backend). | https://docs.typo3.org/p/lochmueller/staticfilecache/main/en-us/Configuration/Index.html | 2022-09-25T05:20:45 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.typo3.org |
containerd
Docker 18.09 and up ship with
containerd, so you should not need to install it manually. If you do not have
containerd, you may install it by running the following:
# Install containerd apt-get update && apt-get install -y containerd.io # Configure containerd mkdir -p /etc/containerd containerd config default > /etc/containerd/config.toml # Restart containerd systemctl restart containerd
When using
containerd shipped with Docker, the cri plugin is disabled by default. You will need to update
containerd’s configuration to enable KubeEdge to use
containerd as its runtime:
# Configure containerd mkdir -p /etc/containerd containerd config default > /etc/containerd/config.toml
Update the
edgecore config file
edgecore.yaml, specifying the following parameters for the
containerd-based runtime:
remoteRuntimeEndpoint: unix:///var/run/containerd/containerd.sock remoteImageEndpoint: unix:///var/run/containerd/containerd.sock runtimeRequestTimeout: 2 podSandboxImage: k8s.gcr.io/pause:3.2 runtimeType: remote
By default, the cgroup driver of cri is configured as
cgroupfs. If this is not the case, you can switch to
systemd manually in
edgecore.yaml:
modules: edged: cgroupDriver: systemd
Set
systemd_cgroup to
true in
containerd’s configuration file (/etc/containerd/config.toml), and then restart
containerd:
# /etc/containerd/config.toml systemd_cgroup = true
# Restart containerd systemctl restart containerd
Create the
nginx application and check that the container is created with
containerd on the edge side:
kubectl apply -f $GOPATH/src/github.com/kubeedge/kubeedge/build/deployment.yaml deployment.apps/nginx-deployment created ctr --namespace=k8s.io container ls CONTAINER IMAGE RUNTIME 41c1a07fe7bf7425094a9b3be285c312127961c158f30fc308fd6a3b7376eab2 docker.io/library/nginx:1.15.12 io.containerd.runtime.v1.linux
NOTE: since cri doesn’t support multi-tenancy while
containerd does, the namespace for containers are set to “k8s.io” by default. There is not a way to change that until support in cri has been implemented.
CRI-O
Follow the CRI-O install guide to setup CRI-O.
If your edge node is running on the ARM platform and your distro is ubuntu18.04, you might need to build the binaries form source and then install, since CRI-O packages are not available in the Kubic repository for this combination.
git clone cd cri-o make sudo make install # generate and install configuration files sudo make install.config
Set up CNI networking by following this guide: setup CNI.
Update the edgecore config file, specifying the following parameters for the
CRI-O-based runtime:
remoteRuntimeEndpoint: unix:///var/run/crio/crio.sock remoteImageEndpoint: unix:////var/run/crio/crio.sock runtimeRequestTimeout: 2 podSandboxImage: k8s.gcr.io/pause:3.2 runtimeType: remote
By default,
CRI-O uses
cgroupfs as a cgroup driver manager. If you want to switch to
systemd instead, update the
CRI-O config file (/etc/crio/crio.conf.d/00-default.conf):
# Cgroup management implementation used for the runtime. cgroup_manager = "systemd"
NOTE: the
pause image should be updated if you are on ARM platform and the
pause image you are using is not a multi-arch image. To set the pause image, update the
CRI-O config file:
pause_image = "k8s.gcr.io/pause-arm64:3.1"
Remember to update
edgecore.yaml as well for your cgroup driver manager:
modules: edged: cgroupDriver: systemd
Start
CRI-O and
edgecore services (assume both services are taken care of by
systemd),
sudo systemctl daemon-reload sudo systemctl enable crio sudo systemctl start crio sudo systemctl start edgecore
Create the application and check that the container is created with
CRI-O on the edge side:
kubectl apply -f $GOPATH/src/github.com/kubeedge/kubeedge/build/deployment.yaml deployment.apps/nginx-deployment created # crictl ps CONTAINER ID IMAGE CREATED STATE NAME ATTEMPT POD ID 41c1a07fe7bf7 f6d22dec9931b 2 days ago Running nginx 0 51f727498b06f
Kata Containers
Kata Containers is a container runtime created to address security challenges in the multi-tenant, untrusted cloud environment. However, multi-tenancy support is still in KubeEdge’s backlog. If you have a downstream customized KubeEdge which supports multi-tenancy already then Kata Containers is a good option for a lightweight and secure container runtime.
Follow the install guide to install and configure containerd and Kata Containers.
If you have “kata-runtime” installed, run this command to check if your host system can run and create a Kata Container:
kata-runtime kata-check
RuntimeClass is a feature for selecting the container runtime configuration to use to run a pod’s containers that is supported since
containerd v1.2.0. If your
containerd version is later than v1.2.0, you have two choices to configure
containerd to use Kata Containers:
- Kata Containers as a RuntimeClass
- Kata Containers as a runtime for untrusted workloads
Suppose you have configured Kata Containers as the runtime for untrusted workloads. In order to verify whether it works on your edge node, you can run:
cat nginx-untrusted.yaml apiVersion: v1 kind: Pod metadata: name: nginx-untrusted annotations: io.kubernetes.cri.untrusted-workload: "true" spec: containers: - name: nginx image: nginx
kubectl create -f nginx-untrusted.yaml # verify the container is running with qemu hypervisor on edge side, ps aux | grep qemu root 3941 3.0 1.0 2971576 174648 ? Sl 17:38 0:02 /usr/bin/qemu-system-aarch64 crictl pods POD ID CREATED STATE NAME NAMESPACE ATTEMPT b1c0911644cb9 About a minute ago Ready nginx-untrusted default 0
Virtlet
Make sure no libvirt is running on the worker nodes.
Steps
Install CNI plugin:
Download CNI plugin release and extract it:
$ wget # Extract the tarball $ mkdir cni $ tar -zxvf v0.2.0.tar.gz -C cni $ mkdir -p /opt/cni/bin $ cp ./cni/* /opt/cni/bin/
Configure CNI plugin:
$ mkdir -p /etc/cni/net.d/ $ cat >/etc/cni/net.d/bridge.conf <<EOF { "cniVersion": "0.3.1", "name": "containerd-net", "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true, "ipam": { "type": "host-local", "subnet": "10.88.0.0/16", "routes": [ { "dst": "0.0.0.0/0" } ] } } EOF
Setup VM runtime: Use the script
hack/setup-vmruntime.shto set up a VM runtime. It makes use of the Arktos Runtime release to start three containers:
vmruntime_vms vmruntime_libvirt vmruntime_virtlet | https://v1-4.docs.kubeedge.io/zh/docs/advanced/cri/ | 2022-09-25T06:06:13 | CC-MAIN-2022-40 | 1664030334514.38 | [] | v1-4.docs.kubeedge.io |
At this point we assume that you have fulfilled plugin settings and have enabled at least one provider. If you have not done so, please follow the Setting Up the Price Comparer Plugin tutorial.
1. Creating Table
To create a price comparision table go to WordPress admin panel, then from left menu choose “Price Tables -> Add New”. A new page with form and settings will be opened.
At the very top you can give some name to this table eg. “Watches” if we going to create a comparision table with watches. This name is only for you and will not be visible on the frontend.
In the next field you can enter some description of table, it will be displayed above the table. You can use this field to to enter review of product or any other details. This field is optional so you can easily leave it blank.
2. Table Details
- Assign to Post – Allows you to enter the ID of post under which this table should be displayed.
- Search for – Allows you to enter the comma separated list of words that product MUST contain.
- Exclude – Allows you to enter the comma separated list of words that product MUST NOT contain.
- Minimum Price – Allows you to enter the minimum price of product.
- Maximum Price – Allows you to enter the maximum price of product.
Table Details metabox contain settings that allows you to refine search of products available in each provider API. Only the “Search for” is a required field that you must fill.
Once you filled criterias of product refinement, click the “Save Draft” or “Publish” button, your settings will be saved, and in Providers metaboxes will be displayed products that match your criteria, as shown below:
Tick the checkbox on the left side of each product that you would like to add into price comparision table, once completed, click the “Save Draft” or “Publish” button to save your selection.
All selected products will be moved to the “Selected Products” metabox, as shown below:
Please note that you can change search criteria multiple times to find all products you wish to include into table.
3. Displaying Table
To get the newly created table displayed on the front end, you can do it in 2 ways:
- Shortcode – copy shortcode visible on “Edit Table” page, open some Post or Page for editing, and simply paste this shortcode into description field.
- Assign to Post option – on “Edit Table” page, in “Table Details” metabox, you find the “Assign to Post” option. Click the “Find Post” button, and in popup select post to which this table should be assigned.
4. Final Result
Like this tutorial? Subscribe and get the latest tutorials delivered straight to your inbox or feed reader. | https://docs.appthemes.com/tutorials/creating-a-price-comparision-table/ | 2017-03-23T00:16:39 | CC-MAIN-2017-13 | 1490218186530.52 | [] | docs.appthemes.com |
8. Advanced Topics
This chapter covers a number of advanced topics. If you're new to Icinga, you can safely skip over things you're not interested in.
8.1. Downtimes
Downtimes can be scheduled for planned server maintenance or any other targeted service outage you are aware of in advance.
Downtimes will suppress any notifications, and may trigger other downtimes too. If the downtime was set by accident, or the duration exceeds the maintenance, you can manually cancel the downtime. Planned downtimes will also be taken into account for SLA reporting tools calculating the SLAs based on the state and downtime history..
8.1.1. Fixed and Flexible Downtimes!).
8.1.2. Scheduling a downtime
You can schedule a downtime either by using the Icinga 2 API action schedule-downtime or by sending an external command.
8.1.2.1. Fixed Downtime
If the host/service changes into a NOT-OK state between the start and
end time window, the downtime will be marked as
in effect and
increases the downtime depth counter.
| | | start | end trigger time
8.1.2.2. Flexible Downtime
8.1.3. Triggered Downtimes.
8.1.4. Recurring Downtimes
ScheduledDowntime objects can be used to set up recurring downtimes for services.
Example:
apply ScheduledDowntime "backup-downtime" to Service {" } assign where "backup" in service.groups }
8.2. Comments.
8.3. Acknowledgements.
8.3.1. Sticky Acknowledgements
The acknowledgement is removed if a state change occurs or if the host/service recovers (OK/Up state).
If you acknowlege a problem once you've received a
Critical notification,
the acknowledgement will be removed if there is a state transition to
Warning.
OK -> WARNING -> CRITICAL -> WARNING -> OK
If you prefer to keep the acknowledgement until the problem is resolved (
OK
recovery) you need to enable the
sticky parameter.
8.3.2. Expiring Acknowledgements.
8.4. Time Periods.
Note
If you are familiar with Icinga 1.x, these time period definitions are called
legacy timeperiodsin Icinga 2.
An Icinga 2 legacy timeperiod requires the
ITLprovided template
legacy-timeperiod." { import "legacy-timeperiod"" { import "legacy-timeperiod" display_name = "Icinga 2 8x5 TimePeriod" ranges = { "monday" = "09:00-17:00" "tuesday" = "09:00-17:00" "wednesday" = "09:00-17:00" "thursday" = "09:00-17:00" "friday" = "09:00-17:00" } }
Furthermore if you wish to specify a notification period across midnight, you can define it the following way:
object Timeperiod "across-midnight" { import "legacy-timeperiod"" { import "legacy-timeperiod" display_name = "Standby" ranges = { "2016-09-30 - 2016-10-30" = "00:00-24:00" } }
Please note that the spaces before and after the dash are mandatory.
Once your time period is configured you can Use the
period attribute
to assign time periods to
Notification and
Dependency objects:
object Notification "mail" { import "generic-notification" host_name = "localhost" command = "mail-notification" users = [ "icingaadmin" ] period = "workhours" }
8.4.1. Time Periods Inclusion and Exclusion supressed:) } }
In addition to that the time period
weekends defines an additional
time window which should be excluded from notifications:
object TimePeriod "weekends-excluded" { import "legacy-timeperiod" ranges = { "saturday" = "00:00-09:00,18:00-24:00" "sunday" = "00:00-09:00,18:00-24:00" } }
The time period
prod-notification defines the default time ranges
and adds the excluded time period names as an array." } }
8.5. Advanced Use of Apply Rules.
8.6. Use Functions in Object Configuration.
8.6.1. Use Functions in Command Arguments set_if } }} }
8.6.2. Use Functions as Command Attribute = [ SysconfDir + "/icinga2/scripts/" + mailscript ] log(LogCritical, "me", cmd) return cmd }} env = { } }
8.6.3. Use Custom Functions as Attribute }
8.6.4. Use Functions in Assign Where Expressions") }
8.7. Access Object Attributes at Runtime
The Object Accessor Functions can be used to retrieve references to other objects by name.
This allows you to access configuration and runtime object attributes. A detailed list can be found here. }} }
The following example sets time dependent thresholds for the load check based on the current time of the day compared to the defined time period.
object TimePeriod "backup" { import "legacy-timeperiod" } }} }
8.8. Check Result Freshness
In Icinga 2 active check freshness is enabled by default. It is determined by the
check_interval attribute and no incoming check results in that period of time.
threshold = last check execution time + check interval
Passive check freshness is calculated from the
check_interval attribute if set.
threshold = last check result time + check interval
If the freshness checks are invalid, a new check is executed defined by the
check_command attribute.
8.9. Check Flapping
The flapping algorithm used in Icinga 2 does not store the past states but
calculates the flapping threshold from a single value based on counters and
half-life values. Icinga 2 compares the value with a single flapping threshold
configuration attribute named
flapping_threshold.
Flapping detection can be enabled or disabled using the
enable_flapping attribute.
8.10. Volatile Services
By default all services remain in a non-volatile state. When a problem
occurs, the
SOFT state applies and once
max_check_attempts attribute
is reached with the check counter, a
HARD state transition happens.
Notifications are only triggered by
HARD state changes and are then
re-sent defined by the
interval attribute.
It may be reasonable to have a volatile service which stays in a
HARD
state type if the service stays in a
NOT-OK state. That way each
service recheck will automatically trigger a notification unless the
service is acknowledged or in a scheduled downtime. | https://docs.icinga.com/icinga2/latest/doc/module/icinga2/chapter/advanced-topics | 2017-03-23T00:13:38 | CC-MAIN-2017-13 | 1490218186530.52 | [] | docs.icinga.com |
In addition to using plugins, some customers like to further customize their theme. This tutorial explains the correct way to add modifications to your AppThemes theme (or any theme).
The Wrong Way
Editing the theme files and making changes there. What’s wrong with that? Well, the next time you update your theme to a new version, ALL your custom code changes are blown away. What a waste!
The Right Way
Create a theme-specific plugin and put all your customizations in there. That way, your changes aren’t overwritten on theme updates. It’s also much easier and less intimidating than creating a child theme and it’s only one file to manage.
Ok, now that you understand the logic behind this, let’s get started!
Create Your Plugin
Put on your developer’s hat and let’s get started. In this example, we’re going to write a custom plugin for the Vantage theme.
- Open your file explorer and navigate to your website directory (e.g.
/www/vantage/wp-content/plugins/)
- Create a new directory called,
vantage-custom-code
- Go into that new directory and create a file called,
vantage-custom-code-plugin.php
- Edit that new file in any text editor and paste in the below template code
- Change the author and author uri to your information
Now, any new functions you wish to add should now be placed in here! Well, that’s great but what sort of things could I change?
You can override different styles, existing functions, or add a completely new function.
Here’s a simple example of a custom function which should get you started. It removes the WordPress version number from your website’s source code which can deter hackers from trying to penetrate your site (e.g. if a specific WordPress version has a vulnerability, they can target you).
Here’s the final result of what your plugin file should look like.
That’s it! Now save your file and upload it to your website (if you haven’t already). The next time you visit your site, view your source code and you should no longer see the following entry:
Like this tutorial? Subscribe and get the latest tutorials delivered straight to your inbox or feed reader. | https://docs.appthemes.com/tutorials/wordpress-theme-specific-plugin/ | 2017-03-23T00:16:05 | CC-MAIN-2017-13 | 1490218186530.52 | [] | docs.appthemes.com |
Versions
Build a version
Description
NEM Community Client (NCC)
Build Status
NCC is the initial client provided with NEM. It provides a web interface for managing wallets and interacting with the NEM Infrastructure Server (NIS).
NCC Packages
There are two NCC maven packages:
nem-client-api: Contains all NCC functionality as well as the web UX. nem-monitor: Monitors NCC and local NIS, provides visual feedback on actual status of those apps. There two more packages
nem-client-download: Used by WebStart to download NCC and NIS. (deprecated, switched to installer version) nem-console: A command-line tool providing utility functionality. Building
nem.core is required to build NCC. Most recent version, can be found here nem.core documentation can be found here
Running NCC Locally
In order to run the client with full functionality, a NEM Infrastructure Server (NIS) instance should be running on the local machine.
The NCC client can be started by running the org.nem.deploy.CommonStarter class.
The monitor programm is started via org.nem.monitor.NemMonitor
NCC REST API
The NCC API is available as a swagger.json file here.
A rendered version is available here.
(The deprecated version of the NCC REST API can be found here).
Generating JavaDoc Documentation
The javadoc documentation can be created via the maven goal "javadoc:javadoc" on the project "nem-client-api".
Pull Requests
NCC is fully open-sourced and looking for contributors. Please take a fork and add a feature :).
The NEM core development team will be managing pull requests into master. Please try to follow the guidelines outlined here.
Coding Guidelines
Please use the intellij settings checked in under settings/nem_project_settings.jar. A non-comprehensive list of style guidelines follow, but the checked in settings should take precedence.
Member Naming
Use lowerCamelCase. Prefix booleans with "is" / "has" / "are". Precede access of instance members with "this.". Camel case acronyms at least three letters (i.e. prefer "Nis" to "NIS"). Braces
Always use braces (even for single line statement bodies). Follow '}' with a blank line. Do not precede '}' with a blank line. Imports
Wildcard import package if more than one class is used from a package. Sort imports alphabetically. Documentation
Document all public and protected members. Getter documentation should start with "Gets". All documentation should start with capital letter and end with period (for members documentation too). Unit Tests
Try to avoid testing composite classes. Use Act / Arrange / Assert. Other
Use the final keyword aggressively. Avoid the use of trailing whitespace. Keep functions short and understandable :). Do not introduce consecutive blank lines.
Repository
Last Built
2 days ago passed
Owners
Badge
reStructuredText
.. image:: :target: :alt: Documentation Status
Markdown
[]()
HTML
<a href=''> <img src='' alt='Documentation Status' /> </a>
nemapi, nis, blockchain, nisapi, nem
Project Privacy Level
Public
Short URLs
nem-core-api.readthedocs.io
nem-core-api.rtfd.io
Default Version
latest
'latest' Version
master | https://readthedocs.org/projects/nem-core-api/ | 2017-03-23T00:17:42 | CC-MAIN-2017-13 | 1490218186530.52 | [array(['https://readthedocs.org/projects/nem-core-api/badge/?version=latest',
'Documentation Status'], dtype=object) ] | readthedocs.org |
Options for Managing Tags
Note: Any Tag changes will apply to ALL Tags across ALL tasks in your current Space.
In Task View, access the Tag editor by clicking the Tags icon. Click on the ellipses (...) that appears on the top right of each existing Tag and select from the following options:
- Delete: Deletes the Tag from all tasks in your Space.
- Rename
- Change color
In order to remove a Tag from the current task only (not all tasks), click the x to the right of the Tag name.
You can also make these changes by hovering over tasks in List and Board Views!
Tags are unique to each Space, but Tags with the same string/name will be treated as the same Tag when filtering and viewing all Spaces. Just click the filter button in List, Board, or Calendar View then, click "Tags" and enter the Tags you'd like to filter for! | https://docs.clickup.com/en/articles/1280247-how-do-i-manage-tags | 2019-12-06T02:20:38 | CC-MAIN-2019-51 | 1575540482954.0 | [] | docs.clickup.com |
The ASPxPanel control represents a container area for other controls. It is useful when you wish to generate controls programmatically, hide/show a group of controls, etc.
The ASPxPanel control can hide its content and expose it only when a user clicks the expand button.
To enable this behavior, set the ASPxCollapsiblePanel.Collapsible property to true.
You can specify the panel control's adaptive behavior using the ASPxCollapsiblePanel.SettingsAdaptivity property. It provides the following settings: | https://docs.devexpress.com/AspNet/14778/aspnet-webforms-controls/site-navigation-and-layout/panel | 2019-12-06T01:25:51 | CC-MAIN-2019-51 | 1575540482954.0 | [] | docs.devexpress.com |
Our theme follows standards of woocommerce and we use recommended functions of woocommerce. But sometimes it’s not enough, so, we extend them.
By default, woocommerce set own image sizes for all images in shop. Basically, it has 3 size: Single image on product page (not cropped, full width), image in product loops (cropped, 270 px width) and small gallery images (100px width).
The most common problem is images in product loops (thumbnail images), because each site can have absolutely different image ratio, preferable sizes of images and quality. So, if everything is ok on your site – leave it as is. But, you can change next things:
Crop. You can disable it. For this, go to Customize – Woocommerce – Product images and disable crop or set another ratio for crop
Making custom ratio can help also to make full width image in grid
Size. If changing crop is not enough for you, it’s possible to set own size for images in product grid. For this, go to Theme option – Shop settings – Custom size for loop images and set own size. Please, note, that you must set width and height and “-” between. Example: 300-250
External image plugin fix
Some our clients use plugins for external image urls for featured image. From first look, this can be good idea, but problem is that wordpress can’t resize or optimize such images. This can slow down site and also images can look not equal.
This css fix can help
Add to theme option – general – custom css
.woocommerce .products .product figure img{max-height:160px; width:auto} | http://rehubdocs.wpsoul.com/docs/rehub-theme/shop-options-woo-edd/image-sizes-for-shop-pages/ | 2019-12-06T02:01:42 | CC-MAIN-2019-51 | 1575540482954.0 | [array(['http://rehubdocs.wpsoul.com/wp-content/uploads/2018/02/customize.png',
None], dtype=object) ] | rehubdocs.wpsoul.com |
This doc explains features in ClickUp 2.0. Looking for a 1.0 doc? Check out this link!
When you create a new Workspace, you'll be prompted to make a choice during the initial setup:
What's the difference between these two options?
Me & others
- Supports assigning tasks to other Workplace members
- Supports more advanced collaboration
- All Workplace members will be visible from the team pop-out
It's just me
- New tasks will be automatically assigned to you
- You will not be able to assign others to tasks
- Great for single or personal people
Need to switch from a personal to a normal Workplace?
Navigate to your Settings page by clicking on your profile avatar. Then, flip the toggle to switch from a personal to a normal Workplace.
Note that you will only be able to make this change if you are the sole person on this Workplace!
| https://docs.clickup.com/en/articles/2252493-what-s-the-difference-between-personal-mode-and-team-mode-in-clickup | 2019-12-06T02:21:46 | CC-MAIN-2019-51 | 1575540482954.0 | [array(['https://downloads.intercomcdn.com/i/o/142306276/b45c520cf0e2d04438ab5c1b/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/142306395/e5fde64051755cc931cac464/image.png',
None], dtype=object) ] | docs.clickup.com |
What is "Cart Abandon Rate"?
Cart Abandon Rate is the percentage of your customers who add something to their cart, but do not check out. A cart abandon rate of 49% means that if 100 unique customers add a product to their cart, 49 of them will leave with buying anything, and 51 of them will complete checkout. | https://docs.recapture.io/article/12-what-is-cart-abandon-rate | 2019-12-06T00:54:11 | CC-MAIN-2019-51 | 1575540482954.0 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55fb1fb2c697913bc9927e29/images/5601d1ecc6979126e5ae18c7/file-j0R6vns2kM.jpg',
None], dtype=object) ] | docs.recapture.io |
API Gateway 7.5.3 Installation Guide Save PDF Selected topic Selected topic and subtopics All content Install the API Gateway server The API Gateway server is the main runtime environment consisting of an API Gateway instance and a Node Manager. For more details on API Gateway components and concepts, see the API Gateway Concepts Guide. Note It is not necessary to install the API Gateway server on the API Gateway appliance because this component is preinstalled on the appliance..5.3_Install_linux-x86-32_BN<n>.run --mode unattended --setup_type advanced --enable-components apigateway --disable-components nodemanager,qstart,analytics,policystudio, apitester,configurationstudio,apimgmt,cassandra,packagedeploytools --licenseFilePath mylicense.lic Before you start API Gateway Note Before you can start the the API Gateway manually, follow these steps: Open a command prompt in the following directory: Windows INSTALL_DIR\apigateway\Win32\bin UNIX/Linux INSTALL_DIR/apigateway/posix/bin Run the startinstance command, for example: startinstance -n "Server1" -g "Group1" Note On UNIX/Linux, you must ensure that the startinstance has execute permissions. To manage and monitor the API Gateway Analytics Install Policy Studio Install API Tester Install Configuration Studio Install API Manager Install the Admin Node Manager Related Links | https://docs.axway.com/bundle/APIGateway_753_InstallationGuide_allOS_en_HTML5/page/Content/InstallGuideTopics/install_gateway.htm | 2019-12-06T01:34:18 | CC-MAIN-2019-51 | 1575540482954.0 | [] | docs.axway.com |
Most ClickUp users prefer to spend their time in either List or Board View, so it's a good idea to be aware of the different features they provide to increase productivity!.
Additional Benefits of List View
- Task Sorting: Arrange your tasks according to due date, priority, assignee, and more.
- View Everything: Get a high-level perspective of every task within your Workspace, and then drill-down to see exactly what you need.
- Make changes in bulk: With the multitask toolbar, you can quickly select multiple tasks to assign them, change their status, delete them, and more!
Board View
This is the ideal view for agile teams due to its powerful drag and drop interface.
Tasks in Board View are arranged vertically according to status, assignee, tags, due date, or priority— it's up to you to decide!
As in List View, you have the option to view all tasks within a Space, or even view everything within your Workspace.
You can even drag and drop tasks between columns to make quick changes!
Additional Benefits of Board View
- Quick-create Tasks: Click the
+symbol at the top of a column to quickly add a new task.
- Add a Cover Image: Pin an attached image within a task so it will be displayed when seeing the task in Board View.
- Make changes in bulk: You can also use the Multitask Toolbar in Board View to make quick changes to groups of tasks!
Viewing Folders in List and Board Views
List View
List View breaks down Folders by their individual lists in a vertical manner, making sure you can correlate tasks to specific objectives within a List.
Board View
Board View places all tasks in vertical columns based on the grouping you apply.
Be sure to check out Box View for great insight into what each member of your Workspace is working on! | https://docs.clickup.com/en/articles/1234668-list-view-vs-board-view | 2019-12-06T02:18:53 | CC-MAIN-2019-51 | 1575540482954.0 | [array(['https://downloads.intercomcdn.com/i/o/142569196/e74ebc4901b319b4266f7eb4/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/64460910/e466fea7d21e91b5ab7206dc/Board_View.gif',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/142570598/cd4959d78af6e4c0b81e5da8/image.png',
None], dtype=object) ] | docs.clickup.com |
Beginning with LifeKeeper 8.0, logging is done through the standard syslog facility. LifeKeeper supports three syslog implementations: standard syslog, rsyslog, and syslog-ng. During package installation, syslog will be configured to use the “local6” facility for all LifeKeeper log messages. The syslog configuration file (for example, instead..
Post your comment on this topic. | http://docs.us.sios.com/spslinux/9.4.0/en/topic/logging-with-syslog | 2019-12-06T00:13:00 | CC-MAIN-2019-51 | 1575540482954.0 | [] | docs.us.sios.com |
New Features
View All Spaces List View (all tasks for an entire team)
- Sort and filter every single task in your team!
- Excellent for seeing exactly what needs to be done next.
Edit tasks without leaving List and Board views
- Rename, copy, merge, delete, and more without opening the task
- Just right-click on any task or click the ellipses menu
Drag and Drop tasks to the sidebar
- Now you can move tasks to different Projects and Lists by simply dragging and dropping them into the Sidebar
- Available in List and Board View
Other improvements
- ClickUp has an all-new purple color! We hope you enjoy :)
- Specific Start and Due Times are now displayed in List and Board View
- Quick task creation task gives you the option to create one or multiple tasks (or subtasks) when pasting multiple lines in List view
- Use the
esckey to close tasks and other windows
- Sorting by assignees now splits tasks for each assignee into groups | https://docs.clickup.com/en/articles/1388851-release-1-22 | 2019-12-06T02:19:59 | CC-MAIN-2019-51 | 1575540482954.0 | [array(['https://downloads.intercomcdn.com/i/o/42672163/a0764ca0a1f3a829eee23769/list-view-all.gif',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/42675546/f0ab4d6a6f84f009d84134b4/list-task-management.gif',
None], dtype=object) ] | docs.clickup.com |
Profiles are so freaking cool! We invented Profiles to give you unprecedented insight into:
- What people are working on
- What people should work on next
- What people did recently
- What tasks people have that aren't scheduled (in their backlog)
A major goal with Profiles (and the associated Inbox) is to solve the biggest problem in management...
PEOPLE FORGETTING THINGS!
Profiles give you a window into every person's responsibilities so you can add reminders, make adjustments, or see what they're working on next.
When you delegate reminders or tasks, you'll be able to track them and make sure they actually get done.
Opening a Person's Profile
Access a person's profile from anywhere in the app. This includes when you're selecting a task's assignee, reading through your notifications, or looking at task activity.
When you click on a person's name from anywhere in ClickUp, their Profile will expand from the right side of your screen.
Note: Everyone has a Profile— that means you can even see what your guests are up to!
How to Use Profiles
Each profiles gives you some incredibly cool features!
- View a person's description
- Editable by the person who's profile you're viewing or a Workplace admin
- A perfect place to add a role, location, or something clever :)
- Create reminders
- Creating a reminder for someone will add it to their Inbox when it's due (by default it's due 'Today')
- Note that you can only see Reminders on people's profiles that you personally delegated. You aren't able to see reminders people create for themselves, or reminders that other people delegate to them.
- Inbox: This is your work central. It's comprised of 3 areas: Inbox, Next, and Done.
- Inbox: Where people should generally work from as far as their priority is concerned. Inbox contains both tasks and reminders that start or are due today and earlier. It's also the perfect place to manage tasks in and outside of work by adding reminders.
- Next: Tasks and reminders that start or are due in the future
- Done: Tasks and reminders that have been completed (tasks in a 'done' status count as completed as well).
- Recent: Tasks that the person has recently worked on
- Created: tasks recently created
- Time Tracked: tasks where time was recently tracked
- Updated: tasks recently updated
- Unscheduled
- Tasks that have not yet been scheduled, but are assigned to the person
- These is your place, as a manager, to see what's in someone's backlog and move tasks accordingly into their Inbox (by scheduling a task) | https://docs.clickup.com/en/articles/2963916-profiles | 2019-12-06T02:21:09 | CC-MAIN-2019-51 | 1575540482954.0 | [array(['https://downloads.intercomcdn.com/i/o/120054624/ee04e88018313a367394539d/profile.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/123645779/36e3af59151a9bca56359ba9/accessing+profiles.gif',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/123645784/40b5e48e28f343773e935c44/creating+reminders+profiles.gif',
None], dtype=object) ] | docs.clickup.com |
Testing Furhat skills
Introduction
Testing a Furhat skill is more complex than one could think, which has to do with the fact that it's a multi-modal system and that real-time input and output it quite hard to simulate. That said, we try to make it easy for you to test the skills that you build, both when running locally on your developer machines with a Virtual Furhat and when running on a robot.
We recommend you to use the dashboard, explained below, to manually test and monitor your skills. For deployed runs, we have launched a log-viewer that allows you to track interactions remotely. See logging for more information.
The dashboard
There is an interactive dashboard available as a page on your web-interface (by default localhost:8080 on your SDK, or on the robot's IP if you run on a robot - see robot). This dashboard contains a few important components that will help you test and debug your skills;
- Camera feed (only on robots for now) allowing you to see what the robot sees.
- Situation model: a 2d representation (you can toggle between top or side) of the interaction space around the robot. Identified users will show up here, and move around as the camera detects them moving. The system will also highlight if users are speaking, and if so which user is attributed the speech.
- Interaction log: a chat-style message log of utterances spoken by the robot, by the users and important events such as users entering or leaving the interaction space. You will also be able to see what user was attributed each speech action here.
- A log-viewer, showing all your
logger.info()messages.
Logging
Testing NLU
Running through the skill testing various spoken utterances can be tedious, so it makes sense to create unit-tests for your NLU models. Since the NLU is context-dependent (i.e various intents are active depending on which state you are in), the best way is usually to use the
getIntentClassifier() method on a state instance to test specific utterances.
The below script shows how you can run the intent-classifier of a state
Active and classify input from command-line. You might want to create your own unit-tests in a similar fashion where you assert the truth of classification of specific utterances.
while(true) { val utterance = readLine() val results = Active().getIntentClassifier(lang = Language.ENGLISH_US).classify(utterance!!) if (results.isEmpty()) { println("No match") } else { results.forEach { println("Matched ${it.intents} with ${it.conf} confidence") } } } | https://docs.furhat.io/testing/ | 2019-12-06T00:33:42 | CC-MAIN-2019-51 | 1575540482954.0 | [] | docs.furhat.io |
See the documentation for the log crate for more information about its API.
Enabling logging
Log levels are controlled on a per-module basis, and by default all logging
is disabled except for
error!. Logging is controlled via the
RUST_LOG
environment variable. The value of this environment variable is a
comma-separated list of logging directives. A logging directive is of the
form:
path::to::module=log_level
The path to the module is rooted in the name of the crate it was compiled
for, so if your program is contained in a file
hello.rs, for example, to
turn on logging for this file you would use a value of
RUST_LOG=hello.
Furthermore, this path is a prefix-search, so all modules nested in the
specified module will also have logging enabled.
The actual_level is provided, then the global log
level for all modules is set to this value.
Some examples of valid values of
RUST_LOG are:
helloturns on all logging for the 'hello' module
infoturns on all info logging
hello=debugturns on debug logging for 'hello'
hello,std::optionturns on hello, and std's option logging
error,hello=warnturn on global error logging and also warn for hello
Filtering results
A RUST_LOG directive may include a regex filter. The syntax is to append
/
followed by a regex. Each message is checked against the regex, and is only
logged if it matches. Note that the matching is done after formatting the
log string but before adding any logging meta-data. There is a single filter
for all modules.
Some examples:
hello/footurns on all logging for the 'hello' module where the log message includes 'foo'.
info/f.oturns on all info logging where the log message includes 'foo', 'f1o', 'fao', etc.
hello=debug/foo*footurns on debug logging for 'hello' where the log message includes 'foofoo' or 'fofoo' or 'fooooooofoo', etc.
error,hello=warn/[0-9] scopesturn on global error logging and also warn for hello. In both cases the log message must include a single digit number followed by 'scopes'. | https://docs.rs/crate/env_logger/0.4.3 | 2019-12-06T00:28:14 | CC-MAIN-2019-51 | 1575540482954.0 | [] | docs.rs |
TOPICS×
Segment IQ Overview
Analysts can spend many hours or even days searching for relevant differences between segments across your organization's metrics and dimensions. Not only is this analysis tedious and time consuming, you can never be sure if a segment's key difference was missed that could make a big impact to your targeted marketing efforts.
Many organizations have found success using features powered by Segment IQ. See Segment comparison use cases for real-world scenarios that have provided organizations valuable insight.
Features
Segment IQ comprises the following features:
- Segment comparison panel: The core feature in Segment IQ. Drag two segments into the panel, and view a comprehensive report that shows statistically significant differences and overlap between the two audiences.
- Comparing segments in fallout: See how different audiences compare to each other in context of a fallout visualization. | https://docs.adobe.com/content/help/en/analytics/analyze/analysis-workspace/segment-iq.html | 2019-12-06T00:38:15 | CC-MAIN-2019-51 | 1575540482954.0 | [] | docs.adobe.com |
maximum distance at which trees are rendered.
The higher this is, the further the distance trees can be seen and the slower it will run.
See Also: Terrain.treeBillboardDistance.
using UnityEngine;
public class Example : MonoBehaviour { void Start() { Terrain.activeTerrain.treeDistance = 2000; } }
Did you find this page useful? Please give it a rating: | https://docs.unity3d.com/ScriptReference/Terrain-treeDistance.html | 2019-12-06T02:33:53 | CC-MAIN-2019-51 | 1575540482954.0 | [] | docs.unity3d.com |
Forms¶
Detailed form API reference. For introductory material, see the Working with forms topic guide.
- The Forms API
- Bound and unbound forms
- Using forms to validate data
- Dynamic initial values
- Checking which form data has changed
- Accessing the fields from the form
- Accessing “clean” data
- Outputting forms as HTML
- More granular output
- Customizing
BoundField
- Binding uploaded files to a form
- Subclassing forms
- Prefixes for forms
- Form fields
- Model Form Functions
- Formset Functions
- The form rendering API
- Widgets
- Form and field validation | https://docs.djangoproject.com/en/dev/ref/forms/ | 2017-08-16T17:23:53 | CC-MAIN-2017-34 | 1502886102309.55 | [] | docs.djangoproject.com |
Migrating to the Amazon CloudSearch 2013-01-01 API
The Amazon CloudSearch 2013-01-01 API offers several new features, including support for multiple languages, highlighting search terms in the results, and getting suggestions. To use these features, you create and configure a new 2013-01-01 search domain, modify your data pipeline to populate the new domain using the 2013-01-01 data format, and update your query pipeline to submit requests in the 2013-01-01 request format. This migration guide summarizes the API changes and highlights the ones that are most likely to affect your application.
Creating 2013-01-01 Amazon CloudSearch Domains
If you created Amazon CloudSearch domains prior to the launch of the 2013-01-01 API,
you can choose which API version to use when you create a new domain. To create a
2013-01-01 domain through the console, select the 2013-01-01 version in the Create
Domain Wizard. To create a 2013-01-01 domain from the command line, download and install
the AWS CLI and run the
aws cloudsearch create-domain command.
Note
To create and interact with 2013-01-01 domains, you must use the AWS CLI tools. To create and interact with 2011-02-01 domains, you must use the v1 tools.
Topics
Configuring 2013-01-01 Amazon CloudSearch Domains
You can configure 2013-01-01 domains through the console, command line tools, or AWS SDKs. 2013-01-01 domains support several new configuration options:
Analysis Schemes—you configure analysis schemes to specify language-specific text processing options for
textand
text-arrayfields. Amazon CloudSearch now supports 33 languages, as well as an option for multi-language fields. For more information, see Configuring Analysis Schemes. For the complete list of supported languages, see Supported Languages.
Availability Options—you can enable the Multi-AZ option to expand a domain into a second availability zone to ensure availability in the event of a service disruption. For more information, see Configuring Availability Options.
Scaling Options—you can set the desired instance type and desired replication count to increase upload or search capacity, speed up search requests, and improve fault tolerance. For more information, see Configuring Scaling Options.
Suggesters—you can configure suggesters to implement autocomplete functionality. For more information, see Configuring Suggesters for Amazon CloudSearch.
Access to the Amazon CloudSearch configuration service is managed through IAM and now enables you to control access to specific configuration actions. Note that the Amazon CloudSearch ARN has also changed. Access to your domain's document and search endpoints is managed through the Amazon CloudSearch configuration service. For more information, see configure access policies.
2013-01-01 domains also support an expanded set of indexing options:
Analysis Scheme—you configure language-specific text-processing on a per field basis by specifying an analysis scheme for each
textand
text-arrayfield. For more information, see Configuring Analysis Schemes.
Field Types—Amazon CloudSearch now supports 11 field types:
date—contains a timestamp. Dates and times are specified in UTC (Coordinated Universal Time) according to IETF RFC3339: yyyy-mm-ddT00:00:00Z. In UTC, for example, 5:00 PM August 23, 1970 is: 1970-08-23T17:00:00Z..
literal—contains an identifier or other data that you want to be able to match exactly.
literal-array—a literal field that can contain multiple values.
text—contains arbitrary alphanumeric data.
text-array—a text field that can contain multiple values.
Highlight—when you enable the highlight option for a field, you can retrieve excerpts that show where the search terms occur within that field. For more information, see Highlighting Search Hits in Amazon CloudSearch.
Source—you can specify a source for a field to copy data from one field to another, enabling you to use the same source data in different ways by configuring different options for the fields.
When configuring your 2013-01-01 domain, there are several things to keep in mind:
By default, when you add a field, all options valid for that field type are enabled. While this is useful for development and testing, disabling options you don't need can reduce the size of your index and improve performance.
You must use the separate array type fields for multi-valued fields.
Only single-value fields can be sort enabled.
Only
textand
text-arrayfields can be highlight enabled.
All fields except
textand
text-arrayfields can be facet enabled.
Literal fields are now case-sensitive.
You no longer have to store floating point values as integers—use a
doublefield.
You can store locations using the new
latlonfield type. For more information, see location-based searching and sorting.
An
intfield is a 64-bit signed integer.
Instead of configuring a default search field, you can specify which fields to search with the
q.optionsparameter in your search requests. The
q.optionsparameter also enables you to specify weights for each of the fields.
When sorting and configuring expressions, you reference the default relevance score with the name
_score. Due to changes in the relevance algorithm, the calculated scores will be different than they were under the 2011-02-01 API. For more information, see Configuring Expressions.
Expressions now support the
logn,
atan2, and
haversinfunctions as well as the
_score(text relevance score) and
_time(epoch time) variables. If you store locations in
latlonfields, you can reference the latitude and longitude values as
FIELD.latitudeand
FIELD.longitude. You can also reference both
intand
doublefields in expressions. The following functions are no longer supported:
cs.text_relevance,
erf,
lgamma,
rand, and
time. For more information, see Configuring Expressions.
For more information about configuring indexing options for a 2013-01-01 domain, see configure indexing options. For more information about configuring availability options, scaling options, text processing options, suggesters, and expressions see Creating and Managing Search Domains.
New Amazon CloudSearch Configuration Service Actions and Options
The following actions have been added to the 2013-01-01 Configuration Service API:
DefineAnalysisScheme
DefineExpression
DefineSuggester
DeleteAnalysisScheme
DeleteExpression
DeleteSuggester
DexcribeAnalysisSchemes
DescribeAvailabilityOptions
DescribeExpressions
DescribeScalingParameters
DescribeSuggesters
ListDomainNames
UpdateAvailabilityOptions
UpdateScalingParameters
The
deployed option has been added to the describe actions for index fields, access policies,
and suggesters. Set the
deployed option to true to show the active configuration and exclude pending changes.
Obsolete Amazon CloudSearch Configuration Service Actions and Options
The following actions are not supported in the 2013-01-01 Configuration Service API:
DefineRankExpression
DescribeRankExpression
DeleteRankExpression
DescribeDefaultSearchField
DescribeStemmingOptions
DescribeStopwordOptions
DescribeSynonymOptions
UpdateDefaultSearchField
UpdateStemmingOptions
UpdateStopwordOptions
UpdateSynonymOptions
Uploading Data to 2013-01-01 Amazon CloudSearch Domains
With the 2013-01-01 API, you no longer have to specify document versions—updates are
applied in the order they are received. You also no longer specify the
lang attribute for each document—you control language-specific text processing by configuring
an analysis scheme for each
text and
text-array field.
To upload your data to a 2013-01-01 domain, you need to:
Omit the
versionand
langattributes from your document batches.
Make sure all of the document fields correspond to index fields configured for your domain. Unrecognized fields are no longer ignored, they will generate an error.
Post the document batches to your 2013-01-01 domain's doc endpoint. Note that you must specify the 2013-01-01 API version. For example, the following request posts the batch contained in
data1.jsonto the
doc-movies-123456789012.us-east-1.cloudsearch.amazonaws.comendpoint.Copy
curl -X POST --upload-file data1.json doc-movies-123456789012.us-east-1. cloudsearch.amazonaws.com/2013-01-01/documents/batch --header "Content-Type: application/json"
The 2013-01-01 API supports prescaling your domain to increase upload capacity. If you have a large amount of data to upload, configure your domain's scaling options and select a larger desired instance type. Moving to a larger instance type enables you to upload batches in parallel and reduces the time it takes for the data to be indexed. For more information, see Configuring Scaling Options.
For more information about formatting your data, see Preparing Your Data.
Searching 2013-01-01 Amazon CloudSearch Domains
Much of the effort required to migrate an existing Amazon CloudSearch search domain to the 2013-01-01 API is updating your query pipeline to submit 2013-01-01 compatible search requests.
Use the 2013-01-01 API version in all requests.
Use the
qparameter to specify search criteria for all requests. The
bqparameter is no longer supported. To use the structured (Boolean) search syntax, specify
q.parser=structuredin the request.
Parameters cannot be repeated in a search request.
The wildcard character (*) is only supported when using the simple query parser. Use the
prefixoperator to perform prefix matching with the structured query parser. For example,
q=(prefix 'oce')&q.parser=structured.
Use the field name
_idto reference the document ID field in a search request. The
docidfield name is no longer supported.
Use the
rangeoperator search a field for a value within the specified range. The
filteroperator is no longer supported.
Use the new range syntax to search for ranges of values, including dates and locations stored in
latlonfields. The double dot (..) notation is no longer supported. Separate the upper and lower bounds with a comma (,), and enclose the range in brackets or braces. A square bracket ([,]) indicates that the bound is included, a curly brace ({,}) excludes the bound. For example,
year:2008..2011is now expressed as
year:[2008,2011]. An open ended range such as
year:..2011is now expressed as
year:{,2011].
Use the
termoperator to search a field for a particular value. The
fieldoperator is no longer supported.
Use the
q.optionsparameter to specify field weights. The
cs.text_relevancefunction is no longer supported. For example,
q.options={fields:['title^2','plot^0.5']}.
Use the
fqparameter to filter results without affecting how the matching documents are scored and sorted.
Use a dot (.) as a separator rather than a hyphen (-) in the prefix parameters:
expr.NAME,
facet.FIELD,
highlight.FIELD.
Use the
facet.FIELDparameter to specify all facet options. The
facet-FIELD-top-N,
facet-FIELD-sort, and
facet-FIELD-constraintsparameters are no longer supported.
Use the
sortparameter to specify the fields or expressions you want to use for sorting. You must explicitly specify the sort direction in the sort parameter. For example,
sort=rank asc, date desc. The
rankparameter is no longer supported.
Use
expr.NAMEto define an expression in a search request. The
rank-RANKNAMEparameter is no longer supported.
Use
format=xmlto get results as XML. The
result-typeparameter is no longer supported.
The 2013-01-01 search API also supports several new features:
Term boosting—use the
boostoption in a structured query to increase the importance of one part of the query relative to the other parts. For more information, see Constructing Compound Queries.
Sloppy phrase search—use the
nearoperator in a structured query to search a
textor
text-arrayfield for multiple terms and find documents that contain the terms within the specified distance of one another. You can also perform a sloppy phrase search with the simple query parser by appending the
~operator and a value to the phrase. For more information, see Searching for Phrases.
Fuzzy search—use the
~operator to perform fuzzy searches with the simple query parser. Append the
~operator and a value to a term to indicate how much terms can differ and still be considered a match. For more information, see Searching for Individual Terms.
Highlighting—use the
highlight.FIELDparameter to highlight matches in a particular field. For more information, see Highlighting Search Hits in Amazon CloudSearch.
Autocomplete—configure a suggester and submit requests to the
suggesterresource to get a list of query completions and the documents in which they were found. For more information, see Getting Autocomplete Suggestions in Amazon CloudSearch.
Partial search results—use the
partial=trueparameter to retrieve partial results when one or more index partitions are unavailable. By default Amazon CloudSearch only returns results if every partition can be queried.
Deep paging—use the
cursorparameter to paginate results when you have a large result set. For more information, see Paginate the results.
Match all documents—use the
matchallstructured query operator to retrieve all of the documents in the index.
New query parsers—use the
q.parserparameter to select the Lucene or DisMax parsers instead of the simple or structured parser,
q.parser=luceneor
q.parser=dismax.
You'll also notice some changes in behavior when searching:
Strings are no longer tokenized on case boundaries and periods that aren't followed by a space are considered part of the term. For more information, see Text Processing in Amazon CloudSearch.
Literal fields are now case-sensitive.
Search responses no longer include the rank, match expression, or CPU time. The only status information returned is the resource ID (rid) and processing time (time-ms).
When you get facet information for an
intfield,
minand
maxvalues are no longer returned.
For more information about searching your data, see Searching Your Data with Amazon CloudSearch and the Search API.
New Parameters and Options in the Amazon CloudSearch 2013-01-01 Search API
The following parameters have been added to the 2013-01-01 Search API:
cursor.FIELD
expr.NAME
facet.FIELD
format
fq
highlight.FIELD
partial
pretty
q.options
q.parser
return
sort
The
~ operator has been added to the simple query language to support fuzzy searches and
sloppy phrase searches.
The following operators have been added to the structured query language:
boost
matchall
near
phrase
prefix
range
term
Obsolete Amazon CloudSearch Search Parameters and Options
The following parameters are no longer supported in the 2013-01-01 search API:
bq
facet-FIELD-top-N
facet-FIELD-sort
facet-FIELD-constraints
rank
rank-RANKNAME
return-fields
result-type
t-FIELD
The following operators and shortcuts are no longer supported in structured queries:
field
filter
-
|
+
*
Updated Limits in Amazon CloudSearch 2013-01-01
This table summarizes the changes and additions to the Amazon CloudSearch limits. For the complete list of Amazon CloudSearch limits, see Limits. | http://docs.aws.amazon.com/cloudsearch/latest/developerguide/migrating.html | 2017-08-16T17:35:09 | CC-MAIN-2017-34 | 1502886102309.55 | [] | docs.aws.amazon.com |
Welcome to the Zulip documentation!¶
Welcome! Zulip’s documentation is split into four parts:
User documentation, for users and administrators of Zulip organizations.
Installation documentation, for installing and maintaining a production self-hosted Zulip installation.
API documentation, for writing integrations or bots using the Zulip API.
Contributor documentation, for developing the Zulip software, translating, submitting bug reports, or making other contributions to the project.
Zulip has well over 150,000 words of documentation. If you can’t find what you’re looking for, please let us know! Further information on the Zulip project and its features can be found at.
This site contains our installation and contributor documentation. If this is your first time here, you may want to start with Production installation or Contributing to Zulip.
Contents:
- Overview
- Zulip overview
- Contributing to Zulip
- Zulip architectural overview
- Directory structure
- The Zulip roadmap
- Version history
- Zulip in production
- Requirements and scalability
- Installing a production server
- Troubleshooting and monitoring
- Management commands
- Customize Zulip
- Mobile push notification service
- Upgrade or modify Zulip
- Security model
- Authentication methods
- Backups, export and import
- PostgreSQL database details
- File upload backends
- Installing SSL certificates
- Outgoing email
- Deployment options
- Incoming email integration
- Video call providers
- Development environment
- Development environment installation
- Recommended setup (Vagrant)
- Requirements
- Step 0: Set up Git & GitHub
- Step 1: Install prerequisites
- Step 2: Get Zulip code
- Step 3: Start the development environment
- Step 4: Developing
- Next steps
- Troubleshooting and common errors
- Specifying an Ubuntu mirror
- Specifying a proxy
- Using a different port for Vagrant
- Customizing CPU and RAM allocation
- Advanced setup (non-Vagrant)
- Using the development environment
- Developing remotely
- Authentication in the development environment
- Developer tutorials
- Writing a new application feature
- Writing views in Zulip
- Life of a request
- A request is sent to the server, and handled by Nginx
- Static files are served directly by Nginx
- Nginx routes other requests between Django and Tornado
- Django routes the request to a view in urls.py files
- Views serving HTML are internationalized by server path
- API endpoints use REST
- Django calls rest_dispatch for REST endpoints, and authenticates
- The view will authorize the user, extract request variables, and validate them
- Results are given as JSON
- Reading list
- Screenshot and GIF software
- Shell tips
- Git guide
- Quick start
- Set up Git
- Zulip-specific tools
- How Git is different
- Important Git terms
- Get Zulip code
- Working copies
- Using Git as you work
- Pull requests
- Collaborate
- Fixing commits
- Reviewing changes
- Get and stay out of trouble
- Git cheat sheet
- Code contribution guide
- Version control
- Code style and conventions
- Reviewing Zulip code
- The chat.zulip.org community
- Using zulipbot
- Accessibility
- Bug report guidelines
- Zulip Code of Conduct
- How to have an amazing summer with Zulip
- Code testing
- Testing overview
- Linters
- Backend Django tests
- JavaScript/TypeScript unit tests
- Web frontend black-box Puppeteer tests
- Python static type checker (mypy)
- TypeScript static types
- Continuous integration (CI)
- Manual testing
- Testing philosophy
- Subsystems documentation
- Provisioning and third-party dependencies
- Settings system
- HTML and CSS
- Real-time push and events
- Sending messages
- Queue processors
- Custom Apps
- Unread counts and the pointer
- Markdown implementation
- Caching in Zulip
- Realms in Zulip
- Management commands
- Schema migrations
- URL hashes and deep linking
- Emoji
- Hotspots
- Full-text search
- Analytics
- Clients in Zulip
- Logging and error reporting
- Typing indicators
- Users data model
- Upgrading Django
- Zulip server release checklist
- Zulip PyPI package release checklist
- Exporting data from a large multi-realm Zulip server
- UI: input pills
- Thumbnailing
- Presence
- Unread message synchronization
- Billing
- Widgets (experimental)
- Writing documentation
- Documentation systems
- User documentation
- Documenting an integration
- Documenting REST API endpoints
- OpenAPI configuration
- Translating Zulip
- Translation guidelines
- Internationalization for developers
- Chinese translation style guide(中文翻译指南)
- French translation style guide
- German translation style guide (Richtlinien für die deutsche Übersetzung)
- Hindi translation style guide(हिन्दी अनुवाद शैली मार्गदर्शक)
- Polish translation style guide
- Russian translation style guide
- Spanish translation style guide
- Index | https://zulip.readthedocs.io/en/latest/ | 2020-11-24T03:00:44 | CC-MAIN-2020-50 | 1606141171077.4 | [] | zulip.readthedocs.io |
Network Objects
About Network Objects
A network object can contain a host name, a network IP address, a range of IP addresses, a fully qualified domain name (FQDN), or a subnetwork expressed in CIDR notation. Network groups are conglomerates of network objects and other individual addresses or subnetworks you add to the group. Network objects and network groups are used in access rules, network policies, and NAT rules. You can create, update, and delete network objects and network groups using CDO.
Pemitted Values of Network Objects
Viewing Network Objects
Network objects you create using CDO and those CDO recognizes in an onboarded device's configuration are displayed on the Objects page. They are labeled with their object type. This allows you to filter by object type to quickly find the object you are looking for.
When you select a network object on the Objects page, you see the object's values in the Details pane. The Relationships pane shows you if the object is used in a policy and on what device the object is stored.
When you click on a network group you see the contents of that group. The network group is a conglomerate of all the values given to it by the network objects.
Related Aticles: | https://docs.defenseorchestrator.com/Configuration_Guides/Objects/Network_Objects | 2020-11-24T03:59:46 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs.defenseorchestrator.com |
, FailureReason describes why it failed. A transform job creates a log file, which includes error messages, and stores it as an Amazon S3 object. For more information, see Log Amazon SageMaker Events with Amazon CloudWatch .
ModelName -> (string)
The name of the model used in the transform job.
MaxConcurrentTransforms -> (integer)
The maximum number of parallel requests on each instance node that can be launched in a transform job. The default value is 1.
ModelClientConfig -> (structure)
The timeout and maximum number of retries for processing a transform job invocation.
InvocationsTimeoutInSeconds -> (integer)The timeout value in seconds for an invocation request.
InvocationsMaxRetries -> (integer)The maximum number of retries when invocation requests are failing.
MaxPayloadInMB -> (integer)
The maximum payload size, in MB, used in the transform job.
BatchStrategy -> (string)ype to Line , RecordIO , or TFRecord .
Environment -> (map)
The environment variables to set in the Docker container. We support up to 16 key and values entries in the map.
key -> (string)
value -> (string)
TransformInput -> (structure)
Describes the dataset to be transformed and the Amazon S3 location where it is stored.
DataSource -> (structure)
Describes the location of the channel data, which is,.
The following values are compatible: ManifestFile , S3Prefix
The following value is not compatible: AugmentedManifestFile", ... "relative/path/custdata-N" ] The preceding JSON matches the following S3Uris : s3://customer_bucket/some/prefix/relative/path/to/custdata-1 s3://customer_bucket/some/prefix/relative/path/custdata-2 ... s3://customer_bucket/some/prefix/relative/path/custdata)If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None .
SplitType -> (string)
The method to use to split the transform job's data files into smaller batches. Splitting is necessary when the total size of each object is too large to fit in a single request. You can also use data splitting to improve performance by processing multiple concurrent mini-batches. The default value for SplitType is None , which indicates that input data files are not split, and request payloads contain the entire contents of an input object. Set the value of this parameter to Line to split records on a newline character boundary. SplitType also supports a number of record-oriented binary data formats. Currently, the supported record formats are:
- RecordIO
- TFRecord
When splitting is enabled, the size of a mini-batch depends on the values of the BatchStrategy and MaxPayloadInMB parameters. When the value of BatchStrategy is MultiRecord , Amazon SageMaker sends the maximum number of records in each request, up to the MaxPayloadInMB limit. If the value of BatchStrategy is SingleRecord , Amazon SageMaker sends individual records in each request.
Note
Some data formats represent a record as a binary payload wrapped with extra padding bytes. When splitting is applied to a binary data format, padding is removed if the value of BatchStrategy is set to SingleRecord . Padding is not removed if the value of BatchStrategy is set to MultiRecord .
For more information about RecordIO , see Create a Dataset Using RecordIO in the MXNet documentation. For more information about TFRecord , see Consuming TFRecord data in the TensorFlow documentation., batch transform stores the transformed data with an .``out`` suffix`` file.. Choose a format that is most convenient to you. To concatenate the results in binary format, specify None . To add a newline character at the end of every transformed record, specify Line .
KmsKeyId -> (string)
The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption. The KeModel. If you are using built-in algorithms to transform moderately sized datasets, we recommend using ml.m4.xlarge or ml.m5.large instance types.
InstanceCount -> (integer)The number of ML compute instances to use in the transform job. For distributed transform jobs, specify a value greater than 1. The default value is 1 .
VolumeKmsKeyId -> (string)
The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt model data on the storage volume attached to the ML compute instance(s) that run the batch transform job. The VolumeK has been completed, or has stopped or failed. You are billed for the time interval between this time and the value of TransformStartTime .
LabelingJobArn -> (string)
The Amazon Resource Name (ARN) of the Amazon SageMaker Ground Truth labeling job that created the transform or training job.
AutoMLJobArn -> (string)
The Amazon Resource Name (ARN) of the AutoML transform job.
DataProcessing -> (structure) .
InputFilter -> (string)
A JSONPath expression used to select a portion of the input data to pass to the algorithm. Use the InputFilter parameter to exclude fields, such as an ID column, from the input. If you want Amazon SageMaker to pass the entire input dataset to the algorithm, accept the default value $ .
Examples: "$" , "$[1:]" , "$.features"
OutputFilter -> (string)
A JSONPath expression used to select a portion of the joined dataset to save in the output file for a batch transform job. If you want Amazon SageMaker to store the entire input dataset in the output file, leave the default value, $ . If you specify indexes that aren't within the dimension size of the joined dataset, you get an error.
Examples: "$" , "$[0,5:]" , "$['id','SageMakerOutput']"
JoinSource -> (string)
Specifies the source of the data to join with the transformed data. The valid values are None and Input . The default value is None , which specifies not to join the input with the transformed data. If you want the batch transform job to join the original input data with the transformed data, set JoinSource to Input .
For JSON or JSONLines objects, such as a JSON array, Amazon SageMaker adds the transformed data to the input JSON object in an attribute called SageMakerOutput . The joined result for JSON must be a key-value pair object. If the input is not a key-value pair object, Amazon SageMaker creates a new JSON file. In the new JSON file, and the input data is stored under the SageMakerInput key and the results are stored in SageMakerOutput .
For CSV files, Amazon SageMaker combines the transformed data with the input data at the end of the input data and stores it in the output file. The joined data has the joined input data followed by the transformed data and the output is a CSV file.
ExperimentConfig -> (structure)
Associates a SageMaker job as a trial component with an experiment and trial. Specified when you call the following APIs:
- CreateProcessingJob
- CreateTrainingJob
- CreateTransformJob
ExperimentName -> (string)The name of an existing experiment to associate the trial component with.
TrialName -> (string)The name of an existing trial to associate the trial component with. If not specified, a new trial is created.
TrialComponentDisplayName -> (string)The display name for the trial component. If this key isn't specified, the display name is the trial component name. | https://docs.aws.amazon.com/cli/latest/reference/sagemaker/describe-transform-job.html | 2020-11-24T04:57:10 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs.aws.amazon.com |
Getting started with Codacy¶
Codacy automatically identifies issues through static code review analysis and notifies you on security issues, code coverage, code duplication, and code complexity in every commit and pull request.
To get started, head to codacy.com and click Get started.
1. Sign up¶
Signing up with a Git provider such as GitHub, GitLab, or Bitbucket links your Codacy user with your Git provider user, making it easier to add repositories to Codacy and invite your teammates.
Codacy will ask you to have access to your Git provider during the authorization flow. Check the permissions that Codacy requires and why.
2. Account details¶
You'll be asked to fill in a few details about your company so we know a bit more about your use case.
3. Choose an organization¶
Now, you'll need to join one or more organizations that contain your repositories. The organization with the same name as your Git provider username contains your personal repositories. The selected organizations will then be synced with Codacy so that managing your team permissions is easy. Read more about what synced organizations do.
Tip
If you can't see the organization you are looking for, follow these troubleshooting instructions.
To start adding your repositories, select one of the organizations and click Go to add repositories.
4. Add repositories¶
As a final step, add the repositories that you wish to analyze. Codacy will start the first analysis and set up everything required to ensure your next commits on those repositories are analyzed.
You're all set! 🎉¶
Codacy begins an initial analysis as soon as you add a repository, and displays an overview of the code quality of your repository when the analysis is complete.
After that, you can continue to explore and configure Codacy for your repository:
- Check the static analysis results on the Issues page
- Configure the code patterns that Codacy uses to analyze your repository
- Configure your quality settings for pull requests
- Add coverage reports to Codacy
- Add a Codacy badge to your repository displaying the current code quality grade or code coverage
| https://docs.codacy.com/getting-started/getting-started-with-codacy/ | 2020-11-24T03:25:43 | CC-MAIN-2020-50 | 1606141171077.4 | [array(['../images/getting-started-choose-organization.png',
'Choosing an organization'], dtype=object)
array(['../images/getting-started-add-repository.png',
'Adding repositories'], dtype=object)
array(['../../repositories/images/repository-dashboard.png',
'Repository dashboard'], dtype=object) ] | docs.codacy.com |
Should you worry about SOS_SCHEDULER_YIELD?
Today.).
Hope this helps,
Joe | https://docs.microsoft.com/en-us/archive/blogs/joesack/should-you-worry-about-sos_scheduler_yield | 2020-11-24T04:00:58 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs.microsoft.com |
Transaction Management
OLE DB defines a basic set of interfaces for transaction management. A transaction enables consumers to define units of work within a provider. These units of work have the atomicity, concurrency, isolation, and durability (ACID) properties. Transactions allow the specification of various isolation levels to enable more flexible access to data among concurrent consumers.
Local Transactions
Local transactions refer to transactions running in the context of a resource manager. A provider that supports transactions exposes ITransactionLocal on the session. A call to ITransactionLocal::StartTransaction begins a transaction on the session. A session can be inside or outside of a transaction at any time. When created, a session is outside of a transaction and all the work done under the scope of the session is immediately committed on each OLE DB method call. When a session enters a local or coordinated transaction, all the work done under the session ? between the ITransactionLocal::StartTransaction and ITransaction::Commit or ITransaction::Abort method calls and including other objects created underneath it (commands or rowsets) ? is part of the transaction.
ITransactionLocal::StartTransaction supports various isolation levels that consumers can request when creating a transaction. OLE DB providers do not need to support all possible transaction options defined. A consumer can interrogate the transaction capabilities of a provider through IDBProperties.
For providers that support nested transactions, calling ITransactionLocal::StartTransaction within an existing transaction begins a new nested transaction below the current transaction. Calling ITransaction::Commit or ITransaction::Abort on the session commits or aborts, respectively, the transaction at the lowest level.
Distributed Transactions
OLE DB defines an interface, ITransactionJoin, for consumers to request that providers enlist in a coordinated transaction among multiple (possibly distributed) data providers and other types of resource managers. The details of how providers enlist themselves as resource managers with the transaction coordinator are described in the Component Services (or Microsoft Transaction Server if you are using Microsoft? Windows NT?) documentation.
For more information about transactions, see Transactions. | https://docs.microsoft.com/en-us/previous-versions/windows/desktop/ms715016(v=vs.85) | 2020-11-24T05:05:56 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs.microsoft.com |
What We Should Know About A Profitable Real Estate Investment
It is not a wonder to find that many are those who will keep on wondering where they will put their money. This is the right time that we should,. | http://docs-prints.com/2020/10/14/news-for-this-month-18/ | 2020-11-24T03:27:02 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs-prints.com |
. Some of the things that you are supposed to go through is the curriculum vitae of the person but this does not provide you with the information that you need, with that, you can be able to conduct a background check so that you can get full information on them. Background checks can not only be done by a workplace that is hiring but any kind of individual that feels they have got a suspicion about an individual with the fact that they are not satisfied with the info they know about them.. It is easy for one to get this data that they need on their own if they get to conduct independent research on the individual they are looking into. Upon searching for their name, you can be able to get some personal information about the individual you are looking. | http://docs-prints.com/2020/09/19/the-10-rules-of-and-how-learn-more/ | 2020-11-24T03:55:01 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs-prints.com |
An authenticated user will have access to a processes items if the user has started the process or if the user is involved in any of the process’s items. In a network, only items for a process that is inside the given network are returned.
In non-network deployments, administrators can see all tasks and perform all operations on those tasks. In network deployments, network administrators can see all tasks in their network and perform all operations on tasks in their network.
Using the HTTP GET method:-
processes/<processId>/items
The body of the response will be a list containing all the matching items for the specified processId.
entry: { "id": "42eef795-857d-40cc-9fb5-5ca9fe6a0592", "name" : "FinancialResults.pdf" "title" : "FinancialResults" "description" : "the description" "mimeType" : "application/pdf" "createdBy" : "johndoe" "createdAt" : "2010-10-13T14:53:25.950+02:00" "modifiedBy" : "johndoe" "modifiedAt" : "2010-10-13T14:53:25.950+02:00" "size" : 28973 }
Links:
[1] | https://docs.alfresco.com/print/book/export/html/2098960 | 2020-11-24T04:25:59 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs.alfresco.com |
Troubleshooting Axure Cloud Connection Issues
There are a number of actions in Axure RP that require a connection to the Axure Cloud servers: logging in to your Axure account, publishing a prototype to Axure Cloud, and any actions related to team projects. If Axure RP is unable to communicate with the Axure Cloud servers while attempting one of these, you'll see this error message:
Unable to connect. Please make sure you have an internet connection and try again.
This could be caused by something as straightforward as an interruption in your internet connection, or it could be caused by a network security mechanism blocking Axure RP or the domains it's trying to access. The troubleshooting steps below address the most common causes of this error.
Note
If you'd rather work directly with Axure Support to find a solution, please feel free to email us at [email protected].
Update Axure RPUpdate Axure RP
Ensure that Axure RP is up to date. To do this, compare the version number listed at Help → About Axure RP with the current version number listed on our website at.
Tip
Once Axure RP is able to communicate with our servers, you can use the Help → Check for Updates dialog instead.
If you aren't running the most recent version, download and install it. Then, try publishing your file or logging in to your account again.
Log In to Axure Cloud in Your Web BrowserLog In to Axure Cloud in Your Web Browser
Check to see whether your computer is able to access the domains that Axure RP needs to communicate with. To do this, try logging in to app.axure.cloud in your web browser.
If you're unable to load the page or to log in, something in your network security may be blocking access. You'll want to reach out to your IT department to get access to the following domains:
app.axure.cloud** accounts.axure.com/user/*
Check Proxy and Firewall SettingsCheck Proxy and Firewall Settings
If you're able to access the Axure Cloud website, something in your network environment may be blocking Axure RP’s internet access. This is often a firewall or, if you’re using a Windows computer, a proxy server.
In the case of a firewall, Axure RP will need to be added to its safe list.
For a proxy server (Windows only), you’ll need to configure Axure RP’s proxy settings at Account → Proxy Settings or by clicking Proxy Settings at the bottom of the Manage License dialog.
Your IT department may need to help you with these.
Note
If you're able to access app.axure.cloud in your web browser, you can upload your prototypes via the web interface.
Check Your Internet ConnectionCheck Your Internet Connection
If none of the steps above has led to a solution, there may be a general internet connectivity issue to blame. This could be at the level of your computer or your network, so you’ll probably want to reach out to your IT department for assistance. | https://docs.axure.com/axure-cloud/reference/troubleshooting-connection-issues/ | 2020-11-24T03:14:15 | CC-MAIN-2020-50 | 1606141171077.4 | [array(['/assets/screenshots/axure-cloud/troubleshooting1.png', None],
dtype=object)
array(['/assets/screenshots/axure-cloud/troubleshooting2.png', None],
dtype=object)
array(['/assets/screenshots/axure-cloud/troubleshooting3.png', None],
dtype=object) ] | docs.axure.com |
RenderTargetBitmap tips. | https://docs.microsoft.com/en-us/archive/blogs/jaimer/rendertargetbitmap-tips | 2020-11-24T04:31:40 | CC-MAIN-2020-50 | 1606141171077.4 | [array(['https://msdnshared.blob.core.windows.net/media/TNBlogsFS/BlogFileStorage/blogs_msdn/jaimer/WindowsLiveWriter/RenderTargetBitmaptips_975E/image_thumb.png',
'image image'], dtype=object) ] | docs.microsoft.com |
Quality Dashboard (CMMI).
Note
You access dashboards through your team project portal. You can access the Quality dashboard only if your that portal has been enabled and is provisioned to use Microsoft Office SharePoint Server 2007. For more information, see Dashboards (Agile) or Access a Team Project Portal and Process Guidance..
Data That Appears in the Dashboard<<.png)
Note
The Test Plan Progress report is available only when the team creates test plans and runs tests by using Test Runner and Microsoft Test Manager. For information about how to define test suites and test plans, see Organizing Test Cases Using Test Suites.
Progress, build, and code charts, reports
through
, do not appear when the data warehouse for the team project is not available.
To learn more about how to interpret, update, or customize the charts that appear in the Quality dashboard, see the topics in the following table.
Required Activities for Monitoring Quality
For the Quality Dashboard to be useful and accurate, the team must perform the activities that this section describes.
Required Activities for Tracking Test Plan Progress
For the Test Plan Progress report to be useful and accurate, the team must perform the following activities:
Define Test Cases and Requirements, and create Tested By links between Test Cases and Requirements.
Define Test Plans, and assign Test Cases to Test Plans. For more information, see Defining Your Testing Effort Using Test Plans.
For manual tests, mark the results of each validation step in the Test Case as passed or failed.
Important
Testers must mark a test step with a status if it is a validation test step. The overall result for a Test Case reflects the status of all the test steps that the tester marked. Therefore, the Test Case will have a status of failed if the tester marked any test step as failed or did not mark it.
For automated tests, each Test Case is automatically marked as passed or failed.
(Optional) To support filtering, assign Iteration and Area paths to each Test Case.
Note
For information about how to define area and iteration paths, see Create and Modify Areas and Iterations.
Required Activities for Tracking Bug Progress and Bug Reactivations.
Required Activities for Tracking Build Status, Code Coverage, and Code Churn
For the Build Status, Code Coverage, and Code Churn reports to be useful and accurate, team members and How to: Gather Code-Coverage Data with Generic.
Troubleshooting Quality Issues
The following table describes specific quality issues that the Quality dashboard can help you monitor and identify actions the team can take.
Customizing the Quality Dashboard:
Ways to customize PivotTable reports
Edit or remove a workbook from Excel Services
Publish a workbook to Excel Services
Save a file to a SharePoint library or another Web location | https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/ee461590(v=vs.100) | 2020-11-24T05:14:21 | CC-MAIN-2020-50 | 1606141171077.4 | [array(['images/ee461590.procguid_dashboard_productquality(en-us,vs.100',
'Product Quality Dashboard Product Quality Dashboard'],
dtype=object) ] | docs.microsoft.com |
Create new channels on the Channel Manager page.
- Open the Admin Console, and then click Channel Manager.
On a new installation, there are no existing channels created.
- Click New, and then select the required channel type.
Choose from the following channels:
- Flickr
- SlideShare
- YouTube
- Follow the setup instructions for the channel you choose.Important: When you access Share for the Admin Console, use the correct URL for your Alfresco instance, rather than using. This ensures that the service provider for the relevant channel knows the location of Share when channel authorization is complete. If these are incorrect, then the authorization may fail. | https://docs.alfresco.com/4.1/tasks/adminconsole-channelsman.html | 2020-11-24T04:04:08 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs.alfresco.com |
Applies To:
- Workflow Conductor 2.5.2 and higher
- Windows Server 2008
- SharePoint Server 2010
- SharePoint Foundation 2010
Background
In order to use the Solution Deployment Method to deploy workflows using Workflow Conductor on Windows Server 2008, you must disable User Access Control (UAC) Admin Approval Mode or turn off UAC. Alternatively, when using the Simple Publishing Method, UAC does not need to be modified or turned off.
The procedures for disabling Admin Approval Mode and for turning off UAC are both provided below. You only need to follow the one that best fits your environment. For more information about UAC, see TechNet article: User Account Control Step-by-Step Guide. | https://docs.bamboosolutions.com/document/configuring_uac_for_workflow_conductor/ | 2020-11-24T03:14:10 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs.bamboosolutions.com |
VS2010: Just My Code
The ‘Just My Code’ feature in the profiler has a few differences to the ‘Just My Code’ feature in the debugger so this post should provide a useful introduction.
Example Program
Here’s a very simple program I’ll use in this post.
Why ‘Just My Code’?:
What is ‘My Code’?’:
- the copyright string for the module does not contain ‘Microsoft’, OR:
- the module name is the same as the module name generated by building any project in the currently open Solution in Visual Studio.
How do I turn JMC on or off?
You can temporarily toggle JMC on or off on the profiler Summary Page in the Notifications area using ‘Show All Code’ or ‘Hide All Code’ (shown in red below):
The default setting may be configured as discussed in the following section.
How do I configure JMC?:
Why is ‘Just My Code’ only available for sampling?
When you instrument binaries for profiling, you have already performed some level of JMC. Only binaries that you instrument and first-level calls into other binaries will show up in the instrumentation report, so JMC is not really necessary. | https://docs.microsoft.com/en-us/archive/blogs/profiler/vs2010-just-my-code | 2020-11-24T03:40:51 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs.microsoft.com |
* AjaxProxy is one of the most widely-used ways of getting data into your application. It uses AJAX requests to load* Here we're going to set up a Store that has an AjaxProxy. To prepare, we'll also set up a {@link Ext.data.Model* Ext.data.proxy.Ajax instance, with the url we specified being passed into AjaxProxy's constructor.* create the proxy via the Store - the Store already knows about the Model, and Proxy's default {@link* Now when we call store.load(), the AjaxProxy springs into action, making a request to the url we configured* to customize this - by default any kind of read will be sent as a GET request and any kind of write* AjaxProxy cannot be used to retrieve data from other domains. If your application is running on it* cannot load data from because browsers have a built-in security policy that prohibits domains* If you need to read data from another domain and can't set up a proxy server (some software that runs on your own* domain's web server and transparently forwards requests to, making it look like they actually came* Padding), which can help you get around the problem so long as the server on is set up to support* configuration can be passed in as a simple object, which the Proxy automatically turns into a {@link* AjaxProxy automatically inserts any sorting, filtering, paging and grouping options into the url it generates for* Easy enough - the Proxy just copied the page property from the Operation. We can customize how this page data is sent* Alternatively, our Operation could have been configured to send start and limit parameters instead of page:* AjaxProxy will also send sort and filter information to the server. Let's take a look at how this looks with a more* filters defined. By default the AjaxProxy will JSON encode the sorters and filters, resulting in something like this* (note that the url is escaped before sending the request, but is left unescaped here for clarity):* proxy.read(operation); //GET /users?sort=[{"property":"name","direction":"ASC"},{"property":"age","direction":"DESC"}]&filter;=[{"property":"eyeColor","value":"brown"}]* We can again customize how this is created by supplying a few configuration options. Let's say our server is set up* to receive sorting information is a format like "sortBy=name#ASC,age#DESC". We can configure AjaxProxy to provide* proxy.read(operation); //GET /users?sortBy=name#ASC,age#DESC&filterBy;=[{"property":"eyeColor","value":"brown"}]* If the data is not being loaded into the store as expected, it could be due to a mismatch between the the way that the* To debug from the point that your data arrives back from the network, set a breakpoint inside the callback function* created in the `createRequestCallback` method of the Ajax Proxy class, and follow the data to where the attempts | https://docs.sencha.com/extjs/6.6.0/classic/src/Ajax.js-1.html | 2020-11-24T04:34:17 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs.sencha.com |
Upgrading Team Projects to RP 9
If you have team projects that you created with older versions of Axure RP, you can upgrade them to work with Axure RP 9 by following the steps below.
Note
You must have a license for Axure RP Team edition in order to work with team projects.
Upgrading Team Projects from Previous VersionsUpgrading Team Projects from Previous Versions
Ask all collaborators to check in their changes to the project by going to Team → Check In Everything in the previous version of Axure RP. Check in your changes as well.
Still in the previous version of Axure RP, go to Team → Get All Changes from Team Directory.
Export a standalone
.rpfile from the team project by going to File → Export Team Project to File.
If you're upgrading from a version of Axure RP prior to RP 8, open and save the exported
.rpfile in each intermediary version until you get to RP 9. For example, if you're upgrading from RP 7, open and save the
.rpfile in RP 8 and then in RP 9.
You can download previous versions of Axure RP at.
Open the
.rpfile in Axure RP 9 and go to Team → Create Team Project from Current File.
Follow the steps on the Creating and Sharing Team Projects page to create a new team project from the exported
.rpfile.
Tip
To avoid having to re-invite your collaborators, publish your new team project to the same Axure Cloud workspace the old team project is located in.
Ask all collaborators to download local copies of the new team project by going to Team → Get and Open Team Project in RP 9.
Note
Axure RP 9 does not support SVN hosting for team projects. As such, you'll need to move any existing SVN team projects to Axure Cloud via the steps above.
If your team is unable to publish to Axure Cloud, reach out to us at [email protected] for alternative solutions.
Viewing the Old Team Project's HistoryViewing the Old Team Project's History
You can view the old team project's history on Axure Cloud:
Log in to app.axure.cloud or the Axure Cloud desktop app.
Select the workspace the team project is located in and then click on the project's name.
At the top of the the project overview page, switch to the History tab.
For more information on team project revision history, head over to the Team Project History page. | https://docs.axure.com/axure-rp/reference/upgrading-team-projects-to-rp9/ | 2020-11-24T04:11:24 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs.axure.com |
Exporting or Importing workflow templates Overview of Workflow Conductor Templates Export a template for safe keeping, to be able to import it to another SharePoint environment, or to send a copy to Bamboo’s Support Team if you are having an issue. Export a template Step Action 1. To export the current workflow template, select Export from the Workflow area of the menu in the Workflow Conductor Studio. NOTE: You can export a workflow that you haven’t yet saved or one that doesn’t yet have a name. 2. A message will appear at the bottom of your Internet Explorer browser, asking if you want to save the *.xoml file. You can choose to save it with the given workflow title, or you can choose to Save As a file with a different name. Import a template Step Action 1. To import a workflow template, select Import from the Workflow area of the menu in the Workflow Conductor Studio. 2. In the dialog box that appears, browse for the *.xoml file to import and then click Import. NOTE: If the template imported refers to hard-coded server URLs and other things that don’t exist in your SharePoint environment, you may need to make a few adjustments to the workflow before publishing it to your list, site, or site collection. | https://docs.bamboosolutions.com/document/exporting_or_importing_workflow_templates/ | 2020-11-24T04:03:16 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs.bamboosolutions.com |
API17:JModel::setDbo
From Joomla! Documentation
(Difference between revisions)
Revision as of 21:23, 27 April 2011
Joomla! version which is no longer supported. It exists only as a historical reference, will not be improved and its content may be incomplete.
JModel::setDbo
Description
Method to set the database connector object.
Description:JModel::setDbo [Edit Descripton]
public function setDbo (&$db)
See also
JModel::setDbo source code on BitBucket
Class JModel
Subpackage Application
- Other versions of JModel::setDbo
SeeAlso:JModel::setDbo [Edit See Also]
User contributed notes
<CodeExamplesForm /> | http://docs.joomla.org/index.php?title=JModel::setDbo/11.1&diff=prev&oldid=57351 | 2013-05-18T21:20:25 | CC-MAIN-2013-20 | 1368696382892 | [] | docs.joomla.org |
Git IzPack workflow
Branches and tags conventions
For the Codehaus repository:
masterrefers to the main development branch (equivalent to a Subversion trunk)
a.brefers to a branch for the
a.breleases (e.g.,
5.1is the parent branch to prepare the
5.1.0,
5.1.1and so on releases)
va.b.crefers to a release tag (e.g.,
v5.0.0)
Each developer is free to name his/her topic branches as they want as long as the name remains meaningful. We strongly advise to use the related JIRA issue name if possible (e.g., name your branch
IZPACK-123 if it refers to
IZPACK-123).
Peer-to-peer interactions
Given that Git is a distributed system, anyone can clone a Git repository and maintain changes locally. This is fundamentally different compared to a centralized system like Subversion where only commiters can put their changes under version control.
We distinguish 3 types of Git repositories:
- the blessed repository where only IzPack developers have write access (like with any centralized system)
- IzPack developers repository which allows them to collaborate and publish changes before they can make it to the blessed repository
- external people repositories that can be used to maintain customized versions against upstream changes, and that can
also be used to offer new contributions to the IzPack developers that can easily pull them from their repositories.
The following figure shows how the peer-to-peer interactions happen in the IzPack Git workflow:
As you can see, a distributed system like Git puts various types of people of the IzPack community on equal foot.
Repositories workflow
Anyone can get and maintain his/her very own copy of the IzPack source code with the whole history.
The blessed repository must only feature the development branch (master), the stable release branches and the release tags (an exception is made only for those recovered from the CVS to Subversion then Subversion to Git conversions).
IzPack developers willing to share work in progress and experiments should have a public repository to push their own branches to. GitHub and Gitorious are excellent public repository hosting platforms.
What you can do... or not
3 CommentsHide/Show Comments
Mar 09, 2010
Anthonin Bonnefoy
I thought of some points that might be good to define.
Should we follow a branching model similar to the one described here ?
Should we deactivate the fast-fowarding when merging to keep existence of feature branch?
Mar 09, 2010
Julien Ponge
The model they describe is generally good, although the master / develop branch policy looks cumbersome to me.
Indeed, considering your master branch as the equivalent of a SVN trunk is simpler (which is to say that develop is redundant over master).
On
--no-ff: I would say that this is a matter of personal preferences, but it indeed clearly shows where the code came from. Mercurial branch merges are just like that (i.e., non-fast forwarded).
Mar 13, 2010
Julien Ponge
I gave a try to the master / develop branches strategy and I changed my mind compared to what I said above
This is great to have this develop branch for working, and eventually merge it to master when it is ready to be pushed to Codehaus. It also makes it easy to rebase your feature branches when you merge others work from master.
The only thing is that we should frequently merge our develop branches to master and push to Codehaus, so that integration stays frequent. | http://docs.codehaus.org/display/IZPACK/Git+IzPack+workflow?focusedCommentId=140837021 | 2013-05-18T21:46:28 | CC-MAIN-2013-20 | 1368696382892 | [array(['/download/attachments/140837017/git-izpack-interactions.png?version=1&modificationDate=1268129743470',
None], dtype=object)
array(['/download/attachments/140837017/git-izpack-repositories-workflow.png?version=1&modificationDate=1268129743481',
None], dtype=object)
array(['/download/attachments/140837017/git-izpack-rules.png?version=1&modificationDate=1268129743496',
None], dtype=object)
array(['/s/en_GB/3278/15/_/images/icons/emoticons/wink.png', '(wink)'],
dtype=object) ] | docs.codehaus.org |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.