Add and manage users (inSync Private Cloud)
Editions: Elite, Enterprise

Overview
Users are individuals who use inSync to back up and share data from multiple devices. To use inSync, users must have inSync accounts. Usually, inSync administrators create accounts for users. You can add users individually or add a group of users by importing their information from a CSV file.
https://docs.druva.com/010_002_inSync_On-premise/inSync_On-Premise_5.8/030_Get_Started_Backup_Restore/040_Profile%2C_User%2C_and_Device_Management/020_Add_and_manage_users
Verify and update security settings for the vRealize Automation appliance as necessary for your system configuration. Configure security settings for your virtual appliances and their host operating systems. In addition, set or verify configuration of other related components and applications. In some cases, you need to verify existing settings, while in others you must change or add settings to achieve an appropriate configuration.
https://docs.vmware.com/en/vRealize-Automation/7.3/com.vmware.vra.install.upgrade.doc/GUID-44143462-F7A3-434F-AEEC-6D918B18F081.html
Overview

Reepay provides a system that manages the billing aspects of your subscription-based business, including:

- Recurring billing, according to defined subscription plans
- Subscription adjustments with additional costs, credits and on-demand invoices
- Customer communication such as sign-up and invoice receipts
- A dunning management process with customer notifications and payment retries
- Proactive payment method updates, e.g. detecting and updating expired credit cards before payments fail

Reepay positions itself between your business and your payment gateways as shown in the figure below.

Basic concepts

Customers
A customer is the entity that subscribes to and pays for your subscription products.

Plans
A plan is a subscription product description bundled with pricing and a billing schedule. That is, a plan defines "what", "how much" and "how often".

Subscriptions
A subscription ties a customer to a plan. A subscription has a recurring interval called a billing period. The subscription is responsible for generating invoices for billing periods. A customer can have multiple subscriptions.

Invoices
An invoice in Reepay can be generated by a subscription, on-demand for a subscription, or on-demand for a customer without being tied to a subscription. An invoice is collected using transactions, e.g. credit card transactions.

In Reepay you have an organisation with one or more accounts. The organisation has company-wide settings, while customers, plans, subscriptions and invoices are tied to a specific account. An account uses a single currency and language. An account can be in either test or production mode.

Getting started

When first signing up to Reepay, a number of steps are either required or advisable before going live with your Reepay account.

Organisation and account settings
Make sure that organisation and account settings are defined and correct.
Especially important are the timezone and currency for your account, and the Reepay subdomain you would like to use for hosted pages, e.g.…. The currency used on an account cannot be changed once the account is in production mode.

Configure plans
Plans define your subscription products and tell Reepay how much, and when, to bill customers signed up to a subscription with the plan.

Configure dunning management
Dunning management is a process that starts when the payment of an invoice fails. Reepay provides a default dunning management configuration, but you may want to adjust this configuration or provide additional configurations.

Customize customer communication
Reepay can handle all customer communication by sending emails on behalf of your business. Reepay provides default content for the emails that is sufficient in many cases, but you may want to customize the content by editing the email templates. It is also possible to customize the email sender information so emails clearly originate from your business.

Add a production payment gateway
To accept credit card payments or other types of payment methods, you will need a payment gateway that Reepay supports. If you do not have a payment gateway, or are unsure, we will happily support you in the process of obtaining one. Notice that obtaining a payment gateway can take up to three weeks for approval and setup, so we recommend starting the application process early. You can always start work on your Reepay integration and testing while you await approval.

Integrate and test
You will need to test your Reepay solution. If you are doing a technical integration, you will furthermore need to implement and test this integration. When you sign up to Reepay, your account will automatically be in test mode. An account in test mode allows you to test all aspects of Reepay with a test payment gateway that lets you trigger different scenarios such as failed payments.
Notice that all test data such as customers, subscriptions and invoices will be deleted once you put your account in production mode. The configuration is not deleted. You can choose to keep a test account with all the test data when you put your account in production mode, and you can create a new test account at any time.

User access
If multiple teammates are going to manage your Reepay account, you can invite them in the Reepay administration and assign them roles with specific permissions.

Go live
When you are ready to go live and switch to production mode, you will need to select a Reepay billing plan and provide credit card information.

Customer

A customer is the entity that buys your subscription products. A customer can have zero or more subscriptions. The customer also holds a wallet of payment methods, which can be assigned to subscriptions as the payment method for a specific subscription. The customer can choose to use an existing payment method or enter a new payment method when signing up for a subscription.

Create
To create a customer the following information can be given. The ID is the only mandatory field.

Update
All data except the customer identifier can be updated.

Status
The customer has the following subscription status:

- Number of active subscriptions
- Number of cancelled subscriptions
- Number of expired subscriptions

The customer has the following status aggregated over all invoices for the customer (see more on invoices):

- Number and summed amount of settled invoices
- Number and summed amount of dunning invoices
- Number and summed amount of failed invoices
- Number and summed amount of pending invoices
- Number and summed amount of cancelled invoices

Delete
A customer can be deleted only if the customer has only expired subscriptions and only failed or cancelled invoices. A customer delete does not remove the customer but anonymizes the customer data and removes the customer from listings.
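The per-state invoice aggregates above amount to a simple fold over a customer's invoices. A minimal sketch, assuming invoices are dicts with a state and an amount in minor units (these shapes are illustrative assumptions, not Reepay API objects):

```python
from collections import defaultdict

def aggregate_invoices(invoices):
    """Count and sum invoice amounts per state (settled, dunning, failed,
    pending, cancelled). The invoice shape is assumed for illustration."""
    summary = defaultdict(lambda: {"count": 0, "amount": 0})
    for inv in invoices:
        bucket = summary[inv["state"]]
        bucket["count"] += 1
        bucket["amount"] += inv["amount"]
    return dict(summary)

invoices = [
    {"state": "settled", "amount": 9900},
    {"state": "settled", "amount": 4900},
    {"state": "dunning", "amount": 9900},
]
print(aggregate_invoices(invoices))
# settled: count 2, amount 14800; dunning: count 1, amount 9900
```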
It will still be possible to see subscriptions and invoices made by the customer.

Notes
Notes can be added to a customer for better team cooperation and tracking, e.g. the reason for a refund. Notes are internal only and not exposed to customers.

Search
Customers can be searched on most attributes and statuses. In the Reepay administration it is possible to define custom filters. It is also possible to do a free-text search, which will match against text attributes such as name and address.

Payment methods
A customer has a wallet with payment methods. When a customer signs up for a subscription, a new payment method (e.g. credit card) can be entered, or an existing payment method from the wallet can be used. When a new payment method is entered, it is added to the customer wallet for future reuse. Payment methods can be added to and removed from the customer wallet at any time. Notice that if a payment method that is used on an active subscription is removed, invoices generated by the subscription will enter a dunning process trying to obtain a new payment method. If a payment method is used for a payment and fails with an irrecoverable decline, it will be marked as failed and cannot be used again. An irrecoverable decline could be a lost or stolen card (see more under dunning management).

Plan

A subscription plan defines subscription product information and when and how much to charge for a subscription. An unlimited number of plans can be defined for an account.

Create plan

Basic information
A plan consists of the following basic information.

Billing scheduling
The billing schedule for plans is very flexible. Billing scheduling is determined by a scheduling type together with three additional attributes. The following schedule types can be used. Below are examples of how to achieve common billing period scenarios. When a subscription is created, an optional date can be provided from which the subscription is eligible to start billing periods.
The default is eligible from the creation date. This parameter provides more possibilities to control the billing scheduling. See more under create subscription.

Prorated billing for first periods
For fixed-day scheduling there might be a partial period from the start date to the first fixed billing day, e.g. from the 16th of a month to the 1st of the next month when billing every first day of the month. In these cases it can be configured how to handle the first partial period. The options are:

Reminder emails

Renewal reminder email
Renewal reminders can be configured per plan to automatically send an email notifying your customers about their upcoming subscription renewal. It can be configured how many days prior to renewal the reminder is sent, and the content of the email can be configured under Mail Templates.

Trial ending reminder email
Trial ending emails can be configured per plan to automatically send an email notifying your customers about the upcoming end of their trial and the subsequent payment for the next billing period. It can be configured how many days prior to the end of trial the reminder is sent, and the content of the email can be configured under Mail Templates. If a trial ending email is not configured for a plan, but a renewal reminder email is, then the renewal reminder email will be sent.

Fixed lifetime
By default a subscription will be active until it is expired on-demand or because of a failed dunning process. A fixed lifetime for subscriptions can be defined on a plan.

Trial
Subscription plans can have an optional free trial period defined in days or months. If a trial period length of 14 days is defined for a plan with the Monthly schedule type, and the subscription is created on the 10th, then there will be a free trial period from the 10th to the 24th, followed by a paid billing period from the 24th to the following 24th. Notice that trial is only supported for the schedule types Monthly and Daily.
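The 14-day-trial arithmetic above (created on the 10th, trial 10th to 24th, then a paid month 24th to 24th) can be checked with plain date arithmetic. This is a standalone sketch, not a Reepay API call; the dates are arbitrary examples:

```python
from datetime import date, timedelta
from calendar import monthrange

def add_month(d):
    """Move a date one month forward, clamping to the last day if needed."""
    year, month = (d.year + 1, 1) if d.month == 12 else (d.year, d.month + 1)
    return date(year, month, min(d.day, monthrange(year, month)[1]))

created = date(2018, 7, 10)
trial_end = created + timedelta(days=14)   # free trial runs 10th -> 24th
first_paid_end = add_month(trial_end)      # paid period 24th -> following 24th

print(trial_end, first_paid_end)  # 2018-07-24 2018-08-24
```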
For the fixed-day schedule types a trial makes little sense, and the objective can be achieved in other ways, e.g. using credits or changing the date from which the subscription is eligible to start its first billing period.

Update
Only a small subset of plan attributes can be updated, because changing pricing and/or billing scheduling requires handling for the subscriptions already using the plan. See superseding below. The following parameters can be updated: name, description, VAT and dunning plan.

Supersede plan
Plan pricing and scheduling cannot be changed directly, because subscriptions may already be using the plan, and changing pricing and scheduling in the middle of a period is not allowed. Instead, new versions of the plan can be created by superseding a plan. Whether and how existing subscriptions using the plan should change to the new version can be controlled when superseding. A plan is superseded by providing the same attributes as for a subscription plan create, plus an additional option controlling what to do with existing subscriptions on the plan. The options are:

Delete plan
Deleting a plan just marks the plan as deleted, hides it from lists, and disallows any future use of it. Subscriptions using the plan will stay on the plan.

Subscription

Subscriptions tie customers to plans and are responsible for billing according to the billing schedule defined on the plan. A customer subscribing to a plan is the result of a subscription sign-up. A customer can be subscribed to a plan in a number of different ways.

Public hosted page
Reepay provides hosted pages for plans where customers are created, can enter a payment method and can sign up for a specific plan.

Invite hosted page
Reepay can generate links to hosted pages for existing customers where they are invited to sign up to a plan using either an existing payment method or a new payment method.

Reepay API
A customer can be signed up using the Reepay API.
A payment method can be assigned to the subscription in four different ways:

- Use an existing payment method from the customer wallet.
- Obtain a payment method using the Reepay.js Javascript library on your website.
- Let Reepay send an email with a link to a hosted page where the customer can choose an existing payment method or enter a new payment method for the subscription.
- Use the hosted page link from the mail above yourself: either direct the user to the page directly after sign-up, or send your own mail with instructions to enter a payment method.

Reepay Administration
Subscribe a customer to a plan directly in the Reepay Administration. A payment method is added to the subscription either by using an existing payment method from the customer wallet, by sending an email with a link to a hosted page where a payment method can be added, or by sending a link to the hosted page manually from your own mail client.

State
A subscription is at all times in one of two states:

- Active - The subscription is active and will bill for billing periods according to the billing schedule. The first billing period is allowed to start from the start date (see below).
- Expired - The subscription has expired.

Besides the state, the subscription has the following status information:

- Cancelled - Whether the subscription is marked as cancelled and will expire at the end of the current billing period.
- Renewing - Whether this is the last billing period for the subscription. This is the case if the subscription is cancelled, but can also be the case if the subscription has a fixed number of billing periods.
- First period start - The date and time when the first billing period started.
- Last period start - The date and time when the previous billing period started.
- Current period start - The date and time when the current billing period started.
- Next period start - The date and time when the next billing period starts, which is also the end of the current billing period.
- Trial - Whether the subscription is currently in trial, together with information about trial start and trial end.
- Scheduled plan change - Information about a possible scheduled plan change. The plan will be changed at the end of the current billing period.

The subscription has the following status aggregated over all invoices for the subscription (see invoice for invoice information):

- Number and summed amount of settled invoices
- Number and summed amount of dunning invoices
- Number and summed amount of failed invoices
- Number and summed amount of pending invoices
- Number and summed amount of cancelled invoices

Create
A subscription is created with the following information.

Start date
The start date is by default the subscription creation time. If defined explicitly, the allowable values are any date and time in the future, and up to one period length back in time, where the period length is as defined on the plan. That is, a subscription cannot be backdated more than one period. The start of the first billing period depends on the start date and the schedule type defined for the plan. The table below shows when the first billing period will start for different scheduling types. Despite its name, start date is both a date and a time. Billing period granularity is down to the second, so billing periods run from date and time to date and time. Billing is done at the start of a new billing period, so invoices are generated and collection is attempted at the date and time defined by the start date. If you would rather bill at night, e.g. at 01:00, the start date can be set to 01:00 on the current date. For the fixed-day schedule types, billing periods run from midnight to midnight.

Cancel
An active subscription can be cancelled at any time. A cancelled subscription is still active but will expire at the end of the current billing period. A cancelled subscription can be reinstated by cancelling the cancellation.
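The backdating rule above (any time in the future, at most one period length back) can be sketched as a validation check. The 30-day period length here is an assumption standing in for a roughly monthly plan; it is not Reepay's implementation:

```python
from datetime import datetime, timedelta

def valid_start_date(start, now, period_length):
    """A start date may be any time in the future, but at most one
    period length back in time (a sketch of the stated rule)."""
    return start >= now - period_length

now = datetime(2018, 7, 15, 12, 0)
period = timedelta(days=30)  # assumed period length for illustration

print(valid_start_date(now + timedelta(days=10), now, period))  # True: future is fine
print(valid_start_date(now - timedelta(days=29), now, period))  # True: within one period
print(valid_start_date(now - timedelta(days=31), now, period))  # False: backdated too far
```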
Expire
A subscription can expire for the following reasons:

1. The subscription is cancelled and the end of the billing period has been reached.
2. The subscription has reached the fixed number of billing periods potentially defined by the plan.
3. An invoice created by the subscription has gone through a dunning process without success, and the configured action is to expire the subscription (see dunning).
4. The subscription is manually expired.

Manually expiring a subscription is only allowed if all invoices are either pending or dunning. Manually expiring a subscription for which the current period has been billed can lead to a scenario where the customer should be compensated for the remaining period. Currently this can be done manually by making a refund on the last invoice. Functionality will soon be added to make this refund an automatic option when expiring.

Change next period start
It is possible to change the start of the next billing period (which is the same as the end of the current billing period). The only rule for the new period start is that it has to be in the future. The new billing period start will be the new base for future billing periods. That is, if it is changed to the 24th, then future billing periods will run from the 24th to the 24th for a monthly based schedule type. Notice that changing the next billing period start for plans with a fixed-day scheduling type does not make much sense, as these are tied to a specific day of the month or week. The result of such a change is that the next period start is set to the first fixed day after the given next period start.

Change plan
The plan for a subscription can be changed. The change is not instant but is planned for the end of the current billing period. The new scheduling and pricing take effect from the next billing period. Changing a plan is equivalent to restarting the subscription with the end of the current billing period as the new start date.
Changing between non-fixed-day schedule types is not a problem, and changing from a fixed-day schedule type to Monthly or Daily does not represent a problem either. But changing from Daily or Monthly to a fixed-day schedule type will result in a period whose length differs from the expected period length. If, for example, a subscription with the Monthly schedule type and period start on the 24th is changed to a monthly fixed day with fixed day 1, it will result in a billing period from the 24th to the 1st of the following month. It is not recommended to make these changes in schedule type, from Monthly or Daily to a fixed-day schedule type.

Subscription search
Subscriptions can be searched on most attributes and statuses. In the Reepay administration it is possible to define custom filters. It is also possible to do a free-text search, which will match against text attributes on the customer and plan.

Payment method
The payment method used for a subscription can be changed at any time. Either an existing payment method from the customer wallet can be used, or a new payment method can be added using the Reepay.js Javascript library. The payment method can also be removed from the subscription. This will result in a dunning process for the next invoice generated by the subscription.

Additional cost
Additional costs can be added to a subscription to adjust the invoice generated by the subscription. An additional cost will be added to the invoice generated when the customer is billed for a new billing period, or to an invoice generated on-demand. That is, an additional cost does not result in any transactions by itself, but adjusts a subsequent invoice. Additional costs can be used for metered billing where the subscription amount depends wholly or partly on metered usage. Additional costs are pending while they await an invoice to be added to. When added to an invoice they are transferred. Pending additional costs are shown on the subscription at any time.
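The irregular period produced by the schedule-type change above (Monthly with period start on the 24th, changed to fixed day 1) can be illustrated with plain date arithmetic; the dates are arbitrary examples, not Reepay output:

```python
from datetime import date, timedelta

def next_fixed_day(d, fixed_day):
    """First occurrence of the given day of month strictly after date d."""
    candidate = d + timedelta(days=1)
    while candidate.day != fixed_day:
        candidate += timedelta(days=1)
    return candidate

period_start = date(2018, 8, 24)        # start of the period after the plan change
period_end = next_fixed_day(period_start, 1)

print(period_start, period_end)          # 2018-08-24 2018-09-01
print((period_end - period_start).days)  # an 8-day irregular period, not a full month
```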
Pending additional costs can be collected on-demand by creating an on-demand invoice.

Create
An additional cost is created with the following information.

Cancel
Pending additional costs can be cancelled before they are transferred.

Credit
Credits can, like additional costs, be used to adjust a subsequent invoice. A credit will be deducted from subsequent invoices until the full credit amount has been deducted. Credits can be used as an alternative to refunds when compensating customers.

Create
A credit is created with the following information.

Cancel
A credit that has not been deducted from an invoice, or not fully deducted, can be cancelled.

Invoice

Invoices are created automatically by subscriptions at the start of a billing period, on-demand for a subscription, or instantly directly for a customer. An invoice sums additional costs, credits and subscription charges, and is collected using payment transactions.

States
An invoice can be in one of five states:

- Pending - The invoice has been created and is awaiting collection by a payment transaction.
- Settled - The invoice has been settled with a successful payment transaction.
- Cancelled - Invoices that have not been settled and have no pending transactions can be cancelled.
- Dunning - An invoice will enter the dunning state if it cannot be settled due to a missing or invalid payment method.
- Failed - An invoice will enter the failed state after an unsuccessful dunning process.

Transactions
Invoices are collected using payment transactions. A payment transaction consists of payment method, amount, currency and a transaction type of either settle or refund. Transactions can be in one of five states:

- Pending - A transaction is pending before it is sent to the payment gateway.
- Processing - A transaction is processing while being processed at the payment gateway.
- Settled - A settle transaction is settled when it has been successfully processed at the payment gateway.
- Refunded - A refund transaction is refunded when it has been successfully processed at the payment gateway.
- Failed - A transaction enters the failed state if the payment method was declined by the payment gateway.

Invoices will try to make transactions using the payment method attached to the subscription. If no payment method is attached, or the payment method is no longer valid, no transaction will be created and the invoice will stay in the pending state. The invoice will then enter the dunning state and a dunning process will be started, possibly after the grace period for the subscription has elapsed if the subscription is newly created. Payment gateway specific information for a transaction can be retrieved through the Reepay administration or API.

Information
An invoice contains the following information.

Create for subscription
A one-time invoice can be created for a subscription with the following information.

Create instant for customer
A one-time invoice can be created for a customer given the following information. A customer invoice is conditional in the sense that it is a success-or-nothing operation: an invoice will only be created for a successful transaction, and the invoice will be instantly settled.

Cancel
Invoices that have not been settled and have no pending transactions can be cancelled. Cancelling an invoice will also cancel a possible dunning process for the invoice.

Refund
A refund can be performed on a settled invoice. Multiple refunds with partial amounts can be performed, but only up to the settled amount. A refund is done using a credit note that is attached to the invoice. A credit note consists of one or more credit note lines with the following information. The Reepay administration provides an easy way to create a credit note using the original order lines from the invoice as credit note order lines. A refund will create and process a refund transaction and link it to the settle transaction for the invoice.
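The rule that partial refunds may only add up to the settled amount can be sketched as a simple check (amounts in minor units; this is an illustration of the stated rule, not Reepay's code):

```python
def can_refund(settled_amount, refunded_so_far, refund_amount):
    """Allow a partial refund only if total refunds stay within the
    settled amount of the invoice."""
    return 0 < refund_amount <= settled_amount - refunded_so_far

settled = 10000  # invoice settled for 100.00
print(can_refund(settled, 0, 4000))     # True: first partial refund
print(can_refund(settled, 4000, 6000))  # True: exactly reaches the settled amount
print(can_refund(settled, 4000, 7000))  # False: would exceed the settled amount
```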
The refund operation is synchronous and the result will be instantly available.

Manual settle
An invoice that is not settled can be settled using an offline manual transfer. This is a transfer that has happened outside Reepay, e.g. a cash or bank transfer.

Reactivation
A failed or cancelled invoice can be put back into the pending state for processing. The invoice will potentially enter a new dunning process.

Search
Invoices can be searched on most attributes and statuses. In the Reepay administration it is possible to define custom filters.

Dunning management

When a payment transaction fails for an invoice, the invoice will enter the dunning state and a dunning management process will be started. The process consists of payment retries and customer communication. The result of a dunning management process is either a successful update of the payment method followed by a successful payment transaction, or a failed invoice and possibly an expired subscription, depending on the dunning management scheme.

Schemes
Multiple dunning management schemes can be configured for an account. A plan will use one of these configurations. A dunning management scheme has the following properties. The schedule of a dunning management scheme is expressed as how long to wait between each dunning notification email to the customer, followed by the final action. An empty schedule results in going directly to the final action without sending any notification. A schedule with a single wait duration results in sending a notification immediately when a payment transaction fails, then waiting for the duration before taking the final action. The schedule can be shown graphically as below. It is not recommended to use dunning schedules longer than the billing period length, as this can result in multiple invoices with a dunning process for the same subscription.
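A scheme's schedule, described above as a list of wait durations between notifications followed by a final action, produces a timeline that can be sketched like this (the list-of-days representation is an assumption for illustration):

```python
from datetime import datetime, timedelta

def dunning_timeline(failed_at, wait_days):
    """Return (notification datetimes, final action datetime) for a schedule
    given as a list of waits in days. An empty schedule goes straight to the
    final action; each notification is followed by its wait duration."""
    notifications, t = [], failed_at
    for wait in wait_days:
        notifications.append(t)  # notification sent, then wait
        t += timedelta(days=wait)
    return notifications, t      # t is when the final action runs

failed = datetime(2018, 7, 15, 9, 0)
notes, final = dunning_timeline(failed, [3, 4, 7])
print([n.day for n in notes])  # [15, 18, 22]: notify on failure, then +3d, +4d
print(final.day)               # 29: final action 7 days after the last notification
```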
Payment retries
Collection of a dunning invoice will be retried if the payment method on the subscription is updated, or if a failing payment method on the subscription becomes eligible for retry. A payment method can be declined with a hard decline or a soft decline. A hard decline is irrecoverable, e.g. a lost or stolen credit card. A soft decline is recoverable, e.g. temporarily insufficient funds. For soft declines a payment method will be marked eligible for retry after a certain time period; for credit cards we currently use 6 hours.

Customer communication
Reepay can handle the email communication with your customers, such as subscription sign-up receipts, invoice receipts and dunning notifications. The emails can be customized, and it can be configured exactly which emails to send. Reepay can send the emails below. Emails can only be sent to customers for which an email address has been specified. If no email address is set for a customer, an email will still be sent to the Bcc addresses defined in the mail settings. For each mail type it can be configured whether Reepay should send the email on behalf of your business.

A sign-up receipt is sent when a customer has signed up to a subscription and the sign-up method is not to send an email with a request to choose a payment method for the subscription. The default content is information about the subscription product and the first billing date.

Same as above, but sent instead when the sign-up method is to send an email with a request to choose a payment method for the subscription.

Invoice receipts are sent when an invoice is successfully settled. The receipts are divided into three different mails depending on payment method, as the content can vary for different payment methods. The default content contains the amount and order line information and the next billing date.

Invoice receipt (credit card)
The default content additionally contains the masked credit card number used for the settle transaction.
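The soft-decline retry rule above (hard declines permanently fail the payment method; soft declines become eligible for retry after a window, currently 6 hours for credit cards) can be sketched as:

```python
from datetime import datetime, timedelta

RETRY_AFTER = timedelta(hours=6)  # stated retry window for credit cards

def retry_state(decline_kind, declined_at, now):
    """Hard declines permanently fail the payment method; soft declines make
    it eligible for retry once the retry window has elapsed (a sketch)."""
    if decline_kind == "hard":
        return "failed"  # e.g. lost or stolen card
    return "eligible" if now - declined_at >= RETRY_AFTER else "waiting"

declined = datetime(2018, 7, 15, 9, 0)
print(retry_state("hard", declined, declined))                       # failed
print(retry_state("soft", declined, declined + timedelta(hours=2)))  # waiting
print(retry_state("soft", declined, declined + timedelta(hours=6)))  # eligible
```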
Invoice receipt (manual transaction)
A manual transaction could be a bank or cash transfer handled by you outside Reepay. A specific mail template allows you to customize the content in this case.

Invoice receipt (zero amount)
A zero amount invoice is a special case that is settled without any transaction or payment method. You may want to specify special content in this case. Notice that credits can result in zero amount invoices if the credit is higher than the billing amount.

Invoice refund receipt (credit card)
A receipt is sent when a refund is made for a settled invoice. The default content contains the credit note lines and the masked credit card number to which the refund is performed.

Invoice refund receipt (manual transaction)
Same as above, but for refunds on manual transactions.

Subscription Renewal Reminder
Per-plan optional renewal reminder email sent prior to subscription renewal to notify your customers about their upcoming subscription renewal.

Subscription Trial Ending Reminder
Per-plan optional trial ending reminder email sent prior to the end of trial to notify your customers about an upcoming end of trial.

Subscription cancelled
Mail sent when a subscription is cancelled. The default content contains information on when the subscription will expire.

Subscription un-cancelled
Mail sent if a cancelled subscription is un-cancelled.

Subscription change
Mail sent if subscription pricing or billing periods are changed on a subscription. That is, the mail is sent if the plan is changed for the subscription or the start of the next billing period is changed.

Dunning notification
Mail sent as part of the dunning management process. The mail is sent repeatedly according to the dunning management scheme.
Dunning notification (no payment method added)
If a dunning management process is started for a subscription where a payment method has never been added, this mail will be sent instead of the above. The wording in this mail will in many cases be different from the above, as the customer has not responded to requests to add a payment method.

Subscription expired
Mail sent when a subscription expires, either because of a prior cancellation or because a fixed number of billing cycles has been reached.

Subscription expired (dunning failed)
Mail sent when a subscription expires due to a failed dunning management process. This mail will often have different content than the expire mail above.

Account invite
Mail sent when you invite others to join your team.

Account invite notification
Mail sent when you invite others who are already Reepay users.

The following email settings can be configured for an account. All email addresses can be specified either as an RFC 822 address of the form [email protected] or of the form My Company <[email protected]>. Emails are subject to spam filtering. To ensure the highest delivery rate, we advise you to use mail.reepay.com as the domain in the from address to match the sender domain, which is mail.reepay.com. You can use your own email as the from address, but some email clients will mark a mail as spam if the sender domain does not match the domain in the from address. To be able to receive replies, we advise you to set a custom Reply-To address. That is, we recommend setting From and Reply-To as follows:

From: Your Company [email protected]
Reply-To: Your Company [email protected]

The different email types can be customized in a number of ways.

Enable or disable email
By default Reepay will send all the email types on behalf of your business. It can be configured for each mail whether it should be sent or not.

Alternative from address
An alternative from address can be defined for each email.
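Both accepted address forms are standard RFC 822 mailbox syntax, which Python's stdlib can build and parse. The names and domain below are placeholders for illustration; the redacted addresses from the text are deliberately not filled in:

```python
from email.utils import formataddr, parseaddr

# Build the "My Company <address>" form from a display name and an address.
addr = formataddr(("My Company", "billing@example.com"))
print(addr)  # My Company <billing@example.com>

# parseaddr splits either form back into (display name, address).
print(parseaddr(addr))                   # ('My Company', 'billing@example.com')
print(parseaddr("billing@example.com"))  # ('', 'billing@example.com')
```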
The default from address will be used if no alternative is defined.

Adding Cc or Bcc addresses
Cc/Bcc addresses can be added to the mails sent to your customers. This is helpful if you would like to be notified of certain events, e.g. when a subscription is expired due to a failed dunning management process.

HTML and text version
Reepay sends both a plain-text and an HTML email to your customers. The customer's mail client will show the HTML version if it can, otherwise it will fall back to the text-only version. We recommend customizing both templates so that the mail content is equivalent.

The content of mails can be customized by using your own wording and formatting. Dynamic parameters can be inserted into mails using tags such as {{invoice.amount}} for the invoiced amount. A complete list of tags can be found in the Reepay administration, which features a template editor with instant preview of the mail using sample data for dynamic parameters.

Header and footer
A header and footer can be defined for both HTML and text. These are included in mails by using {{{text_header}}}, {{{text_footer}}}, {{{html_header}}} and {{{html_footer}}}.

Conditional sections
In some cases it is helpful to have parts of a mail depend on a specific condition, e.g. whether a subscription is in trial. Below is an example with the subscription sign up receipt mail and the boolean variable in_trial.

{{#subscription.in_trial?}}
You will be billed {{plan.amount}} {{plan.currency.symbol}} when your subscription trial ends at {{subscription.trial_end.date_short}}
{{/subscription.in_trial?}}
{{^subscription.in_trial?}}
You will be billed {{plan.amount}} {{plan.currency.symbol}} at {{subscription.next_period_start}}
{{/subscription.in_trial?}}

The ^ character tests that the parameter is false. That is, the second block is shown if the subscription is not in trial. For general parameters (not only boolean) a section can be shown only if the parameter has a value, with the technique shown below.
{{#subscription.next_period_start}}
Next payment date: {{subscription.next_period_start.date_short}}
{{/subscription.next_period_start}}

Invoice order lines
Invoice receipt mails contain order lines. To iterate over the lines the following can be used:

{{#invoice.orderlines}}
{{quantity}} x {{ordertext}} {{amount}} {{currency.symbol}}
{{/invoice.orderlines}}

Images
Images can be drag'n dropped into the HTML templates, or you can reference external images with HTML.

All the emails sent by Reepay on behalf of your business are stored, and in the Reepay administration you can see which emails have been sent for customers, subscriptions and invoices. Reepay tracks emails by registering three types of events:
- Rejected - The email could not be delivered because the recipient address does not exist, or the email is rejected by the receiving mail server.
- Delivered - The email has been sent and accepted by the receiving mail server.
- Opened - The HTML version of the email has been opened in a mail client.

The opened event is tracked by an invisible image in the HTML version of the mail. Some mail clients block image viewing, and in this case the opened event will not be registered. That is, an opened event is a guarantee that the email has been opened; on the other hand, a missing opened event is not a guarantee that the mail has not been opened.

Hosted pages are under development and currently only two pages are featured:
- Page to update payment method - linked to in dunning notification emails
- Page to add payment method for the first time - linked to in sign up mails sent when subscriptions are created with sign up method email.

Payment gateways
Payment gateways are used to process transactions and move funds into your merchant bank account. To be able to accept credit card payments or other types of payment methods, you will need a payment gateway that Reepay supports.
If you do not have a payment gateway, or are unsure, we will happily support you in the process of obtaining one.

Configuration
To configure a payment gateway you will need to add it in the payment gateway configuration section of the Reepay administration. For all payment gateways you will need to configure which credit cards the payment gateway supports. Each gateway has gateway-specific settings that you received when signing up for a payment gateway, e.g. an API key or other types of credentials.

Routing
You can configure more than one payment gateway. When Reepay chooses a gateway, the choice is based on the credit card type. If more than one gateway supports the same card type, the first one defined will be used. We recommend not configuring multiple gateways with overlapping card type support.

Switching gateway
A new gateway can be added and an old one can be deleted. A deleted payment gateway will still be used for credit cards authorized against it, as saved cards are tied to the payment gateway.

Payment gateway errors
If for some reason the payment gateway is experiencing problems, Reepay will automatically retry settling transactions repeatedly.

Organisation and account settings
In Reepay you will have an organisation with one or more accounts. Each has different settings that configure your accounts and define information that is used in email communication and on hosted pages.

Organisation
The following settings must be defined for an organisation.

Name
Company name across all accounts.

Subdomain
You can define what Reepay subdomain you would like to use for hosted pages. The subdomain can be changed in production mode, but it is not recommended unless absolutely necessary, as hosted pages URLs with the old subdomain sent in mails will become invalid.

Account ID
A per-organisation unique account identifier. The identifier may only contain alphanumeric characters and the characters (_ . - @).

Name
A name for the account.
The account name is used by default as the company name in mail templates.

Language
The language determines the language of the default email templates and the formatting used for amounts and dates.

Currency
An account is tied to a single currency. The currency can only be changed in test mode. Once an account enters production mode, the currency cannot be changed.

Default VAT
VAT is used on invoices to show how much of the amount is VAT. VAT can be defined on plans and for additional costs. If not provided there, this default VAT will be used.

Timezone
The timezone is used for formatting dates and times in the Reepay administration, on hosted pages and in emails.

VAT registration number
Your VAT registration number. It is recommended to include both country code and number, e.g. US4345234 or DK2131214.

Address
Address information for your business.

Contact information
The contact information presented to your customers. Includes email, phone number and website URL.

Logo
You can upload a logo for your business that can be used in emails and will be shown on hosted pages.

Terms of service
You can provide a terms of service text for your business that will be shown to customers on hosted pages.

Users and roles
You can add any number of users from your organisation to handle your Reepay account. You invite the users in the Reepay administration and give them appropriate roles. You can remove users, and permissions can be changed.

Integration methods
To use Reepay you will need an integration to your business. Reepay offers the four integration methods below.

Reepay administration
The Reepay administration (admin.reepay.com) is a web application where you can configure your Reepay accounts, manage customers and subscriptions, monitor your business with statistics, make reports, etc.

Hosted pages
Hosted pages are web pages hosted by Reepay. The pages can be used by you to interact with your existing and potential new customers. A number of hosted pages exist, e.g.
update of payment method and customer sign up to subscription. Using hosted pages for interactions where a customer enters sensitive credit card information relieves you of the PCI requirements involved in handling card information. See more in the Security section.

Reepay Token
Reepay Token is a Javascript solution that you can use on your site to obtain customer payment information in a secure manner. Entering sensitive payment information (e.g. a credit card number) is handled by Reepay, and in exchange you receive a token representing the payment information that can be used when calling our API.

Reepay.js
Reepay.js is a Javascript library that allows you to embed forms on your website through which customer interaction with Reepay can be performed. Using Reepay.js allows you to integrate tightly with your own web application, with your own business look and feel.

API
The Reepay API provides the most flexible integration method. The API gives direct access to entities such as customers and subscriptions. The API also includes the ability to call webhooks on your server to notify you of events in the Reepay system.

Using the Reepay administration and hosted pages alone represents a completely non-technical integration. From there you can choose how much technical integration is needed, if any. There are a number of considerations to make when choosing an integration method, listed below.

Development time
Depending on company resources and the desired timeframe to start subscription billing, you may want to go into production right away, or spend more time on integration. The different integration methods have roughly the following development times.
- Reepay administration - No development time is required. Customers and subscriptions can be created and billed once a payment gateway has been obtained.
- Hosted pages - Hosted pages can be customized in under 30 minutes.
- Reepay Token - Can be embedded into your payment or sign-up forms within a day.
- Reepay.js - Javascript forms can be embedded on your website in 2-3 days.
- API - Depending on technical resources, an integration can be made in under a week.

Notice that if a payment gateway must be obtained, up to three weeks should be expected for the approval process, depending on the acquirer.

Technical skills
The different integration methods require different technical skills, as listed below.
- Reepay administration - No technical skills are required.
- Hosted pages - No technical skills required.
- Reepay Token - Some Javascript and HTML skills.
- Reepay.js - Javascript and HTML skills.
- API - Developers with an understanding of REST APIs and JSON are needed.

Look & feel
The different integration methods allow different levels of customizable look and feel.
- Reepay administration - Interaction with the customer is handled person-to-person or over the phone, with mail communication for obtaining the payment method.
- Hosted pages - Minimal customization.
- Reepay Token - You can make the forms match your own business look and feel, but the entering of credit card information will happen in a layer styled by Reepay.
- Reepay.js - You can make the forms match your own business look and feel.
- API - The complete checkout experience can be customized.

PCI compliance
Each integration method requires a different level of PCI compliance.
- Reepay administration - All card information is handled directly inside Reepay, making your business eligible for PCI DSS Self-Assessment Questionnaire A.
- Hosted pages - All card information is handled directly inside Reepay, making your business eligible for PCI DSS Self-Assessment Questionnaire A.
- Reepay Token - All card information is handled by Reepay on a Reepay controlled page, making your business eligible for PCI DSS Self-Assessment Questionnaire A.
- Reepay.js - All card information is passed directly to Reepay, but the entering of credit card data happens on a page you control, making your business eligible for PCI DSS Self-Assessment Questionnaire A-EP.
- API - If you use APIs that require credit card data to pass through your system to Reepay, you are required to complete the PCI DSS Self-Assessment Questionnaire C.

Testing
Testing a Reepay integration can be done with an account in test mode. Testing payments and subsequent error scenarios is done using our Test Payment Gateway and a number of test credit cards in combination with some specific CVV codes found here.

Security
The Payment Card Industry Data Security Standard (PCI-DSS) provides a framework for developing a robust security process for credit card transactions. Any merchant or service provider accepting, transmitting, and/or storing cardholder data must be PCI compliant. Reepay is a PCI-DSS Level 1 compliant merchant service provider. Using Reepay can help you meet PCI compliance requirements by letting Reepay handle all sensitive credit card data. A merchant must always be PCI compliant if they accept credit card payments online (even if the card is entered on another site). Below, the PCI requirements for the different Reepay integration methods are listed.

Reepay administration
The Reepay administration does not by itself have any possibility for entering card information.

Hosted pages
When using hosted pages, your customers' sensitive card data is handled directly at Reepay without touching your system. As a merchant, this qualifies you for the simplest PCI compliance level, the shortened PCI DSS Self-Assessment Questionnaire A.

Reepay Token
The Reepay Token solution uses an iframe where the entering of sensitive data is done on a Reepay hosted page, so your system never touches the information and it is not possible to steal sensitive information with cross-site scripting.
This qualifies you for the simplest PCI compliance level, the shortened PCI DSS Self-Assessment Questionnaire A.

Reepay.js
When using the Reepay Javascript library, sensitive card data passes directly from the customer's browser to Reepay, without passing through your servers. But the pages using the Javascript are hosted by you, making you eligible to complete the shortened PCI DSS Self-Assessment Questionnaire A-EP. Your pages must be served over HTTPS, and remember to follow best practices to secure your servers and pages from being compromised with cross-site scripting.

API
Using the API is only in PCI scope if the API is used for passing sensitive card data. This is the case even if card data is not saved but only passed through; you will be required to complete the PCI DSS Self-Assessment Questionnaire C in this case. If the API integration does not include card data, you are eligible for the shortened PCI DSS Self-Assessment Questionnaire A-EP.

PCI best practices
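As an illustration of the server-side API integration method discussed above, the following is a minimal, hedged Python sketch of preparing an authenticated REST call that creates a customer without any card data passing through your code (the kind of integration eligible for the lighter PCI questionnaires). The base URL, endpoint path, key format and Basic-auth convention shown here are assumptions for the sketch; consult the Reepay API reference for the actual details.

```python
import base64

# Assumptions for illustration only: key format, base URL and endpoint
# path should be verified against the Reepay API reference.
API_KEY = "priv_0123456789abcdef"      # assumed private API key format
BASE_URL = "https://api.reepay.com/v1"  # assumed REST base URL

def basic_auth_header(api_key: str) -> dict:
    # Many REST APIs of this kind use HTTP Basic auth with the API key
    # as username and an empty password; assumed here for Reepay too.
    token = base64.b64encode(f"{api_key}:".encode("ascii")).decode("ascii")
    return {"Authorization": f"Basic {token}"}

def create_customer_request(handle: str, email: str):
    # Build (url, headers, json_body) for a customer-creation call.
    # Note that no card data appears anywhere in this request.
    url = f"{BASE_URL}/customer"
    headers = {**basic_auth_header(API_KEY),
               "Content-Type": "application/json"}
    body = {"handle": handle, "email": email}
    return url, headers, body

url, headers, body = create_customer_request("cust-0017", "[email protected]")
print(url)
```

The request could then be sent with any HTTP client; the point of the sketch is that a customer record, unlike a payment method, carries no PCI-sensitive data.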
https://docs.reepay.com/
Budget items
The items that comprise a budget plan come from several other applications in an instance, including Asset Management, Project Management, and Configuration Management. This enables you to budget your expenses for items across your IT infrastructure. Each item in the budget plan has an actual item cost and an account number. The actual item costs roll up to comprise the budget plan's actual budgeted amounts. Starting with the Helsinki release, you can add product catalog items to budget items. See Product Catalog for more information. The expense type is determined by the Account selected on the budget item.

Budget item breakdowns
The total cost in a budget plan is divided into smaller units called breakdowns.

Add items to a budget plan
After you create a budget plan, add items to the plan, such as assets and configuration items. You can then track the estimated target cost of all the items in the budget versus the actual costs.
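The roll-up described above, where actual item costs aggregate per account number into the plan's actual budgeted amounts, can be sketched as follows. This is an illustrative model only; the field names and sample data are assumptions, not the actual ServiceNow tables or API.

```python
from collections import defaultdict

# Illustrative budget items; "account" and "actual_cost" are assumed
# field names standing in for the real budget item records.
budget_items = [
    {"name": "Laptop fleet", "account": "6010", "actual_cost": 42000.0},
    {"name": "CRM project", "account": "7020", "actual_cost": 15500.0},
    {"name": "Core switch", "account": "6010", "actual_cost": 8000.0},
]

def roll_up(items):
    # Sum actual item costs per account number; the per-account totals
    # comprise the budget plan's actual budgeted amounts.
    totals = defaultdict(float)
    for item in items:
        totals[item["account"]] += item["actual_cost"]
    return dict(totals)

actuals = roll_up(budget_items)
print(actuals)  # {'6010': 50000.0, '7020': 15500.0}
```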
https://docs.servicenow.com/bundle/istanbul-it-business-management/page/product/it-finance/concept/c_BudgetItems.html
The Horizon Agent component (called View Agent in previous releases) assists with session management, single sign-on, device redirection, and other features. You must install Horizon Agent on all virtual machines, physical systems, and RDS hosts.

The types and editions of the supported guest operating system depend on the Windows version. For updates to the list of supported Windows 10 operating systems, see the VMware Knowledge Base (KB) article. For Windows operating systems other than Windows 10, see the VMware Knowledge Base (KB) article. To see a list of specific remote experience features supported on Windows operating systems where Horizon Agent is installed, see the VMware Knowledge Base (KB) article.

To use the Horizon Persona Management setup option with Horizon Agent, you must install Horizon Agent on Windows 10, Windows 8, Windows 8.1, Windows 7, Windows Server 2012 R2, Windows Server 2008 R2, or Windows Server 2016 virtual machines. This option does not operate on physical computers or RDS hosts. You can install the standalone version of Horizon Persona Management on physical computers. See Supported Operating Systems for Standalone Horizon Persona Management.

To use the VMware Blast display protocol, you must install Horizon Agent on a single-session virtual machine or on an RDS host. The RDS host can be a physical machine or a virtual machine. The VMware Blast display protocol does not operate on a single-user physical computer.

For enhanced security, VMware recommends configuring cipher suites to remove known vulnerabilities. For instructions on how to set up a domain policy on cipher suites for Windows machines that run View Composer or Horizon Agent, see Disable Weak Ciphers in SSL/TLS.
https://docs.vmware.com/en/VMware-Horizon-7/7.3/horizon-installation/GUID-B45E1464-92B1-4AA8-B4BB-AD59EDF98530.html
Important Note: This feature is currently in Beta and is not available for all users at this time.

What you will need:
- ClickFunnels Actionetics MD is required.
- PayPal Business Account. Click here to create a business account with PayPal.

Step One: Create PayPal Client ID and Secret Key
- Login to your PayPal account.
- Scroll down to Rest API apps and click on the Create App button.
- Enter an App Name.
- Click on Create App.
- Click on the Live tab to view the live API Credentials.

**Please note: the Live API credentials are required for live transactions. The Sandbox API Credentials will only work with the funnel in test mode.**

Step Two: Integrate with ClickFunnels
- In ClickFunnels, select Payment Gateways from the account profile menu.
- Click on Add New Payment Gateway.
- Click on PayPal.
- Copy and paste the Live Client ID and Live Secret ID from PayPal into ClickFunnels.
- Click on Create PayPal V2 Account.

Step Three: Add PayPal to Order Page
Learn how to add PayPal to your ClickFunnels order page.

If you have any questions about this, please contact our support team by clicking the support icon in the bottom right-hand corner of this page.
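For context on what the Client ID and Secret pair from Step One is for: PayPal's REST API exchanges these credentials for a short-lived OAuth2 access token. ClickFunnels performs this exchange for you once the credentials are saved; the sketch below only illustrates the mechanism, with placeholder credentials, and should not be taken as part of the ClickFunnels setup.

```python
import base64

# Placeholders, not real credentials.
CLIENT_ID = "AeB...example"
CLIENT_SECRET = "EGn...example"

def token_request():
    # PayPal's REST API issues access tokens from the OAuth2 token
    # endpoint using HTTP Basic auth (Client ID as username, Secret as
    # password) and a client_credentials grant.
    url = "https://api-m.paypal.com/v1/oauth2/token"
    creds = f"{CLIENT_ID}:{CLIENT_SECRET}".encode("ascii")
    headers = {
        "Authorization": "Basic " + base64.b64encode(creds).decode("ascii"),
        "Content-Type": "application/x-www-form-urlencoded",
    }
    data = {"grant_type": "client_credentials"}
    return url, headers, data

url, headers, data = token_request()
print(url)
```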
http://docs.clickfunnels.com/payment-gateway-integrations-and-product-setup/paypal/paypal-api-integration
Top Menu > My Profile > My Vehicles
Top Menu > My Profile > Dashboard > My Vehicles > View All

This is the vehicle listing page for users. Users can see all their vehicles here, regardless of vehicle status.

Bread Crumbs
This portion is the page heading. There is a combo box and a button on the far right side of the heading that are used for sorting and ordering of vehicles.

This portion represents an individual vehicle. It has the vehicle title (make, model and model year), vehicle status, price, fuel consumption, location, condition, mileage, transmission, fuel type and posting date of the vehicle. If the vehicle is featured, it also has the exterior color and stock number of the vehicle. There are some action buttons on the bottom right side: buttons to edit the vehicle, mark it as sold, add it to featured and delete it.

Clicking on the vehicle title will take you to the vehicle information page. Clicking on the edit button will take you to the vehicle form with the details of the vehicle filled in the fields. Clicking on the sold button will mark the vehicle as sold. Clicking on the delete button will remove the vehicle from the system.

When the user clicks on the add to featured vehicle button: if a cost for add to featured is not defined and featured vehicles are auto approved, the vehicle will become a featured vehicle. If a cost for add to featured vehicle is defined, a popup will appear on screen with details like the total credits of the user, the credit options for that action with expiries (if defined), the credits remaining after proceeding, and two buttons, proceed and cancel. If the user clicks on proceed, the vehicle name will show a featured/waiting tag next to it and the credits that were required for the featured vehicle (if multiple credit options were defined for featured vehicles, the credits of the option selected) will be deducted from his total credits. The cancel button will close the popup. If the user does not have the required credits for a featured vehicle, he will see the message "you do not have enough credits" and a link to buy credits.
If the vehicle has any images uploaded, clicking on the image in the vehicle listing will open an image viewer with all the images of that vehicle and basic details about the vehicle, like price, mileage, fuel type and transmission.

Pagination
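The "add to featured" decision flow described above can be modeled as follows. This is an illustrative sketch only; the function name, parameters and return values are assumptions, not the actual JS Car Manager code.

```python
# Illustrative model of the featured-vehicle credit flow:
# - no cost defined: feature directly (or mark waiting if not auto approved)
# - cost defined and affordable: deduct credits, mark featured/waiting
# - cost defined but unaffordable: show the "not enough credits" message

def add_to_featured(user_credits, featured_cost, auto_approved):
    """Return (new_status, remaining_credits, message)."""
    if featured_cost is None:
        status = "featured" if auto_approved else "featured/waiting"
        return status, user_credits, None
    if user_credits < featured_cost:
        return None, user_credits, "you do not have enough credits"
    return "featured/waiting", user_credits - featured_cost, None

status, left, msg = add_to_featured(user_credits=10, featured_cost=4,
                                    auto_approved=False)
print(status, left)  # featured/waiting 6
```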
http://docs.joomsky.com/jscarmanager/vehicles/frontend/myvehicles.html
Introduction
Usually, after the Incidents/SR module is operational, the next module to be implemented is Problems. This article addresses the application of the Problem Management process in Octopus. To learn more about the basic ITIL® concepts for this process, see the Problem Management - ITIL® Process article.

General concepts
There is a fundamental difference between an incident and a problem. Simply put, an incident is an unplanned situation and must be resolved as soon as possible, since it directly impacts the users' capacity to work. Problems, meanwhile, take a different view of an incident by seeking its root cause, which could also be the cause of other incidents. While assignees involved in Incident Management are busy resolving incidents, those involved in Problem Management primarily look for ways to prevent the occurrence of incidents or reduce their impacts so that users are not affected by interruptions.

To help you put these concepts into context, we will use a scenario.

«Recently, analysts from the Service Desk have been reporting a higher number of calls for workstations not responding (frozen). One particular user mentions that she did not have this problem in the past, but now the situation happens regularly. Service Desk analysts perform the usual troubleshooting for this kind of incident and until now, a reboot restores the situation. This is done for all users reporting similar symptoms. Each time, an incident is created and resolved by rebooting the computer.»

Understandably, the reduction or elimination of these repeated incidents would benefit both the IT group and the users experiencing these interruptions.

Here are three important rules:
- A «Resolved» incident should not be kept open to be analysed later; we proceed with its resolution anyway. However, it is appropriate to create a problem request and link all related incidents to it.
- An incident is not closed without being resolved simply because a problem exists for it. The incident must still be resolved. If there is no solution available, the incident must be escalated to a specialised or higher level group; sometimes help can be sought from a supplier if the expertise is not found internally. But you need to keep looking for a solution to eliminate the incident, even a temporary one. The Service Desk will keep creating and resolving incidents that arise as long as a permanent solution is not deployed (usually with a change).
- When a change is required to resolve an incident, a problem is not necessarily created. If the incident analysis was sufficient to find the cause and the solution of the situation, a change can be applied directly. Do not burden your service with a problem that would not have any added value. A relationship between the incident and the change will explain the situation.

While it is more common that a problem is created after the occurrence of several incidents of the same nature, ITIL® mentions that one incident is enough to justify the creation of a problem record, as long as you suspect the existence of an underlying cause. The criteria observed in potential problem identification are multiple and could be: same symptoms, recurrence at a time of the day (week, month), new incidents following a change or a deployment, etc.

Here is an illustration that shows the relationship between incident, problem and change.

Problem Creation in Octopus
Create a problem manually
- From the Problems module, click the Create a problem action
- Document the problem with pertinent information, identify CI(s) and add related incidents. Note that the Group and Subject fields are mandatory for creation
- Click OK to save the problem. A unique reference number is automatically allocated by the system

Description of the fields
Status
Octopus manages problem status and problem lifecycle status.
Problem statuses are Open and Closed; each includes a set of problem lifecycle statuses:
- Open: includes from New to Change in process
- Closed: includes Closed and Inactive

Those problem statuses are accessible from the Home module, through an advanced search on the status field. For example, by selecting Problem as the request type and status Open, you could get a list similar to:

Problem lifecycle statuses are specific to the problem record during its lifecycle, so make sure that the advanced search is done at the right place. See below for a description of problem lifecycle statuses.

Priority / Impact / Urgency
Problems should be prioritized the same way as incidents, using the same reasons. However, other factors should be considered:
- Frequency and impact of related incidents
- Availability of a temporary solution (workaround)
- Criticality of the problem from a service, customer and infrastructure perspective:
  - Can the system be recovered, or does it need to be replaced?
  - How much will it cost?
  - How many people, with which skills, will be needed to fix the problem?
  - How long will it take to fix the problem?
  - How extensive is the problem (how many CIs are affected)?
  - What are the business impacts?

From Octopus, click the blue arrow to select the Impact and Urgency. The Priority is accessible directly. Impact, urgency and priority levels are detailed in the table below. Note that they are not configurable in the Reference Data Management.

Categorization / Assignment / Subject
Categorization: the problem categorization is similar to the incident categorization.
Assignment: A problem is usually assigned to a group with the expertise to find the root cause and the solution to be applied, who is familiar with the activities related to the Problem Management process, and understands the necessary interactions with the Incident, Change and Asset & Configuration Management (CMDB) processes.
Source*: List of items identifying the problem source.
We propose the following sources: change result, event logs analysis, incident history, observation, operation activities analysis.
Contact*: To document the contact information.
Detection: By default, detection is set to Reactive, which corresponds to a problem whose analysis is based on existing data from incidents, events or impacted CIs. Select Proactive for a problem created for known infrastructure weaknesses or Information / Warning event types that may generate incidents.
Subject: Type in a subject that is significant and clearly describes the problem.

* Source and contact fields can be made available upon request to the Octopus Service Desk.

Documentation Tab
Several fields are available to document the problem throughout its lifecycle. The documentation identifies the information already held on the incidents and the problem, namely the symptoms, occurrences in time, etc., but also the actions and results that occur as you work on the problem. These areas are particularly important:
- Description: describes the problem and its symptoms more precisely. A problem created from an incident copies the symptoms documented in the incident description into the problem description field
- Impacts: the impacts that are caused by this problem
- Root Cause: identifies the source that causes or could cause incidents
- Reproduction Scenario: a step by step scenario that reproduces the behavior, the error
- Workaround: documents a temporary solution that allows Incident Management to resolve an incident, while Problem Management continues to seek a permanent solution. The problem remains open until a permanent solution is implemented and the problem solved. In the meantime, you continue to create incidents, apply the workaround and link them to the problem
- Permanent Solution: the permanent solution, implemented through a change, which is to ensure that no more incidents occur.
The problem is then ready to be closed.

For financial reasons, some problems might never be resolved (for example, when the impact is limited and the resolution is costly). In this case, you may decide to leave the problem open and apply the workaround at all times. Once the problem is resolved, you must ensure that the implemented solution has achieved its objectives, namely that incidents will not occur again. If another similar incident appears, it will be necessary to create a new problem and link it to the previous one.

Tasks Tab
As for service requests and changes, it is possible to add approval, standard and notification tasks to a problem. For more details concerning tasks, consult the Task Management article.

Activities Tab
Represents the problem activity log, where all activities are logged in chronological order (date and time) throughout the problem management steps. Activities related to the Problems module can represent, among other things:
- diagnostic activities of the problem
- the resolution activity, which is easy to identify if we use an activity type
- communications, by sending activities by email to other Octopus users, suppliers, end-users
- others

We recommend that you enter activity efforts, because they represent the time worked by an Octopus user on the problem. This time represents costs and is added to the total cost of the problem. With this information, you can assess whether it is worth continuing efforts, given the time already invested and the additional time estimated; entering this data contributes to decision making.

Request Tab
It is possible to link all types of requests (incident, problem, event, change) to a problem, and to qualify the relationship you want to establish. For example, if you want to link a change to a problem, the possible relationships can be:
- Is the cause of
- Is the solution of
- Is related to

Other logical relationships are presented according to the request type selected.

CI Tab
Enter the CI that is at the source of the problem.
The CI in cause identified in an incident is not necessarily the one that causes the problem.

Attached Files Tab
As in all other Octopus modules, you can attach files, add shortcuts, or add an attached file from the content of the clipboard. The Show activity attached files checkbox consolidates all attachments, including those of the activities.

History Tab
Octopus keeps track of the date and time when a problem is created and updated in the History tab. The information presented indicates the data added or updated (the initial value and the new value), the modification date/time and the Octopus user who made it.

Problem closure
When closing a problem, you can specify a closure categorization by selecting an activity type. From the Activities tab:
- Click on Add
- Select a type that corresponds to the problem closure:
  - Application update
  - CI decommissioned
  - External supplier intervention
  - Hardware repair
  - Internal technical intervention
- Document the closure activity in the Work breakdown section
- Save with OK
- Change the problem status to Closed

To find out how to configure the activity types, see the Activities in Octopus Wiki article.

Create a problem record from an incident
- Open an existing incident from which you want to create a problem
- Click on the Create a problem from this incident action
- The category, assignment, subject, description and CI in cause are automatically copied into the problem record
- Fill in the problem creation form according to known information, and link other incidents if applicable
- Click OK to save the problem record
- The incident that was the source of the problem is automatically linked to the problem

Create a problem record from a problem
The action Create a problem from... in the Problems module requires a plugin that can be added to your Octopus database. To get it, simply make a request to the Octopus Service Desk.
By clicking this action, the system opens a problem creation form and transfers the following information:
- Impact, urgency, priority
- Category / subcategory
- Assignment
- Subject
- Content of the Description field in the Documentation tab

You can create a new problem from an existing problem, but you could also create problem templates by writing Model in the subject. By excluding those models from your current problem lists, you could create a new list that only contains problem models. Those could represent, for example:
- Major Problem - high priority, assignment to the System Administrator group, with a procedure indicated in the description
- Network Problem - assignment to the Network Administrator group, with a procedure indicated in the description
- Other

Create a change record from a problem

The Create a change from this problem action in the Problems module needs a plugin that can be added to your Octopus database. To get it, you only need to make a request to the Octopus Service Desk.

By clicking this action, the system opens a change creation form, and once the change is recorded, the problem request is automatically linked to the change record in the Requests tab.

Create a Known Error

In Octopus, there is a built-in list that displays the Known Errors. This list is based on specific criteria:
- The status of the problem is not Closed
- The root cause has been identified
- A workaround has been identified

Once these criteria are met, the problem will appear in the Known Error list and the Octopus users will apply the workaround when appropriate.

Contribution to Incident Management

Octopus users involved in Incident Management can contribute in many ways to Problem Management. The first and most important contribution is undoubtedly the rigorous processing of each incident.
This includes:
- Proper documentation of the subject, description, troubleshooting steps, actions taken and resolution applied, for each incident
- The confirmation of the categorization and the CI in cause when resolving the incident
- The use, when appropriate, of the Potential Problem checkbox for an incident that could be a potential problem

Incident marked as Potential Problem

Any Octopus user can report an incident as a potential problem by selecting the Potential Problem checkbox and justifying the reason for this action. There is no Octopus permission for this; we recommend establishing an internal procedure to ensure the effectiveness of incident analysis.

It is of course necessary to document properly, in the activities, the symptoms, troubleshooting steps and functional escalations to other support levels. If a workaround temporarily removes the symptoms, it should be mentioned in the incident resolution activity. Link other incidents of a similar nature, as they can be linked to the problem. All these actions and documentation provide elements for analysis and important clues to identify the root cause and possibly a solution to the problem.

To report a potential problem from an incident:
- Check the Potential Problem checkbox, located below the Activity Log
- Add a note to justify the reason
- Link any other incidents of the same nature, if applicable

Note that this action does not create a problem request; it merely identifies incident(s) that could potentially be a problem.

Problem Management Contribution

Documentation of the problem Workaround field provides Incident Management with a list of temporary solutions to resolve incidents. From the Incidents/SR module, the Find a solution action displays a list of problems for which a workaround is documented. By clicking this action, the system searches for a match with the CI, and/or the manufacturer, the model and the categorization, and displays a list of potential workarounds.
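To make the mechanics above concrete, here is a small illustrative sketch of the Known Error criteria and the Find a solution matching. This is hypothetical Python, not Octopus's actual data model; the field names are assumptions.

```python
def is_known_error(problem):
    """Known error: still open, with both a documented root cause
    and a documented workaround (the built-in list criteria)."""
    return (problem["status"] != "Closed"
            and bool(problem.get("root_cause"))
            and bool(problem.get("workaround")))

def find_workarounds(incident, problems):
    """Sketch of 'Find a solution': return workarounds from known
    errors whose CI, model or categorization matches the incident."""
    return [p["workaround"] for p in problems
            if is_known_error(p)
            and (p.get("ci") == incident.get("ci")
                 or p.get("model") == incident.get("model")
                 or p.get("category") == incident.get("category"))]

problems = [
    {"status": "In progress", "root_cause": "Spooler driver bug",
     "workaround": "Restart the print spooler",
     "ci": "PRN-042", "model": "LaserJet 400", "category": "Printing"},
    {"status": "Closed", "root_cause": "Faulty NIC",
     "workaround": "Use Wi-Fi",
     "ci": "SRV-001", "model": "R740", "category": "Network"},
]
incident = {"ci": "PRN-077", "model": "LaserJet 400", "category": "Printing"}
print(find_workarounds(incident, problems))  # ['Restart the print spooler']
```

The closed problem is excluded even though it has a workaround, which mirrors the list criteria: only open problems with both a root cause and a workaround qualify as known errors.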
As we mentioned above, you must decide which problem is worth pursuing until a permanent solution is found. If you have to modify a CI to resolve a problem, the best practice is to go through a change. Therefore, a problem well documented with the impact, urgency, related incidents or events, related CIs, efforts and costs will contribute to the evaluation and implementation of the change.

Data Analysis

Various analysis techniques are available to help teams identify the root cause of an incident and assess the options in the approach to be taken to reduce the impact or resolve it. We invite you to consult the Investigation & Diagnosis activity described in the Problem Management - ITIL® Process article, where you will find a short description of these techniques.

Using Octopus, employees working in Problem Management will be able to gather data from Octopus to observe what happened in the production environment, identify trends and create appropriate problems. There are several possible sources of data in Octopus, including:
- Incidents marked Potential Problem
- Statistics module:
  - Problem > Most problematic CI
  - Problem > Most problematic CI models
  - Problem > Most problematic CI types
- Number of events (alerts) that generated incidents
- Activity types, configured to identify the cause of an incident at resolution
- Existing problems with their related incidents
- Etc.

According to observations and findings, the Octopus user will decide whether to create a problem. But Octopus data will not be the only source in the decision to create a problem. A known unstable CI, a critical CI with no redundancy, CIs involved in «Information» or «Warning» events, or a trend that could be corrected would all be reasons to create a problem proactively, in order to track CIs that may generate incidents in the short, medium or long term. Other examples could apply.

See the Event Management - Octopus Module or Event Management - ITIL® Process articles to learn more about events.
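The trend-spotting idea behind data sources like Most problematic CI and the Potential Problem flag can be sketched as follows. This is an illustrative sketch, not Octopus's implementation; the incident fields are assumptions.

```python
from collections import Counter

incidents = [
    {"id": 1, "ci": "SRV-MAIL-01", "potential_problem": True},
    {"id": 2, "ci": "SRV-MAIL-01", "potential_problem": False},
    {"id": 3, "ci": "PRN-042", "potential_problem": False},
    {"id": 4, "ci": "SRV-MAIL-01", "potential_problem": False},
]

# Count incidents per CI to surface candidates for a proactive problem.
by_ci = Counter(i["ci"] for i in incidents)
print(by_ci.most_common(1))  # [('SRV-MAIL-01', 3)]

# Incidents flagged as potential problems are another starting point.
flagged = [i["id"] for i in incidents if i["potential_problem"]]
print(flagged)  # [1]
```

A CI that keeps reappearing at the top of such a tally is exactly the kind of trend that would justify creating a problem proactively.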
Incidents marked as not significant for problem management

During the analysis phase, the Octopus user working in Problem Management identifies incidents that are not significant to problem management, in order to remove them from future searches.
- Select one or more existing incidents
- Click the Mark as not significant for problem management action
- A message informs you that the selected incident(s) will be marked as not significant for problem management - this operation is irreversible
- The Potential Problem checkbox is replaced by a note indicating that the incident was marked as not significant for problem management

Advanced Search

From the advanced search of the Incidents/SR module you can access the Problem tab. From there, you can filter incidents with the following options:
- Incidents not associated to a problem
- Exclude incidents not significant for problem management
- Incidents marked as potential problems

Therefore, the Octopus user can search incidents according to his needs. The search results will include all incidents, from the «new» to the «closed» status.

The advanced search allows you to retrieve incidents related to problems and vice versa. Here are the steps to use:
- Open the advanced search
- Select Request relationship from the list of result types
- Set the Type (Request 1) field equal to Incident
- Set the Type (Request 2) field equal to Problem

You will get a list of incidents related to a problem, which you can save for future reference.

Permissions

- Access the problem management module
- Create a problem
- Manage the relationship between incidents and problems
- Modify a problem's status
- Modify a problem
- Delete a problem task

To see the list of permissions and a brief description, you can download the following document: Octopus Permission Reference.

Reports and Lists

In the Statistics module, several reports related to Problem Management are available.
Some are management reports; others are used to identify CIs, CI models or CI types that were implicated in incidents or problems, which are useful for Problem Management.

Lists also allow extraction of useful information for Problem Management, for example:
- Known Errors: displays problems with a documented Root cause and Workaround. They help Incident Management resolve incidents faster by making solutions available.
- CI associated to problems: list of CIs related to problems
- Open: list of active problems; by displaying the priority and number of incidents columns, this list provides information about a potential priority change (an increase in related incidents could justify it)
- Closed: list of resolved problems
- Root Cause To Be Identified: problems open for more than one month for which a root cause is not documented; this list is useful to track problems that have not been worked on for some time and is based on exceeding a threshold
- Closed Incident Categories: this list serves Incident Management and Problem Management in the analysis of the categorization selected in incidents and problems, in order to adjust the categorization structure (by modifying or adding categories). This list can be exported to Excel, like all lists created in Octopus.

Other Applications

Knowledge

To support the teams participating in Problem Management, knowledge should be documented to help with the proper operation of the process, including:
- investigations, diagnoses, root cause analysis techniques
- creating / updating workarounds, temporary fixes and resolutions

This knowledge can be documented in the Configurations module, in a document-type CI. Thus, teams can refer to and even use analysis documents in a problem. See the How to Manage Procedures in Octopus article, which will guide you in establishing formalized knowledge for Problem Management.
Application Development

An organization that has in-house application development could want to record known errors during the development phase (a problem with a root cause identified and a documented workaround). If the known error is resolved before the application go-live, the record can be closed. Otherwise, we could transfer the known error into production, as it could be used for incident resolutions, by selecting a problem lifecycle status.

To distinguish the known errors in development from the known errors in production, separate lists can be created for consultation.

If you want to apply this concept to your development department, you must make a request to the Octopus Service Desk, which will add the In Development status to the Problems module of your database.

Major Problem

To know more about Major Problems

A major problem is a problem where the severity or impact is such that management decides to review how the situation was handled. A major problem review includes the processes followed, actions of staff, tools used and the environment.

This review is a learning activity; it is not punitive or a criticism. It aims:
- not to judge success or failure
- to attempt to discover why things happened
- to focus directly on the tasks and goals that were to be accomplished
- to encourage employees to surface important lessons learned
- to share lessons learned with others

The latest version of ITIL® introduced a major problem review activity to prevent reoccurrence, to verify that the problems marked as closed have effectively eliminated the error, and to retain lessons for the future.

In the Octopus context, the major problem review can be ensured by a standard task added to the problem workflow.
It should include the following elements:
- what was done right
- what was done wrong
- what could be done better next time
- how to prevent the problem from happening again
- identification of lessons learned

Do not hesitate to consult the Task Management article to get more details on task configuration in Octopus.
Troubleshooting

inSync Cloud Editions: Elite Plus | Elite | Enterprise | Business

Issue: A user is not able to activate inSync Client or access inSync Web from a corporate device

The group policy object has failed and the user certificate is not installed on the corporate device.

Resolution

To manually install a user certificate on the corporate device:
- On the corporate device, launch the Microsoft Management Console and add the user certificate.
- Select the template that was configured for enterprise users. For example, the "ADFS user" is the user template that is rolled out from the enterprise CA.

The inSync user can now access inSync Web or activate inSync Client from the corporate device.
Create a Discovery behavior

Create a Discovery behavior to determine which probes Shazzam launches and which MID Server is used.

Before you begin

Role required: discovery_admin

Procedure

1. Navigate to Discovery Definition > Behavior and click New.
2. Enter a name.
3. Right-click the form header and select Save.
4. In the Discovery Functionality related list, click New. Discovery Functionality defines what each MID Server in this behavior must do, specifically which protocols to detect.
5. Fill out the form fields:
   - Phase: Enter an integer that represents an arbitrary phase. The phase is used to group one or more functionalities together. All the functionalities within a specified phase are run together; use separate phases to scan with different protocols, such as SSH and SNMP. In that example, set one phase for the SSH scan and another phase for the SNMP scan.
   - Active: Keep this option selected to apply the discovery functionality.
   - Functionality definition: Click the lookup icon, and then select a pre-configured functionality that defines the protocol or list of protocols that each MID Server scans.
   - Match criteria: Define criteria here for Windows MID Servers.
   - MID Servers: Select one or more MID Servers to perform this functionality for the following Discovery types: IP Scan, CI Scan. Discovery automatically balances the load when multiple MID Servers are selected.
6. Right-click the form header and select Save.
7. To add criteria that the functionality must meet in order to be triggered, click New in the Functionality Criteria related list, fill out the form fields, and then click Submit.
   - Name: The name in the criteria is the variable that passes the following information:
     - mid_server: MID Server that processes the results from the Shazzam probe.
     - win_domain: Windows domain of the target device.
   - Operator: Select a logical operator.
   - Value: Enter the actual name of the MID Server (mid_server) or domain (win_domain) to pass to Discovery for this criteria.
This field can also have a value of mid_domain, which defines the Windows domain of the MID Server that is processing the Shazzam results.

Note: The following graphic shows an example of functionality criteria.

What to do next

Create a Discovery Schedule of type Configuration Item, and select Use Behavior for the MID Server selection method.
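The phase grouping and criteria evaluation described above can be illustrated with a small sketch. This models the behavior conceptually in Python; it is not ServiceNow code, and the data shapes are assumptions.

```python
from collections import defaultdict

functionalities = [
    {"phase": 1, "protocol": "SSH",  "mid_server": "mid-linux-01"},
    {"phase": 1, "protocol": "SNMP", "mid_server": "mid-linux-01"},
    {"phase": 2, "protocol": "WMI",  "mid_server": "mid-win-01"},
]

# Group functionalities by phase: everything in one phase is scanned together.
by_phase = defaultdict(list)
for f in functionalities:
    by_phase[f["phase"]].append(f["protocol"])

print(dict(by_phase))  # {1: ['SSH', 'SNMP'], 2: ['WMI']}

# A functionality criterion compares a named variable (mid_server,
# win_domain, mid_domain) with a value using a logical operator.
def criterion_met(name, operator, value, context):
    actual = context.get(name)
    if operator == "is":
        return actual == value
    if operator == "is not":
        return actual != value
    return False

print(criterion_met("win_domain", "is", "CORP", {"win_domain": "CORP"}))  # True
```

Only when all criteria for a functionality evaluate to true is that functionality triggered for the behavior.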
Asking questions

In a Tanium™ deployment, asking questions is a fundamental interaction with endpoints.

What is a question?

Tanium questions help you get key pieces of information from managed enterprise endpoints. The Ask a Question feature is built on a natural language parser that enables you to get started with natural questions rather than a specialized query language. You do not need to enter questions as complete sentences or particularly well formed inquiries. Word forms are not case sensitive and can even include misspellings. The parser interprets your input and suggests a number of valid queries that you can use to formalize the question that is sent to Tanium™ Clients.

The following figure shows an example of how natural language input is parsed into proposed queries. First, the user enters the fragment last logged in user and clicks Search. In response, Interact returns a list of queries cast in valid syntax.

Basic questions include:
- one or more sensor names in the get clause.
- all machines (in other words, all Tanium Client host computers) in the from clause.

Advanced questions include filter clauses and parameterized sensors.

What is a sensor?

In essence, a sensor is a script that is executed on an endpoint to compute a response to a Tanium question. Sensors are distributed to clients during registration. Sensors enable you to ask questions about:
- Hardware/software inventory and configuration
- Running applications and processes
- Files and directories
- Network connections

The Initial Content that is imported during the Tanium Server installation includes sensors to support a wide range of common questions. Additional sensors may be added when you import additional Tanium content packs and Tanium solution modules. If you cannot find a sensor you need within Tanium-provided content, you can create user-defined sensors. For more information, see Sensors.
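Conceptually, a sensor is just a small script that computes an answer string on the endpoint. The following stand-in is hypothetical, written in Python for illustration only; it is not an actual Tanium sensor, and the settings dictionary is an assumption.

```python
import platform

def logging_level_sensor(settings):
    """A toy 'sensor': compute one answer string for this endpoint,
    analogous to reading the LogVerbosityLevel setting. Real Tanium
    sensors are scripts distributed to clients during registration."""
    return str(settings.get("LogVerbosityLevel", 0))

def os_platform_sensor():
    """Another toy sensor: answer with the endpoint's OS name."""
    return platform.system()

print(logging_level_sensor({"LogVerbosityLevel": 1}))  # 1
```

Each managed endpoint runs the script locally and reports only the resulting answer string back toward the server.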
Counting questions and non-counting questions

A counting question is designed to return results that can be meaningfully counted. A counting question can have only one sensor. For example, Get Tanium Client Logging Level from all machines is a counting question. The sensor returns the value of the LogVerbosityLevel setting. When a managed endpoint is prompted to add its answer to the answer message, it increments the tally of the answer that its value matches. The Tanium Server maintains a table of answer strings. In many cases, like logging level, there are just a few common answers, so the question has a relatively small footprint.

A non-counting question has sensors that return unique strings. For example, Get Tanium Client IP Address from all machines returns IP addresses, which are unique. When a Tanium Client is prompted to add its answer to the answer message, it adds a new string. On the Tanium Server, the data footprint for a non-counting question can be quite large.

Questions with multiple sensors

Use the AND operator in the get clause to specify multiple sensors. Results are grouped by the first sensor, then by the next sensor, and so on. The following example shows a question that uses multiple sensors.

Questions with parameterized sensors

A parameterized sensor accepts a value specified at the time the question is asked. The following example shows the File Exists sensor. The parser prompts you to specify a file path and file name.

Another example is the High CPU Processes sensor. You can specify a parameter that is the number of CPU processes to return from each machine. Let's say you want to get the top 5 highest CPU utilizing processes. The question has the following syntax:

Get High CPU Process[5] from all machines

For sensors with multiple parameters, you can pass an ordered list separated by a comma.
For example, if you want to get the results of Tanium Action Log number 1 and get 10 lines of results, specify a parameter list as shown in the following example:

Get Tanium Action Log[1,10] from all machines

Questions with filters

You can use filters to craft questions that target fewer computers than "all machines". You often want to work with a set of computers that have a specific process name or value. This is an example of an advanced question. The left side is a complete and valid query; the right side contains a filter: the "with" expression.

The filter expression on the right side must evaluate to a Boolean true or false. For example, the expression with Running Processes contains explore evaluates to true if the specified string matches the result string, or false if it does not.

A parameterized sensor like File Exists[] returns a string "File Exists: Filename" or "File does not exist", so you must be careful how you cast it in a filter expression. The filter expression with File Exists[c:\a.txt] containing "Exists" evaluates to true when the result is "File Exists: c:\a.txt" and false when the result is "File does not exist", so it can be used to filter the set of responses.

Filters in the from clause are the first part of a question that gets processed by the endpoint. If the endpoint data does not match the filter, then the endpoint does not process the question any further. If there are multiple filters, each filter is processed and evaluated. If the evaluation is true, then the sensors on the left side of the question are also executed and returned.

Filter expressions can match strings or regular expressions. The following table describes the operators supported in filter clauses. See Reference: Advanced question syntax for examples of complex filter expressions.

Using the Question Builder

The Question Builder is another way to create a question. It has form fields to help you complete the get statement and the from clause, including any filters.
You can launch the Question Builder in either of the following ways:
- In the Ask a Question box, click Question Builder in the top right corner.
- After you have asked a question and want to refine it, click Copy to Question Builder.

The following figure shows the Question Builder.

The first text box is for sensor names. Start typing and then use the typeaheads to select sensors. Alternatively, you can use the Browse Sensors dialog box to select sensors. When you use the dialog box, you can review sensor descriptions.

The following table provides guidelines for Advanced Sensor Options.

In the from clause, you can configure multiple filters, including nested filters. For example, suppose you wanted to investigate the web browsers installed on computers. You can use Boolean ANDs and ORs in the from clause to target "modern" browsers.

Question expiration

When a dynamic or saved question is issued, the question is assigned a question ID. In your web browser, you will notice the question ID in the URL. The question ID "expires" after 10 minutes, and its corresponding URL becomes invalid. This means that for up to 10 minutes, you can refresh the page or share the link. After 10 minutes, if you navigate to the link, Interact displays a message indicating the question has expired, and it gives you the option to copy the question text to the Question Bar so you can reissue it.

Question History

Go to Administration > Question History to review a chronology of questions that have been issued. By default, an entry for a question is maintained in the chronology for 7 days. You can change the default limit with the global setting SOAPQuestionHistoryLimitInDays. You can use the Question History to review question syntax and the question expiration timestamps. You can also copy the question to the Question Bar or Question Builder.

You must be assigned a role with the Read Question History (Micro Admin) permission to see the Question History page.
However, a user with only the microadmin permission cannot load a question from the Question History page. Users assigned the Administrator reserved role can see the Question History page and load a question from the page.

Question permissions

You must be assigned the Show Interact module permission to see the Ask a Question bar and the Question Builder. You must also have the Ask Dynamic Questions permission (can be assigned in any advanced role). The sensors available for questions are determined by Read Sensor content set permissions. The Administrator reserved role has all of these permissions. The Content Administrator role has all except the Show Interact module permission. Be sure to explicitly assign the Interact permission.

Last updated: 7/5/2018 12:23 PM
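The 10-minute expiration described under Question expiration can be modeled as a simple time-to-live check. This is an illustrative sketch, not Tanium internals.

```python
import time

QUESTION_TTL_SECONDS = 10 * 60  # question IDs expire after 10 minutes

def is_expired(issued_at, now=None, ttl=QUESTION_TTL_SECONDS):
    """True once the question ID (and its URL) is no longer valid."""
    now = time.time() if now is None else now
    return now - issued_at > ttl

issued = 1_000_000.0
print(is_expired(issued, now=issued + 300))  # False: still shareable
print(is_expired(issued, now=issued + 601))  # True: link has expired
```

Within the TTL, refreshing the page or sharing the link reuses the same question ID; after the TTL, the question text must be reissued, which assigns a new ID.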
Create a custom Contao module – part two

Warning: This guide was written for Contao 2.x and a lot of its information is outdated! Read with care and only use as a general-purpose guide.

In this part of the CD Collection module tutorial, I will show you how to create the most important part of our module – the Data Container Array. At a glance, it is a table configuration array, which tells Contao how to render the module's back end form. Just like the configuration files, DCA files are loaded when Contao is initialized. It allows you to overwrite existing settings of other modules. As you know from the previous part of the tutorial, modules are loaded in alphabetical order. Thus, if you want to add a new field to the news module, you don't need to modify any of the news files. But that's the subject for another tutorial.

File structure

The DCA file must be located in the /cd_collection/dca/ folder and have the same name as the table defined in the module config:

```php
// Back end module
array_insert($GLOBALS['BE_MOD']['content'], 3, array
(
    'cd_collection' => array
    (
        'tables' => array('tl_cds_category', 'tl_cds'),
        'icon'   => 'system/modules/cd_collection/html/icon.gif'
    )
));
```

Please create two files called tl_cds_category.php and tl_cds.php in the dca folder.

Inside DCA

The data container array usually has the following structure:

```php
$GLOBALS['TL_DCA']['tl_table'] = array
(
    'config' => array
    (
        // dca config settings go here
    ),
    'list' => array
    (
        // all settings that are applied to the records listing
        // we can define here: sorting, panel layout (filter, search, limit fields),
        // label format, global operations, operations on each record
    ),
    'palettes' => array
    (
        // palettes settings
    ),
    'fields' => array
    (
        // fields that are visible in the back end form
    )
);
```

As you can see, it is one big nested array. The first part, the config, contains basic settings like the data container type or table relationship (child/parent).
The list array is usually divided into four sections: sorting, label, global_operations and operations. Each of them is also an array. Next, the palettes – a palette defines the order of displaying form fields; it is also possible to group them into sections (available from 2.7.0). Finally, the fields is an array that holds information about our form fields. All of the config options are available in the official manual.

That is all of the basic theory, let's get to work!

DCA config

Open tl_cds_category.php and put in the following content:

```php
$GLOBALS['TL_DCA']['tl_cds_category'] = array
(
    // Config
    'config' => array
    (
        'dataContainer' => 'Table',
        'ctable'        => array('tl_cds'),
        'switchToEdit'  => true
    ),
    //...
```

As a data container we have defined Table, because our module will use the MySQL database tables. The ctable is a shortcut for "children table". In the previous part, we assumed that tl_cds is our child table. The switchToEdit key, if true, activates the "save and edit" button when a new record is added:

However, this button is available only when the sorting mode is 4.

DCA list

The list array is used to maintain the homepage of our module. Its purpose is to set up the user interface and provide a listing of records. It consists of four arrays that I will describe in a moment. Meanwhile, I advise you to open the manual to make you see better what I am talking about.

DCA Sorting

Okay, so the first array is called sorting. As its name points out, it is used to define the settings for displaying our records. Put the following code into the file:

```php
    // List
    'list' => array
    (
        'sorting' => array
        (
            'mode'        => 1,
            'fields'      => array('title'),
            'flag'        => 1,
            'panelLayout' => 'search,limit'
        ),
        //...
```

Mode set to 1 defines that records are sorted by a fixed field, which is defined in the next line. Here we choose the title of the category to be the base for our sorting. Flag determines how records are sorted; 1 means "sort by initial letter ascending", in other words – alphabetically.
At the end of this subarray we define the panel layout, which features a search box and a record limit dropdown.

Label

The label array is also a child of the list array. It is used to define the record's label format. You might notice that this is similar to PHP's sprintf() function. To better visualize the analogy, the code below works like sprintf(format, fields). Paste the following code:

```php
        'label' => array
        (
            'fields' => array('title'),
            'format' => '%s'
        ),
        //...
```

I hope this code is clear to you. It simply gets the title field from the database and displays it.

Global operations

…are the functions that can be applied to multiple records at the same time. A perfect example is the edit multiple function, which we'll implement into our module:

```php
        'global_operations' => array
        (
            'all' => array
            (
                'label'      => &$GLOBALS['TL_LANG']['MSC']['all'],
                'href'       => 'act=select',
                'class'      => 'header_edit_all',
                'attributes' => 'onclick="Backend.getScrollOffset();"'
            )
        ),
        //...
```

The above code will add the "Edit multiple" link.

Operations

This array defines which operations will be available for each record. Put in the following code:

```php
        'operations' => array
        (
            'edit' => array
            (
                'label' => &$GLOBALS['TL_LANG']['tl_cds_category']['edit'],
                'href'  => 'table=tl_cds',
                'icon'  => 'edit.gif',
            ),
            'copy' => array
            (
                'label' => &$GLOBALS['TL_LANG']['tl_cds_category']['copy'],
                'href'  => 'act=copy',
                'icon'  => 'copy.gif',
            ),
            'delete' => array
            (
                'label'      => &$GLOBALS['TL_LANG']['tl_cds_category']['delete'],
                'href'       => 'act=delete',
                'icon'       => 'delete.gif',
                'attributes' => 'onclick="if (!confirm(\'' . $GLOBALS['TL_LANG']['MSC']['deleteConfirm'] . '\')) return false; Backend.getScrollOffset();"',
            ),
            'show' => array
            (
                'label' => &$GLOBALS['TL_LANG']['tl_cds_category']['show'],
                'href'  => 'act=show',
                'icon'  => 'show.gif'
            )
        )
    ), // end of list array
    //...
```

I think the code is self-explanatory. We have added four basic functions that come in almost every module of Contao.
Note that the edit array takes a different href value than in a typical (one-table) module – and that's because we use two tables in our cd collection. Thus, when you click the edit button, it will take you to the listing of tl_cds records!

Each record can be edited, copied, removed or simply shown.

DCA Palettes

The palettes array defines how the module's form should be presented. Since version 2.7 you can group fields into sections, which is a really great usability improvement.

// Palettes
'palettes' => array
(
    'default' => '{title_legend},title,description'
),
//...

As you can guess, the default palette is displayed by default. A palette is a string of field names which are concatenated with either a semicolon (;) or a comma (,). Whereas the comma is just used to separate the field names, the semicolon indicates the beginning of a new fieldset. The legends (those expandable and collapsible groups/fieldsets) should always be surrounded with braces – just like {title_legend}. See the image for a better explanation:

DCA Fields

The above settings can be copy-pasted (with some small changes) into any of our modules, but fields are always unique for each module and have to be created manually. Put in the following code:

// Fields
'fields' => array
(
    'title' => array
    (
        'label' => &$GLOBALS['TL_LANG']['tl_cds_category']['title'],
        'inputType' => 'text',
        'search' => true,
        'eval' => array('mandatory'=>true, 'maxlength'=>64)
    ),
    'description' => array
    (
        'label' => &$GLOBALS['TL_LANG']['tl_cds_category']['description'],
        'inputType' => 'textarea',
        'eval' => array('rte' => 'tinyFlash')
    )
)
); // end of $GLOBALS['TL_DCA']['tl_cds_category'] array

According to our module table in the database, we have added two fields: title and description. Now a little explanation of each of them. The label and inputType keys are obvious. The label's value corresponds to the array that we will create in the next part of the tutorial.
The search key set to true makes the title field searchable via the panel layout. The eval(uation) array configures a particular field in detail. You can e.g. create mandatory fields, add a date picker or define the rich text editor of a textarea. We make the title mandatory and set its max length to 64, the same as the field in the database. Description is a textarea with a lightweight rte config.

tl_cds DCA

Okay, now it's time to create the data container array for the tl_cds table. It will have a slightly different config, as it is a child of tl_cds_category. Note that Contao is not able to display child records by default, so we are going to use a child_record_callback. Open the cd_collection/dca/tl_cds.php file and put in the following content:

$GLOBALS['TL_DCA']['tl_cds'] = array
(
    // Config
    'config' => array
    (
        'dataContainer' => 'Table',
        'ptable' => 'tl_cds_category',
    ),
//...

The above code is very simple. We define the database table as our data container, and tl_cds_category as the parent table.

Sorting

Now, insert the code:

// List
'list' => array
(
    'sorting' => array
    (
        'mode' => 4,
        'fields' => array('title'),
        'flag' => 1,
        'headerFields' => array('title', 'description'),
        'panelLayout' => 'search,limit',
        'child_record_callback' => array('tl_cds', 'listCds')
    ),
//...

Sorting mode 4 displays the child records of a parent record. However, as I have mentioned before, Contao is not able to list them on its own. Thus, we need to create a new function and assign it to child_record_callback. The first parameter in the array is the name of the class, while the second is the name of the function. Usually such functions are placed at the end of the file, so I will leave it for later. Since version 2.9.0, Contao supports sorting records in mode 4, so we also need to define a flag and the fields. Flag set to 1 means that the records will be sorted by initial letter ascending (from A to Z).
The headerFields array defines which fields of the parent table are going to be listed in the child module:

This data is taken from the tl_cds_category table

You can see that "My collection of rock cds." is not at the same height as "description:". This happens when we display a field that was created with the rich text editor. By default, the rte wraps the text in <p> tags, and the Contao back end's CSS applies a 12px bottom margin to all paragraph elements.

Global operations & operations

Global operations are the same as tl_cds_category's. Operations have just one small difference – in the edit array, the href key takes the value act=edit, not table=tl_cds.

'global_operations' => array
(
    'all' => array
    (
        'label' => &$GLOBALS['TL_LANG']['MSC']['all'],
        'href' => 'act=select',
        'class' => 'header_edit_all',
        'attributes' => 'onclick="Backend.getScrollOffset();"'
    )
),
'operations' => array
(
    'edit' => array
    (
        'label' => &$GLOBALS['TL_LANG']['tl_cds']['edit'],
        'href' => 'act=edit',
        'icon' => 'edit.gif'
    ),
    'copy' => array
    (
        'label' => &$GLOBALS['TL_LANG']['tl_cds']['copy'],
        'href' => 'act=copy',
        'icon' => 'copy.gif'
    ),
    'delete' => array
    (
        'label' => &$GLOBALS['TL_LANG']['tl_cds']['delete'],
        'href' => 'act=delete',
        'icon' => 'delete.gif',
        'attributes' => 'onclick="if (!confirm(\'' . $GLOBALS['TL_LANG']['MSC']['deleteConfirm'] . '\')) return false; Backend.getScrollOffset();"'
    ),
    'show' => array
    (
        'label' => &$GLOBALS['TL_LANG']['tl_cds']['show'],
        'href' => 'act=show',
        'icon' => 'show.gif'
    )
)
), // end of list array
//...

Palettes

It is time to organize our fields and group them into sections. Put the following code after the list array:

// Palettes
'palettes' => array
(
    'default' => '{title_legend},title,artist;{image_legend},image;{comment_legend:hide},comment'
),
//...

Notice that semicolons are used to separate the fieldsets. Also, you might notice a new legend element: :hide. It forces a group to be collapsed by default. Simple yet useful.
Fields

Time for the best part – defining fields :) We need to create four fields: title, artist, image and comment. Just like in the database.

// Fields
'fields' => array
(
    'title' => array
    (
        'label' => &$GLOBALS['TL_LANG']['tl_cds']['title'],
        'inputType' => 'text',
        'search' => true,
        'eval' => array('mandatory'=>true, 'maxlength'=>64, 'tl_class'=>'w50')
    ),
    'artist' => array
    (
        'label' => &$GLOBALS['TL_LANG']['tl_cds']['artist'],
        'inputType' => 'text',
        'search' => true,
        'eval' => array('mandatory'=>true, 'maxlength'=>64, 'tl_class'=>'w50')
    ),
    'image' => array
    (
        'label' => &$GLOBALS['TL_LANG']['tl_cds']['image'],
        'inputType' => 'fileTree',
        'eval' => array('files'=>true, 'filesOnly'=>true, 'fieldType'=>'radio')
    ),
    'comment' => array
    (
        'label' => &$GLOBALS['TL_LANG']['tl_cds']['comment'],
        'inputType' => 'textarea',
        'eval' => array('rte'=>'tinyFlash')
    )
)
); // end of $GLOBALS['TL_DCA']['tl_cds'] array
//...

Step by step:

Title – a normal text input, available for search, with a max length of 64 characters. The new thing for you is tl_class. There are 5 input classes in Contao that are used for better appearance. All classes can be found here.

Fields without applied classes

Fields with applied 'w50' class

Artist – this field is identical to the title field.

Image – as the input type we define fileTree, which will render the file structure of the tl_files folder. Additionally, we set files to true, so that both files and folders are shown. However, we need to disable selecting a folder – that's why we have added 'filesOnly'=>true. The last eval parameter is the field type. It could be either checkbox or radio, but since we want to add only one cd cover per album, radio is the thing.

Comment – this is the same field as the description field in tl_cds_category.

Listing child records

At this point, we are able to add new records to the database. It would be nice if we could list them, so that they can be edited/viewed/deleted and searched.
At the end of the tl_cds.php file, create a new class that extends Backend and contains the listCds function:

class tl_cds extends Backend
{

    /**
     * List cds of our collection
     * @param array
     * @return string
     */
    public function listCds($arrRow)
    {
        return '<div><img src="' . $arrRow['image'] . '" style="height:100px; width:100px; float:left; margin-right: 1em;" /><p><strong>' . $arrRow['title'] . '</strong> (' . $arrRow['artist'] . ')</p>' . $arrRow['comment'] . '</div>' . "\n";
    }
}

It is a good convention to name classes the same as the tables they correspond to. Let's take a look at the function. As its first and only parameter it takes an array that contains data retrieved from the database. The array looks like this:

Array
(
    [id] => 2
    [pid] => 1
    [tstamp] => 1270829730
    [title] => Hellfire Club
    [artist] => Edguy
    [image] => tl_files/Edguy.jpg
    [comment] => <p>In contrast to their previous albums, Hellfire Club owes more to the sound of Iron Maiden than their biggest influence Helloween, both in vocals and in music. The first track "Mysteria" opens with the introduction "Ladies and Gentlemen! Welcome - to the Freak Show!".</p>
)

Array
(
    [id] => 5
    [pid] => 1
    [tstamp] => 1270831831
    [title] => One X
    [artist] => Three Days Grace
    [image] => tl_files/OneX.jpg
    [comment] => <p>One-X is the second album from Three Days Grace, released on June 13, 2006. The album entered the Billboard Top 200 charts at #5 with first week sales of more than 78,000 and has so far gone on to sell over 1,200,000 copies in the US alone.</p>
)

Each key corresponds to a field in the database table. I hope both the code and the array are clear to you.
https://docs.contao.org/books/cookbook/en/custom-module/part2.html
If you have not yet read it, please start with the overview of eazyBI concepts. The following are the main steps you need to take to start analyzing your uploaded or imported data and to create and save reports. Go to the Analyze main tab and follow these instructions. You can also try these steps in the demo Sales cube.

On this page:

Drag dimensions to columns and rows

Select the dimensions across which you would like to analyze your data (measures). E.g. in this example we would like to analyze who our best customers are based on sales amount.

Select just one measure

If you have very many measures, then instead of dragging all measures to columns you can select just the one measure you would like to analyze.

Expand and collapse, rearrange measures

You can expand and collapse selected dimension members to show detailed hierarchy level members (e.g. start with all customers and expand into countries, regions or states, cities and individual customers). You can add or remove selected measures in columns or rearrange them with drag and drop.

All hierarchy level members and member actions

Instead of expanding hierarchy levels from the top, you can select all hierarchy level members (e.g. all cities). You can then order them by a selected measure or choose other available member actions (e.g. drill into lower hierarchy levels, drill across another dimension, select only one member or remove a selected member, select top or bottom rows based on a selected measure, filter rows by a specified condition). Read also about more advanced filters using date filters and regular expressions.

Conditional formatting on cells

Select Cell formatting to specify the range of values which should have a different text or background color in the column or row. You can also use Cell Formatting to color text fields; then write the exact text as both the Min and Max values. If you have letter-coded field values like A, B, C, D etc., then you can set A as Min and C as Max to color all As, Bs and Cs.
Add members to Time dimension

The dates in the Time dimension are created dynamically – days are added only if there is some activity on that date. If you wish to add analysis about future or past time periods that you do not yet see in eazyBI, you can add these Time members in Time dimension / All hierarchy level members. To add a date range you should use exact date names or relative date descriptions. After clicking OK you will see a confirmation screen with the number of Time dimension day-level members that will be created; click 'Yes' to proceed and create the new Time dimension members, which can be used e.g. to show forecasts or comparisons when not all of last year's data exists.

Page dimensions

If necessary you can drag some dimensions to pages as well and then select one dimension member in the page dimension to see the corresponding results. By default all dimension members will be visible and measures will be shown only where applicable based on the filter. If you wish to see only the dimension members that have values for the selected filters, click the Nonempty option in the Rows section header. This will make a NonEmpty CrossJoin with the selected measures and only show dimension members with measure values for the selected filter.

Save report and toolbar buttons

When you have created a report that you would like to use later, save it from the report header toolbar. If you edit an existing report and would like to save it while also keeping the previous report, click the save button and give the report a new name. This will save the new report and leave the previous report as it was. There are also other report actions in the report header toolbar, for example to rename or delete the report, or to embed reports in other HTML pages. There are options for report result configuration in the report results toolbar that you can try out – maximize, undo and redo buttons, as well as adding a report description and exporting results.
Export and import report definitions

If you have several eazyBI accounts or several eazyBI environments (development, test, production), you can export report definitions from one eazyBI account and import them into another. To export an individual report definition, click on other report actions in the report header toolbar and select Export definition. You will see the report definition in JSON format; copy this report definition for pasting into the other eazyBI environment. The exported report definition will also include all calculated member definitions that are used and needed by this report. You can also export all report definitions for a selected cube from the Analyze tab by clicking Export reports (any private reports will not be exported in this all-reports list). Now you can visit the other eazyBI account where you would like to import one or several exported reports and click Import reports. In the Import report definition dialog, paste the previously copied report definition export results (one or several). If there are any warnings, review them and confirm that you would like to continue with the report import. After the import is done you will be able to see and use the imported reports in the other eazyBI account.

Report folders

If you have created many reports, you can create report folders and organize your reports into folders. You can select a folder when saving a new report, or move an existing report to a selected folder.
https://docs.eazybi.com/display/EAZYBI/Create+reports
Set the Service Startup Account for SQL Server Agent (SQL Server Configuration Manager)

The SQL Server Agent service startup account defines the Windows account that SQL Server Agent runs as, as well as its network permissions. This topic describes how to set the SQL Server Agent service account in SQL Server 2016 by using SQL Server Configuration Manager, opened from SQL Server Management Studio.

In This Topic

Before you begin: Limitations and Restrictions

To set the Service Startup Account for SQL Server Agent using SQL Server Management Studio

Before You Begin

Limitations and Restrictions

Beginning with SQL Server 2005, SQL Server Agent no longer requires that the service startup account be a member of the Microsoft Administrators group. However, the SQL Server Agent service startup account must be a member of the SQL Server sysadmin fixed server role. The account must also be a member of the msdb database role TargetServersRole on the master server if multiserver job processing is used.

Using SQL Server Management Studio

To set the Service Startup Account for SQL Server Agent

1. In Registered Servers, click the plus sign to expand Database Engine.

2. Click the plus sign to expand the Local Server Groups folder.

3. Right-click the server instance where you want to set up the service startup account, and select SQL Server Configuration Manager….

4. In the User Account Control dialog box, click Yes.

5. In SQL Server Configuration Manager, in the console pane, select SQL Server Services.

6. In the details pane, right-click SQL Server Agent(server_name), where server_name is the name of the SQL Server Agent instance for which you want to change the service startup account, and select Properties.

7. In the SQL Server Agent(server_name) Properties dialog box, on the Log On tab, select one of the following options under Log on as:

Built-in account: select this option if your jobs require resources from the local server only.
For information about how to choose a Windows built-in account type, see Selecting an Account for SQL Server Agent Service. Important The SQL Server Agent service does not support the Local Service account in SQL Server Management Studio. This account: select this option if your jobs require resources across the network, including application resources; if you want to forward events to other Windows application logs; or if you want to notify operators through e-mail or pagers. If you select this option: In the Account Name box, enter the account that will be used to run SQL Server Agent. Alternately, click Browse to open the Select User or Group dialog box and select the account to use. In the Password box, enter the password for the account. Re-enter the password in the Confirm password box. Click OK. In SQL Server Configuration Manager, click the Close button.
https://docs.microsoft.com/en-us/sql/ssms/agent/set-service-startup-account-sql-server-agent-sql-server-configuration-manager
Designing Dimensions

Note: For performance issues related to dimension design, see the SQL Server 2005 Analysis Services Performance Guide.

Defining Dimensions, Attributes, and Hierarchies

Note: You can also design and configure dimensions, attributes, and hierarchies programmatically by using either XMLA or Analysis Management Objects (AMO). For more information, see Analysis Services Scripting Language Reference and Analysis Management Objects (AMO).

In This Section

The following table describes the topics in this section.

Creating a New Dimension Using the Dimension Wizard: Describes how to define a database dimension by using the Dimension Wizard.

Defining Database Dimensions: Describes how to modify and configure a database dimension by using Dimension Designer.

Defining Dimension Attributes: Describes how to define, modify, and configure a database dimension attribute by using Dimension Designer.

Defining Attribute Relationships: Describes how to define, modify, and configure an attribute relationship by using Dimension Designer.

Creating User-Defined Hierarchies: Describes how to define, modify, and configure a user-defined hierarchy of dimension attributes by using Dimension Designer.

Enhancing Dimensions using the Business Intelligence Wizard: Describes how to enhance a database dimension by using the Business Intelligence Wizard.
https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008/ms174537(v=sql.100)
By clicking the “help” link at the top right of your dashboard, you’ll have access to contextual help that is relevant to the section of your dashboard you are in at any given minute. If you’re stuck trying to add an image to your post, just click the help link from the post editor to see related help topics. If you’re trying to add a category, but just can’t remember the steps, click on the “help” link from the categories page for instructions. You can even access the specific video tutorials related to the section that you’re in.
http://docs.theblogpress.com/get-started/contextual-help-where-and-when-you-need-it
Preserve, enable, and delete users

inSync Cloud Editions: Elite Plus | Elite | Enterprise | Business

Overview

This section describes how you can preserve, re-activate, or delete inSync users from the inSync Management Console, based on your organization's requirements.

Differences between preserving and deleting users

The following table lists the differences between preserved and deleted users.

In this section

The following table lists the actions that you can perform on users to change their state in inSync.
https://docs.druva.com/001_inSync_Cloud/Cloud/020_Backup_and_Restore/010_Set_up_inSync/030_Profile_User_Device_Management/020_Add_and_manage_users/060_Disable_enable_and_delete_users
Virtual Switches for Use with NNM

The Tenable Nessus Network Monitor (NNM) monitors network traffic at the packet layer to determine topology and identify services, security vulnerabilities, suspicious network relationships, and compliance violations. NNM provides visibility into both server- and client-side vulnerabilities, discovers the use of common protocols and services (e.g., HTTP, SQL, file sharing), and performs full asset discovery for both IPv4 and IPv6, even on hybrid networks.

Virtualization of server rooms adds a challenge to monitoring the network. Communication between VMs within a virtual switch is not seen by the standard monitoring tools on the physical network, since traffic between VMs does not route to the physical switch. NNM provides the ability to passively scan virtual network traffic between VMs that are in the same virtual switch as a deployed NNM VM.

This section provides an overview of the standard methods for configuring the virtual switches in various systems to provide NNM with a SPAN or mirror port to gather data from inside the virtual network between VMs. While some platforms provide the ability to send monitored traffic to a remote host, the guidance provided in this document describes an environment where NNM is configured on a VM within the virtual switch cluster. The exact desired options may vary based on local monitoring requirements.

The platform used to generate the technical steps in this document was configured with the most recent versions of the software. If you are using older or newer software revisions, some of these steps may vary.
https://docs.tenable.com/nnm/deployment/Content/VM/Virtual_Switches_for_Use_with_NNM.htm
Switch between the Log List and the Log Graph modes by clicking on the Log Mode button.

Use the query to control what's displayed in your Log Graph, and choose the graph Timesteps. Changing the global timeframe changes the list of available Timesteps values.

Select a Log Graph visualization type using the graph selector. Available visualizations:

Timeseries: Visualize the evolution of a single measure (or a facet's unique count of values) over a selected time frame, and (optionally) split it by an available facet. The following timeseries Log Graph shows: the evolution of the top 5 URL Paths according to the number of unique Client IPs over the last month.

Top List: Visualize the top values from a facet according to the chosen measure. The following Top List Log Graph shows: the top 5 URL Paths according to the number of unique Client IPs over the last month.

Select or click on a section of the graph to either zoom in or see the list of logs corresponding to your selection.

Export your Log Graph. This functionality is still in beta; contact our support team to activate it for your organization.

Use the Timeseries widget to display log graphs directly in your Dashboards.
https://docs.datadoghq.com/logs/graph/
Deployment Permissions for SQL Server Topic Last Modified: 2010-11-03 Microsoft SQL Server 2005 and Microsoft SQL Server 2008 have specific requirements when installing and deploying Microsoft Lync Server 2010. Because Windows and SQL Server define their security differently, logging in as an administrator in the Active Directory domain does not implicitly grant permissions for SQL Server. You must also be a member of the sysadmin entity on the SQL Server-based server you are configuring. Permissions Required for Database and Lync Server Installation The following options detail three permissions and group membership associations for installation of Lync Server files and SQL Server databases. Choose the scenario that best meets the requirements of your organization. Permissions and Group Membership Associations
https://docs.microsoft.com/en-us/previous-versions/office/skype-server-2010/gg398375(v=ocs.14)
template<class C> class OEIsMember : public OESystem::OEUnaryPredicate<C>

This class represents the OEIsMember template functor, which identifies all of the objects that are in an STL container.

The following methods are publicly inherited from OEUnaryPredicate:

The following methods are publicly inherited from OEUnaryFunction:

OEIsMember()

Default constructor.

OEIsMember(std::set<C> &v)
OEIsMember(std::list<C> &v)
OEIsMember(std::deque<C> &v)
OEIsMember(std::vector<C> &v)

Constructs the functor using an STL container of type 'C', which contains objects or pointers to objects. Any of the elements of the STL container will be considered to be 'a member' of the functor.

OEIsMember(OESystem::OEIter<C> &i)

Constructs the functor using an iterator (OEIter) over values of the same type as the functor template argument.

OEIsMember(OESystem::OEIterBase<C> *i)

Constructs the functor using an iterator (OEIterBase) over values of the same type as the functor template argument.

template<class T> OEIsMember(T bgn, T end)

Constructs the functor using a sequence of values. The values from 'bgn' up to, but not including, 'end' are used to construct the functor. Using this constructor, the functor may be constructed from arrays and STL containers.

OEIsMember<C> &operator=(const OEIsMember<C> &rhs)

Assignment operator that copies the data of the 'rhs' OEIsMember object into the left-hand side OEIsMember object.

bool operator()(const C &arg) const

Returns true if the object arg (or a pointer to arg) is contained in the STL container passed to the functor on construction.

OESystem::OEUnaryPredicate<C> *CreateCopy() const

Deep copy constructor that returns a copy of the object. The memory for the returned OEIsMember object is dynamically allocated and owned by the caller.
https://docs.eyesopen.com/toolkits/java/oechemtk/OEChemClasses/OEIsMember.html
Technical Details on Microsoft Product Activation for Windows XP Software piracy is a worldwide problem which negatively impacts software developers, resellers, support professionals, and most importantly, consumers. One form of piracy, estimated to be as high as 50%, is known as casual copying. Casual copying is the sharing and installation of software on multiple PCs in violation of the software's end user license agreement (EULA). Microsoft has developed product activation as one solution to reduce this form of piracy. Product activation uses several methods and technologies to help achieve Microsoft's goals of protecting intellectual property rights by making it easy for users to comply with the terms of the EULA and reducing software piracy. In order to help customers and partners better understand the technologies used by product activation, and their unobtrusive and anonymous nature, we will outline in this bulletin: How activation works for Windows XP acquired through: A PC manufacturer (OEM) A retail store (where customers buy "boxed" software product) A volume licensing agreement (customers who acquire their licenses through programs such as Microsoft Open, Enterprise, or Select licensing). How the hardware hash component of the installation ID is created and the scenarios in which a copy of Windows XP may have to be re-activated due to a substantial hardware modification. For a more general overview on the basics of product activation please see. Additionally, this document contains some technical concepts. Pointers to reference material covering certain technical concepts are included in the appendix. Information contained in this document represents product activation in Windows XP as of the document's date of publication. 
On This Page Product Activation and volume licenses Product Activation and new pre-loaded PCs Product Activation and retail boxed software product Modifications to hardware and how they affect the activation status of Windows XP Conclusions Appendix A: This bulletin and Microsoft Product Activation for Office XP Family Products Appendix B: Technologies used in Product Activation Product Activation and volume licenses Windows XP upgrade licenses acquired through one of Microsoft's volume licensing agreements, such as Microsoft Open License, Enterprise Agreement, or Select License, will not require activation. Installations of Windows XP made using volume licensing media and volume license product keys (VLKs) will have no activation, hardware checking, or limitations on installation or imaging. Product Activation and new pre-loaded PCs The majority of customers acquire Windows with the purchase of a new computer, and most new computers pre-loaded with Windows XP will not require activation at all. Microsoft provides OEMs with the ability to "pre-activate" Windows XP in the factory and estimates that upwards of 80% of all new PCs will be delivered to the customer pre-activated. "Pre-activation" of Windows XP by the OEMs will be done in one of two different ways depending on the OEM's own configuration options and choices. Some OEMs may protect Windows XP using a mechanism which locks the installation to OEM-specified BIOS information in the PC. This technology works very similar to existing technologies that many OEMs have used over the years with the CDs they ship to reinstall Windows on these computers. We expanded and integrated the existing OEM CD BIOS locking mechanism with product activation, and call this method of protection "System Locked Pre-installation," or SLP. Successfully implemented, SLP uses information stored in an OEM PC's BIOS to protect the installation from casual piracy. 
No communication by the end customer with the activation center, via either the Internet or a telephone call, is required. Just as in a retail scenario, OEMs may also activate Windows XP by contacting Microsoft in the same way the consumer would activate. Activation done in this way is the same as activating a retail boxed version of Windows XP. This is discussed in more detail further below. For OEMs who do not employ either of the above two methods of pre-activation, a new PC acquired with Windows XP preinstalled must be activated by the customer. This activation is completed in exactly the same way as it would be by someone who acquired Windows XP by purchasing a boxed version at a retailer. Product Activation and retail boxed software product Product activation relies on the submission of the Installation ID. The Installation ID is specifically designed to guarantee anonymity and is only used by Microsoft to deter piracy. Example: A processor serial number is 96 bits in length. When hashed, the resultant one-way hash is 128 bits in length. Microsoft uses only six bits from that resultant hash in activation's hardware hash. Due to the nature of the hashing algorithm, those six bits cannot be backwards calculated to determine anything at all about the original processor serial number. Moreover, six bits represent 64 (2^6) different values. There were over 100 million PCs sold last year worldwide. From those 100 million PCs sold, only 64 different hardware hash values could be created as part of activation. Microsoft developed the hardware hash in this way in order to maintain the user's privacy. Additionally, whether or not the PC can be put into a docking station or accepts PCMCIA cards is also determined (the possibility of a docking station or PCMCIA cards existing means that hardware may disappear or seem changed when those devices are not present). Finally, the hardware hash algorithm has a version number.
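To make the privacy argument above concrete, here is a small illustrative sketch. This is not Microsoft's actual algorithm (those details are not public); it simply shows why keeping six bits of a one-way hash cannot identify the original hardware: at most 2^6 = 64 distinct values can ever appear, so many different serial numbers necessarily collide.

```python
import hashlib

def six_bit_component(serial: bytes) -> int:
    """Illustrative only: one-way hash a hardware identifier, keep 6 bits."""
    digest = hashlib.md5(serial).digest()  # a 128-bit one-way hash
    return digest[0] & 0b111111            # keep only six bits (values 0-63)

# Hash 10,000 different 96-bit "serial numbers"; the six kept bits can take
# at most 64 distinct values, so the result reveals nothing about the input.
values = {six_bit_component(n.to_bytes(12, "big")) for n in range(10_000)}
print(len(values) <= 64)  # True
```

Reversing the truncation is impossible by construction: every 6-bit value corresponds to an astronomically large number of possible inputs.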
Together with the general nature of the other values used, two different PCs could actually create the same hardware hash. The 10 different hardware values used to create the hash are outlined in the table below: Table 1 Hardware hash component values The product ID (nine bytes) and hardware hash (eight bytes) are used by Microsoft to process the activation request. When activation is done over the Internet, these two values form the Installation ID and are sent in binary format, along with request header information, directly through secure sockets (SSL in HTTP) to the Microsoft activation system. There are three communications made to complete Internet activation: Handshake request: Contains product ID, hardware hash, and request header data such as request ID (for linking the handshake, request, and acknowledgement) and activation technology version. 262 bytes total. License request: Contains product ID, hardware hash, and customer data structure for holding voluntary registration information if provided. If registration is skipped, this structure is empty. Also contains request header data such as request ID and the PKCS10 digital certificate request structure. The PKCS10 structure can vary slightly based on the inclusion of voluntary registration information; about 2763 to 3000 bytes total. Acknowledgement request: Contains certificate ID (returned to user's machine after license request), issue date, and error code. 126 bytes total. If Internet activation is successful, the activation confirmation is sent directly back to the user's PC as a digital certificate. This certificate is digitally signed by Microsoft so that it cannot be altered or counterfeited. The confirmation packet returned as part of Internet activation is approximately 9 kbytes in size (the digital certificate chain accounts for most of the confirmation data packet size).
If activation is done by telephoning a customer service representative, the product ID and hardware hash are automatically displayed to the user as the Installation ID; a 50 digit decimal representation. The encoding encrypts the data so that it cannot be altered and provides check digits to help aid in error handling. Telephone activation is a four step process: Selecting the country from which the call is being made so that an appropriate phone number can be shown in the product UI. Dialing the phone number Providing the Installation ID to the customer service representative Entering the Confirmation ID provided by the customer service representative. The confirmation ID is a 42-digit integer containing the activation key and check digits that aid in error handling. Both the installation ID and confirmation ID are displayed to the user in easily understandable segments in the product UI. Modifications to hardware and how they affect the activation status of Windows XP Product activation rechecks the hardware it is running on only to help reduce illegal hard disk cloning, another prevalent piracy method. Hard disk cloning is where a pirate copies the entire image of a hard disk from one PC to another PC. At each login, Windows XP checks to see that it is running on the same or similar hardware that it was activated on. If it detects that the hardware is "substantially different", reactivation is required. This check is performed after the SLP BIOS check discussed above, if the SLP BIOS check fails. This means that if your PC is pre-activated in the factory using the SLP pre-activation method, all the components in the PC could be swapped, including the motherboard, so long as the replacement motherboard was genuine and from the OEM with the proper BIOS. As noted above, installations of Windows XP made using volume licensing media and volume license product keys (VLKs) will not have any hardware component checking.
Microsoft defines "substantially different" hardware differently for PCs that are configured to be dockable. Additionally, the network adapter is given a superior "weighting." If the PC is not dockable and a network adapter exists and is not changed, 6 or more of the other above values would have to change before reactivation was required. If a network adapter existed but is changed or never existed at all, 4 or more changes (including the changed network adapter if it previously existed) will result in a requirement to reactivate. Scenario B: PC Two has the full assortment of hardware components listed in Table 1 except that it has no network adapter. User doubles the amount of RAM, swaps the video card and the SCSI controller. Result: Reactivation is NOT required. Dockable PCs are treated slightly more leniently. In a dockable PC, if a network adapter exists and is not changed, 9 or more of the other above values would have to change before reactivation was required. If no network adapter exists or the existing one is changed, 7 or more changes (including the network adapter) will result in a requirement to reactivate. Scenario C: Dockable PC Three has the full assortment of hardware components listed in Table 1 except that it has no network adapter. User doubles the amount of RAM, swaps to a bigger hard disk drive, and adds a network adapter. Result: Reactivation is NOT required. The change of a single component multiple times (e.g. from video adapter A to video adapter B to video adapter C) is treated as a single change. The addition of components to a PC, such as adding a second hard drive which did not exist during the original activation, would not trigger the need for a reactivation nor would the modification of a component not listed in the above table. Additionally, reinstallation of Windows XP on the same or similar hardware and a subsequent reactivation can be accomplished an infinite number of times.
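The counting rules above can be summarized in a short sketch. This is a paraphrase of the bulletin's description, not Microsoft's actual code; it reproduces the thresholds and the two scenarios given in the text.

```python
def needs_reactivation(dockable: bool, had_nic: bool, nic_changed: bool,
                       other_changes: int) -> bool:
    """Sketch of the hardware-change rules described in this bulletin.

    other_changes counts changed components other than the network adapter
    (a component changed multiple times counts once; added components and
    OS reinstalls do not count at all).
    """
    if had_nic and not nic_changed:
        # Unchanged network adapter: 6 (or 9 if dockable) other changes needed.
        threshold = 9 if dockable else 6
        changes = other_changes
    else:
        # NIC changed or never present: 4 (or 7 if dockable) total changes,
        # counting the changed NIC itself if it previously existed.
        threshold = 7 if dockable else 4
        changes = other_changes + (1 if (had_nic and nic_changed) else 0)
    return changes >= threshold

# Scenario B: non-dockable PC, no NIC; RAM, video card and SCSI controller changed.
print(needs_reactivation(dockable=False, had_nic=False, nic_changed=False,
                         other_changes=3))   # False: reactivation NOT required

# Scenario C: dockable PC, no NIC; RAM and hard drive changed, NIC added.
print(needs_reactivation(dockable=True, had_nic=False, nic_changed=False,
                         other_changes=2))   # False: reactivation NOT required
```

Both printed results match the outcomes the bulletin gives for Scenarios B and C.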
Finally, the Microsoft activation clearinghouse system will automatically allow activation to occur over the Internet four times in one year on substantially different hardware. This last feature was implemented to allow even the most savvy power users to make changes to their systems and, if they must reactivate, do so over the Internet rather than necessitating a telephone call. Conclusions Microsoft believes that product activation will be successful at deterring the casual copier, thereby reducing the piracy of Windows XP. Product activation achieves this goal by implementing a technology solution that deters the casual copier while: Continuing to meet corporate customers' unique needs for deploying volume licenses Maintaining Windows XP's ease of use Striking a balance in protecting intellectual property clearly in favor of the user Protecting the user's privacy by utilizing information that is not personally identifiable. At no time is personally identifiable information secretly gathered or submitted to Microsoft as part of activation. Furthermore, Microsoft believes that product activation will be completely unobtrusive to most Windows users. Most users of Windows XP will acquire it with the purchase of a new PC. The vast majority of these users will never see activation, either on first boot or with substantial hardware upgrades. For those users whose new PC requires that Windows XP be activated or who acquire Windows XP through a retail box, activation will most likely be a one-time occurrence that, whether completed via the Internet or by telephoning a Microsoft customer service representative, will be a simple, quick, and straightforward process. Appendix A: This bulletin and Microsoft Product Activation for Office XP Family Products Office XP Family products use an underlying activation technology similar to that of Windows XP.
Please see the forthcoming Microsoft Technical Market Bulletin on product activation in Office XP Family products for details. Appendix B: Technologies used in Product Activation An overview of digital certificate technologies can be found on Microsoft's MSDN website at A comprehensive overview of cryptography solutions available to Microsoft developers can also be found on Microsoft's MSDN website at For more information, press only: Rapid Response Team, Waggener Edstrom, (503) 443-7000, [email protected] For online product information: Microsoft Piracy Web site: Microsoft Product Activation Web site: For independent information on software piracy: Business Software Alliance web site: Software & Information Industry Association web site:
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-xp/bb457054(v=technet.10)
1.0.1
Patches

1.0.0
Jacobs theme is named after American-Canadian activist, journalist, and author Jane Jacobs, who is responsible for the New Urbanism movement in urban studies. It is designed for academic writing, particularly textbooks, but is also suitable for fiction. Headings and body type are set in Montserrat. The development of this theme was supported by eCampus Ontario.
https://docs.pressbooks.org/changelog/pressbooks-jacobs/
Clamp node

This documentation is for version 2.0 of Clamp.

Description
Clamp the values of the selected channels. A special use case for the Clamp plugin is to generate a binary mask image (i.e. each pixel is either 0 or 1) by thresholding an image. Let us say one wants all input pixels whose value is above or equal to some threshold value to become 1, and all values below this threshold to become 0. Set the "Minimum" value to the threshold, set the "Maximum" to any value strictly below the threshold (e.g. 0 if the threshold is positive), and check "Enable MinClampTo" and "Enable MaxClampTo" while keeping the default values for "MinClampTo" (0.0) and "MaxClampTo" (1.0). The result is a binary mask image. To create a non-binary mask, with softer edges, either blur the output of Clamp, or use the Grade plugin instead, setting the "Black Point" and "White Point" to values close to the threshold, and checking the "Clamp Black" and "Clamp White" options. See also:
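The thresholding trick described above can be sketched in plain Python. Pixel values are shown as a flat list for simplicity; the real plugin of course operates on image channels.

```python
def clamp(values, minimum, maximum, min_clamp_to, max_clamp_to):
    """Clamp values to [minimum, maximum], replacing out-of-range pixels
    with min_clamp_to / max_clamp_to instead of the bounds themselves."""
    out = []
    for v in values:
        if v < minimum:
            out.append(min_clamp_to)   # below Minimum -> MinClampTo
        elif v > maximum:
            out.append(max_clamp_to)   # above Maximum -> MaxClampTo
        else:
            out.append(v)
    return out

# Threshold at 0.5: Minimum = 0.5, Maximum strictly below it (0.0),
# MinClampTo = 0.0, MaxClampTo = 1.0 -- every pixel becomes 0 or 1.
pixels = [0.2, 0.5, 0.7, 0.1]
mask = clamp(pixels, minimum=0.5, maximum=0.0,
             min_clamp_to=0.0, max_clamp_to=1.0)
print(mask)  # [0.0, 1.0, 1.0, 0.0]
```

Because Maximum is set strictly below Minimum, no pixel falls inside the clamp range, so every pixel is forced to one of the two replacement values.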
http://natron.readthedocs.io/en/master/plugins/net.sf.openfx.Clamp.html
Add a user to a Connect Support conversation You can add additional users to a Connect Support conversation. Before you begin An administrator must enable the glide.connect.support.add_members property before users can be added to conversations. Role required: none. The assigned support agent cannot be removed from a Connect Support conversation. Note: Only the assigned support agent can create an incident from the Connect Support conversation.
https://docs.servicenow.com/bundle/istanbul-servicenow-platform/page/use/collaboration/task/add-users-to-support-chat.html
It's very similar to creating a standard menu item... - Go to Appearance > Menus - Expand the Custom Links section and enter a "/" for the URL - this will take you to the home page of your site. Then enter "Home" for the Link Text. - Click Add To Menu - this is what it will look like in the menu. - Save the menu. Don't forget to clear your cache with the yellow broom at the top right of your dashboard. If you're in the Customizer, you'll need to Save & Exit to see the broom.
http://docs.theblogpress.com/categories-tags-and-menus/menus/how-to-create-a-home-link-in-menu
You add an XaaS blueprint to an application blueprint similar to how you add other blueprints in the design canvas. About this task Use this method to add an XaaS to an application blueprint that contains other blueprints. If the XaaS blueprint is all that you want to provide to your users, you can add it to a service and entitle users to it without adding it to an application blueprint. If you run a scale in or scale out action on a deployed application blueprint, the XaaS blueprint is not scaled. Prerequisites Create and publish an XaaS blueprint. See Create an XaaS Blueprint as a Catalog Item. Review how to customize the XaaS blueprint forms. See Designing Forms for XaaS Blueprints and Actions. Log in to the vRealize Automation console as an infrastructure architect. Procedure - Select the name of the blueprint to which you are adding the XaaS. The design canvas appears. It contains the current application component blueprints and other components. - Click XaaS in the Categories list. - Drag your blueprint to the canvas. - Configure the default values for the general parameters and infrastructure options. These default values appear in the service catalog form when a user requests the item. - Click Finish. Results The XaaS blueprint is now part of the application blueprint. What to do next Verify that the application blueprint is added to a service and entitled to users. See Managing the Service Catalog.
https://docs.vmware.com/en/vRealize-Automation/7.1/com.vmware.vrealize.automation.doc/GUID-D7A761B1-FD74-4DDD-9E62-C9B08B7300B8.html
Use live feed to work on records: A record feed is associated with a record, such as an incident or change.
Using groups in Live Feed: Groups allow users to create focused discussions in live feed.
Use feeds in Live Feed: Feeds allow users to create focused discussions in Live Feed.
Live Feed UI overview: The Live Feed user interface provides many methods you can use to share content with others in your organization.
Use teams in Live Feed: Users can be combined into teams for the purpose of subscribing to specifically-focused feeds.
Post content in Live Feed: In Live Feed, you can post new messages and replies to existing messages for all users in the feed. You can also send a reply message to a team or record.
https://docs.servicenow.com/bundle/helsinki-servicenow-platform/page/use/live-feed/concept/c_UseLiveFeed.html
Tools & Resources Customized Update Sources for OfficeScan Agents Aside from the OfficeScan server, OfficeScan agents can update from custom update sources. Custom update sources help reduce OfficeScan agent update traffic directed to the OfficeScan server and allow OfficeScan agents that cannot connect to the OfficeScan server to get timely updates. Specify the custom update sources on the Customized Update Source List, which can accommodate up to 1024 update sources. Tip: Trend Micro recommends assigning some OfficeScan agents as Update Agents and then adding them to the list.
http://docs.trendmicro.com/en-us/enterprise/officescan-120-server-online-help/keeping-protection-u/trend_client_program123456789/-trend_client_progra/customized-update-so.aspx
Difference between revisions of "Components Search Manager Options" From Joomla! Documentation Revision as of 04:34, 9 May 2013 How to Access - Click the Global Configuration button in the Control Panel and click the Search button on the left side panel, or - Select Components → Search → Options from the drop-down menus. Screenshot Description - Details on Component Options - Details on Permissions configuration Toolbar At the top left you will see the toolbar.
https://docs.joomla.org/index.php?title=Help31:Components_Search_Manager_Options&curid=28991&diff=89166&oldid=89080
Directory /src/google.golang.org/api/dataflow (ActiveGo 1.8)

Name Synopsis
..
v1b3 Package dataflow provides access to the Google Dataflow API.

Build version go1.8.3. Portions of this page are modifications based on work created and shared by Google and used according to terms described in the Creative Commons 3.0 Attribution License.
http://docs.activestate.com/activego/1.8/pkg/google.golang.org/api/dataflow/
Client Authentication Many services at Authentise require some form of client authentication. That is, the request being made must be proven to be authentic. There are generally two ways of doing so: via API token and via a user session. If you are a partner of Authentise you'll use the API token method as outlined in design-streaming-api. All other users will use the user session authentication outlined below. Please note that when you create a user using the instructions below you automatically agree to Authentise's Terms of Service. Creating a User To create an authenticated session you'll first need a user account. You can create one via the Users service. The request looks like this: POST Content-Type: application/json { "email" : "eli @uthentise.com", "name" : "Eli Ribble", "password" : "my-secret", "username" : "EliRibble" } This request creates a new user for me. I've obfuscated my email just a bit in this example to make it a bit harder for bots to spam me. The response should be a 201 indicating success and a Location header will be provided in the response that lets me know where I can GET information about my user. Creating a Session Now that I have a user I can create a new session with Authentise. POST Content-Type: application/json { "username" : "EliRibble", "password" : "my-secret" } This will return a 201 again on success and provide a Cookie called session. The cookie will have the domain set to anything within the authentise.com domain so that the session is included with requests to any of the other services. You'll need to include this cookie in any of the requests you make to Authentise's services. If you fail to do so, or you provide an expired cookie, you'll receive a 401 response code indicating your request was unauthorized. If you make too many requests using a given session you may also receive a 429 status code which indicates you've hit our rate limiting and you need to stop sending so many requests.
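The session flow above can be sketched in Python using only the standard library. The base URL and the "/sessions/" path below are placeholders, since the full service endpoints are not spelled out here; substitute the real Authentise URLs.

```python
import json
import http.cookiejar
import urllib.request

def create_session(base_url, username, password):
    """Sketch: POST credentials, then reuse the returned `session` cookie.

    `base_url` is hypothetical. On a 201 response the `session` cookie is
    captured in the cookie jar and automatically sent with every later
    request made through the returned opener.
    """
    jar = http.cookiejar.CookieJar()
    opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
    body = json.dumps({"username": username, "password": password}).encode()
    request = urllib.request.Request(
        base_url + "/sessions/",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    response = opener.open(request)  # raises on 4xx, e.g. 401 for bad credentials
    return opener, response.status

# The request body itself is just the two credential fields:
print(json.dumps({"username": "EliRibble", "password": "my-secret"}))
```

Routing all later calls through the same opener is what keeps the session cookie attached, which is exactly the behavior the docs describe for the authentise.com domain.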
http://docs.authentise.com/authentication.html
DataStax ODBC driver for Hive on Windows The DataStax ODBC Driver for Hive provides Windows users access to the information that is stored in DataStax Enterprise Hadoop. Hadoop is deprecated for use with DataStax Enterprise. DSE Hadoop and BYOH (Bring Your Own Hadoop) are deprecated. Hive is also deprecated and will be removed when Hadoop is removed.
https://docs.datastax.com/en/datastax_enterprise/5.0/datastax_enterprise/ana/hiveODBC.html
FileStream.SetLength(Int64) Method

Definition
Sets the length of this stream to the given value.

public:
 override void SetLength(long value);
public override void SetLength (long value);
override this.SetLength : int64 -> unit
Public Overrides Sub SetLength (value As Long)

Parameters

Exceptions
An I/O error has occurred.
The stream does not support both writing and seeking.
Attempted to set the value parameter to less than 0.

Remarks
Note
Use the CanWrite property to determine whether the current instance supports writing, and the CanSeek property to determine whether seeking is supported. For additional information, see CanWrite and CanSeek. For a list of common file and directory operations, see Common I/O Tasks.
https://docs.microsoft.com/en-us/dotnet/api/system.io.filestream.setlength?view=netframework-4.7.2
The Macro Argument property provides an optional argument to the macro. Property Type: Static Default Value: null Hierarchical Reference: ControlName.MacroArgument The value or result of a rule, placed in the Macro Argument property, is passed into the special variable Current Macro Argument (see Info: Special Variables). Using the Macro Argument property allows the same macro to be used, but with differing outcomes depending on when it is run. Static properties can be made Dynamic by double clicking the gray radio button.
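As a toy illustration of the idea (Python, not DriveWorks macro syntax), the same routine can produce different outcomes depending solely on the argument passed in, which is what the Current Macro Argument variable enables. The unit names here are invented for the example.

```python
def resize_macro(current_macro_argument: str) -> str:
    """Toy stand-in for a macro that reads Current Macro Argument.

    The same macro body runs in every case; only the value supplied via
    the Macro Argument property changes the outcome.
    """
    if current_macro_argument == "metric":
        return "Dimensions set in millimetres"
    if current_macro_argument == "imperial":
        return "Dimensions set in inches"
    return "No unit system selected"

print(resize_macro("metric"))    # Dimensions set in millimetres
print(resize_macro("imperial"))  # Dimensions set in inches
```

One macro, two behaviors: the branching happens on the argument, so the macro itself never needs to be duplicated per use case.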
http://docs.driveworkspro.com/Topic/MacroArgument
TinyCLR OS Introduction TinyCLR OS started with Microsoft's .NET Micro Framework and continues to enable managed .NET development and debugging using Visual Studio on embedded devices. All you need to get started is Visual Studio (free version available), a TinyCLR device, and a USB cable. Tip TinyCLR OS is still an alpha so there is still a lot more to come! Take a look at the release notes to see what's new and the roadmap to see what we have planned. To learn more about TinyCLR embedded programming check out our tutorials. You can also visit our main website at and our community forums at forums.ghielectronics.com.
http://docs.ghielectronics.com/software/tinyclr/intro.html
Getting Started with AeroGear Mobile Services - Introduction - Setting up AeroGear Mobile Services on OpenShift - Registering a Mobile Client - Provisioning your First Service - Binding a Mobile Client - Setting Up your Local Development Environment - Running your First Mobile App. This guide shows you how to: Set up AeroGear Mobile Services on OpenShift Create a Mobile Client and a Mobile Service (Identity Management) Set up a local development environment Configure the AeroGear showcase app for your mobile platform (Android, iOS, Cordova or Xamarin). Run the showcase app and make calls to the Identity Management service. Make sure you satisfy all the requirements listed in the Prerequisites and that you have a mobile app development environment configured. Some experience with OpenShift administration would also be helpful. Setting up AeroGear Mobile Services on OpenShift To use Mobile Services, you must run OpenShift and install an add-on which enables Mobile Services. For more information about OpenShift, see OpenShift website. Prerequisites MacOS or Linux A system running OpenShift oc cluster upas described in Local Cluster Management Use OpenShift client tools version 3.9 Access to a Docker Hub account. The installer uses Docker Hub as a source for AeroGear Docker images. Ansible (version 2.6 or above) For Linux (Fedora), add an extra port to the dockerczone: $ firewall-cmd --permanent --zone dockerc --add-port 443/tcp $ firewall-cmd --reload A local mobile development environment for the platform you want to develop on. Procedure Clone the Mobile-core installer: The installer configures the local development installation of AeroGear Mobile Services using ansible scripts in our mobile-core repo. 
Clone this repo to your local machine and check out the 1.0.0 tag using: git clone cd mobile-core git checkout 1.0.0 In the same directory, run the installer: $ ./installer/install.sh The installer checks that valid versions of Ansible, Docker and the OpenShift Client Tools are installed. Enter your Docker Hub login credentials when prompted. The installer checks these credentials are valid before continuing. Accept the default values for the next set of prompts, unless you have specific requirements and understand the implications of changing the values. For more information about these values, see the DockerHub, Cluster IP and Wildcard DNS Host values section in Additional Resources. DockerHub Tag (Defaults to latest): DockerHub Organisation (Defaults to aerogearcatalog): Cluster IP (Defaults to < Network IP Address >) Wildcard DNS Host (Defaults to nip.io): For more information, see the DockerHub, Cluster IP and Wildcard DNS Host values section in Additional resources. The following installation can take a while the first time it runs, as it pulls a number of Docker images. 
Once completed successfully, this results in an output similar to the following: TASK [output-oc-cluster-status : debug] ****************************************************************************************************************************************************** ok: [localhost] => { "msg": [ "Web console URL:", "", "Config is at host directory /var/lib/origin/openshift.local.config", "Volumes are at host directory /var/lib/origin/openshift.local.volumes", "Persistent volumes are at host directory /var/lib/origin/openshift.local.pv", "Data is at host directory /path/to/mobile-core/ui/openshift-data" ] } PLAY RECAP *********************************************************************************************************************************************************************************** localhost : ok=44 changed=17 unreachable=0 failed=0 The log above is displayed after a successful installation. Verify the installation: Browse to the Web console URL displayed at the end of the installation, and log in, accepting the self-signed certificate if displayed. The developer login credentials are: username: developer password: password The service catalog is displayed Check that the Mobile tab is displayed in the service catalog. If this tab is not displayed, wait a few minutes to make sure that the installation process has completed. If the Mobile tab still is not displayed, follow the troubleshooting steps below. Additional resources Troubleshooting If you have problems running AeroGear Mobile Services, run the installer using the --debug option to capture as much information as possible: $ ./installer/install.sh --debug Firewall issues can occur with external devices trying to communicate with Mobile Services provisioned on a Linux machine. This is due to a number of Mobile Services using ports which are restricted to root users only. If you encounter these issues, you can add the ports to your firewall. 
Depending on the port your service uses, examples of the ports you may want to add to your firewall are: $ firewall-cmd --add-port 443/tcp $ firewall-cmd --add-port 80/tcp DockerHub, Cluster IP and Wildcard DNS Host values Use DockerHub Tag and DockerHub Organisation to configure the location of the APBs used by the service-catalog in the cluster you are creating: DockerHub Tag (Defaults to latest): DockerHub Organisation (Defaults to aerogearcatalog): The Cluster IP value defaults to the IP address of your primary network interface. If you want to connect to your OpenShift instance from a mobile device, ensure that your device is on the same network. Typically, you should ensure you are using the IP Address of your Wireless Adapter (if one exists): Cluster IP (Defaults to < Network IP Address >) The Wildcard DNS Host option alters the wildcard DNS host you want to use: Wildcard DNS Host (Defaults to nip.io): Registering a Mobile Client Create a new project or choose an existing project. Select Catalog from the left hand menu. You can filter the catalog items to only show mobile specific items by selecting the Mobile tab. Click Services and. Setting Up your Local Development Environment Running your First Mobile App. Running the app in an emulator.
http://docs.aerogear.org/aerogear/latest/getting-started.html
What is the difference between severity, impact, urgency and priority? There are many naming standards to classify incidents. The Alert Manager follows ITIL framework naming, where priority is calculated from impact and urgency ( impact x urgency = priority ). The impact is taken from the alert settings. The urgency is taken from the incident configuration's default urgency setting, or, if the alert has a field urgency with a valid setting, from the alert's result. For alerts with multiple urgencies, the first urgency level is taken. Urgencies can be overridden manually after an incident has been created. The incident's priority is calculated with the help of the alert_priority lookup table based on the alert's severity (i.e. its impact) and the incident's urgency. The default matrix can be found here. How should I understand users? There are two main reasons why we are using users in our app: - Incident assignment: Dispatch incident investigation tasks to other colleagues - Notifications: Receive notifications at different stages by e-mail What is the difference between built-in and Alert Manager users? Built-in users are traditional Splunk users. They can be used to re-assign incidents and receive notifications as long as the built-in user repository is activated (see User settings in the Alert Manager). Alert Manager users are virtual users configured in the Alert Manager only. The primary usage of these users is to represent user or group accounts outside of Splunk. - Example 1: You want to notify or dispatch an incident to a Domain controller admin when a new Incident has been created, but the admin has no account in Splunk - Example 2: You want to send a notification to a mailing list without having to create a dedicated Splunk user for this purpose. Why do I have to deploy the Add-on on a Search Head? As the Alert Manager generates some events (by default alerts), they get parsed on the Search Head.
So event breaking rules from props.conf need to be applied here, regardless of whether events get forwarded to indexers later or not.
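As an illustration of the impact x urgency calculation described in the first answer, a lookup-style matrix might be reconstructed like this. The level names and mappings below are hypothetical; the real alert_priority table ships with the app and may differ.

```python
# Hypothetical impact x urgency -> priority matrix, in the spirit of the
# Alert Manager's alert_priority lookup table (not the shipped defaults).
PRIORITY_MATRIX = {
    ("high",   "high"):   "critical",
    ("high",   "medium"): "high",
    ("high",   "low"):    "medium",
    ("medium", "high"):   "high",
    ("medium", "medium"): "medium",
    ("medium", "low"):    "low",
    ("low",    "high"):   "medium",
    ("low",    "medium"): "low",
    ("low",    "low"):    "informational",
}

def priority(impact: str, urgency: str) -> str:
    """impact comes from the alert settings; urgency from the incident
    configuration (or the alert's `urgency` field, if present)."""
    return PRIORITY_MATRIX[(impact, urgency)]

print(priority("high", "medium"))  # high
print(priority("low", "low"))      # informational
```

Because urgency can be overridden after an incident is created, recomputing priority is just another lookup into the same matrix with the new urgency value.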
http://docs.alertmanager.info/en/latest/faq/
Important: DC/OS does not currently support multiple region configurations. If you would like to experiment with multi-region configurations, this topic provides the setup recommendations and caveats. - The following multi-region setups have not been tested or verified. - A typical DC/OS cluster has all master and agent nodes in the same zone. The cost of having masters spread across zones usually outweighs the benefits. Single Region Masters and Cross-Region Agents In this configuration, DC/OS masters are within a region, possibly spanning zones. DC/OS agents span multiple regions. This configuration is similar to masters in an on-prem datacenter, and agents running on-prem and on a public cloud. Recommendations and Caveats - This setup is typically not recommended because of how difficult it is to guarantee the latency and addressing requirements. - This setup should only be considered if dedicated private connections between regions are available. For example, AWS Direct Connect or Azure Express Route. - Because network partitions are more likely across zones, masters could end up in a split-brain scenario. For example, two different masters think they are the leader at the same time. This is typically harmless because only one leader is allowed to make modifying actions, for example registering or shutting down agents.
https://docs.mesosphere.com/1.9/installing/high-availability/multi-region/
SETTINGS | GROUP WEBSITE CODE. See Group Booking Engine Website Code. Guests log in using the Confirmation # as the User ID and the password you created for the Group. The User ID and password are the same for all guests logging into the Group Booking.

- Give the Group Contact the User ID and password.
- User ID: auto-set by the system. This field is not editable.
- Password: set by the property. Guests cannot change this field.
https://docs.bookingcenter.com/pages/viewpage.action?pageId=4490650
glIsProgram — Determines if a name corresponds to a program object

program
Specifies a potential program object.

See also glDeleteProgram, glDetachShader, glLinkProgram.
http://docs.gl/es3/glIsProgram
28th European Diabetes Congress

Start Date: June 10, 2019
End Date: June 11, 2019
Time: 9:00 am to 6:00 am
Phone: 07025085201
Location: Edinburgh, Scotland

Description

2019 conference.

Registration Info

Contact the Organizer. Organized by David Richard, Edinburgh, Scotland. Phone: 7025085201. Mobile: 7025085201. Website:

Event Categories: Business Practice, Cardiology, Endocrinology, Family Practice, General Surgery, Geriatrics, Infectious Disease, Internal Medicine, Obstetrics, Ophthalmology, and Physical Medicine.
http://meetings4docs.com/event/28th-european-diabetes-congress/
The Jelastic Platform provides you with a Shared Load Balancer (resolver). It is an NGINX proxy server between the client side (a browser, for example) and your application, deployed to the Jelastic Cloud. The Shared LB processes all incoming requests sent to an environment domain name ({user_domain}.{hoster_domain}) whose entry point (balancer, application server or even database) does not have a Public IP address attached. The common Shared LB processes the requests sent to all of the applications located within the same hardware node. In order to be protected from DDoS attacks, the Shared Load Balancer is limited to 50 simultaneous connections per source address of the request. As a result, there can be several entry points for users' environments used at the same time. In this way, the incoming load can be effectively distributed. We recommend using the Shared Resolver for your dev and test environments. As for production environments, which are intended to handle high traffic, it is more appropriate to use your own Public IP for receiving and processing requests. This also allows you to apply a number of additional options to your application, which may help to make it more secure (e.g. with Custom SSL) and responsive (through attaching a Custom Domain).
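The 50-connections-per-source rule above is enforced inside the NGINX proxy itself, but the bookkeeping it implies can be illustrated with a toy sketch. This is not Jelastic's implementation, just a demonstration of the per-source-address limiting idea.

```python
from collections import defaultdict

# Toy model of a per-source-address connection cap, like the Shared LB's
# limit of 50 simultaneous connections per client address.
MAX_CONN_PER_SOURCE = 50

class ConnectionLimiter:
    def __init__(self, limit=MAX_CONN_PER_SOURCE):
        self.limit = limit
        self.active = defaultdict(int)  # source address -> open connections

    def try_open(self, source_ip):
        """Count the connection and return True if under the limit,
        otherwise reject it and return False."""
        if self.active[source_ip] >= self.limit:
            return False  # would exceed the per-source cap
        self.active[source_ip] += 1
        return True

    def close(self, source_ip):
        """Release one connection for the given source address."""
        if self.active[source_ip] > 0:
            self.active[source_ip] -= 1
```

In a real proxy this kind of cap is a configuration directive rather than application code, which is why Jelastic can apply it uniformly to every environment behind the Shared LB.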
https://docs.jelastic.com/shared-load-balancer
Thinking Kinto

KintoHub believes that microservices are poised to rule the world, but as developers, we're missing a piece of the puzzle on the how. In our perspective, microservices must follow these principles:

- Microservices functionality and data are bounded contexts with a single responsibility made up of one or more functional API endpoints.
- A single microservice instance should be able to support and scale an unlimited amount of applications through contextual API endpoints.

We know good things come in 3's but that's all we follow as of June 8th, 2018. As a society of developers, let's start seeing microservices as the last time we write common functionality. Microservices expose their logic through APIs, which empowers developers working in any language to utilize that business logic to solve the problems at hand. How this works is that every API endpoint needs context about the source of who is calling. This may be a public webhook for Application X in environment Y, or it may be a call from a mobile client with Session 1337 in Application N in environment M. At the end of the day, businesses are duplicating logic. Thinking Kinto begins with understanding that every API call comes with information about who is calling your service through what application environment. Every API call is tracked with a required Kinto-App-Id and optional "Session Memory" information, which allows you to perform logic such as: User-Id 9000 is logging in to Application KintoHub in environment Production.

How does it work?

The magic here is that we no longer need to dedicate a single service per app/environment. What are the benefits? Shared costs in hosting for everyone, and shared logic.

Wait, what if I want custom functionality?

We're going to be introducing interesting ways to override and modify functionality based on your application. We're looking forward to sharing more once we get there!

What about security?
All of our services are black boxed. You cannot send data out nor get data into the service. Rest assured, we're working hard to ensure everything processed on KintoHub is safe and secure.
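The "contextual API call" idea described above can be sketched as a small helper that attaches the caller's context to every request. Only Kinto-App-Id is named in the text; the other header names and the session structure below are hypothetical illustrations.

```python
# Sketch of attaching application/environment context to a call to a shared
# microservice. Kinto-App-Id comes from the text above; the other header
# names are hypothetical.
def build_context_headers(app_id, environment, session=None):
    """Build the per-call context a shared service could use to scope
    its logic to one application and environment."""
    headers = {
        "Kinto-App-Id": app_id,            # required: which application is calling
        "Kinto-Environment": environment,  # hypothetical: e.g. "Production"
    }
    if session:
        # hypothetical "session memory", e.g. {"User-Id": 9000}
        headers.update({f"Kinto-Session-{k}": str(v) for k, v in session.items()})
    return headers
```

With headers like these, a single microservice instance can serve many applications because every request tells it exactly which app, environment, and session it is acting for.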
https://docs.kintohub.com/docs/thinking-kinto
IPropertyBag2 interface

Provides an object with a property bag in which the object can save its properties persistently. The IPropertyBag2 interface inherits from the IUnknown interface. IPropertyBag2 also has these types of members: Methods.

Methods

The IPropertyBag2 interface has these methods.

Remarks

When a client wants to control how the individually named properties of an object are saved, it uses an object's IPersistPropertyBag2 interface as a persistence mechanism. The client supplies a property bag to the object in the form of an IPropertyBag2 interface. IPropertyBag2 is an enhancement of the IPropertyBag interface. IPropertyBag2 allows the object to obtain type information for each property by using the CountProperties method and the GetPropertyInfo method. A property bag that implements IPropertyBag2 must also support IPropertyBag, so that objects that only support IPropertyBag can access their properties. Also, an object that supports IPropertyBag2 must also support IPropertyBag so that the object can communicate with property bags that only support IPropertyBag.

Requirements

See also

Reference
https://docs.microsoft.com/en-us/previous-versions/windows/internet-explorer/ie-developer/platform-apis/aa768192(v=vs.85)
Installation Checklist

Edge for Private Cloud v. 4.16.09

The checklist covers the preceding prerequisites and provides a list of required files to obtain before proceeding. Here is a summary of the primary requirements covered there.

- If an older version of Apigee Edge for Private Cloud was installed on the machine, ensure that you delete the folder /tmp/java before a new installation.
- If user "apigee" was created prior to the installation, ensure that "/home/apigee" exists as the home directory and is owned by "apigee:apigee".
- iptables: Validate that there are no iptables policies preventing connectivity between nodes on the required Edge ports. If necessary, you can stop iptables during installation using the command:

  > sudo /etc/init.d/iptables stop

  On CentOS 7.x:

  > systemctl stop firewalld
http://ja.docs.apigee.com/private-cloud/v4.16.09/installation-checklist
Before the release of the .NET Framework, all code running on a user's computer had the same rights or permissions to access resources that a user of the computer had. For example, if the user was allowed to access the file system, the code was allowed to access the file system; if the user was allowed to access a database, the code was allowed to access that database. Although these rights or permissions may be acceptable for code in executables that the user has explicitly installed on the local computer, they may not be acceptable for potentially malicious code coming from the Internet or a local intranet. This code should not be able to access the user's computer resources without permission.

The .NET Framework introduces an infrastructure called Code Access Security that lets you differentiate the permissions, or rights, that code has from the rights that the user has. By default, code coming from the Internet and the intranet can only run in what is known as partial trust. Partial trust subjects an application to a series of restrictions: among other things, an application is restricted from accessing the local hard disk, and cannot run unmanaged code. The .NET Framework controls the resources that code is allowed to access based on the identity of that code: where it came from, whether it is a strong-named assembly, whether it is signed with a certificate, and so on.

ClickOnce technology, which you use to deploy Windows Forms applications, helps make it easier for you to develop applications that run in partial trust, in full trust, or in partial trust with elevated permissions. ClickOnce provides features such as Permission Elevation and Trusted Application Deployment so that your application can request full trust or elevated permissions from the local user in a responsible manner.
Understanding Security in the .NET Framework

Code access security allows code to be trusted to varying degrees, depending on where the code originates and on other aspects of the code's identity. For more information about the evidence the common language runtime uses to determine security policy, see Evidence. Code access security helps protect computer systems from malicious code and helps protect trusted code from intentionally or accidentally compromising security. It also gives you more control over what actions your application can perform, because you can specify only those permissions you need your application to have. Code access security affects all managed code that targets the common language runtime, even if that code does not make a single code-access-security permission check. For more information about security in the .NET Framework, see Key Security Concepts and Code Access Security Basics.

If the user runs a Windows Forms executable file directly off a Web server or a file share, the degree of trust granted to your application depends on where the code resides and how it is started. When an application runs, it is automatically evaluated and receives a named permission set from the common language runtime. By default, code from the local computer is granted the Full Trust permission set, code from a local network is granted the Local Intranet permission set, and code from the Internet is granted the Internet permission set.

Note: In the .NET Framework version 1.0 Service Pack 1 and Service Pack 2, the Internet zone code group receives the Nothing permission set. In all other releases of the .NET Framework, the Internet zone code group receives the Internet permission set.

The default permissions granted in each of these permission sets are listed in the Default Security Policy topic. Depending on the permissions that the application receives, it either runs correctly or generates a security exception.
Many Windows Forms applications will be deployed using ClickOnce. The tools used for generating a ClickOnce deployment have different security defaults than what was discussed earlier. For more information, see the following discussion. The actual permissions granted to your application can be different from the default values, because the security policy can be modified; this means that your application can have a permission on one computer, but not on another.

Developing a More Secure Windows Forms Application

Security is important in all steps of application development. Start by reviewing and following the Secure Coding Guidelines. Next, decide whether your application must run in full trust, or whether it should run in partial trust. Running your application in full trust makes it easier to access resources on the local computer, but exposes your application and its users to high security risks if you do not design and develop your application strictly according to the Secure Coding Guidelines topic. Running your application in partial trust makes it easier to develop a more secure application and reduces risk, but requires more planning in how to implement certain features. If you choose partial trust (that is, either the Internet or Local Intranet permission sets), decide how you want your application to behave in this environment. Windows Forms provides alternative, more secure ways to implement features when in a semi-trusted environment. Certain parts of your application, such as data access, can be designed and written differently for both partial trust and full trust environments. Some Windows Forms features, such as application settings, are designed to work in partial trust. For more information, see Application Settings Overview. If your application needs more permissions than partial trust allows, but you do not want to run in full trust, you can run in partial trust while asserting only those additional permissions you need.
For example, if you want to run in partial trust, but must grant your application read-only access to a directory on the user's file system, you can request FileIOPermission only for that directory. Used correctly, this approach can give your application increased functionality and minimize security risks to your users. When you develop an application that will run in partial trust, keep track of what permissions your application must run and what permissions your application could optionally use. When all the permissions are known, you should make a declarative request for permission at the application level. Requesting permissions informs the .NET Framework run time about which permissions your application needs and which permissions it specifically does not want. For more information about requesting permissions, see Requesting Permissions. When you request optional permissions, you must handle security exceptions that will be generated if your application performs an action that requires permissions not granted to it. Appropriate handling of the SecurityException will ensure that your application can continue to operate. Your application can use the exception to determine whether a feature should become disabled for the user. For example, an application can disable the Save menu option if the required file permission is not granted. Sometimes, it is difficult to know if you have asserted all the appropriate permissions. A method call which looks innocuous on the surface, for example, may access the file system at some point during its execution. If you do not deploy your application with all the required permissions, it may test fine when you debug it on your desktop, but fail when deployed. Both the .NET Framework 2.0 SDK and Visual Studio 2005 contain tools for calculating the permissions an application needs: the MT.exe command line tool and the Calculate Permissions feature of Visual Studio, respectively. 
The following topics describe additional Windows Forms security features.

Deploying an Application with the Appropriate Permissions

The most common means of deploying a Windows Forms application to a client computer is with ClickOnce, a deployment technology that describes all of the components your application needs to run. ClickOnce uses XML files called manifests to describe the assemblies and files that make up your application, and also the permissions your application requires. ClickOnce has two technologies for requesting elevated permissions on a client computer. Both technologies rely on the use of Authenticode certificates. The certificates help provide some assurance to your users that the application has come from a trusted source. The following table describes these technologies. Which technology you choose will depend on your deployment environment. For more information, see Choosing a ClickOnce Deployment Strategy.

By default, ClickOnce applications deployed using either Visual Studio or the .NET Framework 2.0 SDK tools (Mage.exe and MageUI.exe) are configured to run on a client computer that has Full Trust. If you are deploying your application by using partial trust or by using only some additional permissions, you will have to change this default. You can do this with either Visual Studio or the .NET Framework 2.0 SDK tool MageUI.exe when you configure your deployment. For more information about how to use MageUI.exe, see Walkthrough: Deploying a ClickOnce Application from the Command Line. Also see How to: Set Custom Permissions for a ClickOnce Application.

For more information about the security aspects of ClickOnce and Permission Elevation, see Securing ClickOnce Applications. For more information about Trusted Application Deployment, see Trusted Application Deployment Overview.
Testing the Application

If you have deployed your Windows Forms application by using Visual Studio, you can enable debugging in partial trust or a restricted permission set from the development environment. Also see How to: Debug a ClickOnce Application with Restricted Permissions.

See Also
Windows Forms Security
Code Access Security Basics
ClickOnce Security and Deployment
Trusted Application Deployment Overview
Mage.exe (Manifest Generation and Editing Tool)
MageUI.exe (Manifest Generation and Editing Tool, Graphical Client)
https://docs.microsoft.com/en-us/dotnet/framework/winforms/security-in-windows-forms-overview
All memberships are set to auto-renew each year at the rate that you originally paid. You will receive an email a few days prior to renewal letting you know this and giving you the option to update your payment details or cancel. If you need to update your credit card information, simply login to the billing portal. If you are a member, a link to this portal was sent to you upon signup and you can visit this to "Update your Payment Method." The link does expire for security reasons; if it has expired, login to your account at muse-themes.com, click on your name. You can cancel through the billing portal. Visit our cancellation page for detailed steps. For any other renewal or billing questions, contact our accounts team – [email protected].
http://docs.muse-themes.com/accounts-and-billing/renewals
Subscribe

You can subscribe to Mailchimp using our add-on as follows:

Action Form

Action Form comes with a Subscribe to Mailchimp template which has a few predefined fields and actions: First Name, Last Name, Email, Subscribe to Mailchimp, and Display Message.

Action Grid, DNN API Endpoint

The Subscribe to Mailchimp action is basically the same in all the supported modules, with a few slight differences. Whereas in Action Form, Action Grid, and DNN API Endpoint you can use either a text box field or an expression to determine which email will be used to subscribe, in Sharp Scheduler and InfoBox the email must be determined by other means, for example by injecting data or using an SQL query. The resulting Email token is then used in the Subscribe to Mailchimp Email field. It is worth noting that in InfoBox, the Subscribe to Mailchimp action is a button action.

The API Key that needs to be provided in Subscribe to Mailchimp can be found in your Mailchimp account, under Account > Extra > API Keys. The List Name must be exactly as it appears in Mailchimp, as indicated by the help text.
https://docs.dnnsharp.com/integrations/mailchimp/subscribe.html
AuthorizationRuleCollection.Remove(AuthorizationRule) Method

Definition

Removes an AuthorizationRule object from the collection.

public: void Remove(System::Web::Configuration::AuthorizationRule ^ rule);
public void Remove (System.Web.Configuration.AuthorizationRule rule);
member this.Remove : System.Web.Configuration.AuthorizationRule -> unit
Public Sub Remove (rule As AuthorizationRule)

Parameters

rule - AuthorizationRule
The AuthorizationRule object to remove.

Exceptions

The passed AuthorizationRule object does not exist in the collection, the element has already been removed, or the collection is read-only.

Examples

The following code example shows how to use the Remove method. Refer to the code example in the AuthorizationSection class topic to learn how to get the collection.

// Remove the rule from the collection.
authorizationSection.Rules.Remove(authorizationRule);

' Remove the rule from the collection.
authorizationSection.Rules.Remove(authorizationRule)

Remarks.
https://docs.microsoft.com/en-us/dotnet/api/system.web.configuration.authorizationrulecollection.remove?redirectedfrom=MSDN&view=netframework-4.8
Run an SSIS package with PowerShell SQL Server SSIS Integration Runtime in Azure Data Factory Azure Synapse Analytics (SQL DW) This quickstart demonstrates how to use a PowerShell script to connect to a database server and run an SSIS package. Prerequisites. SSIS PowerShell Provider You can use the SSIS PowerShell Provider to connect to an SSIS catalog and execute packages within it. Below is a basic example of how to execute an SSIS package in a package catalog with the SSIS PowerShell Provider. (Get-ChildItem SQLSERVER:\SSIS\localhost\Default\Catalogs\SSISDB\Folders\Project1Folder\Projects\'Integration Services Project1'\Packages\ | WHERE { $_.Name -eq 'Package.dtsx' }).Execute("false", $null) PowerShell script Provide appropriate values for the variables at the top of the following script, and then run the script to run the SSIS package. Note The following example uses Windows Authentication. To use SQL Server authentication, replace the Integrated Security=SSPI; argument with User ID=<user name>;Password=<password>;. If you're connecting to an Azure SQL Database server, you can't use Windows authentication. 
# Variables $SSISNamespace = "Microsoft.SqlServer.Management.IntegrationServices" $TargetServerName = "localhost" $TargetFolderName = "Project1Folder" $ProjectName = "Integration Services Project1" $PackageName = "Package.dtsx" # Load the IntegrationServices assembly $loadStatus = [System.Reflection.Assembly]::Load("Microsoft.SQLServer.Management.IntegrationServices, "+ "Version=14.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91, processorArchitecture=MSIL") # Create a connection to the server $sqlConnectionString = ` "Data Source=" + $TargetServerName + ";Initial Catalog=master;Integrated Security=SSPI;" $sqlConnection = New-Object System.Data.SqlClient.SqlConnection $sqlConnectionString # Create the Integration Services object $integrationServices = New-Object $SSISNamespace".IntegrationServices" $sqlConnection # Get the Integration Services catalog $catalog = $integrationServices.Catalogs["SSISDB"] # Get the folder $folder = $catalog.Folders[$TargetFolderName] # Get the project $project = $folder.Projects[$ProjectName] # Get the package $package = $project.Packages[$PackageName] Write-Host "Running " $PackageName "..." $result = $package.Execute("false", $null) Write-Host "Done."
https://docs.microsoft.com/en-us/sql/integration-services/ssis-quickstart-run-powershell?view=sql-server-2017
Using RedisInsight

Adding a Redis instance

Now, let's connect RedisInsight to a Redis Server. We can start by connecting to a redis server running on localhost. If the connection is successful, you should start seeing statistics for this redis server.

Cluster Management

RedisInsight provides you with a GUI to manage your Redis Cluster with ease.

CLI

RedisInsight CLI lets you run commands against a redis server. You don't need to remember the syntax - the integrated help shows you all the arguments and validates your command as you type.

Memory Analysis

RedisInsight Memory Analysis helps you analyze your redis instance, reduce memory usage, and improve application performance. Analysis can be done in two ways:

- online mode - In this mode, RedisInsight downloads a rdb file from your connected redis instance and analyzes it to create a temp file with all the keys and metadata required for analysis. In case there is a master-slave connection, RedisInsight downloads the dump from the slave.

Troubleshooting

RedisInsight logs can be found in the .redisinsight directory in your home directory.
https://docs.redislabs.com/latest/ri/using-redisinsight/
To start, make sure you have the most up-to-date version of the Scout for Dog Walkers App (1.9.0+).

Apple iOS
Google Android

Adding a Profile Photo

Log into the app using your email address and password. Use the menu in the upper left hand corner and navigate to the profile section of the app. Tap the "+" button at the top of the profile. Choose a method for uploading your staff profile photo. You can take a photo using the camera or upload an existing photo from your phone's photo gallery. Zoom and/or move the image to fit within the guidelines. Select "Choose" to set your profile photo.

Note: Your customers will see the final profile image in the check-in email.

Removing a Profile Photo

To remove a profile image, tap the existing profile image and select "Remove Profile Picture".
https://docs.scoutforpets.com/en/articles/2468834-add-a-staff-profile-photo
To add or edit a staff member's profile, select the staff icon (pictured) from the side navigation bar.

Add Staff

To add a new staff member, click the button in the top right of the application. Enter all required and relevant fields and click "Save" in the upper right corner of the application.

Required fields:
- Physical Address

Once you have entered the required fields and any of the non-required fields, you have two options:

- Save Only - Saves the staff member without sending an activation email.
- Save + Notify - Saves the staff member and sends an activation email.

The staff activation email contains a link for the staff member to confirm their email address and set a password. Once a password has been set, the staff member is given a link to download the Scout for Dog Walkers App.

Edit Staff

After navigating to the staff management page, click on a staff member to edit. Click the "More" button in the top right hand corner of the staff profile to expand the menu. Select "Update Profile" from the drop down menu. Make any desired changes to the staff profile. When you're finished, click the "Save" button in the top right corner.
https://docs.scoutforpets.com/en/articles/431201-add-edit-staff-members
Viewing alarms

When there are active alarms on your Pexip Infinity deployment, a flashing blue icon appears at the top right of each page of the Administrator interface. To view details of the current alarms, click on this icon or go to the Alarms page.

- Alarms remain in place for as long as the issue exists. After the issue has been resolved (for example, if a conference ends, therefore freeing up licenses) the associated alarm will automatically disappear from the Alarms page.
- Multiple instances of the same type of alarm can be raised. For example, if two Conferencing Nodes are not correctly synchronized to an NTP server, you will see an alarm for each node.
- You can select individual alarms and view the associated documentation (this guide) for suggested causes and resolutions.

The alarm history page shows the details of all historic alarms, including the severity level and the time the alarm was raised and lowered.

An alarm is raised in each of the following situations:

Cloud bursting alarms

The following alarms may be raised in relation to issues with dynamic cloud bursting. See Dynamic bursting to a cloud service for more information about resolving these alarms.
https://docs.pexip.com/admin/viewing_alarms.htm
DocumentConverter can only be deployed with Features of scope 'WebApplication'. The Feature cannot deploy the DocumentConverter. Change the scope of the Feature to 'Site', 'WebApplication' or 'Farm', or move the DocumentConverter to a different Feature.

To suppress this violation in XML SharePoint code, add the following comment right before the XML tag which causes the rule violation. Learn more about SuppressMessage here.

<!-- "SuppressMessage":{"rule":"SPC016301:DoNotDefineDocumentConverterInFeatureWithWrongScope", ... } -->
https://docs.rencore.com/spcaf/v7/SPC016301_DoNotDefineDocumentConverterInFeatureWithWrongScope.html
Map

This report interacts with the Map field type. If you use this field type to draw markers and parcels on the map, you can use the Map report to display all previously entered objects. When you hover over a marker, a popup window with detailed information about the object appears. Access to the report is configurable. For more information on how to set up such a report, see the video review below.
https://docs.rukovoditel.net/index.php?p=43
To add a pet for a customer, click the customer link in the side navigation or the icon below. Choose a customer and click to open their profile. Select the Pets tab at the top of the screen, then click the "Add Pet" button on the top right of the screen. Enter all required and relevant information and click Save in the top right corner of the application.

Required information:
- Pet name
- Pet type
- Breed

To edit a pet, click the edit icon to the right of the existing pet's name.
https://docs.scoutforpets.com/en/articles/1943814-add-edit-pets
This is an archived version of the documentation for SonarQube version 6.7 LTS. See Documentation for current functionality.

Calculation

Calculation must be triggered manually each time a Portfolio structure is modified. Portfolios should also be recomputed on a regular basis to keep them up to date with the most recent project quality snapshots. Portfolios are computed with the SonarQube Scanner. To compute all your Portfolios, run the following command (credentials from a user with "Administer System" or "Execute Analysis" permission are required):

sonar-scanner views -Dsonar.login=<token>

or

sonar-scanner views -Dsonar.login=<login> -Dsonar.password=<pwd>
https://docs.sonarqube.org/display/SONARQUBE67/Configuring+Portfolios+and+Applications
About sub-classifications

Adobe Analytics supports both single-level and multiple-level classification models. A classification hierarchy allows you to apply a classification to a classification. Sub-classification refers to the ability to create classifications of classifications. However, this is not the same as a Classification Hierarchy used to create Hierarchy reports. For more information about classification hierarchies, see Classification Hierarchies.

Each classification in this model is independent and corresponds to a new sub-report for the selected reporting variable. Furthermore, each classification constitutes one data column in the data file, with the classification name as the column heading. For more information about the data file, see Classification Data Files.

Multiple-level classifications consist of parent and child classifications:

- Parent classifications: A parent classification is any classification that has an associated child classification. A classification can be both a parent and a child classification. The top-level parent classifications correspond to single-level classifications (see Single-Level Classifications).
- Child classifications: A child classification is any classification that has another classification as its parent instead of the variable. Child classifications provide additional information about their parent classification. For example, a Campaigns classification might have a Campaign Owner child classification. Numeric classifications also function as metrics in classification reports.

Each classification, either parent or child, constitutes one data column in the data file. The column heading for a child classification uses the following naming format:

<parent_name>^<child_name>

For more information about the data file format, see Classification Data Files.
Although the file template for a multiple-level classification is more complex, the power of multiple-level classifications is that separate levels can be uploaded as separate files. This approach can be used to minimize the amount of data that needs to be uploaded periodically (daily, weekly, and so forth) by grouping data into classification levels that change over time versus those that don't.

If the Key column in a data file is blank, Adobe automatically generates unique keys for each data row. To avoid possible file corruption when uploading a data file with second-level or higher-level classification data, populate each row of the Key column with an asterisk (*). See Common Classification Upload Issues for troubleshooting help.

Product classification data is limited to data attributes directly related to the product. The data is not related to how the products are categorized or sold on the website. Data elements like sale categories, site browse nodes, or sale items are not product classification data. Rather, these elements are captured in report conversion variables. When uploading data files for this product classification, you can upload the classification data as a single file or as multiple files (see below). By separating the color code in file 1 and the color name in file 2, the color name data (which may only be a few rows) needs to be updated only when new color codes are created. This eliminates the color name (CODE^COLOR) field from the more frequently updated file 1 and reduces file size and complexity when generating the data file.
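To make the "<parent_name>^<child_name>" column convention concrete, here is a small sketch that assembles the header and rows of such a data file. The function name, the tab-separated layout, and all column/row values are assumptions for illustration only; they are not the full Adobe upload format (which has additional header requirements not shown here).

```python
import csv
import io

def build_classification_rows(parent, children, rows):
    """Build classification data-file rows where each child column is
    named '<parent_name>^<child_name>', per the convention above.

    `rows` maps a key to a tuple of (parent_value, child_values...).
    Layout here (tab-separated, no extra header records) is a
    simplification for illustration.
    """
    header = ["Key", parent] + ["%s^%s" % (parent, c) for c in children]
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter="\t", lineterminator="\n")
    writer.writerow(header)
    for key, values in rows.items():
        # An asterisk in the Key column avoids the auto-generated keys
        # mentioned above when uploading higher-level classification data.
        writer.writerow([key or "*"] + list(values))
    return buf.getvalue()
```

For example, `build_classification_rows("Campaigns", ["Campaign Owner"], {"cmp_17": ("Spring Sale", "J. Doe")})` yields a header row of `Key`, `Campaigns`, `Campaigns^Campaign Owner` followed by one data row.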
https://docs.adobe.com/help/en/analytics/components/classifications/c-sub-classifications.html
Introduction

Session Recording allows you to record the on-screen activity of any user session hosted from a VDA for Server OS or Desktop OS, over any type of connection, subject to corporate policy and regulatory compliance. Session Recording records, catalogs, and archives sessions for retrieval and playback. The following illustration shows the Session Recording components and their relationship with each other.

Session Recording uses flexible policies to trigger recordings of application sessions automatically. This enables IT administrators to monitor and examine user activity of applications - such as financial operations and healthcare patient information systems - supporting internal controls for regulatory compliance and security monitoring. Similarly, Session Recording also aids in technical support by speeding problem identification and time-to-resolution.

To be able to quickly store all the XenApp sessions that are recorded and sent by the Citrix Session Recording Agent, the Session Recording Server should be allocated sufficient storage space and be capable of processing the received recording messages with high throughput. If the space available on the server is not enough to store the recording files that are being received as messages from the Citrix Session Recording Agent, the recording files may be queued until enough space is available, or discarded altogether. To avoid such eventualities, it is important for administrators to closely monitor the availability of storage space and throughput, promptly detect abnormalities, and fix them before they impact recording. This is where eG Enterprise helps administrators.
https://docs.eginnovations.com/Citrix_Session_Recording_Server/Introduction_to_Session_Recording_Monitoring.htm
This topic reviews supported languages, frameworks, databases, and markers for OpenShift version 2 (v2) and OpenShift version 3 (v3). See Azure Red Hat OpenShift tested integrations for more information about common combinations that Azure Red Hat OpenShift customers are using. See the Supported Databases section of the Database Applications topic.
https://docs.openshift.com/aro/dev_guide/migrating_applications/support_guide.html
Developing with Sorted Sets in a CRDB

Similar to Redis Sets, Redis Sorted Sets are non-repeating collections of Strings. The difference between the two is that every member of a Sorted Set is associated with a score used to order the Sorted Set from lowest to highest. While members are unique, they may have the same score. With Sorted Sets, you can quickly add, remove or update elements, as well as get ranges by score or by rank (position). Sorted Sets in CRDBs behave the same and maintain additional metadata to handle concurrent conflicting writes. Conflict resolution is done in two phases:

- First, the database resolves conflicts at the set level using an OR-Set (Observed-Remove Set). With OR-Set behavior, writes across multiple CRDB instances are typically unioned, except in cases of conflicts. Conflicting writes can happen when one CRDB instance deletes an element while another adds or updates the same element. In this case, the Observed-Remove rule is followed: the delete removes only the elements that the deleting instance had already observed. In all other cases, the add/update of the element wins.
- Second, the database resolves conflicts at the score level. In this case, the score is treated as a counter, and the same conflict resolution as for regular counters applies.

See the following examples to get familiar with Sorted Sets' behavior in a CRDB:

Example of Simple Sorted Set with No Conflict:

Explanation: When adding two different elements to a Sorted Set from different replicas (in this example, x with score 1.1 was added by Instance 1 to Sorted Set Z, and y with score 1.2 was added by Instance 2 to Sorted Set Z) in a non-concurrent manner (i.e., each operation happened separately and after both instances were in sync), the end result is a Sorted Set including both elements in each CRDB instance.
Example of Sorted Set and Concurrent Add: Explanation: When concurrently adding an element x to a Sorted Set Z by two different CRDB instances (Instance 1 added score 1.1 and Instance 2 added score 2.1), the CRDB implements Last Write Win (LWW) to determine the score of x. In this scenario, Instance 2 performed the ZADD operation at time t2>t1 and therefore the CRDB sets the score 2.1 to x. Example of Sorted Set with Concurrent Add Happening at the Exact Same Time: Explanation: The example above shows a relatively rare situation, in which two CRDB instances concurrently added the same element x to a Sorted Set at the same exact time but with a different score, i.e. Instance 1 added x with a 1.1 score and Instance 2 added x with a 2.1 score. After syncing, the CRDB realized that both operations happened at the same time and resolved the conflict by arbitrarily (but consistently across all CRDB instances) giving precedence to Instance 1. Example of Sorted Set with Concurrent Counter Increment: Explanation: The result is the sum of all ZINCRBY operations performed by all CRDB instances. Example of Removing an Element from a Sorted Set: Explanation: At t4 - t5, concurrent ZREM and ZINCRBY operations ran on Instance 1 and Instance 2 respectively. Before the instances were in sync, the ZREM operation could only delete what had been seen by Instance 1, so Instance 2 was not affected. Therefore, the ZSCORE operation shows the local effect on x. At t7, after both instances were in-sync, the CRDB resolved the conflict by subtracting 4.1 (the value of element x in Instance 1) from 6.1 (the value of element x in Instance 2).
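The score-level last-write-wins behavior in the examples above can be illustrated with a small, self-contained sketch. This is a toy in-memory model for intuition only (the class name LWWSortedSet and the explicit timestamps are invented here); in a real CRDB the conflict resolution happens inside the database.

```python
class LWWSortedSet:
    """Toy model of per-score last-write-wins (LWW) merging between
    CRDB instances. Not an actual Redis client."""

    def __init__(self, instance_id):
        self.instance_id = instance_id
        # member -> (score, write_timestamp, origin_instance_id)
        self.entries = {}

    def zadd(self, member, score, ts):
        self.entries[member] = (score, ts, self.instance_id)

    def zscore(self, member):
        entry = self.entries.get(member)
        return entry[0] if entry else None

    def merge(self, other):
        """Union both replicas; on a conflicting member the later write
        wins, and an exact-time tie goes consistently to the lower
        instance id (mirroring the 'arbitrary but consistent' rule)."""
        for member, theirs in other.entries.items():
            mine = self.entries.get(member)
            # Later timestamp wins; on a tie, the lower origin id wins.
            if mine is None or (theirs[1], -theirs[2]) > (mine[1], -mine[2]):
                self.entries[member] = theirs
```

For example, if Instance 1 writes x with score 1.1 at t=100 and Instance 2 writes x with score 2.1 at t=200, merging in both directions converges both replicas on 2.1; with identical timestamps, both converge on Instance 1's value.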
https://docs.redislabs.com/latest/rs/developing/crdbs/developing-sorted-sets-crdb/
Mailman 3 Core.

>>> command = cli('mailman.commands.cli_info.info')
>>> command('mailman info')
GNU Mailman 3...
Python ...
...
config file: .../test.cfg
db url: ...
REST root url: ...
REST credentials: restadmin:restpass
https://mailman.readthedocs.io/en/latest/src/mailman/rest/docs/rest.html
How to Create a Visual C# SMO Project in Visual Studio .NET

- On the File menu, click New and then Project. The New Project dialog box appears.
- In the Visual Studio Installed pane, navigate to Templates\Visual C#\Windows and select Console Application.
- (Optional) In the Name text box, type the name of the new application.
- Click OK to load the console application template.
- Follow the instructions in Installing SMO to install the package for your project to reference.
- On the View menu, click Code.
https://docs.microsoft.com/en-us/sql/relational-databases/server-management-objects-smo/how-to-create-a-visual-csharp-smo-project-in-visual-studio-net?view=sql-server-2017
<tablelist> The <tablelist> element references a topic that contains a list of tables within the book. It indicates to the processing software that the author wants a list of tables generated at the particular location. If no @href attribute is specified on the <tablelist> element, an external processor might generate a list of tables at this location. - map/topicref bookmap/tablelist.
http://docs.oasis-open.org/dita/dita/v1.3/os/part2-tech-content/langRef/technicalContent/tablelist.html
Migrating VM Storage to the Cloud

When a VM is running in the cloud using the "Run in Cloud" operation, it uses the Velostrata intelligent "Cache on Demand" mechanism, which uses sophisticated algorithms to efficiently predict and stream any needed storage from the source VM to the destination cloud. At any point in time you can initiate full storage migration, which starts the process of migrating all data to the destination cloud. Storage migration allows for a fully migrated server that can be detached as a permanent cloud instance, no longer dependent on the source VM storage. Storage migration can also be used for a VM running in the cloud temporarily; in this case, storage migration offers an additional data redundancy point (backup), reduced network traffic, and better handling of WAN outages.

- On the vSphere Web Client, select the desired virtual machine.
- If the VM is still running on-premises, follow the steps in Running the Migration Wizard.
- If the VM is already running in the cloud, view the Summary tab, and in the Storage Migration row, click the green arrow and select Migrate VM storage to Cloud. Alternatively, right-click the VM and select Velostrata Operations > Migration Operations > Start Storage Migration.

The data migration starts. The migration can take minutes to hours depending on various parameters, such as the size of the virtual machine, the bandwidth to the cloud, and the current utilization. It is important to note that the storage migration activity is de-prioritized versus the regular storage reads of the VM, and the storage migration occurs primarily in idle times. This prevents any performance degradation for the production VMs running in the cloud. Migration can be monitored in the VM portlet on the vSphere Web Client; this can also be used to pause the migration if desired. In addition, you can also view a Migration Throughput graph.
- On the vSphere Web Client, select the desired datacenter linked to the cloud extension.
- In the Monitor tab, select the Velostrata Service tab, then choose the specific cloud extension and the time range to view the graph.

When storage migration is complete, the VM shows as Fully Cached. In this state, all VM disks are stored in a GCP bucket (or AWS S3 object/Azure blob) and are accessible by our Cloud Edge appliances for all reads and writes. VM writes may still replicate back to the on-premises datastore if the original storage mode policy indicated that it should ("write back" as opposed to "write isolation").

- In PowerShell, connect to the Velostrata Manager by running Connect-VelostrataManager.
- When prompted, enter details for the Server, Username (apiuser) and Password (the subscription ID).
- To detach the cloud source VM storage, run Start-VelosStorageMigration [-Id] <string[]>.
http://docs.velostrata.com/m/75847/l/527636-migrating-vm-storage-to-the-cloud
Mounting

Magnetic Mount

For mounting on even surfaces, such as glass, brick or wood.

You will need:
- Rubbing alcohol or water
- Soft cloth for cleaning

Steps:
- For best results, allow 3 hours for the Magnetic Mount to reach room temperature.
- Clean the chosen surface with alcohol or water and wipe dry.
- Detach the top magnet from the device and peel off the protective sheet.
- Stick the adhesive side onto the chosen surface. Make sure to firmly press every inch against the surface.
- Wait 30 minutes for the adhesive to set before use.
- Attach the device to the magnet.
http://docs.visionect.com/GettingStarted/Mounting.html
eggdrop

The official Docker image of Eggdrop, IRC's oldest actively-developed bot!

Maintained by: Eggheads (the Eggdrop community)

Supported architectures: (more info) amd64

Published image artifact details: repo-info repo's repos/eggdrop/ directory (history) (image metadata, transfer size, etc)

Image updates: official-images PRs with label library/eggdrop; official-images repo's library/eggdrop file (history)

Source of this description: docs repo's eggdrop/ directory (history)

Supported Docker versions: the latest release (down to 1.6 on a best-effort basis)

What is Eggdrop?

Eggdrop is the world's most popular Open Source IRC bot, designed for flexibility and ease of use, and is freely distributable under the GNU General Public License (GPL). It is designed to run on Linux, BSD, SunOS, Windows, and Mac OS X, among others. The core codebase is extendable via Tcl scripts or C modules, and bots can be linked to form botnets, enabling the sharing of userfiles and partylines across multiple bots.

How to use this image

First Run

To run this container the first time, you'll need to pass in, at minimum, a nickname and server via environmental variables. At minimum, a docker run command similar to

$ docker run -ti -e NICK=FooBot -e SERVER=irc.freenode.net -v /path/for/host/data:/home/eggdrop/eggdrop/data eggdrop

should be used. This will modify the appropriate values within the config file, then start your bot with the nickname FooBot and connect it to irc.freenode.net. These variables are only needed for your first run; after the first use, you can edit the config file directly. Additional configuration options are listed in the following sections. Please note that, even in daemon mode, the -i flag for docker run is required.

Environmental Variables

SERVER

This variable sets the IRC server Eggdrop will connect to.
Examples are:

-e SERVER=just.a.normal.server
-e SERVER=you.need.to.change.this:6667
-e SERVER=another.example.com:7000:password
-e SERVER=[2001:db8:618:5c0:263::]:6669:password
-e SERVER=ssl.example.net:+6697

Only one server can be specified via an environmental variable. The + denotes an SSL-enabled port. After the first run, it is advised to edit the eggdrop config directly to add additional servers (see Long-term Persistence below).

NICK

This variable sets the nickname used by eggdrop. After the first use, you should change it by editing the eggdrop config directly (see Long-term Persistence below).

Long-term Persistence

After running the eggdrop container for the first time, the configuration file, user file and channel file will all be available inside the container at /home/eggdrop/eggdrop/data/.

NOTE! These files are only as persistent as the container they exist in. If you expect to use a different container over the course of using the Eggdrop docker image (intentionally or not), you will want to create a persistent data store. The easiest way to do this is to mount a directory on your host machine to /home/eggdrop/eggdrop/data. If you do this prior to your first run, you can easily edit the eggdrop configuration file on the host. Otherwise, you can also drop in existing config, user, or channel files into the mounted directory for use in the eggdrop container. You'll also likely want to daemonize eggdrop (i.e., run it in the background).
To do this, start your container with something similar to:

$ docker run -i -e NICK=FooBot -e SERVER=irc.freenode.net -v /path/to/eggdrop/files:/home/eggdrop/eggdrop/data -d eggdrop

If you provide your own config file, specify it as the argument to the docker container:

$ docker run -i -v /path/to/eggdrop/files:/home/eggdrop/eggdrop/data -d eggdrop mybot.conf

Any config file used with docker MUST end in .conf, such as eggdrop.conf or mybot.conf.

Adding scripts

An easy way to add scripts would be to create a scripts directory on the host and mount it to /home/eggdrop/eggdrop/scripts (or the path of your choosing). This would be accomplished by adding an option similar to -v /path/to/host/scripts:/home/eggdrop/eggdrop/scripts to your docker run command line (and then editing your config file to load the scripts from the path that matches where you mounted the scripts dir).

Exposing network ports

If you want to expose network connections for your bot, you'll also want to use the -p flag to expose whichever port you specified in the config as the listen port (default is 3333). For example, to expose port 3333, add -p 3333:3333 to your docker run command line.

Troubleshooting / Support

For additional help, you can join the #eggdrop channel on Freenode. The git repository for the Dockerfile is maintained in the eggdrop/ directory. As for any pre-built image usage, it is the image user's responsibility to ensure that any use of this image complies with any relevant licenses for all software contained within.
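The SERVER value format shown in the examples above (host, optional port, optional password, with + marking an SSL port and IPv6 hosts in brackets) can be sketched as a small parser. This helper is hypothetical, written for illustration only; it is not part of the eggdrop image.

```python
def parse_server(value):
    """Parse one eggdrop SERVER value: host[:port[:password]].
    A '+' before the port marks an SSL-enabled port; IPv6 hosts
    are wrapped in square brackets."""
    if value.startswith("["):                     # IPv6 literal, e.g. [2001:db8::1]:6669
        end = value.index("]")
        host, rest = value[1:end], value[end + 1:].lstrip(":")
    else:
        host, _, rest = value.partition(":")
    port, password, ssl = None, None, False
    parts = rest.split(":") if rest else []
    if parts:
        port_str = parts[0]
        if port_str.startswith("+"):              # '+' denotes SSL
            ssl, port_str = True, port_str[1:]
        port = int(port_str)
    if len(parts) > 1:
        password = parts[1]
    return {"host": host, "port": port, "password": password, "ssl": ssl}
```

For example, `parse_server("ssl.example.net:+6697")` reports port 6697 with SSL enabled, while `parse_server("just.a.normal.server")` leaves port and password unset.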
https://docs.docker.com/samples/library/eggdrop/
French Childcare Assistance Factsheet

If you have trouble accessing this document, please contact us to request a copy in a format you can use. This document provides families who speak French with information about the financial assistance available to help cover the cost of approved child care.
https://docs.education.gov.au/node/43701
View a DNS server debug log file

Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2

To view a DNS server debug log file:

- Stop the DNS Server service.
- Open WordPad.
- On the File menu, click Open.
- In Open, for File name, specify the path to the DNS server debug log file. By default, if the applicable DNS server is running locally, the file and path are as follows: systemroot\System32\Dns\Dns.log
- After you specify the correct path and file, click Open to view the log file.

To open WordPad, click Start, point to All Programs, point to Accessories, and then click WordPad. To stop the DNS Server service, see Related Topics. The location of the Dns.log file is managed using the DNS console. To specify the name and location of the Dns.log file, see Related Topics. By default, the Dns.log file is empty if you have not previously enabled debug logging options. Debug logging slows DNS server performance and should only be enabled for temporary use.

Information about functional differences: Your server might function differently based on the version and edition of the operating system that is installed, your account permissions, and your menu settings. For more information, see Viewing Help on the Web.

See Also

- Start or stop a DNS server
- View the DNS server system event log
- Using server debug logging options
- DNS tools
- Select and enable debug logging options on the DNS server
- Disable debug logging options on the DNS server
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2003/cc776445(v=ws.10)
Managing Cookies

Under the HTTP protocol, a server or a script uses cookies to maintain state information on the client workstation. The WinINet functions have implemented a persistent cookie database for this purpose. They can be used to set cookies in and access cookies from the cookie database. For more information, see HTTP Cookies. The InternetSetCookie and InternetGetCookie functions can be used to manage cookies.

Using Cookie Functions

The following functions allow an application to create or retrieve cookies in the cookie database. Note that these functions do not require a call to InternetOpen. Cookies that have an expiration date are stored in the local user's account under the Users\"username"\AppData\Roaming\Microsoft\Windows\Cookies directory, and under the Users\"username"\AppData\Roaming\Microsoft\Windows\Cookies\Low directory for applications running under low privileges. Cookies that do not have an expiration date are stored in memory and are available only to the process in which they were created.

As noted in the HTTP Cookies topic, the InternetGetCookie function does not return cookies that have been marked by the server as non-scriptable with the "HttpOnly" attribute in the Set-Cookie header.

Getting a Cookie

InternetGetCookie returns the cookies for the specified URL and all its parent URLs. The following example demonstrates a call to InternetGetCookie.

TCHAR  szURL[256];         // buffer to hold the URL
LPTSTR lpszData = NULL;    // buffer to hold the cookie data
DWORD  dwSize = 0;         // variable to get the buffer size needed

// Insert code to retrieve the URL.

retry:

// The first call to InternetGetCookie will get the required
// buffer size needed to download the cookie data.
if (!InternetGetCookie(szURL, NULL, lpszData, &dwSize))
{
    // Check for an insufficient buffer error.
    if (GetLastError() == ERROR_INSUFFICIENT_BUFFER)
    {
        // Allocate the necessary buffer.
        lpszData = new TCHAR[dwSize];

        // Try the call again.
        goto retry;
    }
    else
    {
        // Insert error handling code.
    }
}
else
{
    // Insert code to display the cookie data.

    // Release the memory allocated for the buffer.
    delete [] lpszData;
}

Setting a Cookie

InternetSetCookie is used to set a cookie on the specified URL. InternetSetCookie can create both persistent and session cookies. A persistent cookie must include an expiration date. The data for the cookie should be in the format:

NAME=VALUE

For the expiration date, the format must be:

DAY, DD-MMM-YYYY HH:MM:SS GMT

DAY is the three-letter abbreviation for the day of the week, DD is the day of the month, MMM is the three-letter abbreviation for the month, YYYY is the year, and HH:MM:SS is the time of day in military time.

The following example demonstrates two calls to InternetSetCookie. The first call creates a session cookie and the second creates a persistent cookie.

BOOL bReturn;

// Create a session cookie.
bReturn = InternetSetCookie(TEXT(""), NULL, TEXT("TestData = Test"));

// Create a persistent cookie.
bReturn = InternetSetCookie(TEXT(""), NULL,
    TEXT("TestData = Test; expires = Sat,01-Jan-2000 00:00:00 GMT"));

Note: WinINet does not support server implementations. In addition, it should not be used from a service. For server implementations or services, use Microsoft Windows HTTP Services (WinHTTP).
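As a quick illustration of the DAY, DD-MMM-YYYY HH:MM:SS GMT layout described above, here is a small Python sketch (unrelated to WinINet itself) that produces expiration strings in that form, for instance when building the cookie data string from another language:

```python
from datetime import datetime, timezone

def cookie_expiry(dt):
    """Format a datetime as 'DAY, DD-MMM-YYYY HH:MM:SS GMT'.
    Note: %a and %b are locale-dependent; this assumes an English
    (C) locale, where they yield e.g. 'Sat' and 'Jan'."""
    return dt.astimezone(timezone.utc).strftime("%a, %d-%b-%Y %H:%M:%S GMT")
```

For example, midnight UTC on 1 January 2000 formats as "Sat, 01-Jan-2000 00:00:00 GMT", matching the persistent-cookie example above.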
https://docs.microsoft.com/en-us/windows/desktop/WinInet/managing-cookies
Streaming enrichment information is useful when you need enrichment information in real time. This type of information is most useful in real time as opposed to waiting for a bulk load of the enrichment information. You incorporate streaming intelligence feeds slightly differently than when you use bulk loading. The enrichment information resides in its own parser topology instead of in an extraction configuration file. The parser file defines the input structure and how that data is used in enrichment. Streaming information goes to Apache HBase rather than to Apache Kafka, so you must configure the writer by using both the writerClassName and simple HBase enrichment writer (shew) parameters.

Define a parser topology in $METRON_HOME/zookeeper/parsers/user.json:

touch $METRON_HOME/config/zookeeper/parsers/user.json

Populate the file with the parser topology definition. For example, the following configuration associates IP addresses with user names for the Squid information.

- parserClassName: The parser name.
- writerClassName: The writer destination. For streaming parsers, the destination is SimpleHbaseEnrichmentWriter.
- sensorTopic: Name of the sensor topic.
- shew.table: The simple HBase enrichment writer (shew) table to which you want to write.
- shew.cf: The simple HBase enrichment writer (shew) column family.
- shew.keyColumns: The simple HBase enrichment writer (shew) key.
- shew.enrichmentType: The simple HBase enrichment writer (shew) enrichment type.
- columns: The CSV parser information. In this example, the user name and IP address.

This file fully defines the input structure and how that data can be used in enrichment.
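A sketch of what such a parser definition might look like, assembled from the field descriptions above. The fully qualified class names, the table and column-family values, and the column indices are assumptions for illustration; check them against your Metron installation rather than copying them verbatim.

```json
{
  "parserClassName": "org.apache.metron.parsers.csv.CSVParser",
  "writerClassName": "org.apache.metron.writer.hbase.SimpleHbaseEnrichmentWriter",
  "sensorTopic": "user",
  "parserConfig": {
    "shew.table": "enrichment",
    "shew.cf": "t",
    "shew.keyColumns": "ip",
    "shew.enrichmentType": "user",
    "columns": {
      "user": 0,
      "ip": 1
    }
  }
}
```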
Push the configuration file to Apache ZooKeeper: Create a Kafka topic sized to manage your estimated data flow: /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create --zookeeper $ZOOKEEPER_HOST:2181 --replication-factor 1 --partitions 1 --topic user Push the configuration file to ZooKeeper: $METRON_HOME/bin/zk_load_configs.sh -m PUSH -z $ZOOKEEPER_HOST:2181 -i $METRON_HOME/zookeeper Start the user parser topology: $METRON_HOME/bin/start_parser_topology.sh -s user -z $ZOOKEEPER_HOST:2181 -k $KAKFA_HOST:6667 The parser topology listens for data streaming in and pushes the data to HBase. Data is flowing into the HBase table, but you must ensure that the enrichment topology can be used to enrich the data flowing past. Edit the new data source enrichment configuration at $METRON_HOME/config/zookeeper/enrichments/squidto associate the ip_src_addrwith the user name: { "enrichment" : { "fieldMap" : { "hbaseEnrichment" : [ "ip_src_addr" ] }, "fieldToTypeMap" : { "ip_src_addr" : [ "user" ] }, "config" : { } }, "threatIntel" : { "fieldMap" : { }, "fieldToTypeMap" : { }, "config" : { }, "triageConfig" : { "riskLevelRules" : { }, "aggregator" : "MAX", "aggregationConfig" : { } } }, "configuration" : { } } Push the new data source enrichment configuration to ZooKeeper: $METRON_HOME/bin/zk_load_configs.sh -m PUSH -z $ZOOKEEPER_HOST:2181 -i $METRON_HOME/zookeeper
https://docs.hortonworks.com/HDPDocuments/HCP1/HCP-1.4.2/bk_administration/content/creating_a_streaming_enrichment_feed_source.html
2018-08-14T13:46:37
CC-MAIN-2018-34
1534221209040.29
[]
docs.hortonworks.com
For our runbook demonstration, we create a mock enrichment source. In your production environment you will want to use a genuine enrichment source. To create a mock enrichment source, complete the following steps: As root user, log into $HOST_WITH_ENRICHMENT_TAG. sudo -s $HOST_WITH_ENRICHMENT_TAG Copy and paste the following data into a file called whois_ref.csvin $METRON_HOME/config. This CSV file represents our enrichment source. Make sure you don't have an empty newline character as the last line of the CSV file, as that will result in a null pointer exception.
https://docs.hortonworks.com/HDPDocuments/HCP1/HCP-1.4.2/bk_runbook/content/create_mock_enrichment_source.html
2018-08-14T13:46:35
CC-MAIN-2018-34
1534221209040.29
[]
docs.hortonworks.com
Welcome to JCMsuite’s Parameter Reference!¶ This parameter reference gives full reference to all input parameters for JCMsuite. It is written for engineers who work with JCMsuite and who need insight into the handling of their specific projects with JCMsuite. This parameter reference is not, however, intended to provide all of the introductory material needed when JCMsuite is used for the first time. For such general information about the application areas, installation, first use, and scripting support, please see here. Table Of Contents
https://docs.jcmwave.com/JCMsuite/html/ParameterReference/index.html?version=3.12.9
2018-08-14T14:13:41
CC-MAIN-2018-34
1534221209040.29
[]
docs.jcmwave.com
i-Docs was thrilled to host two arts media practitioners at the recent Symposium in March, in a double bill keynote: ‘At the Intersection of Technology, Art, Science & the Future’. Robin McNicholas, from award-winning creative studio Marshmallow Laser Feast, has directed myriad VR experiences and installations, and took the i-Docs audience on a journey through some of these works and their development. Following Robin, the audience heard from the co-founder of Hyphen-Labs, Carmen Aguilar y Wedge. Carmen discussed broadening the technological and social imagination through developing inclusive and innovative projects such as NeuroSpeculative AfroFeminism, which was on show during the i-Docs showcase, Immerse Yourself. Watch the full video with both keynotes here:
http://i-docs.org/2018/05/30/video-at-the-intersection-of-technology-art-science-the-future-i-docs-2018/
2018-08-14T14:20:40
CC-MAIN-2018-34
1534221209040.29
[]
i-docs.org
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. The response to a ListSqlInjectionMatchSets request. Namespace: Amazon.WAFRegional.Model Assembly: AWSSDK.WAFRegional.dll Version: 3.x.y.z The ListSqlInjectionMatchSetsResponse;
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/WAFRegional/TListSqlInjectionMatchSetsResponse.html
2018-08-14T14:51:51
CC-MAIN-2018-34
1534221209040.29
[]
docs.aws.amazon.com
Outlining PHP Editor allows you to collapse the content of functions, classes, namespaces and PHPDoc to have a better overview of your code. Moreover, when PHP Editor finds a syntax error or a logical error, the corresponding expression is underlined with a red wave and the error detail is listed in Visual Studio Error List tool window. Default Shortcuts - Ctrl+M, O - Collapse all blocks to definitions. - Ctrl+M, P - Stop outlining. - Ctrl+M, M - Toggle outlining. - Ctrl+M, L - Toggle all outlining. Collapsible Regions The following list describes all the code fragments supporting outlining: - Class body - Namespace content - Function body - Lambda functions - PHPDoc comment block - Multi-line comments - Group of single-line comments - PHP script tags containing more than one line of code - PHP content between #region/ #endregionor //region/ //endregion - Code blocks enclosed in { ... }(since version 1.18) - Content of switch, caseand default(since version 1.18) - Code enclosed within for, foreach, if, else, elseif(since version 1.18) Outlining behavior can be modified in PHP language options. To disable automatic outlining of a newly opened file, or to change additional outlining options, go to Tools | Options | Text Editor | PHP | Advanced. #region Outlining Single line comments starting with the region keyword are treated as the start of collapsible region, and are matched with a following endregion comment. Regions can be nested. Alternately, the user can specify the region name after the region keyword.
https://docs.devsense.com/en/editor/outlining
2018-08-14T13:56:06
CC-MAIN-2018-34
1534221209040.29
[array(['https://docs.devsense.com/content_docs/editor/imgs/phptools-outlining.png', 'PHP code outlining. PHP code outlining'], dtype=object) array(['https://docs.devsense.com/content_docs/editor/imgs/phptools-regionoutline.png', 'Outlining of #region sections. Outlining of #region sections'], dtype=object) ]
docs.devsense.com
Chinese Simplified Childcare Assistance Factsheet If you have trouble accessing this document, please contact us to request a copy in a format you can use. View this document as… This document provides families who read Chinese Simplified with information about the financial assistance available to help cover the cost of approved child care.
https://docs.education.gov.au/node/43706
2018-08-14T13:13:13
CC-MAIN-2018-34
1534221209040.29
[]
docs.education.gov.au
vCenter Server periodically updates storage data in its database. The updates are partial and reflect only those changes that storage providers communicate to vCenter Server. When needed, you can perform a full database synchronization for the selected storage provider. Procedure - Browse to vCenter Server in the vSphere Web Client navigator. - Click the Configure tab, and click Storage Providers. - From the list, select the storage provider that you want to synchronize with and click the Rescan the storage provider ( ) icon. Results The vSphere Web Client updates the storage data for the provider.
https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.storage.doc/GUID-91D40851-F982-4390-ACA3-483BCEC8F93C.html
2018-08-14T13:35:10
CC-MAIN-2018-34
1534221209040.29
[]
docs.vmware.com
This module provides access to the select() and poll() functions available in most operating systems, epoll() available on Linux 2.5+ and kqueue() available on most BSD.

exception select.error
The exception raised when an error occurs.

select.epoll([sizehint])
(Only supported on Linux 2.5.44 and newer.) Returns an edge polling object, which can be used as an Edge or Level Triggered interface for I/O events. New in version 2.6.

select.kqueue()
(Only supported on BSD.) Returns a kernel queue object; see section Kqueue Objects below for the methods supported by kqueue objects. New in version 2.6.

select.kevent(ident, filter=KQ_FILTER_READ, flags=KQ_EV_ADD, fflags=0, data=0, udata=0)
(Only supported on BSD.) Returns a kernel event object; see section Kevent Objects below for the methods supported by kevent objects. New in version 2.6.

select.select(rlist, wlist, xlist[, timeout])
This is a straightforward interface to the Unix select() system call. The first three arguments are sequences of ‘waitable objects’: either integers representing file descriptors or objects with a parameterless method named fileno() returning such an integer. Empty sequences are allowed, but acceptance of three empty sequences is platform-dependent. (It is known to work on Unix but not on Windows.) The optional timeout argument specifies a time-out as a floating point number in seconds. When the timeout argument is omitted the function blocks until at least one file descriptor is ready. A time-out value of zero specifies a poll and never blocks.

select.PIPE_BUF
Files reported as ready for writing by select(), poll() or similar interfaces in this module are guaranteed to not block on a write of up to PIPE_BUF bytes. This value is guaranteed by POSIX to be at least 512. Availability: Unix. New in version 2.7.

epoll.poll(timeout=-1, maxevents=-1)
Wait for events. timeout is in seconds (float).

poll.register(fd[, eventmask])
Register a file descriptor with the polling object. Future calls to the poll() method will then check whether the file descriptor has any pending I/O events. fd can be either an integer, or an object with a fileno() method that returns an integer. File objects implement fileno(), so they can also be used as the argument.

poll.modify(fd, eventmask)
Modifies an already registered fd. This has the same effect as register(fd, eventmask). Attempting to modify a file descriptor that was never registered causes an IOError exception with errno ENOENT to be raised. New in version 2.6.

poll.unregister(fd)
Remove a file descriptor being tracked by a polling object.
Just like the register() method, fd can be an integer or an object with a fileno() method that returns an integer. Attempting to remove a file descriptor that was never registered causes a KeyError exception to be raised.

poll.poll([timeout])
Polls the set of registered file descriptors, and returns a possibly-empty list containing (fd, event) 2-tuples for the descriptors that have events or errors to report.
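A minimal, self-contained illustration of the select() call described above (the socket pair here is just a convenient stand-in for any waitable file descriptor):

```python
import select
import socket

# A connected pair of sockets gives us a readable descriptor to watch.
a, b = socket.socketpair()
b.sendall(b"ping")

# Three 'waitable' sequences, plus a timeout in seconds; select() returns
# the subsets of each sequence that are ready.
readable, writable, exceptional = select.select([a], [], [], 1.0)
print(a in readable)   # data is pending on a, so it is reported readable

a.close()
b.close()
```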
http://docs.python.org/2/library/select.html
2014-03-07T09:17:35
CC-MAIN-2014-10
1393999639954
[]
docs.python.org
To perform this task, your BlackBerry® device must be associated with an email account that uses a BlackBerry® Enterprise Server that supports this feature. For more information, contact your administrator.
http://docs.blackberry.com/en/smartphone_users/deliverables/9271/Search_for_contacts_org_address_book_4_6_410171_11.jsp
2014-03-07T09:23:59
CC-MAIN-2014-10
1393999639954
[]
docs.blackberry.com
Copy this to trunk VERSION.txt - branches to be NOTE: you will need special utilities installed in order to build the debian module (see the README in contrib/debian for details). - jetty-j2se6-x.y.z.jar - jetty6-all.deb - ONLY WHEN YOU ARE REALLY REALLY REALLY REALLY REALLY SURE EVERYTHING IS REALLY REALLY REALLY OK, then push the maven artifacts: promote the releases - Update jira to release the version, and enter the next version number - Tell everybody: blogs, lists, etc. - Wait for the bug reports to come in
http://docs.codehaus.org/pages/viewpage.action?pageId=4391182
2014-03-07T09:21:02
CC-MAIN-2014-10
1393999639954
[]
docs.codehaus.org
A sequencer in ModeShape is a component that is able to process information (usually the content of a file, or a property value on a node) and recreate that information as a graph of structured content. This package defines the interfaces for the sequencing system. The StreamSequencer interface is a special form of sequencer that processes information coming through an InputStream. Implementations are responsible for processing the content and generating structured content using the supplied SequencerOutput interface. Additional details about the information being sequenced are available in the supplied StreamSequencerContext.
http://docs.jboss.org/modeshape/2.3.0.Final/api-full/org/modeshape/graph/sequencer/package-summary.html
2014-03-07T09:18:31
CC-MAIN-2014-10
1393999639954
[]
docs.jboss.org
The third Groovy Developer Conference took place in Paris, in Sun's offices, on the 29th and the 30th of January 2007. This event was kindly sponsored by the Codehaus Foundation and the room booked by Alexis Moussine-Pouchkine.

Were present:

During this meeting, we followed the agenda outlined there. The meeting has been productive and the resulting decisions will be explained in the following sections.

First of all, we have decided to keep the current naming / number scheme of betas and RCs. The number of betas and RCs hasn't been decided yet.

The next major milestone will be 1.1 (and not 2.0 as we might have supposed). We are aiming at releasing this 1.1 before the end of 2007, ideally in Q3. The next versions will thus be:

We might however provide some 1.x.y releases in case some critical bugs are found that can't wait for the next major milestones.

So far, we haven't felt a strong need to create branches. We will continue to work on SVN trunk to add new features as well as for fixing bugs. This will not complicate our work for dealing with different branches.

For further enhancements / additions / changes proposals, we should formalize them in the form of a dedicated document. Discussing such proposals is often tedious on mailing-lists because we often miss the "big picture" or the tricky details, and the discussion might finish in a dead end. The proposal should provide some context of why such an enhancement is proposed, explain the way it solves a certain problem, and give code samples, corner cases, ideas for the implementation, or even a prototype implementation as far as possible. The details of such a template for proposals should be provided soon.

We are going to reorganize the sources and tests to separate:

This also means we should separate more cleanly:

Core language classes are everything related to the core concepts of the language as well as the GDK. Library classes contain classes related to various APIs like JMX, Swing, and such. Module content usually requires additional dependencies beyond the ones of the JDK and also usually follows its own release cycle.

Separating sources also means creating different artifact deliverables:

Having a small groovy-core library is particularly interesting for those wishing to integrate Groovy in their Java applications.

Regarding the tests, we should separate the 1.0 tests in a dedicated folder, while creating new folders for including "Groovy in Action" tests, new tests for 1.1 (with tests specific to Java 1.5 features), as well as tests for the TCK of JSR-241.

The ASL 2 license has to be applied and enforced everywhere -- instead of copying the old BSD license again and again.

Despite the great progress made on improving, adding new content, and reorganizing the documentation, more care should be given to the online documentation in the future. Newcomers should easily find their way through the website and should be able to get started quickly. Non-documented features should be identified and reported so that actions can be taken to improve those areas.

Particular care should also be given to improving the JavaDoc of the core Groovy classes.

We will progressively create the most simplistic build that can possibly exist to build the artifacts and pass the test cases, as well as providing reporting capabilities (test reports, coverage) for integration in our Continuous Integration process. Users will also be able to very easily build Groovy themselves with raw Ant without having to install anything like Maven or another build tool.

Other facilities such as creating the distributions can potentially be handled differently (Gant or other).

The deliverables of JSR-241 consist of:

The RI has been released in the form of Groovy 1.0. The formal grammar. The Abstract Syntax Tree will be improved to provide better support for tools, especially IDEs, and the "groovydoc" documentation tool. All possible help should be provided to teams working on IDE plugins.

"groovydoc" should as a first step be able to generate the JavaDoc documentation from the Groovy classes, but should eventually be able to generate documentation for both Java and Groovy classes so that inter-links can be created. More thoughts and discussions might happen to see whether solutions could be found to document classes with dynamic behaviors, like Builders or MetaClasses.

The Groovy console and shell will be revamped to become even more user-friendly.

The grammar will be cleaned up slightly to remove some artifacts of ideas brushed during the past JSR meetings but which have never been implemented or have been rejected after discussions.

Groovy 1.1 will provide annotation support and will integrate seamlessly with Java 5's annotations. Despite Groovy supporting this particular Java 5 feature, the rest of the project will always stay compatible with bytecode for 1.4 JVMs, except for annotated classes, which will be compiled as bytecode for 1.5 JVMs. Annotated classes will only be able to be compiled when the compiler is run on a JVM 1.5, while the rest of the Groovy classes will always be able to compile and run both on 1.4 and 1.5 and beyond. Specific test suites and probably DGM methods requiring a JVM 1.5 will be separated and will not prevent anyone from using Groovy on a JVM 1.4.

It is important that Groovy stays compatible with 1.4 as much as possible so that Groovy can still be used in corporate environments that have not yet made the switch to more recent versions of their JVMs.

Enum will be supported, at least partially.

We may reserve ourselves the right to not provide all the extensibility provided by Java Enums (particularly the capability to define behavior on Enums).

While we usually avoid providing users with too many choices for implementing a given thing, we listened to newcomers wishing to be able to copy and paste some Java code containing "old for loops". Groovy 1.1 will add the ability to use classical for loops as provided by Java or C#.

The ExpandoMetaClass pioneered by the Grails project will be introduced in Groovy itself. Those MetaClasses should not be extensible by default, as they can be dangerous in case of concurrent access and shared environments.

Additionally, a new convention will be introduced to define a script customizing various MetaClasses. For instance, a script in the magic package groovy.runtime.metaclass.script can hold a script using ExpandoMetaClasses to enhance other classes.

The experimental date / time / duration support of the GData module will be back in the core so as to provide a nice syntax for dealing with date and time handling. This will be based on the Java calendar class, as we don't want to add an additional dependency of the core on Joda-Time despite its great merits. But the day JSR-310 finds its way into the JDK, we might certainly upgrade our implementation to use this API, as the underlying implementation should be transparent to the users.

While it is possible to call methods without parentheses for top-level statements, it only works for methods with normal arguments, but not with methods taking named arguments (map literal). To provide even more readability and expressivity, we decided to also allow omitting parentheses in that case, to help with the definition of Domain-Specific Languages.

We are considering adding the possibility of doing multiple assignments. Methods returning lists could spread the elements of their list to multiple variables at a time. This mechanism may even be used later in more complex scenarios like retrieving groups from a regular expression.

This section contains some additional information on what has happened and been decided since the meeting.

Groovy will sport minimal generics support so that declarations like List<Employee> add the relevant reflective bytecode information needed by frameworks like JPA.

Make map and closure coercion work also for classes and abstract classes, and not only for interfaces.

c = foo ?: bar // equivalent to c = foo ? foo : bar
http://docs.codehaus.org/exportword?pageId=71920
2014-03-07T09:16:43
CC-MAIN-2014-10
1393999639954
[]
docs.codehaus.org
Description / Features

The plugin computes and feeds Sonar with five (5) new metrics: Authors Activity, Commits / Author, Commits / Clock Hour, Commits / Week Day and Commits / Month. Four project widgets (under the SCM category) display these metrics using graphical representations.

The "Author activity" widget

If SCM access is available only with username/password, or no SCM information is included in the project's pom.xml, you have to also install the SCM Activity plugin:
- Set the sonar.scm.user.secured and sonar.scm.password.secured properties of the SCM Activity plugin
- Restart Sonar
- Launch a new quality analysis and the metrics will be fed.

Whole-history stats will be collected only if the sonar.scm-stats.period1 property is set to zero (0). Negative values are ignored for all periods.

Compatibility Matrix

Metrics Definitions

Future Work

Plenty !!! Waiting for your ideas as well!

Change Log

Release 0.2 (10 issues)
http://docs.codehaus.org/pages/viewpage.action?pageId=230397851
2014-03-07T09:17:08
CC-MAIN-2014-10
1393999639954
[array(['/download/attachments/229741975/scm-stats-commits-per-user.png?version=1&modificationDate=1349171731612&api=v2', None], dtype=object) array(['https://dl.dropbox.com/u/16516393/authors_activity.png', None], dtype=object) array(['/download/attachments/229741975/scm-stats-commits-clockhour.png?version=1&modificationDate=1347001896777&api=v2', None], dtype=object) array(['/s/en_GB-1988229788/4727/ffd10e10ff7bc0a1d7e29b4d2225707dd7f03d0b.15/_/images/icons/wait.gif', None], dtype=object) array(['/s/en_GB-1988229788/4727/ffd10e10ff7bc0a1d7e29b4d2225707dd7f03d0b.15/_/images/icons/wait.gif', None], dtype=object) ]
docs.codehaus.org
All content with label amazon+async+aws+configuration+datagrid+eventing+expiration+infinispan+installation+interface+listener+out_of_memory+query. Related Labels: publish, coherence, interceptor, server, replication, recovery, transactionmanager, dist, release, deadlock, archetype, jbossas, lock_striping, nexus, guide, schema, cache, s3, grid, jcache, test, api, xsd, ehcache, maven, documentation, wcm, write_behind, ec2, 缓存, s, hibernate, getting, clustering, setup, eviction, gridfs, concurrency, examples, jboss_cache, import, index, events, hash_function, batch, buddy_replication, loader, xa, cloud, mvcc, notification, tutorial, read_committed, jbosscache3x, xml, distribution, started, cachestore, data_grid, cacheloader, resteasy, hibernate_search, cluster, development, adaptor, websocket, transaction, interactive, xaresource, build, gatein, searchable, demo, scala, client, non-blocking, migration, filesystem, jpa, tx, article, gui_demo, snmp, client_server, infinispan_user_guide, standalone, webdav, snapshot, repeatable_read, hotrod, docs, consistent_hash, batching, store, whitepaper, jta, faq, 2lcache, jsr-107, lucene, jgroups, locking, rest, hot_rod more » ( - amazon, - async, - aws, - configuration, - datagrid, - eventing, - expiration, - infinispan, - installation, - interface, - listener, - out_of_memory, - query )
https://docs.jboss.org/author/label/amazon+async+aws+configuration+datagrid+eventing+expiration+infinispan+installation+interface+listener+out_of_memory+query
2020-02-17T02:17:23
CC-MAIN-2020-10
1581875141460.64
[]
docs.jboss.org
Hands-on Lab Drives JNetDirect to 64 Bits

Interesting case study about how a company was able to port their application to 64 bits fairly quickly through the Microsoft Hands-on Lab here at campus. If you're interested in seeing more stories like this, be sure to visit the Microsoft Customer Evidence site. Here's a list of the case studies related specifically to Visual C++. I've been meeting with some of the guys that run this site and I've made some suggestions around adding some more code to these case studies and maybe eventually creating developer papers associated with these stories. I'll let you know how that goes as we move forward.
https://docs.microsoft.com/en-us/archive/blogs/brianjo/hands-on-lab-drives-jnetdirect-to-64-bits
2020-02-17T02:19:05
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
Certified consulting partners to deliver mobile solutions

Tip

Xamarin Consulting Partner Program has merged with the Microsoft Partner Network as of June 30, 2018.

Getting started with the Microsoft Partner Network:
- If you’re not already a registered Microsoft Partner Network member, enroll to become a partner.
- Demonstrate your expertise by completing the Application Development Competency and/or the Cloud Platform Competency.
- Update your profile’s Specialties tags to include Xamarin, Android, iOS, Mobile Apps, and Mobile Applications Development so you’re discoverable as a Microsoft solution provider.

Listed by primary location, many partners provide services across borders.
https://docs.microsoft.com/en-us/xamarin/cross-platform/partners/?cid=kerryherger
2020-02-17T02:07:02
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
2018–03–28 Page amended with limited editorial review. Updated in 5.6. Updated in 2018.1.
https://docs.unity3d.com/es/2018.4/Manual/GIVis.html
2020-02-17T02:07:20
CC-MAIN-2020-10
1581875141460.64
[]
docs.unity3d.com
Agent Properties (General) Use this tab to obtain general information on the selected Exchange Database Agent. Client Name Displays the name of the client computer. iDataAgent/Agent Type Displays the name of the agent. Installed date Displays the date on which the agent was installed or upgraded on the client computer. Backup Type Specifies the preselected backup level. This determines which data is secured by an Exchange Database backup that is not a full backup. - Incremental specifies that each non-full Exchange Database backup secures only that data that has changed since the last backup of any type. - Differential specifies that each non-full Exchange Database backup secures all data that has changed since the last full backup. Exchange Administrator Account Displays the Exchange Administrator Account for the site in which this Exchange Server resides. Change Account Click and populate the resulting dialog box only if you changed the Exchange Administrator Account and it now differs from the one displayed in the Exchange Administrator Account field. Use VSS Specifies whether Microsoft's Exchange Writer will perform a full backup of the selected subclient using a VSS shadow copy. When cleared, disables Microsoft's Exchange Writer backups for the selected subclient. CCR Options A group of options that apply to for an Exchange Database Agent that uses Exchange 2007 servers that are configured for Cluster Continuous Replication (CCR). These options must have the Use VSS option enabled. - Backup from replica Specifies whether to back up the database from a CCR replica using VSS shadow copy. When this option is selected, backups performed for all subclients on this agent will use VSS to back up the replica database. 
- Backup on active node if passive node is unavailable Specifies whether to back up the database from the active node of a CCR cluster using VSS shadow copy, in cases where the passive node is not available (for example, failover will cause the passive node to become the active node, after which the passive node is unavailable). When this option is selected, backups performed for all subclients on this agent will use VSS to back up the database from the active node of a CCR cluster when the passive node is not available. Note: When this option is selected, the software attempts to run the backup from the active node if database replication is in a suspended state. Exchange Server Name Displays the name of the Exchange Server that is installed on the client computer. Use this space to modify this server name if the name displayed is incorrect (for example, not the same as the Client name or Host name). Exchange Version Click the available list to select the Exchange Server Version that is installed on the client computer. Copy Backup When selected, Copy Backup is enabled and log files will not be truncated. When cleared, Copy Backup is disabled and log files will be truncated. Description Use this field to enter a description about the entity. This description can include information about the entity's content, cautionary notes, and so on.
http://docs.snapprotect.com/netapp/v11/article?p=products/exchange_database/help/general_exchange.htm
2020-02-17T02:06:09
CC-MAIN-2020-10
1581875141460.64
[]
docs.snapprotect.com
Buildcat is a portable, lightweight, elegant render farm based on RQ and Redis. It runs on OSX, Linux, and Windows, includes integration with SideFX Houdini, and can be easily extended to support other tools. Documentation¶ - Design - Basic Setup - Advanced Setup - Integrations - Compatibility - Contributing - Release Notes - API Reference - Support
https://buildcat.readthedocs.io/en/stable/
2020-02-17T00:23:20
CC-MAIN-2020-10
1581875141460.64
[array(['_images/buildcat.png', '_images/buildcat.png'], dtype=object)]
buildcat.readthedocs.io
.. Ins 2.16(13) (13) Group, quasi-group or special class implications. No advertisement may state or imply, unless true, that prospective policyholders or members of a particular class of individuals become group or quasi-group members or are uniquely eligible for a special policy or coverage and will be subject to special rates or underwriting privileges or that a particular coverage or policy is exclusively for preferred risks, a particular segment of people, or a particular age group or groups. Ins 2.16(14) (14) Inspection of policy. Ins 2.16(14)(a) (a) An offer in an advertisement of free inspection of a policy or an offer of a premium refund shall not be a cure for misleading or deceptive statements contained in such advertisement. Ins 2.16(14)(b) (b) An advertisement which refers to the provision in the policy advertised regarding the right to return the policy shall disclose the time limitation applicable to this right. Ins 2.16(15) (15) Identification of plan or number of policies. Ins 2.16(15)(a) (a) When an advertisement refers to a choice regarding benefit amounts, it shall disclose that the benefit amounts provided will depend upon the plan selected and that the premium will vary with the amount of the benefits. Ins 2.16(15)(b) (b) When an advertisement refers to various benefits, all of which can be obtained only by purchasing 2 or more policies, it shall disclose that the benefits are provided only through a combination of such policies. Ins 2.16(16) (16) Use of statistics. Ins 2.16(16)(a) (a) An advertisement which sets out the dollar amounts of claims paid, the number of persons insured or other statistical information shall identify the source of the statistical information. No person subject to this section may use an advertisement unless it accurately reflects all of the relevant facts. No advertisement may contain irrelevant statistical data. 
Ins 2.16(16)(b) No advertisement may imply that the statistical information given is derived from the insurer's experience under the policy advertised unless true. The advertisement shall specifically so state if the information applies to other policies or plans.
Ins 2.16(16)(c) An advertisement which sets out the dollar amounts of claims paid shall also indicate the period during which such claims have been paid.

Ins 2.16(17) Claims. No advertisement may:
Ins 2.16(17)(a) Contain untrue statements with respect to the time within which claims are paid;

provides illustration formats, prescribes standards to be followed when illustrations are used, and specifies the disclosures that are required in connection with illustrations. The goals of this rule..

Office of the Commissioner of Insurance (Ins)
https://docs.legis.wisconsin.gov/code/admin_code/ins/2/16/20
TransferMode Enum

Definition
Indicates whether a channel uses streamed or buffered modes for the transfer of request and response messages.

C++/CLI: public enum class TransferMode
C#:      public enum TransferMode
F#:      type TransferMode =
VB:      Public Enum TransferMode

Inheritance: Enum

Fields

Examples

The following example sets the TcpTransportBindingElement.TransferMode property to Streamed through code:

TcpTransportBindingElement transport = new TcpTransportBindingElement();
transport.TransferMode = TransferMode.Streamed;
BinaryMessageEncodingBindingElement encoder = new BinaryMessageEncodingBindingElement();
CustomBinding binding = new CustomBinding(encoder, transport);

The following example sets the TcpTransportBindingElement.TransferMode property to Streamed through configuration:

<customBinding>
  <binding name="streamingBinding">
    <binaryMessageEncoding />
    <tcpTransport transferMode="Streamed" />
  </binding>
</customBinding>

Remarks

You can set the transfer mode on the BasicHttpBinding, NetTcpBinding, and NetNamedPipeBinding system-provided bindings using the transfer mode properties exposed on them. The mode can be set on the NetTcpBinding class, for example, by using the NetTcpBinding.TransferMode property.
https://docs.microsoft.com/en-us/dotnet/api/system.servicemodel.transfermode?view=netframework-4.7.2
How about testing the latest Microsoft software in an isolated and completely secure environment? Wouldn't it be great to be able to test new servers immediately, without formatting hard drives or dedicating one of your computers to the project? Now you can, with TechNet Virtual Labs.

Take part in these Virtual Labs and experience how the Microsoft Forefront line of business security products provides greater protection and control through integration with your existing IT infrastructure and through simplified deployment, management, and analysis.

Get hands-on with the key features and components of the 2007 Microsoft Office system and learn the benefits of deploying it within your organization. Walk through installation, configuration, and the new components around enterprise features administration, collaboration, and business intelligence, and experience common deployment and integration scenarios that can help you make better choices when planning your roll-out.

Step into these Virtual Labs and explore the new and updated features in the Windows 7 operating system, including AppLocker, BranchCache, BitLocker, and many others.

- Server Products
- Server Technologies
- Server Operating Systems
- Desktop Operating Systems
- Embedded Operating Systems
- Security Products and Technologies
- System Center Products
- Office System Products
- Subjects and Series

These Virtual Labs for developers give you hands-on experience with Microsoft's programming tools and technologies. Explore more ways to evaluate Microsoft products.
https://docs.microsoft.com/pt-br/previous-versions/bb467605(v%3Dmsdn.10)
The Web Simulator has several features for developing Web applications. The Web Simulator provides the following panel operations:

- Expand/Collapse: Each panel can be opened or closed by clicking the small arrow on the left side of the panel bar.
- Reorder: Each panel can be moved and reordered by dragging the items on the drag area on the right side of the panel bar.
- Show/Hide: Each panel can be displayed or hidden by clicking the panel-setting button on the right side of the application address bar.

The Web Simulator has the following panels, which allow you to control the simulation conditions of various device aspects.

In the Orientation and Zooming panel, you can switch the orientation between the portrait and landscape modes. If your application has subscribed to the orientation change event, it receives the event and the subscribed event handler is invoked. You can also set the zoom level of your application to view specific areas of the application. Zooming is a visual aid and does not trigger application notifications.

Figure: Orientation and Zooming panel (mobile app on the left, wearable on the right)

The System Summary panel displays generic information and settings about the application, system, device, and platform.

Figure: System Summary panel (mobile app on the left, wearable on the right)

The Geolocation panel contains location-related settings. You can set the local time zone to test whether your application reacts properly when the target device is located in different geographical areas.

Figure: Geolocation panel

The panel also provides an input area to configure geographical data being sent from the device. Additionally, a map is displayed and updated in accordance with the changing of data. To simulate a custom, multi-point route:

The Application Configuration panel displays a graphical representation of the config.xml file. You can use it to ensure the validity of your application configuration.
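An application can observe the orientation switch described above by subscribing to the standard orientationchange event. The sketch below is a minimal illustration; the window and screen objects are hypothetical mocks so it can run outside a browser, and on a real device or in the simulator the globals provided by the web runtime are used instead.

```javascript
// Hypothetical mocks of window/screen so this sketch runs outside a browser.
var screen = { orientation: { type: 'portrait-primary' } };
var handlers = {};
var window = {
  addEventListener: function (name, fn) { handlers[name] = fn; }
};

var lastOrientation = null;

// Subscribed handler: invoked when the simulator switches orientation.
function onOrientationChanged() {
  lastOrientation = screen.orientation.type;
}
window.addEventListener('orientationchange', onOrientationChanged);

// Simulate the Orientation panel rotating the device to landscape.
screen.orientation.type = 'landscape-primary';
handlers['orientationchange']();
console.log(lastOrientation);
```

Zoom-level changes, by contrast, deliver no such event, consistent with the note above that zooming is a purely visual aid.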
For more information on the configuration details, see the W3C/HTML5 specifications.

Figure: Application Configuration panel

The Sensors panel provides slide bars to configure the ambient, accelerometer, and magnetic field sensors. To change the accelerometer value, either drag the simulator image, or enter a degree value along each axis. The following buttons can be used to simulate the accelerometer sensor:

Figure: Accelerometer sensor (mobile app on the left, wearable on the right)

To set the magnetic field, enter the X, Y, and Z axis values.

Figure: Accelerometer and gyro sensors

Note: If the computer does not fully support WebGL™, the simulated device in the Sensors panel looks like in the following figure.

Figure: Sensor without WebGL™

The Packages and Applications panel provides a simulated packages and applications management center on a device. It lists available and installed packages and applications on a device. You can use the Packages and Applications panel to verify created operations and operation details.

Figure: Packages and Applications panel

You can receive notifications of changes in the list of installed packages. The setPackageInfoEventListener() method of the PackageManager interface (in mobile, wearable, and TV applications) registers an event listener for changes in the installed packages list. To unsubscribe the listener, use the unsetPackageInfoEventListener() method. You can use the PackageInformationEventCallback interface (in mobile, wearable, and TV applications) to define listeners for receiving notifications.
Learning to receive notifications when the list of installed packages changes allows you to manage device packages from your application:

1. Define the event handlers for different notifications using the PackageInformationEventCallback listener interface:

   tizen.package.setPackageInfoEventListener(packageEventCallback);

2. To stop receiving notifications, use the unsetPackageInfoEventListener() method of the PackageManager interface:

   tizen.package.unsetPackageInfoEventListener();

A Sample Package is preinstalled in the simulator and contains 2 sample applications: Tizen dialer for making phone calls, and Tizen sender for sending SMS messages. Many sample applications, such as CallLog, use the Tizen Application API to invoke these service applications. Since the simulator allows you to run only 1 application at a time, the Application Module Message window is available, which can provide return data for the success callback and simulate application launch failure.

The following sample code demonstrates how to define an application control and invoke the service provided by the Tizen sender application. You can use the Application Module Message window to simulate the success value for the success callback or an error message for the error callback.

var appControl = new tizen.ApplicationControl('', 'sms:' + phoneNumber);
tizen.application.launchAppControl(appControl, null,
    function() { console.log('launch app service ...'); },
    function(e) { /* Error handling */ },
    {
        onsuccess: function() { console.log('Message service launch success'); },
        onfailure: function(er) { /* Error handling */ }
    });

Figure: Providing application callback data

The Web Simulator does not have a home screen. Therefore, when the application.exit() method is called, you cannot navigate to another application or to the home screen. In this situation, a message is displayed stating that the application tried to exit and can be launched again.
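The packageEventCallback object referenced in the listener steps above is not defined on this page. A minimal sketch of one possible shape is below; the tizen object is a hypothetical mock so the flow runs outside the simulator, and on a device the global tizen.package object provides these methods. The oninstalled/onupdated/onuninstalled handler names follow the PackageInformationEventCallback interface.

```javascript
// Hypothetical mock of tizen.package so this sketch runs outside the simulator.
var tizen = {
  package: {
    _listener: null,
    setPackageInfoEventListener: function (cb) { this._listener = cb; },
    unsetPackageInfoEventListener: function () { this._listener = null; }
  }
};

var log = [];

// Handlers named per the PackageInformationEventCallback interface.
var packageEventCallback = {
  oninstalled: function (packageInfo) { log.push('installed: ' + packageInfo.id); },
  onupdated: function (packageInfo) { log.push('updated: ' + packageInfo.id); },
  onuninstalled: function (packageId) { log.push('uninstalled: ' + packageId); }
};

tizen.package.setPackageInfoEventListener(packageEventCallback);

// Simulate the panel installing and removing a package
// (direct calls on _listener are test helpers, not real APIs).
tizen.package._listener.oninstalled({ id: 'org.example.sample' });
tizen.package._listener.onuninstalled('org.example.sample');

tizen.package.unsetPackageInfoEventListener();
console.log(log.join('; '));
```

In the simulator, the install/uninstall actions on the Packages and Applications panel would drive these handlers instead of the direct calls shown here.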
Figure: Launch an application again

In the Communications panel, you can handle calls, messages, and the push service.

Calls

The Calls tab provides controls for simulating incoming calls made to the application. The calls can be tracked by call history-related methods using the Tizen Call History API.

Figure: Calls tab

Click Call to display the calling screen. Click Answer to simulate a received call, and Ignore to simulate a rejected call.

Figure: Calling screen

Messages

The Messages tab provides controls for simulating SMS, MMS, and email message exchange between the panel and a target application. To send a message from the panel to the application: The application receives messages using the Tizen Messaging API. The Message Thread section shows the message history of the current session.

Figure: Messages tab

Push Service

The Push tab provides controls for delivering push notifications to your application. The applications table (on the Packages and Applications panel) lists registered applications for receiving push notifications and connectivity status. If an application is connected, the push service sends the push notification data directly to the application. If an application is not connected, the push service posts a UI notification on the Notification panel.

For the application to receive push messages, it has to register itself with the tizen.push.registerService() method. If the registration is successful, a red pause button is shown in the Application section under Status. During this status, notification messages pushed by the service server are posted on the Notification panel.

Figure: Registered for the push service

After the registration, the application must connect to the push service with the tizen.push.connectService() method. When the application is connected, a callback provided by the application is called whenever a notification message arrives.
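The register-then-connect sequence described above can be sketched as follows. The tizen.push object here is a hypothetical mock so the flow runs outside a device; only registerService() and connectService() mirror the real Tizen Push API (with simplified signatures), and the _deliver() helper stands in for the Push tab delivering a message.

```javascript
// Hypothetical mock of tizen.push so this sketch runs outside a device.
var tizen = {
  push: {
    _cb: null,
    registerService: function (appControl, onSuccess) { onSuccess('reg-id-123'); },
    connectService: function (onNotification) { this._cb = onNotification; },
    _deliver: function (msg) { this._cb(msg); } // test helper, not a real API
  }
};

var received = [];

// 1. Register the application with the push service.
tizen.push.registerService(null, function (registrationId) {
  // 2. Once registered, connect so notifications reach our callback.
  tizen.push.connectService(function (notification) {
    received.push(notification);
  });
});

// Simulate the Push tab delivering a message to the connected application.
tizen.push._deliver({ alertMessage: 'Hello from the Push tab' });
console.log(received[0].alertMessage);
```

If the application were not connected, the simulator would instead post the message as a UI notification on the Notification panel, as noted above.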
Figure: Connected to the push service

To push a message from the panel to the application: The application receives push notifications using the Tizen Push API.

Figure: Push tab

The Network Management panel is used to manage network capabilities, such as Wi-Fi, cellular network (2G, 3G, and 4G), NFC, Bluetooth, and bearer selection.

Figure: Network Management panel

You can also set additional parameters for the NFC and Bluetooth functionalities, such as NFC tag and target type, Bluetooth adapter information, and the simulated devices.

Figure: NFC parameters

Figure: Bluetooth parameters

The Bearer Selection section provides network bearer selection management by listing supported network devices and their current availability status. You can request and release specific network connections from this section.

Figure: Network bearer selection

Your application can manage network devices and network status using the Tizen NFC, Bluetooth, and Network Bearer Selection APIs.

The Power Manager panel provides controls for managing the state of the battery and power resources.

Figure: Power Manager panel

The BATTERY section simulates the device battery level. Your application can retrieve the current battery status using the Tizen System Information API.

The Download panel allows you to create a simulated download object with custom size, MIME type, and download speed. All simulated download objects support start, cancel, pause, and resume operations, and provide a status feedback mechanism. You can use the simulated download object created by the panel to test various conditions for your application. The panel contains 2 predefined simulated download objects: and. When an object is selected from the drop-down list, its details are displayed at the bottom half of the panel. The panel also allows you to add, remove, and update download objects. Details, such as URL, MIME type, file size, and speed, are configurable.
The following sample code demonstrates how to start the download process and set a listener callback to monitor the status of the download. By adjusting the parameters of the download object, you can verify that your application behaves correctly in different scenarios.

var request = new tizen.DownloadRequest('');
var downloadId = tizen.download.start(request);
tizen.download.setListener(downloadId, listener);

Figure: Download panel

The Notification panel provides a notification center administrating system notifications. As the Simulator has no real desktop UI components, such as a status bar or notification tray, the panel serves as the final rendering place of all the notifications. You can easily verify that the notification details you created with the Tizen Notification API are correct.

Figure: Notification panel with empty notification

The following sample code demonstrates how to create a status notification. When it is posted with the post() method, the details of the notification are displayed on the panel, as shown in the figure below.

var notification = new tizen.StatusNotification('PROGRESS', 'Notification Sample', {
    content: 'sample content',
    iconPath: 'icon.png',   /* hypothetical path; the original value was lost in extraction */
    soundPath: 'sound.mp3', /* hypothetical path; the original value was lost in extraction */
    vibration: true,
    progressValue: 67
});

Figure: Notification panel with a notification
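The listener passed to tizen.download.setListener() above is not defined on this page. The sketch below shows one possible shape for it, using the onprogress/onpaused/oncanceled/oncompleted/onfailed handler names from the Tizen Download API; the tizen object, the download id, and the URL are hypothetical mocks so the sketch runs outside the simulator.

```javascript
// Hypothetical mock of tizen.download so this sketch runs outside the simulator.
var tizen = {
  DownloadRequest: function (url) { this.url = url; },
  download: {
    _listener: null,
    start: function (request) { return 1; },             // returns a download id
    setListener: function (id, listener) { this._listener = listener; }
  }
};

var events = [];

// Handler names follow the Tizen Download API listener callback.
var listener = {
  onprogress: function (id, receivedSize, totalSize) {
    events.push('progress ' + receivedSize + '/' + totalSize);
  },
  onpaused: function (id) { events.push('paused'); },
  oncanceled: function (id) { events.push('canceled'); },
  oncompleted: function (id, path) { events.push('completed: ' + path); },
  onfailed: function (id, error) { events.push('failed'); }
};

var request = new tizen.DownloadRequest('http://example.com/file.zip'); // hypothetical URL
var downloadId = tizen.download.start(request);
tizen.download.setListener(downloadId, listener);

// Simulate the Download panel driving the transfer
// (direct calls on _listener are test helpers, not real APIs).
tizen.download._listener.onprogress(downloadId, 512, 1024);
tizen.download._listener.oncompleted(downloadId, '/downloads/file.zip');
console.log(events.join('; '));
```

In the simulator, changing the download object's size and speed on the panel changes how often and with what values onprogress fires, which is what lets you test slow-network and interrupted-download conditions.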
https://docs.tizen.org/application/tizen-studio/web-tools/web-simulator-features
Defines the properties that can be set when forking a Repository. To reduce backwards compatibility issues as new properties are added over time, instances of this class may only be created using its RepositoryForkRequest.Builder.

The following property is required:
- getParent(): The repository to fork

The following properties are optional:
- getName(): Defaults to the parent repository's name if not explicitly specified
- getProject(): Defaults to the forking user's personal project if not explicitly specified
- isForkable(): Defaults to true, meaning the fork will itself be forkable, if not explicitly specified
- isPublic(): Defaults to false, meaning the fork will not be public, if not explicitly specified

The fork's slug, which is used in URLs (both for the browser and when cloning), will be generated from the provided getName(). Both the name and the generated slug must be unique within the project or the fork cannot be created.

See also: fork(RepositoryForkRequest)
https://docs.atlassian.com/bitbucket-server/javadoc/5.16.0/api/reference/com/atlassian/bitbucket/repository/RepositoryForkRequest.html
Whenever you log in to the system, you can find a LiveChat button at the bottom-right corner of the page. You can leave a message with this chat system, and the EnGenius support team usually responds within minutes.

The EnGenius Support Passcode is used to verify users' identities for security purposes. If you run into trouble configuring your networks or operating your cloud configuration, click the Help button at the top-right corner of the menu, choose Remote Support, and click Generate PASSCODE. You can then send the generated passcode to the support team via LiveChat. With the passcode, the support team can access your account temporarily to diagnose and resolve the issues you've raised.

Note that the generated PASSCODE automatically expires after a period of time. The support team won't be able to access your resources once the PASSCODE has expired.
https://docs.engenius.ai/engenius-cloud/get-remote-support
Quoc D. Bui's Blog
Business Platform Division, Customer Advisory (Technology) Team

BizTalk Server – distinguished field
The distinguished field is used to expose the value of a node of a message instance to BizTalk...
Author: Quoc Bui | Date: 11/30/2010

BizTalk Patterns – part 2
I recently posted another common design pattern. It is called the Sync-Async pattern. This pattern...
Author: Quoc Bui | Date: 11/25/2010

BizTalk Patterns
I recently presented at the ACSUG September Meeting: BizTalk Solution Patterns with Quoc Bui from...
Author: Quoc Bui | Date: 09/29/2010

BizTalk patterns – the parallel shape
The parallel shape in BizTalk Server (v2004 to v2009) is widely misinterpreted to be multi-threaded...
Author: Quoc Bui | Date: 10/16/2009
https://docs.microsoft.com/en-us/archive/blogs/quocbui/