On Friday, April 3, a new release of the Apigee hybrid UI is available.
New features and enhancements
This section describes the new features and enhancements in this release.
UI support for OASValidation policy (Beta)
The OASValidation (OpenAPI Specification Validation) policy (Beta) enables you to validate an incoming request or response message against an OpenAPI 3.0 Specification (JSON or YAML). For more information about the policy, see OASValidation policy (Beta). For information about attaching the policy using the UI, see Attaching and configuring policies in the UI.
Bugs fixed
The following bugs are fixed in this release. This list is primarily for users checking to see if their support tickets have been fixed. It's not designed to provide detailed information for all users.
When decoupling services via facts, it is vitally important that the consuming party understands the facts it is interested in. This makes evolution a challenge: as soon as the publisher starts publishing a particular fact type in an incompatible format, the consumer will break. This leads to exactly the kind of complex deployment dependencies that we tried to avoid in the first place.
In order to avoid this, the most important advice is:
make sure new fact versions are always downwards compatible
and
make sure you tolerate unknown properties when processing facts
If, for instance, the new fact version only adds properties, then the 'tolerant reader' can kick in and ignore the unknown ones. See Tolerant Reader.
Sometimes, however, you need to change a fact's schema in terms of structure. We assume here that you use a schema/transformation registry, as this feature is disabled otherwise.
In the above scenario, the publisher wants to start publishing facts with the updated structure (version 2) while the consumer that expects the agreed upon structure (version 1) should continue to work.
For this to work, there are three prerequisites:
First, the publisher must declare the (new) version of the facts it publishes so that the correct schema can be chosen for validation. This would not work otherwise anyway, because we assume version 1 and version 2 to be incompatible; in this case, the schema chosen would be version 2.
Second, the consumer must express the version it expects: when it subscribes to a particular fact type, it also provides the version it wants to receive (1 here).
Third, the registry must contain the necessary transformation code. The registry takes small JavaScript snippets that can convert, for instance, a version 2 fact payload into a version 1 payload.
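As a purely illustrative sketch (the exact snippet format expected by your registry may differ; see the example registry referenced below), such a downcast transformation could look like this:

```javascript
// Hypothetical 2 -> 1 downcast: version 2 split "name" into two fields,
// so we rebuild the single "name" property that version 1 consumers expect.
function transform(event) {
  event.name = event.firstName + ' ' + event.lastName;
  delete event.firstName;
  delete event.lastName;
}
```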
FactCast will build transformation chains if necessary (for example from 4-3, 3-2 and 2-1 in order to transform from version 4 to version 1). Every non-existent transformation is assumed compatible (so no transformation is necessary).
If needed, you can also add a direct 4-1 transformation to the registry to do the conversion in one step. Beware, though, that you will not necessarily benefit from this in terms of performance.
If there are many possible paths to transform from an origin version to the target version, the shortest always wins. If there are two equally long paths, the one that uses the bigger shortcut sooner wins.
Another use case is that, over time, the publisher published 3 different versions of a particular fact type, and you (as a consumer) want to get rid of the compatibility code dealing with the older versions.
This works the same way as downcasting: just express your expectation by providing a version in your subscription, and FactCast will transform all facts into this version using the necessary transformations from the registry. While for downcasting missing transformations are considered compatible, upcasting will fail if there is no transformation code to the requested version.
If transformation is not possible due to missing required code snippets in the registry or due to other errors, FactCast will throw an exception.
Obviously, transformation via JavaScript running in a VM brings considerable overhead. (This might get better with GraalVM, which is not yet supported.)
In order not to do unnecessary work, factcast will cache the transformation results, either in memory or persistently.
See the Properties-Section on how to configure this.
Note: Whenever a transformation is not possible, factcast will just throw an appropriate exception.
For an example, see the example registry
Remember that problems in the registry can cause errors at runtime in FactCast, so you should validate its syntactical correctness. This is where the CLI tool will help.
FLOW for Wallets & Custodians
How to integrate your wallet software with FLOW
Creating an Account
A user needs a Flow account in order to receive, hold and send FLOW tokens. The accounts & keys documentation provides a detailed overview of how accounts work on Flow.
You can create an account using templates and helper code from one of the Flow SDKs:
Receiving FLOW Deposits
Every Flow account supports the FLOW token by default. Once an account is created, it is already provisioned to receive FLOW deposits from other users.
FLOW, like any other
FungibleToken on Flow, is stored in a special resource called a
FungibleToken.Vault.
Every new account is created with an empty FLOW vault stored at the
/storage/flowTokenVault storage path.
let vault = account.borrow<&FlowToken.Vault>(from: /storage/flowTokenVault)
Conceptually, a vault is like a mailbox with a lock. Anybody can deposit tokens
but only the account holder can withdraw them. This functionality is made possible by
resource capabilities in Cadence. Each account publishes a
FungibleToken.Receiver interface
that points to its FLOW vault. The receiver is the mail slot; it allows others to
deposit FLOW into a vault without stealing what's inside.
Here's how you deposit FLOW into an account:
let receiver = account
    .getCapability(/public/flowTokenReceiver)!
    .borrow<&{FungibleToken.Receiver}>()
    ?? panic("Could not borrow FungibleToken.Receiver reference")

receiver.deposit(from: <-senderVault)
Detecting Deposits
The
FlowToken contract emits a
FlowToken.TokensDeposited event whenever tokens
move between accounts.
pub event TokensDeposited(amount: UFix64, to: Address?)
You can query for this event to detect when tokens are deposited into a user's account.
TODO: Link to event querying docs
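A rough sketch of such a query using the Go SDK (also used later on this page) follows; the access node endpoint, the block range, and the FlowToken contract address embedded in the event type string (0x1654653399040a61, assumed to be mainnet) are assumptions you must adapt to your network and SDK version:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/onflow/flow-go-sdk/client"
	"google.golang.org/grpc"
)

func main() {
	ctx := context.Background()

	// Assumed mainnet access node endpoint -- replace as needed.
	flowClient, err := client.New("access.mainnet.nodes.onflow.org:9000", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}

	// Event type format: A.<contract address>.FlowToken.TokensDeposited
	query := client.EventRangeQuery{
		Type:        "A.1654653399040a61.FlowToken.TokensDeposited",
		StartHeight: 12345000, // example block range
		EndHeight:   12345100,
	}

	blocks, err := flowClient.GetEventsForHeightRange(ctx, query)
	if err != nil {
		log.Fatal(err)
	}

	for _, block := range blocks {
		for _, ev := range block.Events {
			// ev.Value is the decoded Cadence event payload (amount, to).
			fmt.Println("TokensDeposited:", ev.Value)
		}
	}
}
```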
Receiving FLOW from an ICO
A portion of the initial FLOW token supply will be distributed directly to new and existing backers who participate in the initial coin offering (ICO) of FLOW. Tokens distributed through an ICO are subject to a lockup period, meaning they can't be sold, transferred or traded until sufficient time has passed.
Although locked tokens can't be liquidated, they can still be used for staking. Any staking rewards accrued from locked tokens are deposited into the rewardee's account as unlocked tokens.
FLOW.ICO vs FLOW
It is the responsibility of the custodian to ensure that FLOW received from an ICO event (FLOW.ICO) is not liquidated before the legal lockup period has passed. In order to ensure that this does not happen, it is important to store FLOW.ICO tokens separately from unlocked FLOW tokens.
To achieve this separation, a custodian should provision a new token vault that follows this standard:
FLOW.ICO Token Vault
- Type:
FlowToken.Vault
- Location:
/storage/lockedFlowTokenVault
Creating the FLOW.ICO Vault
The following Cadence transaction creates an empty FLOW token vault and stores it at the standard FLOW.ICO storage path. This transaction assumes that the account has already been created.
import FungibleToken from 0xFUNGIBLE_TOKEN_ADDRESS
import FlowToken from 0xFLOW_TOKEN_ADDRESS

transaction {
    prepare(signer: AuthAccount) {
        // Create an empty FlowToken Vault and store it
        signer.save(<-FlowToken.createEmptyVault(), to: /storage/lockedFlowTokenVault)

        // Create a public capability to the Vault that only exposes
        // the deposit function through the Receiver interface
        signer.link<&FlowToken.Vault{FungibleToken.Receiver}>(
            /public/lockedFlowTokenReceiver,
            target: /storage/lockedFlowTokenVault
        )

        // Create a public capability to the Vault that only exposes
        // the balance field through the Balance interface
        signer.link<&FlowToken.Vault{FungibleToken.Balance}>(
            /public/lockedFlowTokenBalance,
            target: /storage/lockedFlowTokenVault
        )
    }
}
Below is a variation of the above transaction that provisions the FLOW.ICO vault at the time of account creation.
import FungibleToken from 0xFUNGIBLE_TOKEN_ADDRESS
import FlowToken from 0xFLOW_TOKEN_ADDRESS

transaction {
    prepare(signer: AuthAccount) {
        let newAccount = AuthAccount(payer: signer)

        newAccount.save(<-FlowToken.createEmptyVault(), to: /storage/lockedFlowTokenVault)

        newAccount.link<&FlowToken.Vault{FungibleToken.Receiver}>(
            /public/lockedFlowTokenReceiver,
            target: /storage/lockedFlowTokenVault
        )

        newAccount.link<&FlowToken.Vault{FungibleToken.Balance}>(
            /public/lockedFlowTokenBalance,
            target: /storage/lockedFlowTokenVault
        )
    }
}
Receiving a FLOW.ICO Deposit
All FLOW tokens deposited from an ICO event will be automatically routed to the FLOW.ICO vault
stored at the
/storage/lockedFlowTokenVault storage path. If an account does not contain
a vault at this path, it cannot receive ICO deposits.
Getting the FLOW.ICO Balance
See the next section for an example of how to query the balance of a
FlowToken.Vault instance.
Getting the Balance of an Account
From Cadence
Similar to the token receiver, each account publishes a
FungibleToken.Balance capability
that allows anybody to read the balance of an account. This allows Cadence programs
to fetch the balance of an account directly in code.
let balanceRef = account
    .getCapability(/public/flowTokenBalance)!
    .borrow<&FlowToken.Vault{FungibleToken.Balance}>()
    ?? panic("Could not borrow FungibleToken.Balance reference")

log(balanceRef.balance)
The above code can be executed as part of a read-only Cadence script.
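For instance, a complete read-only script built around this snippet might look like the following sketch; the import addresses shown are assumed mainnet addresses for FungibleToken and FlowToken, so substitute the addresses for your network:

```cadence
// Sketch only: returns the FLOW balance of the given account.
import FungibleToken from 0xf233dcee88fe0abe   // assumed mainnet address
import FlowToken from 0x1654653399040a61       // assumed mainnet address

pub fun main(address: Address): UFix64 {
    let account = getAccount(address)

    let balanceRef = account
        .getCapability(/public/flowTokenBalance)!
        .borrow<&FlowToken.Vault{FungibleToken.Balance}>()
        ?? panic("Could not borrow FungibleToken.Balance reference")

    return balanceRef.balance
}
```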
From the Access API
The FLOW Access API makes it easy to query an account's balance without writing any Cadence code.
The GetAccount RPC method includes a
balance field, which holds the FLOW token balance
for the requested account.
import (
    "github.com/onflow/flow-go-sdk"
    "github.com/onflow/flow-go-sdk/client"
)

func main() {
    flowClient, _ := client.New(accessAPIHost)

    account, _ := flowClient.GetAccount(ctx, address)

    fmt.Println(account.Balance)
}
Sending FLOW
Below is an example of a transaction that transfers FLOW from one account to another.
import FungibleToken from 0xFUNGIBLE_TOKEN_ADDRESS
import FlowToken from 0xFLOW_TOKEN_ADDRESS

transaction(amount: UFix64, to: Address) {

    // The FungibleToken.Vault resource that holds the tokens to be transferred
    let sentVault: @FungibleToken.Vault

    prepare(sender: AuthAccount) {
        // Get a reference to the sender's stored vault
        let vault = sender
            .borrow<&FlowToken.Vault>(from: /storage/flowTokenVault)
            ?? panic("Could not borrow reference to the owner's Vault!")

        // Withdraw tokens from the sender's stored vault
        self.sentVault <- vault.withdraw(amount: amount)
    }

    execute {
        // Get the recipient's public account object
        let recipient = getAccount(to)

        // Get a reference to the recipient's FungibleToken.Receiver
        let receiver = recipient
            .getCapability(/public/flowTokenReceiver)!
            .borrow<&{FungibleToken.Receiver}>()
            ?? panic("Could not borrow receiver reference to the recipient's Vault")

        // Deposit the withdrawn tokens in the recipient's receiver
        receiver.deposit(from: <-self.sentVault)
    }
}
This transaction template is available for use in our SDKs:
- Transfer Tokens with the JavaScript SDK
Staking FLOW
The FLOW staking documentation outlines the steps a custodian can take to support staking through a trusted node operator.
Does Prefix upload any of my code, assemblies, or other data to the Internet?
No! Prefix does not collect any data, logs, code, assemblies, etc, about your apps and upload it to the Internet. Prefix runs as a Windows Service on the user’s workstation. It works by collecting data from profiling APIs. This data is evaluated locally and shown in the Prefix UI. This performance data never leaves the user’s workstation and is never uploaded to the Internet.
Does Prefix connect to Stackify for anything?
- The email registration process creates a unique user account for each Prefix user
- Hourly ping checks occur from Prefix to Stackify to look for a new version of Prefix
- Optionally download Stackify APM stats for comparison purposes. (More below)
If the user has a Stackify APM account, they will be prompted to optionally link Prefix to their Stackify account to unlock some additional features. Stackify-linked accounts will try to download performance stats from Stackify's APM to show comparison data within Prefix. No local Prefix data is uploaded in this process except for the app name and URL being compared. Note: This feature can be disabled within Stackify's client account settings, if desired.
Why does Prefix require me to put in an email address?
- We want to keep you up to date on Prefix updates and tips.
- We also match your email address to our user list to potentially unlock additional features available to customers who also utilize Stackify’s APM products.
Stackify will not sell your email address or spam you.
What “usage data” does Prefix collect?
- Every hour Prefix pings Stackify’s servers to check for new versions and report that it is still installed.
- Prefix utilizes 3rd party tools to anonymously track basic usage data. Certain product and UI events are tracked like page views, enabling and disabling the profiler, etc to understand how the product is used and how it can be improved.
Why does Prefix access my IIS and web config files?
Prefix parses web config files to find the appSetting for Stackify.AppName to show a specific name for your apps within Prefix. Prefix does not store or use any other type of configuration data. It does not use or store any sort of credentials, database info, etc.
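For reference, that appSetting lives in the standard appSettings section of a web.config file; a minimal, purely illustrative example (the app name value is hypothetical) looks like this:

```xml
<configuration>
  <appSettings>
    <!-- Hypothetical value: the display name Prefix shows for this app -->
    <add key="Stackify.AppName" value="My Web App" />
  </appSettings>
</configuration>
```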
(Author, Matt Gallagher)
Like all wars before it, the war in Iraq has spawned its own literature. In Vietnam the war produced the likes of Philip Caputo and Tim O’Brien. Today as our current conflict has morphed into the war against ISIS, writers like Matt Gallagher have come on the scene with novels like YOUNGBLOOD, which takes the reader inside a platoon in the town of Ashuriyah, outside of Baghdad, when the optimism spawned by the “surge” gave way to skepticism about the war, and as we know the rise of ISIS and the American withdrawal in 2011. When stationed in Iraq, Gallagher began writing in his own blog from inside the war that attracted a large following. Military authorities eventually shut down Gallagher’s blog, but his new novel has allowed him to express many of the feelings and emotions of his characters, many of which, I am certain, are composites of the men he served with.
The narrator of YOUNGBLOOD is Lieutenant Jack Porter, and through his voice Gallagher expresses the view that "so little of Iraq had anything to do with guns, bombs, or jihads." The novel portrays a war that encompasses the locals and their lives, as they try and cope with a form of hell that has destroyed their way of life. It comes across as a confusing and angry conflict which continues to this day with little understanding on the part of the people who are responsible for the mess that Iraq has become, as many of them are now calling for the United States to dispatch even more troops to the region. The American mission after years in Iraq had evolved into "clear, hold, and build," a motto that was extremely difficult to implement successfully.
(Author, Matt Gallagher inside a Stryker vehicle in Iraq)
Porter faces a number of obstacles as a platoon commander. First, he had to deal with bribery and the overall corruption that existed. American military payments were made to numerous groups including sheiks, both Sunni and Sh’ia, and militia leaders in order to combat al-Qaeda, and other groups to obtain their loyalty. Further payments went to Iraqi families that were victims of collateral damage, even more money flowed to projects to rebuild Iraq’s infrastructure, but it seemed that little was being built. Porter’s second problem was Sergeant Daniel Chambers, a military lifer who had already served tours earlier in the war. Chambers had been foisted on Porter by his superiors and his demeanor and discipline became a threat to Porter’s command which undermined his relationship with his men.
Once Gallagher introduces his main characters we learn that Chambers may have been involved in the killing of two unarmed Iraqi citizens who were mistaken for jihadis the military was looking for. Porter wants to prove that Chambers had violated the rules of engagement and begins to investigate the shooting in the hopes of getting rid of the ornery sergeant. A second major plot line is Porter’s relationship with Rana, a local sheik’s daughter. Rana, who was involved with an American soldier who converted to Islam, and wants to marry her, is killed. It is left for Porter to pick up the pieces. As the novel evolves, Gallagher integrates past events as a means of trying to understand the present. His relationship with his brother Will, a West Point graduate who served in Iraq, and his girlfriend Marissa, who seemed to have drawn away from him, play on Porter’s mind throughout.
The reader acquires a strong sense of what it is like to be a soldier in Iraq. The fear of death, having the Stryker vehicle you are riding on set off an IED. The friendships that result in sick jokes, games and other amusements that fill the void of limited down time. The exhaustion of carrying 60 pounds of body armor and weapons during patrols or having to maintain a sharp focus for long periods as they try and survive. Gallagher writes with verve and humor as he tries to convey Porter’s experiences, who is fully aware that no one will understand him, not his brother Will or his girlfriend Marissa back in the United States. Porter must live with his memories as he faces the reality of war each day, a war where he exhibits empathy for the Iraqi people he comes in contact with, and the men he commands. The end result is that Gallagher portrays the horror and inequities of war, and how it has eroded the fabric and foundation of Iraqi society. After one puts the book down one wonders what will be the final chapter for Iraq as a nation, as it continues to struggle with sectarianism, a corrupt political system, the constant threat of violence, and the legacy of the American invasion.
(Author, Matt Gallagher serving in Iraq)
You can create a new and empty file from the menu under File → New....
(Screenshot: the File New dialog.)
Hey, 👋 I'm Felipe Lima, and I'm the creator of ScaffoldHub.
I offer custom development services in case you need to expand the built-in ScaffoldHub functionalities.
I only work with development tasks, meaning that I don't get involved in high-level business planning.
It can be a complete project, a module, or just a feature; if you have the specifics of what you need, I can build it.
I have plenty of experience with web development, especially with ScaffoldHub, which I've been iterating on and improving for about four years now.
The custom development workflow works like this:
You send me the specifics of a feature.
I send you the quote for the time and cost.
You pay half up-front.
I deliver it to you.
You pay the other half.
We iterate through this process until all the features you need are complete.
In case you have questions and want me to explain live the architecture of ScaffoldHub in detail, I'm here to help. We make a call, you invite your team, and I explain and answer everything you need to know.
Both development and consulting are based on a $75/hour price.
Please email me at [email protected].
Thank you for choosing ScaffoldHub!
Through the search context panel, you can locate transformations to specify and add at the current location in your recipe.
You can search for transformations to add in any of the following ways:
...
If you are using a 64-bit machine you can install PowerServer Mobile (32-bit) to the 64-bit OS without any special configurations. But if you have a previous version of Appeon PowerServer (64-bit) already installed on this machine then you must uninstall it first.
Step 1: Open IIS Manager, right click the top node (not the website node) in the treeview and select Stop from the popup menu. This will stop the entire IIS.
Step 2: Close any opened window, especially PowerBuilder and IIS Manager.
Step 3: Uninstall all of the Appeon components including PowerServer, PowerServer Toolkit, and PowerServer Help. You will need to uninstall these components one by one.
Step 4: Verify Appeon is cleanly uninstalled by the following two steps:
Double check the Control Panel\Programs\Programs and Features and make sure no Appeon component is listed.
Open a command prompt window and then type regedit<Enter>. Double check that no ADT or ASN keys are listed under HKEY_LOCAL_MACHINE\SOFTWARE\Appeon\<version_number>.
Step 5: Clear the Internet Explorer cache and temporary files.
Step 6: Delete the entire Appeon folder from C:\Users\User_Name\AppData\Roaming\.
Step 7: Delete all the Appeon application folders from the IIS Web root. For example, under C:\inetpub\wwwroot\ at minimum you should delete the following folders: appeon, appeon_acf_demo, appeon_code_examples, pet_world, sales_application_demo.
Step 8: Restart the machine.
Step 9: Start IIS by right-clicking the top node in the treeview in the IIS Manager and selecting Start from the popup menu.
After that, you can proceed to install PowerServer Mobile by following steps in Task 2: Install PowerBuilder & PowerServer.
Now that you have successfully deployed the application, you are ready to run the application on a supported mobile device (iPad) with Appeon Workspace installed.
Install Appeon Workspace.
Step 1: Make sure your mobile device .
Configure the network connection of the mobile device (iPad).
Make sure that the Windows PC and the iPad are connected to the same Wi-Fi router.
Tap the AppeonMobile icon on your iPad to launch Appeon Workspace.
Tap the New icon to the left of the title bar.
In the App URL text box, enter the application URL for the tutorial application in this format:. For example, if your IIS domain is and you specified tutorial in the PowerServer Toolkit configuration as the Web folder name then the URL would be.
Tap the Test Connection button to test the server connections. If successful please proceed to Step 7, otherwise please enter the correct URL.
Tap the Back icon on the title bar to save the information and return to the main screen of the Appeon Workspace.
Once you return to the main screen of the Appeon Workspace, the downloading and installation process of the tutorial application occurs automatically.
After the installation process has completed, tap the tutorial application icon on the home screen to run it.
In the tutorial application window, click the menu icon and then select Tutorial > Open, or click the toolbar icon and then click the Open icon.
This opens the w_cusdata sheet window.
Mobile-style title bar, menu, & toolbar
The layout of the title bar, menu, and toolbar is automatically adjusted by PowerServer Mobile to make more room for the window and controls.
Click the Retrieve button.
This retrieves data from the database.
Schedule Maintenance
BizTalk360 has an option that lets administrators stop alert notifications from being sent during a specific maintenance window for each environment. For example, if a deployment is supposed to take 1 hour, they can create a schedule to stop alerts during this period.
- Click 'Settings' (gear icon) located at the top of the page
- Click 'Monitoring and Notification' in the Menu panel on the left side of the screen
- Click 'Schedule Maintenance' ->New Schedule.
- Select the environment in which the maintenance is planned
- Exclude alarms from maintenance (optional) - Select the alarms you want to exclude from maintenance, i.e., you will still receive alerts for the excluded alarms during the maintenance window.
Schedule Configuration
The schedule can be configured to stop receiving alerts immediately or at a later point in time, based on your maintenance plan.
Immediate Maintenance
You can set up the maintenance immediately from the current time by enabling the 'Immediate' option and providing the maintenance end date and time in the schedule configuration.
Future Maintenance
Configure the schedule with a start and end date-time to stop receiving alerts during the future maintenance period. A future maintenance schedule can be configured for one-time or recurring execution.
1. One-Time execution - The schedule will execute only once, based on the configured start/end time.
Say, for instance, you want to stop notification alerts during a deployment planned for the 27th of March from 9 AM to 11 AM; create a schedule with one-time execution and you will not receive any alerts from 9 AM to 11 AM on 27th March.
2. Recurrence Execution - The schedule is created once and can be executed multiple times based on the recurrence pattern configured for the selected start and end time.
Configure recurrence schedule with below recurrence pattern :
- Daily: The schedule will execute every day, or once every 2 or 3 days, during the configured start and end date-time.
Scenario: The image below describes a schedule that executes once every 2 days from 2 PM to 3 PM, from the start date March 31 to the end date April 10.
- Weekly: The schedule will execute every week, or once every 2 or 3 weeks, during the configured start and end date-time period. You can also define whether you want to execute the schedule on all days of the selected week, or only on particular days.
Scenario: Say, for instance, you want to stop receiving alerts during every weekend for a whole year. Configure a schedule with a start and end date spanning one year.
Recurrence Pattern: Weekly frequency -> Recur every week -> On Saturday and Sunday
- Monthly: The schedule will recur on a monthly basis. You can define in which months maintenance is planned, and you can configure either the date of the month or the day of the week on which the maintenance is planned.
Date of Month - The schedule will execute on the specific dates of the configured months. The image below shows a schedule that repeats on the 1st and 15th of January, May, and December.
Day of the week - The schedule will repeat on the configured day of the week in the selected months. For instance, if you want to maintain the system on the last Sunday of every month, you can configure it as below.
Months - All Months, Days - Last Sunday
Stop Maintenance
Users can manually stop the maintenance before it ends. Say, for instance, you planned to put the system in maintenance for 2 hours during a deployment, but the deployment finished within 1 hour; you can stop the maintenance instead of waiting 2 hours to start receiving alerts again.
You can stop maintenance from 2 sections:
- From the operations dashboard: When the environment is under maintenance, this is indicated in the operations dashboard. Users can stop maintenance by clicking 'Stop Maintenance' and providing a reason for stopping.
- From the Schedule Maintenance section: For the currently active schedule (the one under maintenance), a stop button is enabled. Users can stop maintenance by clicking that stop button.
Schedule Auditing
The following schedule activities are audited, along with the user name and the time the action was performed, for further reference.
- Create - New Schedule created for maintenance
- Update- Edit and modify the schedule configuration settings.
- Delete - Delete the schedule, to stop the execution
- Stop - Manually stopping the maintenance before it ends.
- Complete - Maintenance completed after the configured period; this is a system action.
2. During maintenance, no alerts will be triggered for the alarms and Autocorrect will also not execute.
3. You can edit the schedule and stop the maintenance in between, i.e., before the maintenance period gets over.
Push API \ Upload Picture To Twitter PHP SDK
Push API Resources
Workflow
Request: the code to send to the API
Send a
POST request with the data below to the endpoint
/push/identities/<identity_token>/twitter/picture.json
to upload a picture to the Twitter account of a user. The
<identity_token> is obtained whenever one of your users connects
using a social network account.
By using the
picture_id of the uploaded picture you can attach it to a new Tweet.
Pictures that are not attached to a Tweet will be removed from Twitter after 24 hours.
To be able to use this endpoint Twitter must be fully configured for your OneAll Site and the setting Permissions \ Access of your Twitter app must be set to Read and Write.
POST data to include in the request
{
    "request": {
        "push": {
            "picture": {
                "description": "#description#",
                "url": "#url#"
            }
        }
    }
}
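For illustration only, the request above could be sent with PHP's cURL extension roughly as follows; the subdomain, identity token, and the public/private keys used for Basic Auth are placeholders that you must replace with your own values:

```php
<?php
// Illustrative sketch: upload a picture to the Twitter account of an identity.
$site_subdomain = 'your-subdomain';        // placeholder: your OneAll site subdomain
$identity_token = 'your-identity-token';   // obtained when the user connects
$public_key     = 'your-site-public-key';  // placeholder credentials for Basic Auth
$private_key    = 'your-site-private-key';

$endpoint = 'https://' . $site_subdomain . '.api.oneall.com'
          . '/push/identities/' . $identity_token . '/twitter/picture.json';

$data = json_encode(array(
    'request' => array(
        'push' => array(
            'picture' => array(
                'description' => 'My picture description',
                'url' => 'https://example.com/picture.jpg'
            )
        )
    )
));

$curl = curl_init($endpoint);
curl_setopt($curl, CURLOPT_POST, true);
curl_setopt($curl, CURLOPT_POSTFIELDS, $data);
curl_setopt($curl, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
curl_setopt($curl, CURLOPT_USERPWD, $public_key . ':' . $private_key);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);

$response = curl_exec($curl);
curl_close($curl);

print_r(json_decode($response, true));
```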
Result: the code returned by the API
Resultset Example
{
    "response": {
        "request": {
            "date": "Thu, 21 Sep 2017 16:25:31 0200",
            "resource": "/push/identities/923843ec-1749-4cc1-988f-d6963f3b1baa/twitter/picture.json",
            "status": {
                "flag": "success",
                "code": 200,
                "info": "Your request has been processed successfully"
            }
        },
        "result": {
            "data": {
                "provider": "twitter",
                "object": "picture",
                "picture_id": "910890718352281600",
                "expires_in": "86400"
            }
        }
    }
}
How does Wordfence get IPs
- Let Wordfence use the most secure method to get visitor IP addresses. Prevents spoofing and works with most sites.
- Use PHP's built in REMOTE_ADDR and don't use anything else. Very secure if this is compatible with your site.
- Use the X-Forwarded-For HTTP header. Only use if you have a front-end proxy or spoofing may result.
- Use the X-Real-IP HTTP header. Only use if you have a front-end proxy or spoofing may result.
Scans to include:
- Scan core files against repository versions for changes
- Scan theme files against repository versions for changes
- Scan plugin files against repository versions for changes
- Scan wp-admin and wp-includes for files not bundled with WordPress
- Scan for signatures of known malicious files
- Scan file contents for backdoors, trojans and suspicious code
- Scan database for backdoors, trojans and suspicious code
- Scan posts for known dangerous URLs and suspicious content
- Scan comments for known dangerous URLs and suspicious content
- Scan for out of date plugins, themes and WordPress versions
Other options:
- Delete Wordfence tables and data on deactivation?
- Disable Wordfence Cookies
- Add a debugging comment to HTML source of cached pages
If your website has been hacked, spammers will often include a small script on your site that redirects any visitors who hit that script's URL from your site to a malicious or pornographic website. The reason they do this is because the site they are redirecting to is a known bad site and spam filters will block any emails containing links to their own site. So instead of emailing links to their own site, they will email out links to another site that quietly redirects to theirs.
Separately, your site's IP address can end up blacklisted if you are on a shared hosting program and another website on your server is infected with malware or is engaging in malicious activity. This feature does a check to see if your IP address is clean or if it is listed as malicious. If you find that your IP address is listed as malicious, log a support call with your hosting provider to have your site moved to a different IP address or have them work to clean up the IP's reputation.
Wordfence can also scan for publicly accessible configuration, backup, or log files that could be accessed remotely, such as old WordPress settings in a file named wp-config.old. Preventing access to these files can keep your database password or other important information secure.
Scan for publicly accessible quarantined files
This scan will check for quarantined files that some hosts produce when they detect possible malware. Usually these files end in ".suspected", which can cause the web server to expose the contents of PHP files instead of running them as PHP code when a visitor tries to view them. If a sensitive file such as wp-config.php is renamed by the host to wp-config.php.suspected, this can expose your database password to the public.
Note: If the file is not hidden when you try to fix this result by clicking the link to hide the file, this may be caused by having multiple levels of .htaccess files. You might need to add "RewriteOption Inherit" to your .htaccess file; be sure to check any other sites you have in a parent directory or subdirectory of the same host, since other "rewrites" can be inherited too. In general, saving a backup of the file and removing it from the server is the recommended option.
Scan core files against repository version for changes
This scan checks if your core files match what exists in the official WordPress core repository. If your files have changed then it and do the comparison with the correct version. This scan applies to all themes installed on your WordPress installation, not just the active theme.
Also note that this scan does not apply to commercial themes or themes that are not in the official WordPress repository. If you do have any commercial themes on your system, we will. In cases like this Wordfence will alert you to the fact that the plugin code you have does not match what is in the repository. That is why we recommend you always use the feature that Wordfence provides to view changes in your plugin files before's likely that the plugin file has been infected by something.
Scan wp-admin and wp-includes for files not bundled with WordPress
The wp-admin and wp-includes directories should typically only include files that are a part of WordPress core, plus possibly some log files or files related to file uploads, depending on your hosting company's configuration and other settings. This scan shows files in wp-admin or wp-includes which are not a normal part of WordPress or other files that we recognize. Sometimes, files from an old version of WordPress may still exist in core folders, and they are generally safe to remove, if that is the case.
As usual, be sure that you have a backup before deleting files.
Among the common patterns we look for are several techniques used by hackers to hide their malicious code, including encoding their code using base64, URL encoding, hex encoding and others. We also look for patterns that indicate a file contains code that is downloading and executing something without the normal security patterns that you see in WordPress development.
Scan posts for known dangerous URLs and suspicious content
This scans all your posts by directly accessing your database (rather than doing a site crawl, which is slower) and checks if they contain known dangerous URLs that are linked to phishing or hosting malware. It also checks for suspicious content that may have been generated by an infection or a hack. We strongly recommend that site owners enable this scan.
Scan comments for known dangerous URLs and suspicious content
This scans all your comments by directly accessing your database and scanning the comments table. It checks comments that are in a published state for known malicious URLs and other patterns that indicate an infection. As with the posts scan above, we do a full scan every time this scan is performed because the list of known dangerous URLs is constantly changing so even if a comment has not changed, we need to re-verify that any URLs it contains are clean.
This is an important scan because it prevents your site from linking to known dangerous URLs. The scan is also fast because it's operating directly on your database and using an efficient algorithm.
Scan for out of date plugins, themes and WordPress versions
This simply alerts you via email if you are using any out of date themes or plugins. We strongly recommend you leave this enabled because upgrading as soon as possible to new versions of WordPress core, or themes and plugins is the most effective way to keep your site secure.
Plugins and themes that have released a new version will appear as a "warning" in the scan results, while any plugins and themes with an update that fixes known vulnerabilities will appear as a "critical" item in the scan results.
Use low resource scanning
Low-resource scanning spreads out the scan's work over a longer period of time, to help decrease the chance of high resource usage in a short period of time. This can be helpful on shared hosting providers that have lower resources available to your hosting account, or on lower VPS and dedicated hosting plans. This may make your scans take longer to complete.
Limit the number of issues sent in the scan results email
When scan results are sent to you by email at the end of a scan, this option limits the total number of issues that will be sent. The default limit should work well for most sites, but if cleaning an infected site or working on a site with a lot of users who have bad passwords reported in the scan results, you can raise this limit to ensure that all issues are sent, as long as the host has a high enough memory limit. On sites with a low memory limit, it might be necessary to lower this option to allow emails to be sent.
Time limit that a scan can run in seconds
You can set a limit for how long Wordfence scans will run on your site. Some options combined with a large number of files can make scans take a long time, especially on slower servers. If a scan runs out of time before it is finished, you will be notified and it will not resume automatically, but the next scheduled scan will still attempt to run. Changing some options to help the scans run within the limit is the best option, but the time limit can also be increased if necessary. Leaving this option blank will allow Wordfence to use the default limit, currently 3 hours. You can also set a lower limit to keep tighter control of resource usage. See the Scan time limit page for more details.
Rate Limiting Rules
Wordfence includes a rate limiting firewall that controls how your site content can be accessed.
Immediately block fake Google crawlers:
If you are having a problem with people stealing your content and pretending to be Google as they crawl your site, then you can enable this option which will immediately block anyone pretending to be Google.
The way this option works is that we look at the visitor User-Agent HTTP header which indicates which browser the visitor is running. If it appears to be Googlebot then we do a reverse lookup on the customer IP address to verify that the IP does belong to Google. If the IP is not a Google IP then we block it if you have this option enabled.
Be careful about using this option because we have had reports of it blocking real site visitors, especially for some reason visitors from Brazil. It's possible, although we haven't confirmed this, that some internet service providers in Brazil use transparent proxies that for some reason modify their customer user-agent header to pretend to be Googlebot rather than the real header. Or it may be possible that these providers are engaging in some sort of crawling activity pretending to be Googlebot using the same IP address that is the public IP for their customers. Whatever the cause is, the result is that if you enable this option you may block some legitimate visitors.
Enabling this option prevents hackers from being able to discover usernames using these methods.
How much memory should Wordfence request when scanning
Wordfence won't actually use that amount of memory, but setting this will ask PHP to increase the memory limit to whatever you specify so that in case Wordfence does use that amount of memory, PHP will only throw an error if the new maximum you have requested is reached.
On sites that have limited memory, this option does not always work to increase the memory limit. If you have tried to use this option and are still running out of memory, it is best to open a support ticket with your hosting provider to ask them for more memory.
Maximum execution time for each scan stage
Wordfence scans can take several minutes or longer on very large websites. Wordfence runs as a PHP application on your web server, and web servers are not designed to run a single process for a long time, so the scan is broken into stages that each have to finish within this limit. If you still cannot get a scan to complete, then there may be another problem or you may have to ask your hosting provider to increase the amount of time a web server process is allowed to execute.
The goal is to find a value that is long enough to allow Wordfence to do some work, but short enough so that it does not exceed the maximum allowed time that a web server process is allowed to execute.
Wordfence uses cookies for three tasks:
Changing the starting state of the world
When you deploy a SpatialOS project, the initial state of the SpatialOS world is defined by a
snapshot. The example projects come with a default snapshot
-
snapshots/default.snapshot - but you can modify it or create new ones.
In the Unreal Starter Project, there’s an Unreal commandlet to generate a snapshot, located
in
workers/unreal/Game/Source/StarterProject/ExportSnapshotCommandlet.cpp.
The commandlet uses the C++ worker API.
The commandlet creates a spawner entity that will exist in the world when the game starts. You can add your own entities.
Once your new snapshot is built, you can launch your new world with
spatial local launch --snapshot=<your_snapshot_name>.snapshot.
Use T-Bot to help users with Microsoft Teams
For help while using Microsoft Teams, ensure your users and champions get familiar with T-Bot. T-Bot is a bot that users can interact with to ask how to use Microsoft Teams and get answers to a wide range of questions.
Microsoft Teams provides localized language support for T-Bot and help content. New languages are being added all the time. For the most current list of supported languages, see Microsoft Teams supported languages for help content.
T-Bot also provides alternative assistance methods for users who prefer browsing the content instead of asking a bot questions, providing a full slate of Help, FAQ, Videos, and Release Notes sections via the tabs within the bot.
(Screenshot: the T-Bot page in Microsoft Teams.)
(Screenshot: T-Bot response to a user question.)
(Screenshot: assistance options on the T-Bot page, including Conversation, Help, FAQ, Videos, and Release Notes.)
(Screenshot: various assistance options within T-Bot, including Help, FAQ, Videos, and Release Notes.)
Replace an untrusted or expired third-party SSL certificate When an SSL connection is required in an integration, there are circumstances when the certificate provided by the third-party vendor is either not yet trusted in ServiceNow or has expired. You can replace it or add a new certificate. Before you beginRole required: sn_ti.write into. | https://docs.servicenow.com/bundle/kingston-security-management/page/product/security-operations-integrations/task/t_Import3rdPartySSLCert.html | 2018-03-17T06:41:42 | CC-MAIN-2018-13 | 1521257644701.7 | [] | docs.servicenow.com |
6.12
bzip2
The file/bzip2 module provides support for compressing and decompressing data using the bzip2 file format.
Returns an input port that reads and decompresses bzip2 data from in.
Returns an output port that bzip2-compresses the data written to it, and writes the compressed data to out. The returned output port must be closed to ensure that all compressed data is written to the underlying port.
The optional block-size parameter controls size of the internal buffer used for compression. In general, the larger the block size, the better the compression — but also the higher the memory use. To reduce memory use, choose a lower value for block-size. | http://docs.racket-lang.org/bzip2/index.html | 2018-03-17T06:20:28 | CC-MAIN-2018-13 | 1521257644701.7 | [] | docs.racket-lang.org |
Changes related to "J1.5:Password parameter type"
← J1.5:Password parameter type
This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold.
22 August 2015
19:56 (Page translation log) MATsxm (Talk | contribs) marked Password form field type for translation
19:56 Password form field type (diff; hist; +203) MATsxm
The Items property controls the items available for selection in the Data Table control.
Property Type:
Dynamic
Default Value: null
Hierarchical Reference: ControlNameListData
The default value of the property can be changed by any of the following methods:
Properties that are common between controls will be displayed in the properties list when controls are multi-selected.
Multi-selecting controls to apply the same property value is supported.
Value can be controlled by a rule.
runDataTransform: Public JS API for control actions
runDataTransform
Run a Data Transform.
Syntax
var options = {
    name: "dataTransformName",
    parameters: [{name: "param1", value: "Page1.prop1", isProperty: true}, {name: "param2", value: 123, isProperty: false}],
    contextPage: "page1.page2",
    event: eventObject
};

pega.api.ui.actions.runDataTransform(options);
Parameters
This API accepts a JavaScript object which can have the following key-values.
- name: The name of the Data Transform.
- parameters: Optional. Array of the data transform parameters in JSON format.
- contextPage: Optional. The page that provides the context that the data transform runs in. When not set, the data transform runs in the primary page context.
- event: The event refers to a DOM eventObject.
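For example, a call wired to a custom control's click handler might look like the sketch below; the data transform name, parameter names, and values here are hypothetical and only illustrate the documented options object:

```javascript
// Hypothetical usage: run the "SetCustomerDefaults" data transform
// with two parameters when a control is clicked.
function runCustomerDefaults(event) {
  var options = {
    name: "SetCustomerDefaults",   // hypothetical data transform name
    parameters: [
      {name: "CustomerID", value: "Page1.pyID", isProperty: true},
      {name: "Region", value: "EMEA", isProperty: false}
    ],
    contextPage: "Page1",          // run against Page1 instead of the primary page
    event: event                   // the DOM event that triggered the action
  };
  pega.api.ui.actions.runDataTransform(options);
}
```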
MSSP Settings gives you access to various administrative tasks you might be required to perform.
Under MSSP Settings, you can do the following:
- Branding—Allows you to add branding—such as a company logo—to the Umbrella dashboard and MSSP console. You can also create a co-branded login page.
- API Keys—Allows you to create an API key that is used for authentication to the Console Reporting API. For more information, see About the API for the Umbrella Console.
- PSA Integration Details—Allows you to integrate a PSA with Umbrella for ticket creation and usage data. The following PSAs are supported: Connectwise and AutoTask. For more information, see the Umbrella and ConnectWise PSA Integration Setup Guide and Autotask and Umbrella Integration.
- Log Management—Allows you to store the DNS, URL and IP logs of your customers offline in cloud storage. The storage is in Amazon S3 and after the logs have been uploaded, they can be downloaded and kept for compliance reasons or security analysis. For more information, see Centralized Umbrella Log Management.
- Purchasing—Allows you to select how you will procure licenses on behalf of your customers—Global Price List (GPL) or Managed Service License Agreement (MSLA). For more information, see Manage Licensing.
Roadmap
In Progress
- Improved the administrator's experience when creating new groups and policies through Attribute-based Access Control (ABAC). (LO-2749)
- Marketplace is a community-driven exchange of resources developed by LifeOmic and third-parties. For more information on Marketplace, see (PHC-905)
On Deck
- The search field on the Users page within Administration now returns results when searching by username. (LO-6987)
Last update: 2022-06-20
Created: 2022-06-20
Push Messaging
Push Notifications are messages that pop up on mobile devices. App publishers can send them at any time; even if the recipients aren’t currently engaging with the app or using their devices.
Before continuing, please ensure that you have added the WebEngage SDK to your app.
Configure Push Messaging
Here's how you can enable Push Messaging for your Xamarin.iOS app:
Step 1: Follow the steps mentioned in Push messaging integration document for iOS.
Step 2: Log in to your WebEngage dashboard and navigate to Integrations > Channels. In Push tab, under the iOS section, make sure you have uploaded either your push certificate or auth key.
Step 3: In your Xamarin.iOS app's
Entitlements.plist, check Enable Push Notifications.
Step 4: Add
remote-notifications as type string under App Background Modes in your app's
info.plist.
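If you prefer editing the raw Info.plist source instead of the plist editor, the background-mode entry typically looks like the snippet below (these are the standard iOS plist key and value names; double-check them against your project):

```xml
<key>UIBackgroundModes</key>
<array>
    <string>remote-notification</string>
</array>
```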
Step 5: Set
autoregister to
true while initializing WebEngage SDK as shown below.
...
using WebEngageXamariniOS;

namespace YourNamespace
{
    ...
    [Register("AppDelegate")]
    public class AppDelegate : UIApplicationDelegate
    {
        ...
        public override bool FinishedLaunching(UIApplication application, NSDictionary launchOptions)
        {
            ...
            WebEngage.SharedInstance().Application(application, launchOptions, true);
            return true;
        }
        ...
    }
    ...
}
Enable Rich Push Notifications
Follow the below steps to use rich push notifications in your Xamarin.iOS app.
1. For Banner Push Notifications
Step 1: Add a new project named
NotificationService with Notification Service Extension as target in your main app.
Step 2: Download WebEngage Banner Push Notification Extension SDK.
Step 3: Add
WebEngageBannerPushXamariniOS.dll to References in your
NotificationService project.
Step 4: Replace
NotificationService.cs with the below code.
using Foundation;
using WebEngageBannerPushXamariniOS;

namespace NotificationService
{
    [Register("NotificationService")]
    public class NotificationService : WEXPushNotificationService
    {
    }
}
2. For Rating and Carousel Push Notifications
Step 1: Add a new project named
NotificationViewController with Notification Content Extension as target in your main app.
Step 2: Download WebEngage Notification App Extension SDK.
Step 3: Add
WebEngageAppExXamariniOS.dll to References in your
NotificationViewController project.
Step 4: Open the
Info.plist file for
NotificationViewController. Expand NSExtension > NSExtensionAttributes. Look for
UNNotificationExtensionCategory under
NSExtensionAttributes. If it is not present, add it and set the type as Array. In its items, add the following values:
WEG_CAROUSEL_V1 for Carousel Push Notifications
WEG_RATING_V1 for Rating Push Notifications
Step 5: Replace
NotificationViewController.cs with the below code.
using System;
using Foundation;
using UserNotifications;
using WebEngageAppExXamariniOS;

namespace NotificationViewController
{
    public partial class NotificationViewController : WEXRichPushNotificationViewController
    {
        protected NotificationViewController(IntPtr handle) : base(handle)
        {
        }

        public override void ViewDidLoad()
        {
            base.ViewDidLoad();
        }

        [Export("didReceiveNotification:")]
        public override void DidReceiveNotification(UNNotification notification)
        {
            base.DidReceiveNotification(notification);
        }
    }
}
3. Set App Groups of All 3 Projects
Set App Groups as
group.[app-bundle-id].WEGNotificationGroup in
Entitlements.plist of all three projects (your Xamarin.iOS app,
NotificationService and
NotificationViewController).
And you're good to go!
Please feel free to drop in a few lines at [email protected] in case you have any further queries. We're always just an email away.
Problem Catalog template
The Problem Catalog template is a design that allows you to quickly and easily create a catalog of problems. It can be used for anything from an inventory list of bugs, to features in need of improvement. The Problem Catalog template makes it easy to document your ideas and prioritize them by assigning each one to a different category.
Features:
This template contains various problem categories like:
- BI Report, P&L, Coloring.
$ oc edit machineset <machineset> -n openshift-machine-api
Apply the following best practices to scale the number of worker machines in your OpenShift Container Platform.
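For example, scaling the number of worker machines is typically done by adjusting the replica count of a machine set, either by editing it as shown above or directly with oc scale (the machine set name is a placeholder):

```
$ oc scale --replicas=2 machineset <machineset> -n openshift-machine-api
```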
To make changes to a OpenShift Container Platform one is created to take its place. When a machine is deleted, you see a
machine deleted event.
To limit disruptive impact of the machine deletion, the controller drains and deletes only one node at a time. If there are more unhealthy machines than the
maxUnhealthy threshold allows for in the targeted pool of machines, remediation stops and therefore enables manual intervention.
To stop the check, remove the resource. The MachineHealthCheck resource resembles the following YAML file:
apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: example (1)
  namespace: openshift-machine-api
spec:
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machine-role: <role> (2)
      machine.openshift.io/cluster-api-machine-type: <role> (2)
      machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> (3)
  unhealthyConditions:
  - type: "Ready"
    timeout: "300s" (4)
    status: "False"
  - type: "Ready"
    timeout: "300s" (4)
    status: "Unknown"
  maxUnhealthy: "40%" (5)

In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability.
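Assuming you save the resource to a file (the file name below is just an example), you can create it with the standard oc workflow:

```
$ oc apply -f healthcheck.yml
```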
The
maxUnhealthy field can be set as either an integer or percentage.
There are different remediation implementations depending on the
maxUnhealthy | https://docs.openshift.com/container-platform/4.10/scalability_and_performance/recommended-cluster-scaling-practices.html | 2022-06-25T05:10:38 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.openshift.com |
Go to Administration > Scan Settings to perform the following tasks:
Task
Steps
Select the scan type.
Under Scanning and Collection, select one of the following
scan types:
Default scan: Scans system memory, registries, and
critical system files (for example, the drivers folder).
Quick scan: Scans system memory only.
Full scan: Scans system memory, registries, and all
files.
Select the sample file collection scope.
Select an option for Collection Scope under
Scanning and Collection.
Add suspicious objects.
Under User-defined Suspicious Objects, type an object name
in a field.
You can specify up to 1000 object names for each text field. You may copy
and paste object names from a text file.
Separate entries using a vertical bar "|".
File names, full paths, IP addresses, SHA-1 hash values, and URLs
cannot contain the following special characters: ~!@#$%^&*()_+|
Configure scan exceptions.
Under Exceptions, type an object name in a field.
File names and full paths cannot contain the following special
characters: ~!@#$%^&*()_+| | https://docs.trendmicro.com/en-us/enterprise/advanced-threat-assessment-service-15/administration/scan_settings_screen.aspx | 2022-06-25T05:26:55 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.trendmicro.com |
Overview
Run device automation through Mobile Device Manager without ever touching the device. To quickly understand how to automate a mobile project, set up Mobile Device Manager (MDM), and then create and run a workflow on a cloud device.
Step 1: Prepare The Test Automation Framework.
Step 2: Create a Mobile Testing Project
Step 3: Configure Mobile Device Manager
Step 4: Start the Application
Step 5: Record Actions
Step 6: Create and Run Mobile Automation
Prerequisites
- Studio license.
- UiPath.MobileAutomation.Activities. For more information, see Managing Packages. Alternatively, you can use the Mobile Testing Project default template, as this will install the activity package for you.
- Device farm or Physical Mobile Device.
- Appium endpoint.
Prepare The Test Automation Framework
Prepare your test automation framework through Node.js and Appium.
- Download and install Node.js.
- Open Node.js command prompt and run the following command to install Appium:
npm install -g appium
- For more information on Appium configuration, see Appium Getting Started.
- To prepare your environment for local devices, see Local Devices.
Create a Mobile Testing Project
Create a mobile testing project in Studio.
- Open Studio.
- Select a Mobile Testing Project from the default templates.
- Configure project details and then click Create.
- Start with the default test case created through the project.
Configure Mobile Device Manager
To prepare your test environment, open Studio and navigate to Mobile Automation > Mobile Device Manager to launch MDM.
Note
The Mobile Device Manager button is added to the ribbon after you install UiPath.MobileAutomation.Activities, or if you open a Mobile Testing Project template.
Continue by adding a device and an application to MDM.
Add a Device
Add your first cloud device to MDM.
- In the left navigation panel go to Devices.
- Click Add a device.
- Configure your device as follows:
- Name - Enter a name to identify your device in the Devices tab.
- Appium URL - Enter the Appium server where your device is hosted. For example:.
- Platform - Click the field to select Android or iOS from the dropdown.
- Device Name - Enter the device name.
- Platform Version - Add the version number of your Android OS.
- Additional Desired Capabilities (Optional) - Add specific capabilities to customize your automation session. For more information, see Appium Desired Capabilities.
- Set Geo Location (Optional) - Set your device location to test applications that use Location Services to generate location data.
- Click Save & Close to add your device.
Your device is added to the Devices list. To add a local device, see Local Devices.
Add an Application
Add an application to be used by your device.
- In the left navigation panel go to Applications.
- Click Add a new application and enter a name for your application.
- Select App and configure the following Android settings:
- App - Enter the app location. You can download and use the UiPath Android Demo Application.
- Additional Desired Capabilities - Add specific capabilities to customize your automation session. For more information, see Appium Desired Capabilities.
- Click Save & Close to add your application.
Start the Application
You can now start your mobile device emulator.
- In the Welcome tab, click Start an application.
- Select a device and an application by choosing the ones you have just created.
- Click Connect.
It may take a while to establish a connection due to multiple connection layers and distance to your device farm.
Record Actions
Record your actions using the interaction bar, right next to your mobile device emulator. Through this, you can indicate your actions on-screen. Alternatively, you can design your workflow in Studio.
- Open the Recorded Actions panel on the right side to keep track of your executed actions.
- On the right side of the mobile device emulator, click the Android Home button.
- Click the Google search bar on the emulator screen and then click No Thanks to dismiss the overlay if needed.
- Double click on the search bar to send text and type in "Uipath Test Suite".
- Select Press Enter key after sending text and then click Send text.
Create and Run Mobile Automation
Open Studio to import your recorded actions and run your mobile automation.
- Open Studio and add a Mobile Device Connection activity to your sequence.
- Click Select Connection Details and select your device and application.
- Select Do within the sequence.
- Navigate to Mobile Automation > Import Recorded Actions to add them to the sequence.
- Navigate to Debug File > Run File.
A new connection is established to the emulated mobile device to execute your actions in real-time.
Next Steps
To learn more about creating workflows for your mobile automation tests, understanding device interaction, and debugging, see the following topics:
Updated about a month ago | https://docs.uipath.com/test-suite/docs/mobile-device-automation-getting-started | 2022-06-25T05:28:12 | CC-MAIN-2022-27 | 1656103034170.1 | [array(['https://files.readme.io/73ac1f2-GifMaker_20201007225948979.gif',
'GifMaker_20201007225948979.gif'], dtype=object)
array(['https://files.readme.io/73ac1f2-GifMaker_20201007225948979.gif',
'Click to close...'], dtype=object) ] | docs.uipath.com |
With.
Baba – We were really excited by the idea of an animated Loading Docs film, and though Joel had never made a documentary before, he had a very strong track record as a filmmaker, a commitment to documentary and the support of a terrific producer. Three minutes of animation required a great deal of work, but Joel was clearly passionate about making this very personal film and was very committed to making it.
Homing – The proposal for this film focused on the use of sound to tell a story, which was a unique approach that was both original and ambitious and a great interpretation of the theme of Home.
Living Like Kings – Zoe’s proposal had a very clear creative vision and offered an insight into life in post-quake Christchurch that had never been seen before. Zoe had already done a lot of research and submitted some great images to support her proposal.
Queer Selfies – The concept for Queer Selfies fitted really well with the three-minute format and Robyn and Paula had a very sound production plan based on a single day’s shooting at The Big Gay Out and demonstrated an excellent understanding of their target audience.
Stop/Go – this film promised to show a side of New Zealand most people only catch a glimpse of. The proposal fitted well with our theme of Home and Greg and Jack provided a very compelling visual treatment to support their proposal that assured us that this film would be stunning.
The Jump – a never-before-told story about the origins of bungy featuring mullets, stubby and some serious can-do Kiwi attitude, with reels of incredible archive footage and a solid filmmaking team. The proposal had a clear plan of action and sold us on a great story with strong audience appeal.
The Road to Whakarae – the creative vision for this film was very clearly articulated in the proposal, and promised a very original approach. Like many other films that made the final cut, The Road to Whakarae offered a totally unique perspective on a very special place and people. The musical approach was risky, but that’s what Loading Docs is all about!
Today – The detailed observational treatment for this film was well-developed and the filmmakers had secured the permission they would need to film inside a rest home. In their initial proposal Prisca and Nick intended to spend around three days inside the home researching and getting to know staff and residents. In fact, they spent a great deal more time than this, and their commitment to handling this topic with sensitivity was very clear from the outset. They submitted their remarkable short film Le Taxidermiste in support of their application, and this gave us a great deal of confidence in their filmmaking ability.
Wayne – Kirsty and Viv also showed us a very strong commitment to spending a great deal of time with Wayne and his carers and we felt assured that they would tell his story with respect and care. Permission to film had been secured, the production schedule was realistic and just from looking at the fantastic photos of Wayne that were submitted in support of the application we could tell that Kirsty and Viv had a great rapport with him.
We received many fantastic proposals for Loading Docs 2014 and in making our final selection we considered how the films would work as a group, with a range of different styles, stories and communities represented. Don’t be afraid to go out on a limb and propose something that you think is a little unusual or challenging, just as long as your proposal can realistically be achieved within the budget and timeframe available.
For more tips on submitting, check out our FAQs…
FAQs
Can I submit multiple proposals?
Yes, you can but bear in mind that each proposal requires quite a lot of preparation. We’re looking for ideas that are well developed and feasible within the time frame and budget available, so it’s best to focus on developing one or two ideas well rather than taking a scattergun approach.
What makes a good proposal?
A good proposal has a really clear concept and creative vision. We should immediately understand what your film is about and how you plan realise your ideas. Examples of previous work and visual references that demonstrate your creative vision are extremely useful. Your proposal should also identify potential challenges or logistical issues and indicate how you will deal with these. A major consideration is whether your project can be realised within the timeframe and budget available, so where access and permissions are necessary we require evidence that these have been secured.
How much filmmaking experience do I need to have?
Loading Docs is an initiative that aims to give filmmakers who have some solid filmmaking experience the opportunity to push their ideas further and to create work of the highest possible standard (within some major constraints) that will challenge, inspire and captivate audiences. The initiative is not aimed at students or first-time filmmakers. However, if you are a less-experienced filmmaker with a really great idea we suggest you partner with an experienced production team (such as a good producer and DOP) who can support you if you wish to submit a proposal.
For students and young people in New Zealand keen to make short documentaries we recommend two other fantastic filmmaking initiatives: Inspiring Stories and Outlook for Someday.
What kind of stories is Loading Docs looking for?
The theme of CONNECTION is one that we hope will inspire a wide range of stories and filmmaking styles. In the selection process we will be aiming to curate a selection of films that will appeal to different audiences with a range of subjects and styles with a diversity of representation, creative form, and audience. You may wish to focus on a specific audience (such as children and young people), or a unique place, person or subject. Look at the selection of films from Loading Docs 2014 to gain a better understanding of the kind of films that Loading Docs aims to support.
Why does my film have to be 3-minutes long?
Prior to launching Loading Docs we watched a lot of short films online and decided that 3 really is a magic number when it comes to online films, (particularly if you’re watching on a mobile device), and we want Loading Docs films to be viewed and shared as widely as possible.
We encourage you to embrace this challenge and use the 3-minute constraint to be creative with documentary storytelling.
Why do I need to raise money for my film through crowdfunding?
In addition to making incredible films, an important objective for Loading Docs as an initiative is to support filmmakers to broaden their audience reach and to become more skilled in fundraising, marketing and outreach. Crowdfunding is an increasingly valuable tool to enable filmmakers to connect with audiences who are truly invested and interested in their work, and to start a journey that an audience can be actively involved in.
Crowdfunding also helps to make the funding received from NZ On Air and the New Zealand Film Commission go further, but we have set a very achievable fundraising target. Loading Docs filmmakers will run matched funding crowdfunding campaign. For each film, Loading Docs will contribute $1 for every $1 raised through crowdfunding with a target of $2000. That means that each successful campaign will raise at least $4,000. All funds raised go directly to the film.
What happens in the workshops?
In the first two-day workshop (31 Jan/1 Feb) we will spend one-day concentrating on storytelling, creative treatment and other aspects of production, and one day focusing on crowdfunding strategy and audience outreach. We will bring in specialists in each of these areas to work with filmmakers.
The second workshop, held closer to the launch of the films, will focus more on outreach and distribution, with an international guest who will share their expertise and offer filmmakers advice. In 2014 we were thrilled to have Vimeo curator and Short of the Week founder Jason Sondhi as our guest, and he provided great insights into how filmmakers can build an online presence that will enhance their careers.
Who owns the films?
All films remain the intellectual property of the filmmakers and Loading Docs retains the rights to distribute promote the films for a minimum period of two years. The films are primarily distributed online via Loading Docs’ Vimeo channel, but are freely available to share and embed. This means that the films collectively support each other’s success. We work hard to achieve the best possible exposure for the films, and encourage filmmakers to actively support the films through their own outreach efforts.
In 2014, Loading Docs films were featured prominently on The New Zealand Herald, screened widely on New Zealand television (including in primetime slots), and appeared in local and international film festivals. You can even watch Loading Docs films on international Air New Zealand flights.
If you have a question regarding submissions that is not answered here, please email us. | https://loadingdocs.net/loading-docs-2015-what-are-we-looking-for/ | 2022-06-25T05:49:25 | CC-MAIN-2022-27 | 1656103034170.1 | [] | loadingdocs.net |
Sending a new post in an individual or group is easy. At the top of your Feed you will see a button reading "Write a New Message."
Simply click this button and select to whom you would like to write the new message. From there the composer will open, where you can write your new message.
Try mentioning people to notify them about your post, and optionally give it a title, and/or attach images or other files.
Don't hesitate to reach out using the support chat below with any questions. | http://docs.sixcycle.com/en/articles/2613-sending-a-new-post | 2022-06-25T04:31:38 | CC-MAIN-2022-27 | 1656103034170.1 | [array(['https://downloads.intercomcdn.com/i/o/79551258/038589148b538d426a129a06/image.png',
None], dtype=object) ] | docs.sixcycle.com |
Action Objects & Methods
When working with custom actions, there are a few event-related objects that will be of use depending on whether you are building a custom action or event action.
In this article();
Params Object
The params object contains a data property that contains information about the event that invoked the action. Actions that will have a params object are the ones where an item is clicked. They are:
- After Events Rendered
'fromFilterChange' : Was this initiated from a filter being toggled on or off
'fromRefresh': Was this triggered by the calendar being refreshed
'fromScheduleChange': Was this triggered by a calendar being toggled on or off
'fromViewStateChange': Was this triggered by the calendar view changing (view changed, date changed, window resized, sidebar closed/opened)
- After Source Selection
'isLast': Is this the last calendar source in a folder that is set to be applied. When selecting a folder, this trigger is fired once for each source
'item': Details on the calendar source that was toggled
- After Filter Selection
'filterType': The filter that was changed (statuses or resources)
'isLast': Is this the last filter in a folder that is set to be applied. When selecting a folder, this trigger is fired once for each filter inside that folder
'item': Details on the filter that was toggled
- On Field Change
'field': The name of the popover field that was changed
'value': The resulting value that was changed
'selected': Was the value toggled on or off
'objectName': Salesforce Only - the object name of the related record);
dbk.toggleMultiSelect(event, shiftKey, targetElement, view, forceDeselect);
- event: (object) the event object of the item being selected or deselected
- shiftKey: (boolean) the equivalent of the shift key being pressed or not
- targetElement: (HTML Element) the DOM element on the calendar that corresponds to the event object
- view: (object) the current DayBack view. Typically will be seedcodeCalendar.get('view);
- forceDeselect: (boolean) allows you to specify that all events should be deselected (When this is true, the first 3 parameters should be null)
dbk.addEvent(event);
dbk.addEvents(eventArray);
dbk.createEvent(paramObject);
This function is used to create events on a specific calendar source. Unlike dbk.addEvent this function will not only add the event to the calendar display but will also create the event in whatever data source is specified. This is useful if you wish to bypass any user interaction to create the event as there is not a separate save step involved.
The parameter for this function expects an object with the following properties available:
event: This is an object that needs to at least contain a title property and a start property formatted as a moment object. Optionally this can be an array of objects when creating more than one event. The event object can contain any valid DayBack field properties listed here. You may also just pass the event object directly for this property if calling this function from an event action. In that case specifying this property as "event: event" would clone the original event.
calendarID: The internal ID of the calendar you want to add this event to, only required if calendarName is not set.
calendarName: The name of the calendar to add the event to, only required if the calendarID is not set.
preventWarnings: Set to true to avoid any warnings about data formatting or payload size if creating many events. For example a warning dialog will show if adding more than 200 events at once as too many events added at once could trigger rate limiting with the provider.
renderEvent: A boolean, when set to true the event will be rendered on the calendar. This is useful if the desired result is to add the event to the calendar source and give immediate feedback on the calendar to show the created event.
callback: A function to run once the creation process has completed. The result of the callback is an object containing the following properties:
event: The event data that was created or failed to create.
isShown: Whether the event is shown on the calendar display or not. Reasons for not showing could be that calendar isn't selected or there is a filter applied to prevent the event from displaying.
error: An error object that contains a message property if there was an error. For example error: {message: 'The event could not be created'}
A simple example of using this function could look like the following...
var event = { title: 'Meeting', start: moment('2021-12-15') };
var params = {
event: event,
calendarName: 'Team Calendar',
renderEvent: true,
callback: function(result) {}
}
dbk.createEvent(params);
dbk.deleteEvent(paramObject);
This function is used to delete events. This function will remove the event from the calendar view and delete the event from the calendar source it is stored in.
The parameter for this function expects an object with the following properties available:
event: This is a native DayBack event object. You can get the currently viewed event objects by calling seedcodeCalendar.get('element').fullCalendar('clientEvents'). You may also just pass the event object directly for this property if calling this function from an event action. In that case specifying this property as "event: event" would clone the original event.
editEvent: Optional if the event property is not specified. This is available in the context of an event action when the popover is open.
callback: A function to run once the deletion process has completed. The result of the callback is null unless there was an error. An error will return an object containing the following properties:
error: An error object that contains a message property if there was an error. For example error: {message: 'The event could not be deleted'}
dbk.localTimeToTimezoneTime(date, isAllDay);
This function will take a local date/time and convert it to that date/time based on a timezone set in config.clientTimezone (such as the timezone chosen in this timezone selector for DaBack). It does not change the timezone of the date object but applies the appropriate offset. It accepts a "date" parameter that must be a valid moment object, and "isAllDay" which is a boolean value indicating whether times should be ignored.
dbk.timezoneTimeToLocalTime(date, isAllDay);
This function will take a date/time that has been converted to a specific timezone set in config.clientTimezone and change it back to the local time set in the operating system. It does not change the timezone of the date object but removes the previously applied offset. It accepts a "date" parameter that must be a valid moment object, and "isAllDay" which is a boolean value indicating whether times should be ignored.
dbk.getCustomFieldIdByName(storeInFieldName, schedule);
This function will take the Store in Field name of a Custom Field and the schedule object of the calendar where the Custom Field name is defined and will return the DayBack's numerical ID for use in Calendar Actions. Every event and editEvent object has a schedule object, so you can simply pass editEvent.schedule, or event.schedule as the second parameter when you are calling this function. Please see usage examples.
dbk.setEventSortPriority(event, sortValue);
This function will create a sort priority for the provided event. It accepts two required parameters. The event parameter is the event object where the sort priority will be set. The sortValue is either a string or number that will be used to determine the sort order of events. This function is meant to be used in the "Before Event Render" action, although there may be uses for it elsewhere. Below are a couple of examples on how to use this function in a "Before Event Render" action.
// // Sort events based on values from a mapped field // var sortValue = "[[Summary]]" // This should be your field name that is used in field mapping dbk.setEventSortPriority(event, sortValue);
// // Sort events based on a specified sort order of statuses // var sortOrder = [ 'Cancelled', 'OutOfOffice', 'Busy', ]; var fieldValue = event.status[0]; var sortPriority; if (fieldValue) { sortPriority = sortOrder.indexOf(fieldValue); if (sortPriority === -1) { sortPriority = null; } } dbk.setEventSortPriority(event, sortPriority);
Button Actions
In Button
event.beforeDrop - This object will be present when an event is edited via drag and drop. It will contain the values of the properties before the drag began. It's useful for detecting if an event is being edited via drag and drop and for potentially rolling back values. hovers over an event
Before Event Rendered - runs just before the event is rendered on the calendar
On Field Change - runs when a field is changed in the event popover
params - This contains info on the change that was made. See the Params section here for more details']
To make your app action code more readable, it is often preferable to refer to your numerical Field IDs by a human-readable name inside your JavaScript. Here's an example that retrieves the Custom Field 'truckNumber' defined inside the current event's schedule using the dbk.getCustomFieldIdByName() helper function:
var customFieldId = dbk.getCustomFieldIdByName('truckNumber',editEvent.schedule);
var roomNumber = editEvent[customFieldId];.
"content" is the HTML or string value that you want to show in the message bar.
"showDelay" and "hideDelay" expect a number value in milliseconds.
The "type" parameter can either be null for a normal message or 'error' for an error message with a red background.
"actionFunction" is the JavaScript function that you want to run if the message bar is clicked on. Can be null.
utilities.hideMessages([type]);
Parameters are:
'message' (default) - Hides any messages that have been queued for display in the alert bar.
'modal' - Hides any modal window that is being displayed on the calendar
utilities.getDBKPlatform();
Returns a string representing the platform of the user connected to DayBack.
Resulting values are:
'dbkfmwd': FileMaker WebDirect
'dbkfmjs': FileMaker Client
'dbksf': Salesforce
'dbko': Browser
utilities.popover(config, template); - Creates a custom floating popover or modal
The rich text editor example in the docs here is a great example of using this function in a custom action. You'll also find this in our dialog-with-buttons example.
Parameters are:
'config' - An object containing the popover configuration that contains the details of how the popover should behave
Config properties:
'template' - An HTML template that defines the content of the popover.
Buttons can have an "ng-click" property pointing to "popover.config.yourCustomFunction();', where 'yourCustomFunction' is a user-defined property in the config object.
environment Object
The environment object contains useful properties in custom actions.
environment.isMobileDevice
A property for determining if the user is on a mobile device. This is not a method and should be called like the below.
if ( environment.isMobileDevice ) {
}(‘multiSelect’);
This will return the "multiSelect" object, which contains a collection of objects that each have details on a selected events. If you just want the IDs of the events that are currently selected in multi-select, use this: Object.values(seedcodeCalendar.get('multiSelect')).map(a => a.event.eventID)
seedcodeCalendar.get('config');
Returns an object containing the global calendar configuration.
Config Properties:
.defaultTimedEventDuration - String: The default duration for new events
.databaseDateFormat - String: The default date format for the associated source
.isShare - Boolean: Is the calendar being viewed a share
.account - String: The account (email) of the logged in user
.accountName - String: The full name of the logged in user
.admin - Boolean: Is the logged in user an admin
.firstName - String: First name of the logged in user
.lastName - String: Last name of the logged in user
.isMobileDevice - Boolean: Is the user on a mobile device
Filter Objects
Resource and Status filters are stored as arrays of objects inside the seedcodeCalendar object. You can modify these objects, or build your own list of filters when the calendar starts up using an On Statuses Fetched or On Resources Fetched app action.
Below are the properties of a filter object:
- name - (required) string: The name of the filter.
- id - (optional/autogenerated) string: The id of the filter.
- sort - (optional) number: The desired position the filter should be placed.
- shortName - (optional) string: Resources only. The short name of the filter.
- class - (optional) string: Resources only. The CSS class to assign to the filter
- description - (optional) string: Resources only. The description for the filter. Can contain HTML.
- color - (optional) string: Statuses only. The RGB or RGBA value for the filter color.
- folderID - (optional) string: The folder ID that the filter belongs to.
- folderName - (optional) string: The folder Name that the filter belongs to.
- isFolder - (optional) boolean: Is this a folder. Defaults to false
- nameSafe - (read only) string: The name of the filter without special characters.
- display - (read only) string: The display value of the filter name.
- status - (optional) object: The state of the filter.
status object properties:
- selected - (optional) boolean: Is the filter selected (toggled on). Defaults to false.
- folderExpanded - (optional) boolean: Is the folder expanded. Defaults to false.
- tags - (optional) Array of objects: Resources Only. Tags to assign to the filter.
tag object properties:
- name - (required) string: The name of the tag.
- class - (optional/autogenerated) sting: The CSS class that should be assigned to the tag. ope
Note that if you are trying to update the popover from an async operation like a callback from an api request, you need to trigger a digest cycle so wrap your update in a $timeout as shown below.
$timeout();
Can be used to update data in view when that data is updated from an asynchronous operation. Setting data inside a $timeout will trigger a digest cycle and will update the view.
$timeout(function() { // Set the editEvent data here // // }, 0);
('
seedcodeCalendar.get('element').fullCalendar('refetchEvents');
This will refresh the whole calendar, retrieving all your events again. | https://docs.dayback.com/article/124-action-objects-methods | 2022-06-25T04:57:30 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.dayback.com |
Project Phases
What does a Glia project look like?What does a Glia project look like?
Phase 1 – Device Design & Prototype Creation
After deciding on a project that both addresses a need and is within our capabilities, the device design begins. Many cycles of design, redesign and prototyping go into the refinement process. This is often the shortest phase.
Phase 2 – Research & Ethics Board Application, Experimental Calibration & Validation
The design is further refined and validated through approved experimentation.
Phase 3 – Health Canada Approval, Publication
The device works but needs Health Canada approval to function as an accepted medical device. Studying the device ensures that we have a quality product that is proven to do what we claim. This process is often the longest.
Phase 4 – Assembly Instruction, Knowledge Translation & Product Development/Dissemination
The device is safe, it works, and we know it. Time to get it into the hands of the people! This is an ongoing process of knowledge translation and dissemination. | https://docs.glia.org/docs/getting-started/project-phases/ | 2022-06-25T04:39:21 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.glia.org |
Configure a VNet-to-VNet VPN gateway connection using PowerShell
This article helps PowerShell. You can also create this configuration using a different deployment tool or deployment model by selecting a different option from the following list: the VNet-to-VNet steps. When you use the Site-to the change..
Which VNet-to-VNet steps should I use?
In this article, you see two different sets of steps. One set of steps for VNets that reside in the same subscription and one for VNets that reside in different subscriptions. The key difference between the sets is that you must use separate PowerShell sessions when configuring the connections for VNets that reside in different subscriptions.
For this exercise, you can combine configurations, or just choose the one that you want to work with. All of the configurations use the VNet-to-VNet connection type. Network traffic flows between the VNets that are directly connected to each other. In this exercise, traffic from TestVNet4 does not route to TestVNet5.
VNets that reside in the same subscription: The steps for this configuration use TestVNet1 and TestVNet4.
VNets that reside in different subscriptions: The steps for this configuration use TestVNet1 and TestVNet5.
How to connect VNets that are in the same subscription
Before you begin.
Because it takes 45 minutes or more to create a gateway, Azure Cloud Shell will timeout periodically during this exercise. You can restart Cloud Shell by clicking in the upper left of the terminal. Be sure to redeclare any variables when you restart the terminal.
If you would rather install latest version of the Azure PowerShell module locally, see How to install and configure Azure PowerShell.
Step 1 - Plan your IP address ranges
In the following steps, you create two virtual networks along with their respective gateway subnets and configurations. You then create a VPN connection between the two VNets. It’s important to plan the IP address ranges for your network configuration. Keep in mind that you must make sure that none of your VNet ranges or local network ranges overlap in any way. In these examples, we do not include a DNS server. If you want name resolution for your virtual networks, see Name resolution.
We use the following values in the examples:
Values for TestVNet1:
- VNet Name: TestVNet1
- Resource Group: TestRG1
- Location: East US
- TestVNet1: 10.11.0.0/16 & 10.12.0.0/16
- FrontEnd: 10.11.0.0/24
- BackEnd: 10.12.0.0/24
- GatewaySubnet: 10.12.255.0/27
- GatewayName: VNet1GW
- Public IP: VNet1GWIP
- VPNType: RouteBased
- Connection(1to4): VNet1toVNet4
- Connection(1to5): VNet1toVNet5 (For VNets in different subscriptions)
- ConnectionType: VNet2VNet
Values for TestVNet4:
- VNet Name: TestVNet4
- TestVNet2: 10.41.0.0/16 & 10.42.0.0/16
- FrontEnd: 10.41.0.0/24
- BackEnd: 10.42.0.0/24
- GatewaySubnet: 10.42.255.0/27
- Resource Group: TestRG4
- Location: West US
- GatewayName: VNet4GW
- Public IP: VNet4GWIP
- VPNType: RouteBased
- Connection: VNet4toVNet1
- ConnectionType: VNet2VNet
Step 2 - Create and configure TestVNet1
Verify your subscription settings.
Connect to your account if you are running PowerShell locally on your computer. If you are using Azure Cloud Shell, you are connected automatically.
Connect-AzAccount
Check the subscriptions for the account.
Get-AzSubscription
If you have more than one subscription, specify the subscription that you want to use.
Select-AzSubscription -SubscriptionName nameofsubscription
Declare your variables. This example declares the variables using the values for this exercise. In most cases, you should replace the values with your own. However, you can use these variables if you are running through the steps to become familiar with this type of configuration. Modify the variables if needed, then copy and paste them into your PowerShell console.
$RG1 = "TestRG1" $Location1 = "East US" $VNetName1 = "TestVNet1" $FESubName1 = "FrontEnd" $BESubName1 = "Backend" $VNetPrefix11 = "10.11.0.0/16" $VNetPrefix12 = "10.12.0.0/16" $FESubPrefix1 = "10.11.0.0/24" $BESubPrefix1 = "10.12.0.0/24" $GWSubPrefix1 = "10.12.255.0/27" $GWName1 = "VNet1GW" $GWIPName1 = "VNet1GWIP" $GWIPconfName1 = "gwipconf1" $Connection14 = "VNet1toVNet4" $Connection15 = "VNet1toVNet5"
Create a resource group.
New-AzResourceGroup -Name $RG1 -Location $Location1
Create the subnet configurations for TestVNet1. This example. For this reason, it is not assigned via variable below.
The following example uses the variables that you set earlier. In this example, the gateway subnet is using a .
"GatewaySubnet" -AddressPrefix $GWSubPrefix1
Create TestVNet1.
New-AzVirtualNetwork -Name $VNetName1 -ResourceGroupName $RG1 ` -Location $Location1 -AddressPrefix $VNetPrefix11,$VNetPrefix12 -Subnet $fesub1,$besub1,$gwsub1
Request a public IP address to be allocated to the gateway you will create for your VNet. Notice that the AllocationMethod is Dynamic. You cannot specify the IP address that you want to use. It's dynamically allocated to your gateway.
$gwpip1 = New-AzPublicIpAddress -Name $GWIPName1 -ResourceGroupName $RG1 ` -Location $Location1 -AllocationMethod Dynamic
Create the gateway configuration. The gateway configuration defines the subnet and the public IP address to use. Use the example to create your gateway configuration.
$vnet1 = Get-AzVirtualNetwork -Name $VNetName1 -ResourceGroupName $RG1 $subnet1 = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet1 $gwipconf1 = New-AzVirtualNetworkGatewayIpConfig -Name $GWIPconfName1 ` -Subnet $subnet1 -PublicIpAddress $gwpip1
Create the gateway for TestVNet1. In this step, you create the virtual network gateway for your TestVNet1. VNet-to-VNet configurations require a RouteBased VpnType. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU.
New-AzVirtualNetworkGateway -Name $GWName1 -ResourceGroupName $RG1 ` -Location $Location1 -IpConfigurations $gwipconf1 -GatewayType Vpn ` -VpnType RouteBased -GatewaySku VpnGw1
After you finish the commands, it will take 45 minutes or more to create this gateway. If you are using Azure Cloud Shell, you can restart your Cloud Shell session by clicking in the upper left of the Cloud Shell terminal, then configure TestVNet4. You don't need to wait until the TestVNet1 gateway completes.
Step 3 - Create and configure TestVNet4
Once you've configured TestVNet1, create TestVNet4. Follow the steps below, replacing the values with your own when needed.
Connect and declare your variables. Be sure to replace the values with the ones that you want to use for your configuration.
$RG4 = "TestRG4" $Location4 = "West US" $VnetName4 = "TestVNet4" $FESubName4 = "FrontEnd" $BESubName4 = "Backend" $VnetPrefix41 = "10.41.0.0/16" $VnetPrefix42 = "10.42.0.0/16" $FESubPrefix4 = "10.41.0.0/24" $BESubPrefix4 = "10.42.0.0/24" $GWSubPrefix4 = "10.42.255.0/27" $GWName4 = "VNet4GW" $GWIPName4 = "VNet4GWIP" $GWIPconfName4 = "gwipconf4" $Connection41 = "VNet4toVNet1"
Create a resource group.
New-AzResourceGroup -Name $RG4 -Location $Location4
Create the subnet configurations for TestVNet4.
$fesub4 = New-AzVirtualNetworkSubnetConfig -Name $FESubName4 -AddressPrefix $FESubPrefix4 $besub4 = New-AzVirtualNetworkSubnetConfig -Name $BESubName4 -AddressPrefix $BESubPrefix4 $gwsub4 = New-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -AddressPrefix $GWSubPrefix4
Create TestVNet4.
New-AzVirtualNetwork -Name $VnetName4 -ResourceGroupName $RG4 ` -Location $Location4 -AddressPrefix $VnetPrefix41,$VnetPrefix42 -Subnet $fesub4,$besub4,$gwsub4
Request a public IP address.
$gwpip4 = New-AzPublicIpAddress -Name $GWIPName4 -ResourceGroupName $RG4 ` -Location $Location4 -AllocationMethod Dynamic
Create the gateway configuration.
$vnet4 = Get-AzVirtualNetwork -Name $VnetName4 -ResourceGroupName $RG4 $subnet4 = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet4 $gwipconf4 = New-AzVirtualNetworkGatewayIpConfig -Name $GWIPconfName4 -Subnet $subnet4 -PublicIpAddress $gwpip4
Create the TestVNet4 gateway. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU.
New-AzVirtualNetworkGateway -Name $GWName4 -ResourceGroupName $RG4 ` -Location $Location4 -IpConfigurations $gwipconf4 -GatewayType Vpn ` -VpnType RouteBased -GatewaySku VpnGw1
Step 4 - Create the connections
Wait until both gateways are completed. Restart your Azure Cloud Shell session and copy and paste the variables from the beginning of Step 2 and Step 3 into the console to redeclare values.
Get both virtual network gateways.
$vnet1gw = Get-AzVirtualNetworkGateway -Name $GWName1 -ResourceGroupName $RG1 $vnet4gw = Get-AzVirtualNetworkGateway -Name $GWName4 -ResourceGroupName $RG4
Create the TestVNet1 to TestVNet4 connection. In this step, you create the connection from TestVNet1 to TestVNet4. You'll see a shared key referenced in the examples. You can use your own values for the shared key. The important thing is that the shared key must match for both connections. Creating a connection can take a short while to complete.
New-AzVirtualNetworkGatewayConnection -Name $Connection14 -ResourceGroupName $RG1 ` -VirtualNetworkGateway1 $vnet1gw -VirtualNetworkGateway2 $vnet4gw -Location $Location1 ` -ConnectionType Vnet2Vnet -SharedKey 'AzureA1b2C3'
Create the TestVNet4 to TestVNet1 connection. This step is similar to the one above, except you are creating the connection from TestVNet4 to TestVNet1. Make sure the shared keys match. The connection will be established after a few minutes.
New-AzVirtualNetworkGatewayConnection -Name $Connection41 -ResourceGroupName $RG4 ` -VirtualNetworkGateway1 $vnet4gw -VirtualNetworkGateway2 $vnet1gw -Location $Location4 ` -ConnectionType Vnet2Vnet -SharedKey 'AzureA1b2C3'
Verify your connection. See the section How to verify your connection.
How to connect VNets that are in different subscriptions
In this scenario, you connect TestVNet1 and TestVNet5. TestVNet1 and TestVNet5 reside in different subscriptions. The subscriptions do not need to be associated with the same Active Directory tenant.
The difference between these steps and the previous set is that some of the configuration steps need to be performed in a separate PowerShell session in the context of the second subscription. Especially when the two subscriptions belong to different organizations.
Due to changing subscription context in this exercise, you may find it easier to use PowerShell locally on your computer, rather than using the Azure Cloud Shell, when you get to Step 8.
Step 5 - Create and configure TestVNet1
You must complete Step 1 and Step 2 from the previous section to create and configure TestVNet1 and the VPN Gateway for TestVNet1. For this configuration, you are not required to create TestVNet4 from the previous section, although if you do create it, it will not conflict with these steps. Once you complete Step 1 and Step 2, continue with Step 6 to create TestVNet5.
Step 6 - Verify the IP address ranges
It is important to make sure that the IP address space of the new virtual network, TestVNet5, does not overlap with any of your VNet ranges or local network gateway ranges. In this example, the virtual networks may belong to different organizations. For this exercise, you can use the following values for the TestVNet5:
Values for TestVNet5:
- VNet Name: TestVNet5
- Resource Group: TestRG5
- Location: Japan East
- TestVNet5: 10.51.0.0/16 & 10.52.0.0/16
- FrontEnd: 10.51.0.0/24
- BackEnd: 10.52.0.0/24
- GatewaySubnet: 10.52.255.0.0/27
- GatewayName: VNet5GW
- Public IP: VNet5GWIP
- VPNType: RouteBased
- Connection: VNet5toVNet1
- ConnectionType: VNet2VNet
Step 7 - Create and configure TestVNet5
This step must be done in the context of the new subscription. This part may be performed by the administrator in a different organization that owns the subscription.
Declare your variables. Be sure to replace the values with the ones that you want to use for your configuration.
$Sub5 = "Replace_With_the_New_Subscription_Name" $RG5 = "TestRG5" $Location5 = "Japan East" $VnetName5 = "TestVNet5" $FESubName5 = "FrontEnd" $BESubName5 = "Backend" $GWSubName5 = "GatewaySubnet" $VnetPrefix51 = "10.51.0.0/16" $VnetPrefix52 = "10.52.0.0/16" $FESubPrefix5 = "10.51.0.0/24" $BESubPrefix5 = "10.52.0.0/24" $GWSubPrefix5 = "10.52.255.0/27" $GWName5 = "VNet5GW" $GWIPName5 = "VNet5GWIP" $GWIPconfName5 = "gwipconf5" $Connection51 = "VNet5toVNet1"
Connect to subscription 5. Open your PowerShell console and connect to your account. Use the following sample to help you connect:
Connect-AzAccount
Check the subscriptions for the account.
Get-AzSubscription
Specify the subscription that you want to use.
Select-AzSubscription -SubscriptionName $Sub5
Create a new resource group.
New-AzResourceGroup -Name $RG5 -Location $Location5
Create the subnet configurations for TestVNet5.
$fesub5 = New-AzVirtualNetworkSubnetConfig -Name $FESubName5 -AddressPrefix $FESubPrefix5 $besub5 = New-AzVirtualNetworkSubnetConfig -Name $BESubName5 -AddressPrefix $BESubPrefix5 $gwsub5 = New-AzVirtualNetworkSubnetConfig -Name $GWSubName5 -AddressPrefix $GWSubPrefix5
Create TestVNet5.
New-AzVirtualNetwork -Name $VnetName5 -ResourceGroupName $RG5 -Location $Location5 ` -AddressPrefix $VnetPrefix51,$VnetPrefix52 -Subnet $fesub5,$besub5,$gwsub5
Request a public IP address.
$gwpip5 = New-AzPublicIpAddress -Name $GWIPName5 -ResourceGroupName $RG5 ` -Location $Location5 -AllocationMethod Dynamic
Create the gateway configuration.
$vnet5 = Get-AzVirtualNetwork -Name $VnetName5 -ResourceGroupName $RG5 $subnet5 = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet5 $gwipconf5 = New-AzVirtualNetworkGatewayIpConfig -Name $GWIPconfName5 -Subnet $subnet5 -PublicIpAddress $gwpip5
Create the TestVNet5 gateway.
New-AzVirtualNetworkGateway -Name $GWName5 -ResourceGroupName $RG5 -Location $Location5 ` -IpConfigurations $gwipconf5 -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1
Step 8 - Create the connections
In this example, because the gateways are in the different subscriptions, we've split this step into two PowerShell sessions marked as [Subscription 1] and [Subscription 5].
[Subscription 1] Get the virtual network gateway for Subscription 1. Sign in and connect to Subscription 1 before running the following example:
$vnet1gw = Get-AzVirtualNetworkGateway -Name $GWName1 -ResourceGroupName $RG1
Copy the output of the following elements and send these to the administrator of Subscription 5 via email or another method.
$vnet1gw.Name $vnet1gw.Id
These two elements will have values similar to the following example output:
PS D:\> $vnet1gw.Name VNet1GW PS D:\> $vnet1gw.Id /subscriptions/b636ca99-6f88-4df4-a7c3-2f8dc4545509/resourceGroupsTestRG1/providers/Microsoft.Network/virtualNetworkGateways/VNet1GW
[Subscription 5] Get the virtual network gateway for Subscription 5. Sign in and connect to Subscription 5 before running the following example:
$vnet5gw = Get-AzVirtualNetworkGateway -Name $GWName5 -ResourceGroupName $RG5
Copy the output of the following elements and send these to the administrator of Subscription 1 via email or another method.
$vnet5gw.Name $vnet5gw.Id
These two elements will have values similar to the following example output:
PS C:\> $vnet5gw.Name VNet5GW PS C:\> $vnet5gw.Id /subscriptions/66c8e4f1-ecd6-47ed-9de7-7e530de23994/resourceGroups/TestRG5/providers/Microsoft.Network/virtualNetworkGateways/VNet5GW
[Subscription 1] Create the TestVNet1 to TestVNet5 connection. In this step, you create the connection from TestVNet1 to TestVNet5. The difference here is that $vnet5gw cannot be obtained directly because it is in a different subscription. You will need to create a new PowerShell object with the values communicated from Subscription 1 in the steps above. Use the example below. Replace the Name, ID, and shared key with your own values. The important thing is that the shared key must match for both connections. Creating a connection can take a short while to complete.
Connect to Subscription 1 before running the following example:
$vnet5gw = New-Object -TypeName Microsoft.Azure.Commands.Network.Models.PSVirtualNetworkGateway $vnet5gw.Name = "VNet5GW" $vnet5gw.Id = "/subscriptions/66c8e4f1-ecd6-47ed-9de7-7e530de23994/resourceGroups/TestRG5/providers/Microsoft.Network/virtualNetworkGateways/VNet5GW" $Connection15 = "VNet1toVNet5" New-AzVirtualNetworkGatewayConnection -Name $Connection15 -ResourceGroupName $RG1 -VirtualNetworkGateway1 $vnet1gw -VirtualNetworkGateway2 $vnet5gw -Location $Location1 -ConnectionType Vnet2Vnet -SharedKey 'AzureA1b2C3'
[Subscription 5] Create the TestVNet5 to TestVNet1 connection. This step is similar to the one above, except you are creating the connection from TestVNet5 to TestVNet1. The same process of creating a PowerShell object based on the values obtained from Subscription 1 applies here as well. In this step, be sure that the shared keys match.
Connect to Subscription 5 before running the following example:
$vnet1gw = New-Object -TypeName Microsoft.Azure.Commands.Network.Models.PSVirtualNetworkGateway $vnet1gw.Name = "VNet1GW" $vnet1gw.Id = "/subscriptions/b636ca99-6f88-4df4-a7c3-2f8dc4545509/resourceGroups/TestRG1/providers/Microsoft.Network/virtualNetworkGateways/VNet1GW " $Connection51 = "VNet5toVNet1" New-AzVirtualNetworkGatewayConnection -Name $Connection51 -ResourceGroupName $RG5 -VirtualNetworkGateway1 $vnet5gw -VirtualNetworkGateway2 $vnet1gw -Location $Location5 -ConnectionType Vnet2Vnet -SharedKey 'AzureA1b2C3'
How to verify a tenants?
Yes, VNet-to-VNet connections that use Azure VPN gateways work across Azure AD tenants.
Is VNet-to-VNet traffic secure?
Yes, it
- Once your connection is complete, you can add virtual machines to your virtual networks. See the Virtual Machines documentation for more information.
- For information about BGP, see the BGP Overview and How to configure BGP.
Feedback
Submit and view feedback for | https://docs.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-vnet-vnet-rm-ps | 2022-06-25T05:52:02 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.microsoft.com |
Requirements and Limitations
- Before starting the process please make sure you have the AI Center service enabled as explained in the Enabling the Service in Automation Cloud page.
- The default number of files that can be simultaneously uploaded is set up to 5 in order to enhance the flow's performance. Furthermore, a speed metric is displayed on the Upload page to provide information about the uploaded files/minute.
- The selected directory must contain .zip files so the Uploader can access and manipulate the files. The application won't recognize and upload nested folders.
- For a Data Analysis, the datasets you upload should contain at least 10,000 actions, but we recommend 50K actions as a good baseline. The model can support up to 200K actions collected in a given project. As a quick way of estimating, an active user typically provides 200-1K actions per hour, depending on the scenario they are recording.
Authenticating in the StudyUploader
- Access the PC location where the StudyUploader.exe is saved. Run it.
- The authentication pop up is displayed. Click the Sign In button to proceed.
- Your system default web browser opens to complete the Cloud sign-in.
If the browser does not open, copy/paste the URL displayed in the Uploader into your browser.
- If your organization contains several tenants, select the tenant where Task Mining and AI Center services are enabled.
- Select the Project you created in AI Center.
- Select the following details related to your project:
- the name of the project you created in the Task Mining instance;
- the name of the project you created in AI Center;
- the name of the Dataset created for the AI Center project.
- Click Start upload.
- Wait for the files to be transferred.
- Once all the files are uploaded a confirmation message is displayed. Close the window.
Video Tutorial
Check out the below video that presents the Task Mining service process of sharing the data through the integration with the AI Center service:
Caution:
For the analysis to work, it's very important to keep the files as they are and not to change them in any way, not even zipping them again.
Details about the JSON file
The JSON file contained in the ZIP along the StudyUploader executable contains its configuration and application data path to write logs.
Warning!
Auth and Endpoints sections of the JSON file should not be modified in any way.
In the JSON file you can find the default path to write application logs: %AppData%\Task Mining Study Uploader
The Limits section sets the minimum and maximum threshold of how many files (user actions) should be uploaded in a single upload run.
The Uploading section sets how many files could be uploaded simultaneously (MaxParallelUploads), how many times a single file that has failed to upload will be attempted to be uploaded again (MaxFileUploadAttempts), and how many files could be skipped during the upload once they reach the MaxFileUploadAttempts before the uploader application would throw an error (FailedFilesThreshold).
You can modify the MaxParallelUploads parameter to control the network bandwidth used by the Uploader app. Lowering this number could improve the OS performance during the upload but will slow down the upload process. Increasing the number could speedup the upload when a fast network connection is available.
Uploader Network Bandwith Usage
Uploader application requires a minimum of 8 Mbps internet connection speed, >40 Mbps recommended.
Logs and Troubleshooting
You can find details about the upload process in the application logs. After the first application execution, the logs are created in the AppData directory of the user who launched it:
C:\Users{username}\AppData\Roaming\Task Mining Study Uploader
or copy and paste the following path into the File Explorer:
%AppData%\Task Mining Study Uploader
In case of any issues with the Uploader, share the logs folder content with the support team.
In the case of network and connectivity issues, you can cancel the upload or close the app to continue the upload later or retry it - files already uploaded to the selected dataset won't be uploaded repeatedly.
Updated 6 months ago | https://docs.uipath.com/task-mining/docs/using-studyuploader | 2022-06-25T04:34:23 | CC-MAIN-2022-27 | 1656103034170.1 | [array(['https://files.readme.io/eb9a7b2-Screenshot_3.png',
'Screenshot_3.png'], dtype=object)
array(['https://files.readme.io/eb9a7b2-Screenshot_3.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/312e2d4-Uploader_settings_json.png',
'Uploader_settings_json.png'], dtype=object)
array(['https://files.readme.io/312e2d4-Uploader_settings_json.png',
'Click to close...'], dtype=object) ] | docs.uipath.com |
Get-BrokerRebootCycle¶
Gets one or more reboot cycles.
Syntax¶
Get-BrokerRebootCycle -Uid <Int64> [-Property <String[]>] [-AdminAddress <String>] [-BearerToken <String>] [-TraceParent <String>] [-TraceState <String>] [-VirtualSiteId <String>] [<CommonParameters>] Get-BrokerRebootCycle [-CatalogName <String>] [-CatalogUid <Int32>] [-DesktopGroupName <String>] [-DesktopGroupUid <Int32>] [-EndTime <DateTime>] [-IgnoreMaintenanceMode <Boolean>] [-MachinesCompleted <Int32>] [-MachinesFailed <Int32>] [-MachinesInProgress <Int32>] [-MachinesPending <Int32>] [-MachinesSkipped <Int32>] [-Metadata <String>] [-RebootDuration <Int32>] [-RebootScheduleName <String>] [-RebootScheduleUid <Int32>] [-RestrictToTag <String>] [-StartTime <DateTime>] [-State <RebootCycleState>] [Cycle cmdlet is used to enumerate reboot cycles that match all of the supplied criteria.
See about_Broker_Filtering for information about advanced filtering options.
Brokerrebootcycle Object¶
The reboot cycle object returned represents a single occurrence of the process of rebooting a portion (or all) of the machines in a desktop group.
CatalogName (System.String) Name of the catalog whose machines are rebooted by this cycle if the cycle is associated with a catalog.
CatalogUid (System.Int32?) Uid of the catalog whose machines are rebooted by this cycle if the cycle is associated with a catalog.
DesktopGroupName (System.String) Name of the desktop group whose machines are rebooted by this cycle.
DesktopGroupUid (System.Int32) Uid of the desktop group whose machines are rebooted by this cycle.
EndTime (System.DateTime?) Time at which this cycle was completed, canceled or abandoned.
IgnoreMaintenanceMode (System.Boolean) Boolean value to optionally reboot machines in maintenance mode
MachinesCompleted (System.Int32) Number of machines successfully rebooted by this cycle.
MachinesFailed (System.Int32) Number of machines issued with reboot requests where either the request failed or the operation did not complete within the allowed time.
MachinesInProgress (System.Int32) Number of machines issued with reboot requests but which have not yet completed the operation.
MachinesPending (System.Int32) Number of outstanding machines to be rebooted during the cycle but on which processing has not yet started.
MachinesSkipped (System.Int32) Number of machines scheduled for reboot during the cycle but which were not processed either because the cycle was canceled or abandoned or because the machine was unavailable for reboot processing throughout the cycle.
MetadataMap (System.Collections.Generic.Dictionary<string, string>) Map of metadata associated with this cycle.
RebootDuration (System.Int32) Approximate maximum number of minutes over which the reboot cycle runs.
RebootScheduleName (System.String) Name of the Reboot Schedule which triggered this cycle if the cycle is associated with a reboot schedule.
RebootScheduleUid (System.Int32?) Uid of the Reboot Schedule which triggered this cycle if the cycle is associated with a reboot schedule.
RestrictToTag (System.String) An optional Tag which limits the reboot cycle to machines within the desktop group with the specified tag.
StartTime (System.DateTime) Time of day at which this reboot cycle was started.
State (Citrix.Broker.Admin.SDK.RebootCycleState) The execution state of this cycle.
Uid (System.Int64) Unique ID of this reboot cycle.
WarningDuration (System.Int32) Number of minutes to display the warning message for.
WarningMessage (System.String) Warning message to display to users in active sessions prior to rebooting the machine.
WarningRepeatInterval (System.Int32) Number of minutes to wait before showing the reboot warning message again.
WarningTitle (System.String) Title of the warning message dialog.
Related Commands¶
Parameters¶
Input Type¶
None¶
Input cannot be piped to this cmdlet.
Return Values¶
Citrix.Broker.Admin.Sdk.Rebootcycle¶
Returns matching reboot cycles.
Examples¶
Example 1¶
C:\PS> Get-BrokerRebootCycle
Description¶
Enumerate all reboot cycles.
Example 2¶
C:\PS> Get-BrokerRebootCycle -State Completed
Description¶
Enumerates all reboot cycles that have successfully completed.
Example 3¶
C:\PS> Get-BrokerRebootCycle -DesktopGroupName CallCenter
Description¶
Enumerates all reboot cycles related to the desktop group named CallCenter. | https://developer-docs.citrix.com/projects/citrix-virtual-apps-desktops-sdk/en/latest/Broker/Get-BrokerRebootCycle/ | 2022-06-25T04:01:11 | CC-MAIN-2022-27 | 1656103034170.1 | [] | developer-docs.citrix.com |
ovirt.ovirt.ovirt_permission module – Module to manage permissions of users/groups in oVirt/RHV_permission.
New in version 1.0.0: of ovirt.ovirt
Synopsis
Module to manage permissions of users/groups in oVirt/RHV.
Requirements
The below requirements are needed on the host that executes this module.
python >= 2.7
ovirt-engine-sdk-python >= 4.4.0
Parameters
Notes
Note: - name: Add user user1 from authorization provider example.com-authz ovirt.ovirt.ovirt_permission: user_name: user1 authz_name: example.com-authz object_type: vm object_name: myvm role: UserVmManager - name: Remove permission from user ovirt.ovirt.ovirt_permission: state: absent user_name: user1 authz_name: example.com-authz object_type: cluster object_name: mycluster role: ClusterAdmin - name: Assign QuotaConsumer role to user ovirt.ovirt.ovirt_permissions: state: present user_name: user1 authz_name: example.com-authz object_type: data_center object_name: mydatacenter quota_name: myquota role: QuotaConsumer - name: Assign QuotaConsumer role to group ovirt.ovirt.ovirt_permissions: state: present group_name: group1 authz_name: example.com-authz object_type: data_center object_name: mydatacenter quota_name: myquota role: QuotaConsumer - ovirt.ovirt.ovirt_permission: user_name: user1 authz_name: example.com-authz object_type: mac_pool object_name: Default role: MacPoolUser
Return Values
Common return values are documented here, the following are the fields unique to this module:
Collection links
Issue Tracker Homepage Repository (Sources) | https://docs.ansible.com/ansible/latest/collections/ovirt/ovirt/ovirt_permission_module.html | 2022-06-25T04:04:21 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.ansible.com |
Advanced Search
Search Results
605 total results found
FeetPort Web Console
FeetPort Android App
FeetPort Chatbot
FeetPort KB
Knowledge base about the basics of FeetPort and beyond.
FeetPort iOS
FAQs
Commonly asked queries which are asked by users of FeetPort on Web & Android.
Configuration
Learn to setup FeetPort
How To Guides
Quickly learn about using FeetPort
Tasks
Team
Location Analytics
Expenses
Collection
Presence
Performance
Reports & Analysis
Users & Identity
Learning
Basics
Attendance
Activities
Contacts
Products
Learning
Calendar
Collection
Templates
SPAM
Tasks
SPAM 2
SPAM 3
Communication Center
Performance
Activity
Presence
Workflow
Purchase Orders
SPAM
Data APIs
Chatbot
Team
Android App
iPhone App
Web Console
Admin Portal
Activity Interfaces
Interface defines the entire set of data fields whose values can be transferred to a form by Fee...
Identity Manager
MOBILE USER Mobile users are those field workers who works in the field, whether that field is a...
How to create Order Form in Activity and map with activity?
Order Management is a very critical process for any organization dealing with Sale of products or...
Create Collection
FeetPort facilitates you to create your own database from back end and map it with your Activity....
How to add Scores to a task in activity?
Scoring gives an Extra boost to the team. FeetPort has this unique feature to include/ define sco...
How to define OTP for closure of any task ?
If you have a requirement to validate the information by sending an OTP SMS on your customer's/ p...
Configure Email
Want to send Email to your customer / partner, after every visit done by your field team, don't w...
Why does battery optimization require to be disabled?
Battery optimization settings helps conserve battery power which is why it is turned on by defaul...
How to send broadcast messages to mobile users?
Web users can send communication by composing messages and send them to one user or multiple user...
Create, Upload & Assign Tasks in Urva
FeetPort facilitates you with option to allocate task in bulk, hence multiple tasks can be assign...
View, Edit, Filter Tasks on Urva web
Detailed view of a task in Urva- It consists of full details of a particular task, which are fil...
Export tasks in PDF from Urva web and mobile app
Task can be exported in the form of PDF file. Task can be exported in excel format for any territ...
How to install URVA1 on your device?
One can easily download the URVA1 app on their device.Please visit the following links to downloa...
How to login to the Urva Android App?
Login in Urva1: Open Urva App on after installation. Click on 'USER SIGN IN'. A logi...
How to login in Urva Web Portal?
To login on 'Urva Web Portal', please visit the link mentioned below and follow the steps: Go to...
I am unable to login to Urva. What shouId I do?
Sometimes a user faces issue of not been able to sign in to Urva application on mobile device. P...
Why Urva Mobile App showing "Please enable location services from settings"?
Location Settings needs to be enabled at High Accuracy for carrying out many activities on the Ur...
Why am I getting a Prompt to Enable Location when its already Enabled?
If you are getting the message even if your location is enabled, then it indicates that your loca...
Why I am getting "User account already is use. Try with another or contact your admin" Error?
A User will get this error when he will try to login from a different device without logging out ...
Why am I getting "Username and password combination is invalid" error?
Field User face this error message when he is using incorrect FeetPort User ID or Password to log... | https://docs.feetport.com/search | 2022-06-25T05:24:16 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.feetport.com |
Interest
Interest is an analytics extension that provides contextual visualization of your interest position.
Do note that all the charting in this extension are based on the current savings amount at the month or year selected, not the historical amount that are in these accounts. The figures are only useful if you're just starting out with interest assessments, if the amount saved in these savings did not change drastically for the period that is in scope and/or if you're aware of the changes made and are using these visualization to analyse the impact of changes.
Date scope and annualizationDate scope and annualization
The annualized figure in this extension is calculated on a pro rata basis, based on the month or year in scope. If the year in scope is the current year, the pro-rata is calculated based on actual current month. For previous years, there's no pro-ration as the entire year's interest is considered.
Summary viewSummary view
The summary view shows the actual interest that is received for the period that you chose.
These percentages are calculated by summing up all of the interest that had been received in the interest income account (default:
Interest:Income).
Detailed viewDetailed view
There are 4 charts in the detailed view.
Savings balances (pre-interest)Savings balances (pre-interest)
This chart shows you the current balance that is in your savings accounts (default:
Assets:Savings).
Interests receivedInterests received
This chart shows you the interest received for each savings account. The interest income needs to be credited to the corresponding interest income account (default:
Interest:Income:*). See sample below for more info.
Contribution to interestContribution to interest
This chart shows how each of the interest income received contributes to the overall interest received. Here, we see that interest from RosettaBank makes up 68.63% of all interest received.
Annualized receivedAnnualized received
This chart shows the annualized interest received for each savings account in % form. This is useful to check the actual interest rate that is achieved by each savings account defined in a way that is conventional for interest rates (annualized).
Sample journalSample journal
The summary and detailed charts above can be reproduced in your Prudent client locally with this sample journal (with settings file, to be placed together in the same directory).
UsefulnessUsefulness
The overall interest summary provides a quick high-level overview of the interest performance of your savings whereas the detailed view shows this for each savings account.
Going through the detailed view, you can potentially identify areas that can be optimised. There are many scenarios that looking at each chart or a combination of charts can help make things better.
For example, with the Savings balances view, you can see if your savings are distributed evenly among the savings accounts or has clustered around a couple of banks or less. This may trigger some thoughts around the reasons for a cluster, especially if there are limits to which savings are insured in a particular bank (distributing the risk may be desirable).
Looking at the Interest received view, you can identify banks that gives poor interest returns and take action to move the funds saved or reclassify the account as a current account (for which interest return is not the objective).
Contribution to interest shows where the interests are coming from. This may not necessarily correspond to the amount saved and further enforces the performance of each savings account as you go thorough the charts.
Lastly, the Annualized interest provides straight to the point and immediately actionable performance differences in % form. This differs from the Interest received view as different weightings are applied, giving different perspectives to the same reality. | https://docs.prudent.me/docs/addons/interest | 2022-06-25T05:38:15 | CC-MAIN-2022-27 | 1656103034170.1 | [array(['/img/InterestSummary.png', None], dtype=object)
array(['/img/SavingBalances.png', None], dtype=object)
array(['/img/InterestReceived.png', None], dtype=object)
array(['/img/ContributionToInterest.png', None], dtype=object)
array(['/img/AnnualizedInterest.png', None], dtype=object)] | docs.prudent.me |
The following information is meant to aid administrators when determining the number and locations of Umbrella virtual appliances (VAs) in their environment. The key factors are ensuring the hardware prerequisites are met, the network latencies between hops, as well as the overall number of Umbrella Sites and users for each VA.
What is an Umbrella Virtual Appliance
The VA is a non-caching conditional DNS forwarder. It is a virtualized machine that uses Ubuntu as its OS and lives in a virtualized environment. Its purpose is to append identity information to external queries sent up to the Umbrella servers.
Minimum virtualized hardware requirements:
- Number of dedicated CPU cores per VA: 1
- Amount of memory per VA: 512MB (1GB recommended)
- Hard drive space per VA: 7GB
For sizing guidance, increasing the number of CPU cores on a VA will improve its performance, but the amount of RAM allocated to the machine must scale along with the number of CPU cores present. It is required that at least 512MB of RAM be allocated per CPU core on the VA. For example, a VA deployed with two CPU cores should have a minimum of 1GB of RAM allocated to it. While this is the minimum requirement, it is recommended that you configure 1GB of RAM per core.
VAs deployed on platforms such as Amazon Web Services and Google Cloud Platform require a minimum of 1GB RAM per CPU core.
A high-traffic site is one that has more than 500 DNS queries per second coming from clients pointed at the pair of VAs.
High-traffic sites with VAs should use multiple virtual CPUs and corresponding RAM per VA as per the following sizing table.
1 CPU, 1GB RAM
2300
2 CPU, 2 GB RAM
5000
4 CPU, 4 GB RAM
9000
8 CPU, 8 GB RAM
16000
16 CPU, 16 GB RAM
28000
Network Prerequisites
In order for the VAs to properly communicate with Umbrella for information and updates, review the applicable network prerequisites.
Deployment Considerations
The number and location of VAs deployed in your environment will depend on the following:
- Overall latency:
- Latency between VA and the Umbrella Anycast DNS resolvers
- Latency between users and the VAs
- Number of Umbrella Sites
- Number of users served by the VAs
Overall Latency
In general, clients on the network have the best web browsing experience when the total time to retrieve web resources is under 300ms. This total time to obtain web resources (such as documents, images, and stylesheets) includes both the time to retrieve a DNS response and the time needed to establish a connection with the server indicated in the DNS response. Umbrella aims to minimize the distance that a DNS packet must travel from a client device to our DNS resolvers. However, we do not control the responsiveness of those web servers or how traffic from various locations on the Internet is routed.
TOTAL TIME = Time to retrieve DNS response + Time to retrieve a web resource
There are two factors to consider when optimizing DNS response time: the distance between the VA and the Umbrella Anycast DNS resolvers, and the distance between the client and the VA.
The VA, when deployed, will forward all externally-bound DNS requests to the Umbrella DNS resolvers, 208.67.222.222 and 208.67.220.220. Therefore, when determining the latency between the VA and the closest Umbrella data center, we recommend an average DNS response time under 150ms for the best user experience.
When determining where to deploy VAs in your environment, you will want to take into account the distance between the clients that will utilize the VAs and the VAs themselves. For optimal performance, an average ping time between a client and the VM host on which the VA lives should not exceed 50ms.
Number of Umbrella Sites
Umbrella's sites allow administrators to segregate their Umbrella deployments. Each Umbrella Site is an isolated deployment in which the components will only communicate with other components in the same Umbrella Site. This is primarily useful in environments containing locations with high-latency connections or in environments with locations whose internal IP space overlaps.
We require each Umbrella site to have at least two VAs deployed. This ensures high availability and that VAs are receiving timely updates from Umbrella.
Number of Users per VA
A typical VA deployed with minimum hardware requirements has a tested throughput of at least 2000 queries per second.
Taking into account these metrics, a single VA can handle DNS requests from at least 57,000 concurrent users. Umbrella defines a single user as a client that generates an average of 3000 DNS requests in a typical eight-hour work day. Therefore, Umbrella defines concurrent users as the number of users or devices sending DNS requests to a VA at the same time.
If the VA specifications are increased to two virtual CPUs and 1GB RAM, a single VA will be able to handle DNS requests from at least 115,000 concurrent users. The number of users on the network likely will NOT be the limiting factor when determining the number of VAs to deploy.
Updates < Sizing Guide > SNMP Monitoring
Updated about a month ago | https://docs.umbrella.com/deployment-umbrella/docs/appx-b-sizing-guide | 2022-06-25T05:31:28 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.umbrella.com |
Armature Deform Parent¶
Reference
- Mode
Object Mode and Pose Mode
- Hotkey
Ctrl-P
Armature Deform Parenting is a way of creating and setting up an Armature Modifier.
To use Armature Deform Parenting you must first select all the child objects that will be influenced by the armature and then lastly, select the armature object itself. Once all the child objects and the armature are selected, press Ctrl-P and select Armature Deform in the Set Parent To pop-up menu.
The armature will be the parent object of all the other child objects and each child object will have an Armature Modifier with the armature associated (Object field).
With Empty Groups¶
When parenting it will create empty vertex groups on the child objects (if they do not already exist) for and named after each deforming bone in the armature. The newly created vertex groups will be empty. This means they will not have any weights assigned. Vertex groups will only be created for bones which are setup as deforming ( ).
You can then manually select the vertices and assign them to a particular vertex group of your choosing to have bones in the armature influence them.
Choose this option if you have already created (and weighted) all the vertex groups the mesh requires.
Example¶
For example, if you have an armature which consists of three bones named "BoneA", "BoneB" and "BoneC" and cube mesh called "Cube". If you parent the cube to the armature, the cube will get three new vertex groups created on it called "BoneA", "BoneB" and "BoneC". Notice that each vertex group is empty.
With Automatic Weights¶
With Automatic Weights parenting works similar to With Empty Groups, but it will not leave the vertex groups empty. It calculates how much influence a particular bone would have on vertices based on the distance from those vertices to a particular bone ("bone heat" algorithm). This influence will be assigned as weights in the vertex groups.
This method of parenting is certainly easier to setup, but it can often lead to armatures which do not deform child objects in ways you would want. Overlaps can occur when it comes to determining which bones should influence certain vertices when calculating influences for more complex armatures and child objects. Symptoms of this confusion are that when transforming the armature in Pose Mode, parts of the child objects do not deform as you expect; If Blender does not give you the results you require, you will have to manually alter the weights of vertices in relation to the vertex groups they belong to and have influence in.
With Envelope Weights¶
Works in a similar way to With Automatic Weights. The difference is that the influences are calculated based on the Bone Envelopes settings. It will assign a weight to each vertex group the vertices that is inside its bone's influence volume, depending on their distance to this bone.
This means newly included/excluded vertices or new envelope settings will not be taken into account. You will have to apply Armature Deform With Envelope Weights parenting again.
ちなみに
If you want the envelope setting to be used instantly, bind the Armature Modifier to Bone Envelopes.
警告
If you had defined vertex groups using same names as skinned bones, their content will be completely overridden by both Automatic and Envelope Weights. In this case With Empty Groups could be used instead. | https://docs.blender.org/manual/ja/2.82/animation/armatures/skinning/parenting.html | 2022-06-25T05:24:16 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.blender.org |
This document is for Kombu's development version, which can be significantly different from previous releases. Get the stable docs here: 5.0.
Abstract Classes -
kombu.abstract¶
Object utilities.
- class kombu.abstract.MaybeChannelBound(*args: Any, **kwargs: Any)[source]¶
Mixin for classes that can be bound to an AMQP channel.
- bind(channel: Channel) _MaybeChannelBoundType [source]¶
Create copy of the instance that is bound to a channel.
- maybe_bind(channel: Channel) _MaybeChannelBoundType [source]¶
Bind instance to channel if not already bound.
- revive(channel: Channel) None [source]¶
Revive channel after the connection has been re-established. | https://docs.celeryq.dev/projects/kombu/en/latest/reference/kombu.abstract.html | 2022-06-25T04:55:32 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.celeryq.dev |
Verifying Verifiable Credentials
As users collect more credentials, having to manually manage them (figuring out which are current, and which are appropriate for a given relying party) becomes infeasible. Unlike unstructured credentials (e.g., many .pdf files), which would require a user to manually select which data to share, verifiable credentials lend themselves to credential exchange protocols that are easier for end-users. These protocols are typically implemented for the user through the user's wallet (or other software agents) interacting with the requesting party.
Verite uses the concepts and data models from DIF's Presentation Exchange for this purpose. This document describes how a consumer of Verifiable Credentials, referred to as a verifier or relying party, informs users what types/formats of credentials they accept, and how the user's wallet/agent uses this information to select the appropriate credentials and respond.
Presentation Requests and Definitions
A presentation definition is the way a relying party describes the inputs it requires, proof formats, etc. A presentation request is a generic term for a transport conveying this. It's meant to be flexibly embedded in a variety of transports, such as OIDC or WACI. Verite uses a JSON object that somewhat resembles the schema defined by WACI, but with additional fields including a challenge and reply URL.
Wallet Interactions
Assuming a mobile wallet stores the credentials, for the convenience of the user a verifier may initiate the process of sending the presentation request either by scanning a QR code (desktop) or a deep-link (mobile). Due to size limitations of a QR code, wallet and credential interactions often do not include the full presentation request in the QR code; instead the QR code encodes an endpoint with a unique URL. The wallet decodes the QR code, subsequently retrieving the presentation request from that endpoint.
See example Verite Presentation Request
Credential Submission
The wallet parses the presentation definition to determine what types of inputs, proofs, and formats the verifier requires. The wallet displays a summary of the information requested to the wallet holder, asking for approval and/or asking the user to select the desired credential(s) from the set of matches. On confirmation, the wallet gathers the credentials and creates a verifiable presentation containing the credential and signs the presentation with the credential subject’s private key. It embeds the VP in a presentation submission, and signs it along with the
challenge to provide proof of identifier control.
Finally, the wallet sends the packaged credential to the
reply_url contained in the presentation request.
See example Verite presentation submission.
Verification
The verifier receives the presentation submission, unwraps it, and maps the presentation to the original presentation request. Mapping the submission to the original request can be done in many ways. The Verite demos use a JWT in the
reply_url to store the mapping. Next, the verifier verifies the submitted contents against the required inputs, ensures its signed by the subject's keys, and checking the credential's status to determine if it is revoked out not.
Verification cannot always occur immediately. In these cases, the presentation request has an optional
status_url that can be used to check its status.
There is no required output or side-effect of verification. However, we have a pattern for integrating with Ethereum using an on-chain Verification Registry. A web app, however, might simply update its state and allow the user to continue some action.
Verification Flow
In this specific example, a user wants to verify using their mobile wallet and have the resulting Verification Record to later register with a on-chain registry.
- Verifier prompts user for the Ethereum address the Verification Record will be bound to
- User provides their Ethereum address (e.g. copy pasting, or by connecting a wallet)
- Verifier generates a JWT that encodes the user's address, that will later be used to generate the URL the mobile wallet will submit to.
- Verifier shows QR Code
- User scans QR Code with their wallet.
- Wallet parses the QR code, which encodes a JSON object with a
challengeTokenUrlproperty.
- Wallet performs a GET request at that URL to return a Verification Offer, a wrapper around a Presentation Request, with three supplementary properties:
- The verifier DID.
- A URL for the wallet to submit the Presentation Submission, using the unique JWT generated earlier.
- The wallet prompts the user to select credential(s) from the set of matches.
- Wallet prepares a Presentation Submission including
- Wallet DID is the holder, proving control over the DID. In the Verite examples, the holder must match the credential subjects, validating the holder and subject are the same.
- Any Verifiable Credential(s) necessary to complete verification.
- Wallet is the Presentation Request holder and signs it along with the challenge
- Wallet submits the Presentation Submission to the URL found in the Verification Offer.
- The Verifier validates all the inputs
- Verifiers generates a Verification Record and adds it to the registry | https://docs.centre.io/verite/patterns/verification-flow | 2022-06-25T04:14:54 | CC-MAIN-2022-27 | 1656103034170.1 | [array(['/assets/images/sequence_exchange-78a731c18c24fcb8826d0f43b05246c1.png',
'Exchanging a Credential Exchanging a Credential'], dtype=object) ] | docs.centre.io |
Solucionador
Reference
- Panel
The settings in the Soft Body Solver panel determine the accuracy of the simulation.
- Step Size Min
Minimum simulation steps per frame. Increase this value, if the soft body misses fast-moving collision objects.
- Máximo control how the soft body will react (deform). | https://docs.blender.org/manual/pt/dev/physics/soft_body/settings/solver.html | 2022-06-25T05:35:06 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.blender.org |
."
INC-135015 · Issue 591980
Handling added for overriding questions rules
Resolved in Pega Version 8.4.46091 · Issue 598908
Activity updated for Mobile attachment access
Resolved in Pega Version 8.4.4.
INC-136202 · Issue 603905
Needed offline mobile resource added for upgraded channels
Resolved in Pega Version 8.4.4. | https://docs.pega.com/platform/resolved-issues?f%5B0%5D=%3A29991&f%5B1%5D=resolved_capability%3A9091&f%5B2%5D=resolved_capability%3A9096&f%5B3%5D=resolved_version%3A34296&f%5B4%5D=resolved_version%3A34746&f%5B5%5D=resolved_version%3A35821 | 2022-06-25T04:00:19 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.pega.com |
In order for you to test 7-Eleven Pago en efectivo payment method successfully, you don’t need any given test data.
7-Eleven Pago en efectivo Payment Flow
The customer enters the required details: email address, first name and last name.
The customer receives a voucher with a barcode. In order to complete the payment, he needs to print the voucher with the barcode and pay it at any 7-Eleven store. | https://docs.smart2pay.com/s2p_testdata_1029/ | 2022-06-25T05:06:40 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.smart2pay.com |
Interpreting the Security Report - Part II
- Interpreting the Security Report - Part I
- Interpreting the Security Report - Part II
Access the Security Report
- On the main console, click the Security Report button.
- Identify the most common threat by looking at the chart under Threat Types.
- Under Threat History, customize the chart to show only the type of threat that caused the most trouble. You can do that by clicking the threat type buttons at the bottom of the chart to hide other types of threats.
Once you see what threats have been found and when, think back to what you did with your computer at those times. You can, for example, check your browser's history to see which websites you visited.
Further Analysis
- Click View Detailed Logs to see full lists of all the threats found and what was done in response.
- From the list next to View, select the type of threat most commonly found.
- Click the action taken against any threat on the list that appears to see additional details about it, including when it was found. For web threats, you can also see the address of the website involved. For viruses or spyware, you can see where it was found on your computer and when.
What's next?
You have finished this tutorial. Check out some others: | https://docs.trendmicro.com/en-us/consumer/titanium2013/tutorials/interpreting-the-security-report-part-ii.aspx | 2022-06-25T05:26:14 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.trendmicro.com |
The Endpoints screen displays the scan history and results for each
endpoint.
Field
Description
Host Name
Displays the name of an endpoint.
Click to view detailed information.
IP Address
Displays the IPv4 address of an endpoint.
User
Displays the name of the user an endpoint belongs to.
This information is obtained from the imported CSV file. The Advanced Threat Assessment Service matches the tag information from
the CSV file to endpoints (see Endpoint Tagging Tab).
Department
Displays the department an endpoint belongs to.
Operating System
Displays the operating system on an endpoint.
Risk Level
Displays the risk level assigned to the detected object.
See About Risk Levels and Endpoint Statuses.
Last Scanned
Displays the date and time the endpoint was last scanned.
Scanned
Displays the date and time a scan task was performed on an
endpoint.
Security Threat
Displays the name of the detected file with the
highest aggregated risk rating.
Action
Click to view detailed
information about a scan task. Click to download the
results for a scan task. | https://docs.trendmicro.com/en-us/enterprise/advanced-threat-assessment-service-15/endpoint.aspx | 2022-06-25T04:37:39 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.trendmicro.com |
There are five components relating to Marvin OLE which the user can expect log files generated from.
Component specific folders can be found in the ChemAxon root folder under the following locations:
C:\Documents and Settings\{CURRENT USER}\Application Data\ChemAxon
C:\Users\{CURRENT USER}\AppData\Roaming\ChemAxon
If the users turn to support help, these folders need to be zipped and sent to the relevant support team/department. Please take care that log files might contain confidential local information (molecules) therefore log files are better to be sent directly to the support teams. Forum posts are public and searchable for other users therefore should be avoided for any log file uploading. Support teams handle the logging information confidentially.:
C:\PROGRA~1\ChemAxon\Shared\MARVIN~1\MARVIN~1.EXE
All CXN components, like Marvin OLE, also place error level log information in the System Event Log (in addition to the log file).
The event viewer shows all types of logs, ChemAxon generally reports problems under the Application section. For more information about the Microsoft Event Log Guide, see. | https://docs.chemaxon.com/display/docs/logging.md | 2021-04-10T19:11:57 | CC-MAIN-2021-17 | 1618038057476.6 | [] | docs.chemaxon.com |
Browse Office Visits
Nursing staff can browse past visits. The most common use would be to locate a case when a nurse remembers specific details about the case but not the student’s name, and it can also be used to help tally types of visits for reporting when a pre-built report that does not provide the required breakdown.
To Get There:
- From the PowerSchool start page, click on Extended Reports under the Functions heading on the left side.
- Click on the Health tab.
- Click on Browse Office Visits under the Office Visits heading
- This page lists office visits that have been entered into PowerSchool and their details. At the top of the page are filters that can be applied to limit the types of visits that appear on the page.
- The student IDs which appear in blue can be clicked to open the matching students office visits page.
Drop-down filters:
- Visit period (the month of the visit), visit type, guardian contacted, and outcome are all drop down filters.
- Click Choose a value… on the filter that you want to apply. A drop-down box will appear will the possible values. Once a value is selected, the table will immediately filter down to only matching visits.
- Note: You can choose multiple values at once. Just, click Choose a value… a second time and choose a new value.
- All the chosen values will appear in blue boxes. To remove one of the values from the filters, click the X at the left of that box.
Search Filters:
- Student ID, reason description, and actions are all searchable text fields that can be filtered based on text that you enter.
- For example, by typing ankle into the reason description text box, the table will immediately filter down to only those visits that have the word “ankle” somewhere within the reason description | https://docs.glenbard.org/index.php/ps-2/admin-ps/health/browse-office-visits/ | 2021-04-10T18:15:44 | CC-MAIN-2021-17 | 1618038057476.6 | [] | docs.glenbard.org |
9. Managing Tuple Tables¶
As explained in Section 4, a data store uses tuple tables as containers for facts – that is, triples and other kind of data that RDFox should process. Each tuple table is identified by a name that is unique for.
9.1. Types of Tuple Tables¶
RDFox supports three kinds of tuple tables.
In-memory tuple tables are the most commonly used kind of tuple table, which, as the name suggests, store facts in RAM. RDFox uses in-memory tuple tables of arity three to store triples of the default graph and the named graphs of RDF. In particular, an in-memory tuple table called created automatically when a fresh data store is created to act as the default graph, and RDFox will create additional in-memory tuple tables for each named graph it encounters. RDFox provides ways to add and delete facts in in-memory tuple tables.
Data source tuple tables provide a ‘virtual view’ over data in non-RDF data sources, such as CSV files, relational databases, or a full-text Solr index. Such tuple tables must be created explicitly by the user, and doing so requires specifying how the external data is to be transformed into a format compatible with RDF. The facts in data source tuple tables are ‘virtual’ in the sense that they are constructed automatically by RDFox based on the data in the data source — that is, there is no way to add/delete such facts directly. Finally, data source tuple tables can be of arbitrary arity — that is, such tuple tables are not limited to containing just triples. Data source tuple tables and the process of importing external data are described in detail in Section 10.
Built-in tuple tables contain some well-known facts that can be useful in various applications of RDFox. The facts in such tuple tables cannot be modified by users; rather, they are produced on the fly by RDFox as needed. They are described in more detail in Section 9.5.
9.2. Fact Domains¶
Each fact in a tuple is associated with one or more fact domains.
The
EDBfact domain contains facts that were imported explicitly by the user. The name EDB is an abbreviation of Extensional Database.
The
IDBfact domain contains facts that were derived using rules. The name IDB is an abbreviation of Intensional Database. This fact domain is used as the default in all operations that take a fact domain as argument.
The
IDBrepfact domain contains the representative facts of the IDB domain. This fact domain differs only in data stores for which equality reasoning (i.e., reasoning with
owl:sameAs) is turned on.
The
IDBrepNoEDBfact domain contains facts of the IDB domain that are not in the EDB domain, which are essentially facts that were derived during reasoning and were not present in the input.
A fact can belong to more than one domain. For example, facts added to
the store are stored into the
EDB domain, and during reasoning they
are transferred into the
IDB domain.
Only the
EDB fact domain can be directly affected by users. That is, all
explicitly added facts are added to the
EDB domain, and only those facts
can be deleted. It is not possible to manually delete derived facts since the
meaning of such deletions is unclear.
Many RDFox operations accept a fact domain as an argument. For example, SPARQL
query evaluation takes a fact domain as an argument, which determines what
subset of the facts the query should be evaluated over. Thus, if a query is
evaluated with respect to the
EDB domain, it will ‘see’ only the facts that
were explicitly added to a data store, and it will ignore the the facts that
were derived by reasoning.
9.3. Managing and Using Tuple Tables¶
RDFox provide ways for creating and deleting tuple tables: this can be
accomplished in the shell using the
tupletable command (see
Section 16.2.2.41), and the relevant APIs are described in
Section 14.7. When creating a tuple table, one must specify a
list of key-value parameters that determine what kind of tuple table is to be
created. The parameters for data source tuple tables depend on the type of data
source and are described in detail in Section 10. Moreover, the
parameters for in-memory and built-in tuple tables are described in
Section 9.4 and Section 9.5,
respectively.
RDFox provides ways to add and delete facts to in-memory tuple tables: this can
be accomplished in the shell using the
import command (see
Section 16.2.2.20), and the relevant APIs are described in
Section 14.5.5.
Facts in a tuple table can be accessed during querying and reasoning. In
queries, tuple tables corresponding to the default graph and the named graphs
can be accessed using standard SPARQL syntax for triple patterns and the
GRAPH operator — that is, a triple pattern outside a
GRAPH operator
will access the tuple
table, and a triple pattern inside a
GRAPH :G operator will access the
in-memory tuple table with name
:G. To access tuple tables of other types,
RDFox extends the SPARQL syntax with the
TT operator, which is described in
Section 5.3. Note that the default graph and the named
graphs can also be accessed using the
TT operator. Moreover, tuple tables
can be accessed in rules using the general atom syntax described in
Section 6.4.1.3. Since only in-memory tuple tables can be modified by
users, any atom occurring in the head of a rule is allowed to mention only an
in-memory tuple table.
9.4. In-Memory Tuple Tables¶
RDFox uses in-memory tuple tables to store facts imported by the users. At
present, RDFox supports only tuple tables of arity three, thus allowing the
system to store only triples. An in-memory tuple table called is created automatically
when a fresh data store is created to act as the default graph. Moreover,
in-memory tuple tables can be created in the following three ways.
When instructed to import data containing triples in graphs other than the default one, RDFox will automatically create a tuple table for each named graph it encounters.
The SPARQL 1.1 Update command
CREATE GRAPHcreates an in-memory tuple table for each named graph.
In-memory tuple tables can be created using tuple table management APIs. The main benefit of this over the above two methods is the ability to specify additional parameters, as described in the following table.
9.5. Built-In Tuple Tables¶
Built-in tuple tables are similar to built-in functions; however, whereas a
built-in function returns just one value for a given number of arguments, a
built-in tuple table can relate sets of values. Thus, facts in built-in tuple
tables are not stored explicitly; rather, they are produced on the fly as query
and/or rule evaluation progresses. Other than this internal detail, built-in
tuple tables are used in queries and rules just like any other tuple table:
they are referenced in queries using the proprietary
TT operator (see
Section 5.3), and they are referenced in rules using
general atoms (see Section 6.4.1.3). Built-in tuple tables are the only
ones for which the minimal and the maximal arity are not necessarily the same.
Each built-in tuple table is identified by a well-known name, which cannot be
changed. The names of all of built-in tuple tables starts with, which is abbreviated in the rest of this
section as
rdfox:. For example, the
rdfox:SKOLEM built-in tuple table
is always available under that name. When a data store is created, all built-in
tuple tables supported by RDFox will be created automatically. It is very
unlikely that users will ever need to delete built-in tuple tables;
nevertheless, for the sake of consistency, RDFox allows such tuple tables to be
deleted just like any other tuple table. In case a built-in tuple table is
deleted, it can be recreated using standard methods, by simply specifying the
tuple table name without any parameters. (Please note that, as a consequence of
this, it is not possible to create an in-memory or a data source tuple table
with a name that is reserved for a built-in tuple table.)
9.5.1.
rdfox:SKOLEM¶
The
rdfox:SKOLEM tuple table can have arity from one onwards. Moreover, in
each fact in this tuple table, the last resource of the fact is a blank node
that is uniquely determined by all remaining arguments. This can be useful in
queries and/or rules that need to create new objects. This is explained using
the following example.
Example: Let us assume we are dealing with a dataset where each person
is associated with zero or more companies using the
:worksFor
relationship. For example, our dataset could contain the following triples.
:Peter :worksFor :Company1 . :Peter :worksFor :Company2 . :Paul :worksFor :Company1 .
Now assume that we wish to attach additional information to each individual
employment. For example, we might want to say that the employment of
:Peter in
:Company1 started on a specific date. To be able to
capture such data, we will ‘convert’ each
:worksFor link to a separate
instance of the
:Employment class; then, we can attach arbitrary
information to such instances. This presents us with a key challenge: for
each combination of a person and company, we need to ‘invent’ a fresh
object that is uniquely determined by the person and company.
This problem is solved using the
rdfox:SKOLEM built-in tuple table. In
particular, we can restructure the data using the following rule.
:Employment[?E], :employee[?E,?P], :inCompany[?E,?C] :- :worksFor[?P,?C], rdfox:SKOLEM("Employment",?P,?C,?E) .
The above rule can be understood as follows. Body atom
:worksFor[?P,?C]
selects all combinations of a person and a company that the person works
for. Moreover, atom
rdfox:SKOLEM("Employment",?P,?C,?E) contains all
facts where the value of
?E is uniquely determined by the fixed string
"Employment", the value of
?P, and the value of
?C. Thus, for
each combination of
?P and
?C, the built-in tuple table will
produce a unique value of
?E, which is then used in the rule head to
derive new triples.
How a value of
?E is computed from the other arguments is not under
application control: each value is a blank node whose name is guaranteed to
be unique. However, what matters is that the value of
?E is always the
same whenever the values of all other arguments are the same. Thus, we can
use the following rule to specify the start time of Peter’s employment in
Company 1.
:startDate[?E,"2020-02-03"^^xsd:date] :- rdfox:SKOLEM("Employment",:Peter,:Company1,?E) .
After evaluating these rules, the following triples will be added to the
data store. We use blank node names such as
_:new_1 for clarity: the
actual names of new blank nodes will me much longer in practice.
_:new_1 rdf:type :Employment . _:new_1 :employee :Peter . _:new_1 :inCompany :Company1 . _:new_1 :startDate "2020-02-03"^^xsd:date . _:new_2 rdf:type :Employment . _:new_2 :employee :Peter . _:new_2 :inCompany :Company2 . _:new_3 rdf:type :Employment . _:new_3 :employee :Paul . _:new_3 :inCompany :Company1 .
When creating fresh objects using the
rdfox:SKOLEM built-in tuple table, it
is good practice to incorporate object type into the argument. The above
example achieved this by passing a fixed string
"Employment" as the first
argument of
rdfox:SKOLEM. This allows us to create another, distinct blank
node for each combination of a person and a company by simply varying the first
argument of
rdfox:SKOLEM.
Atoms involving the
rdfox:SKOLEM built-in tuple table must satisfy certain
binding restrictions in rules and queries. Essentially, it must be possible
to evaluate a query/rule so that, once an
rdfox:SKOLEM atom is reached,
either the value of the last argument, or the values of all all but the last
argument must be known. This is explained using the following example.
Example: The following query cannot be evaluated by RDFox — that is, the system will respond with a query planning error.
SELECT ?P ?C ?E WHERE { TT rdfox:SKOLEM { "Employment" ?P ?C ?E } }
This query essentially says “return all
?P,
?C, and
?E where
the value of
?E is uniquely defined by
"Employment",
?P, and
?C”. The problem with this is that the values of
?P and
?C have
not been restricted in any way, so the query should, in principle, return
infinitely many answers.
To evaluate the query, one must provide the values of
?P and
?C, or
for
?E, either explicitly as arguments or implicitly by binding the
arguments in other parts of the query. Thus, both of the following queries
can be successfully evaluated.
SELECT ?E WHERE { TT rdfox:SKOLEM { "Employment" :Paul :Company2 ?E } } SELECT ?T ?C ?P WHERE { TT rdfox:SKOLEM { ?T ?C ?P _:new_1 } }
The latter query aims to unpack
_:new_1 into the values of
?T,
?C, and
?P for which
_:new_1 is the uniquely generated fresh
blank node. Note that such
?T,
?C, and
?P may or may not exist,
depending on the algorithm RDFox uses to generate blank nodes. The
following is a more realistic example of blank node ‘unpacking’.
SELECT ?T ?C ?P WHERE { ?E rdf:type :Employment . TT rdfox:SKOLEM { ?T ?C ?P ?E } }
9.5.2.
rdfox:SHACL¶
RDFox supports the RDF constraint validation language SHACL by the means of the built-in tuple table
called
rdfox:SHACL. The tuple table has 5 arguments. The first argument
specifies the name of the data graph — that is, the graph whose
content is to be validated. The second argument specifies the name of the
shapes graph — that is, the
graph that contains the SHACL constraints. The last three arguments receive the
subject, the predicate and the object of each triple in the validation
report that results from
validating the data graph with respect to the constraints in the shapes graph.
Basic SHACL Validation
Example: Assume that the following data graph about employees and
their employers is imported into the named graph
:data.
@prefix sh: <>. @prefix : <>. :John a :Employee; :worksFor :Company1. :Jane a :Employee; :worksFor [ a :Employer ].
Furthermore, assume that the following shapes graph, which asserts that
each value of the property
:worksFor is of type
:Employer, is
imported into the named graph
:shacl.
@prefix sh: <>. @prefix : <>. :ClassShape sh:targetClass :Employee ; sh:path :worksFor ; sh:class :Employer.
One can now query the SHACL tuple table to generate the validation
report resulting from the validation of the data graph
:data using the
shapes graph
:shacl as follows.
PREFIX : <> PREFIX rdfox: <> SELECT ?s ?p ?o { TT rdfox:SHACL { :data :shacl ?s ?p ?o } }
The validation report should look as follows, modulo blank node names and prefix abbreviations:
_:anonymous1001 rdf:type sh:ValidationReport . _:anonymous1001 sh:conforms false . _:anonymous1001 sh:result _:anonymous1002 . _:anonymous1002 rdf:type sh:ValidationResult . _:anonymous1002 sh:focusNode :John . _:anonymous1002 sh:sourceConstraintComponent sh:ClassConstraintComponent . _:anonymous1002 sh:sourceShape :ClassShape . _:anonymous1002 sh:resultPath :worksFor . _:anonymous1002 sh:value :Company1 . _:anonymous1002 sh:resultSeverity sh:Violation . _:anonymous1002 sh:resultMessage "The current value node is not a member of the specified class <>." .
Saving a Validation Report
A validation report can be saved into a named graph using the
INSERT update
of SPARQL. This is illustrated in the following example.
Example: The following update saves the validation report into the
named graph
PREFIX sh: <> PREFIX : <> PREFIX rdfox: <> INSERT { GRAPH :report { ?s ?p ?o } } WHERE { TT rdfox:SHACL { :data :shacl ?s ?p ?o } }
Rejection of Non-Conforming Updates
Certain use cases may require the content of a data store to be kept consistent
with SHACL constraints at all times — that is, any updates that result in a
violation of a SHACL constraint should be rejected. To achieve this behaviour
in RDFox, one can query the
rdfox:SHACL tuple table before committing a
transaction as follows and, in case any violations are detected, adding an
instance of the
rdfox:ConstraintViolation class in the default graph; As
discussed in Section 12.2, the latter will prevent
a transaction from committing. This technique is demonstrated in the following
example.
Example: Consider the data and shape graphs from the previous examples and assume the insertion of the data graph is performed using the following RDFox commands.
begin import > :data data.ttl INSERT { ?report a rdfox:ConstraintViolation } \ WHERE { TT rdfox:SHACL { :data :shacl ?report sh:conforms false } } # the transaction fails commit
The
INSERT update checks whether the SHACL constraints are satisfied,
and if not, adds the value of
?report as an instance of
rdfox:ConstraintViolation. As discussed earlier, the constraints are
not satisfied for the data in this example, so the
WHERE part of the
update will bind variable
?report to
_:anonymous1001; thus, triple
_:anonymous1001 a rdfox:ConstraintViolation will be added to the default graph,
which will prevent the transaction from completing successfully.
In contrast, if we fix the data prior to committing the transaction as in the following example, the transaction will be successfully committed.
begin import > :data data.ttl # the following tuple makes the data in data.ttl consistent with the SHACL graph import > :data ! :Company1 a :Employer. INSERT { ?report a rdfox:ConstraintViolation } \ WHERE { TT rdfox:SHACL { :data :shacl ?report sh:conforms false } } # the transaction succeeds commit
If we now attempt to remove the triple
:Company1 a :Employer using the
same approach, the transaction in question will be rejected, since the
remaining data would no longer conform with the constraints in the SHACL
graph.
begin # attempting to remove a tuple that would invalidate the remaining of the data import > :data - ! :Company1 a :Employer. INSERT { ?report a rdfox:ConstraintViolation } \ WHERE { TT rdfox:SHACL { :data :shacl ?report sh:conforms false } } # the transaction fails commit
If we want the error message to contain additional information about the constraint violation, we can insert other triples with the rdfox:ConstraintViolation instance in the subject postion into the default graph, for exmaple:
begin import > :data - ! :Company1 a :Employer. INSERT { \ ?s a rdfox:ConstraintViolation . \ ?s ?p ?o \ } WHERE { \ TT rdfox:SHACL { :data :shacl ?s ?p ?o} . \ FILTER(?p IN (sh:sourceShape, sh:resultMessage, sh:value)) \ } commit
This should produce an error message like this:
An error occurred while executing the command: The transaction could not be committed because it would have introduced the following constraint violation: _:anonymous1 sh:resultMessage "The current value node is not a member of the specified class <>."; sh:value <>; sh:sourceShape <> .
Scope of SHACL support:
RDFox supports SHACL Core.
SHACL validation is available during query answering, but not in rules.
The definitions of SHACL Subclass, SHACL Superclass, and SHACL Type rely on a limited form of taxonomical reasoning. This is not automatically performed during SHACL validation, since the desired consequences can be derived using the standard reasoning facilities of RDFox.
The support for SHACL property paths is limited to predicate paths.
owl:importsin shapes graph is not supported.
sh:shapesGraphin data graphs is not supported. | https://docs.oxfordsemantic.tech/tuple-tables.html | 2021-04-10T18:14:56 | CC-MAIN-2021-17 | 1618038057476.6 | [] | docs.oxfordsemantic.tech |
Joining the Perks Program¶
The Perks Program and Monetization¶
Joining the Perks Program allows Core creators to make money with their games using a complete monetization system that can be customized to any game type.
For a complete overview of the program, see the Perks Program introduction, and the Perks reference for instructions on how to implement them in game.
Requirements to Join the Perks Program¶
To qualify to join the Perks Program, there are minimum requirements that must be met:
- Your account must be at least 30 days old and active in the last 90 days. You must also be in good standing, meaning that you have not been banned or repeatedly suspended for violating the Code of Conduct, Content Policies, or Terms of Service.
- You need to reach 50 Daily Average Users (DAU) across all your games over one month.
Info
Daily Average Users counts each account that connects to your game in a day. Reaching 50 DAU in a month means a total of 1500 users, but each user counts again on a new day. To learn more about the data available at how many users are playing your games, see the Creator Analytics reference.
For more details about joining the program, see Joining the Perks Program in the Core Help Center. To learn about ways to improve the DAU across your games, see the Improving Your Game guide.
Enrolling in the Program¶
Once you meet the minimum requirements to qualify for the Perks Program, you can enroll through your Creator Dashboard.
You will need to register your tax and payment information with the Tipalti payment system, and accept the terms of participation.
For more details about joining the program, see Joining the Perks Program in the Core Help Center.
Learn More¶
The Perks Program | Implementing Perks | Creator Analytics | Joining the Perks Program | https://docs.coregames.com/perks/joining_perks/ | 2021-04-10T18:32:00 | CC-MAIN-2021-17 | 1618038057476.6 | [] | docs.coregames.com |
Module com.gluonhq.attach.storage
Package com.gluonhq.attach.storage
Interface StorageService
- All Known Implementing Classes:
AndroidStorageService,
DesktopStorageService,
IOSStorageService
public interface StorageServiceThe storage service provides access to the private and public storage locations for the application offered by the native platform.
Example
File privateStorage = StorageService.create() .flatMap(StorageService::getPrivateStorage) .orElseThrow(() -> new FileNotFoundException("Could not access private storage.")););}
Android Configuration
The permissionsNote: these modifications are handled automatically by Client plugin if it is used.
android.permission.READ_EXTERNAL_STORAGEand
android.permission.WRITE_EXTERNAL_STORAGEare required if you want to access the external storage on the device for read and/or write operations respectively. Defining write permissions implicitly activate read permissions as well.
<manifest ...> <uses-permission android: <uses-permission android: ... <activity android: </manifest>
iOS Configuration: none
- Since:
- 3.0.0
Method Detail
create
static java.util.Optional<StorageService> create()Returns an instance of
StorageService.
- Returns:
- An instance of
StorageService.
getPrivateStorage
java.util.Optional<java.io.File> getPrivateStorage()Get a storage directory that is private to the environment that is calling this method. In the case of iOS or Android, the returned directory is private to the enclosing application.
- Returns:
- an optional with a private storage directory for an application
getPublicStorage
java.util.Optional<java.io.File> getPublicStorage(java.lang.String subdirectory)Get a public storage directory location.
Note that on Android the public location could be mapped to a removable memory device and may not always be available. Users of this method are advised to call
isExternalStorageWritable()or
isExternalStorageReadable()to avoid surprises.
Note also that on Android, permissions will need to be set to access external storage. See:.
- Parameters:
subdirectory- under the root of public storage that is required. On Android the supplied subdirectory should not be null.
- Returns:
- an Optional of a File representing the requested directory location. The location may not yet exist. It is the responsibility of the programmer to ensure that the location exists before using it.
isExternalStorageWritable
boolean isExternalStorageWritable()Checks if external storage is available for read and write access.
- Returns:
- true if the externalStorage is writable (implies readable), false otherwise
isExternalStorageReadable
boolean isExternalStorageReadable()Checks if external storage is available for read access.
- Returns:
- true if the externalStorage is at least readable, false otherwise | https://docs.gluonhq.com/attach/javadoc/4.0.10/com.gluonhq.attach.storage/com/gluonhq/attach/storage/StorageService.html?is-external=true | 2021-04-10T19:39:30 | CC-MAIN-2021-17 | 1618038057476.6 | [] | docs.gluonhq.com |
Newshive has multiple layout options that you can use to display a single page beautifully on your website.
- Go to Dashboard >> Appearance >> Customize >> Design Settings >> Page Settings.
- Choose the suitable sidebar layout from available options under Page Sidebars.
- Then, Click on Save & Publish button. | https://docs.mysterythemes.com/easy-store/configure-page-settings/ | 2021-04-10T19:46:47 | CC-MAIN-2021-17 | 1618038057476.6 | [] | docs.mysterythemes.com |
About the Dashboard Editor
You can use the Dashboard Editor to create and edit dashboards without writing a single line of XML code. From the Dashboard Editor you can do the following:
- Create dashboards
- Add panels to dashboards
- Add form inputs to convert the dashboard to a form
- Rearrange dashboard panels using a drag-and-drop interface.
- Edit the searches that drive data in the dashboard.
- Specify different visualizations for a panel.
- Specify formatting options for a panel visualization.
- Edit the source code for a dashboard.
- Convert a dashboard.
See Add panels to dashboards and Create and edit forms with the Dashboard Editor.
- Click Done to create.
Panel options vary depending on the panel type and whether you are adding the panel to an existing dashboard or creating a new one.
Panel search permissions
The search that drives a dashboard panel can run using the permissions of the user who created the search (the search owner), or a user who views the dashboard (a search user).
Depending on the results data access that you want to provide, you can adjust the permissions context for the search in the Reports listing page. Locate the search on this page and select Edit > Edit Permissions to change whether the search runs with the owner or user context.
Specify visualizations for the dashboard panel
When you run a new search or open a report, the visualizations recommended to you depend on the results of the search. If the search does not include transforming commands, only the events list is available. If you run Enterprise and you manage its permissions accordingly. Your user role (and capabilities defined for that role) may limit the type of access you can define.
For example, if your user role is "user" with the default set of capabilities, then you can only create dashboards that are private to you. You can, however, provide read and write access to other users.
If your user role is "admin" with the default set of capabilities, then you can create dashboards that are private, visible in a specific app, or visible in all apps. You can also provide access to other Splunk user roles.
For additional information on setting up permissions for dashboards and other knowledge objects see Manage knowledge object permissions in the Knowledge Manger manual.
Edit permissions example
The following example shows how an admin user can set or a form input to a dashboard.
The underlying simple XML updates to convert the dashboard to a form.
- Edit the source simple XML for a dashboard to include form elements.
Customize a dashboard
There are several options to customize a dashboard, adding features not available from the Dashboard Editor.
- Edit the underlying simple XML to implement advanced features.
Typically, you edit the simple XML to edit visualization features that are not available from the interactive editors. You can also take advantage of tokens from search strings to customize the appearance of text. See About editing simple XML and Token usage in dashboards.
- Edit the style sheets for the dashboard or add custom CSS style sheets.
See CSS, JavaScript, and other static files and Customize simple XML.
- Add custom JavaScript for the dashboard.
See CSS, JavaScript, and other static files and Customize simple XML.
- Convert or export the dashboard as HTML.
After converting the dashboard, edit the HTML code, JavaScript, and style sheets to specify custom behavior. See Convert a dashboard to! | https://docs.splunk.com/Documentation/Splunk/6.3.11/Viz/CreateandeditdashboardsviatheUI | 2021-04-10T18:31:42 | CC-MAIN-2021-17 | 1618038057476.6 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
ScrollBarOptions.BarColor Property
Gets or sets the bar's color in a scroll bar.
Namespace: DevExpress.XtraCharts
Assembly: DevExpress.XtraCharts.v18.2.dll
Declaration
[XtraSerializableProperty] public Color BarColor { get; set; }
<XtraSerializableProperty> Public Property BarColor As Color
Property Value
Property Paths
You can access this nested property as listed below:
Remarks
A Scroll bar's appearance is determined by its ScrollBarOptions.BackColor, BarColor, ScrollBarOptions.BorderColor and ScrollBarOptions.BarThickness properties.
NOTE
If the BarColor property is set to Empty, the bar color is obtained from the appearance settings (a chart's appearance is specified via its ChartControl.AppearanceName property).
The following images demonstrate the BarColor property in action.
For more information on customizing the scroll bars, refer to Panes.; | https://docs.devexpress.com/CoreLibraries/DevExpress.XtraCharts.ScrollBarOptions.BarColor?v=18.2 | 2021-04-10T18:31:08 | CC-MAIN-2021-17 | 1618038057476.6 | [] | docs.devexpress.com |
Business Office: Void/Reissue Checks
Void a Check
If a check should not have been written or needs changes and will need to be re-written, a check can be voided.
- Go to Accounts Payable > Void Accounts Payable Run
- Click Add Void Accounts Payable Run near the upper-right.
- Fill in all fields and uncheck Will Create ACH
- Click Save & Select Void Checks
- Check the check(s) to be voided.
- Click Save & Update Invoices
- Click Next, click Next, click Next, and click Run Process
- Click Accounting Register and click Close.
- Click Accounts Payable Update and click Run Process and Close.
If the check needs to be reissued with changes, clone the original invoice, make the need changes, and follow normal procedures to print the check.
Reissuing Checks
If a check was lost in the mail and just needs to be reissued, there is no need to reverse accounting and reenter the accounting. The check can be reissued with a new check number.
- Go to Accounts Payable
- Click Reissue Accounts Payable under Utilities
- Select the AP run that the check is a part of.
- Choose Individual Select
- Choose Reissue with New Check Number
- If the date check date needs to be changed, use the Check Date Override otherwise set that date to the original date of the check.
- Click Next
- Select the check(s) and click Next
- Click Next and Print Checks | https://docs.glenbard.org/index.php/technology/skyward/business-office/business-office-void-reissue-checks/ | 2021-04-10T19:49:49 | CC-MAIN-2021-17 | 1618038057476.6 | [] | docs.glenbard.org |
Assigning Asset Tags to Devices
Asset tags can be used to easily identify your devices or to link them to your existing inventory system. The asset tags are shown in the devices overview and on the devices details page in the "Inventory Details" section. Additionally, you can use a device's asset tag in profiles, device names, and wallpapers by using the %AssetTag% variable.
Assigning Asset Tags
Assigning an Asset Tag Using a Placeholder
You can assign asset tags to devices manually or by using a placeholder. For more information on how to assign asset tags by using a placeholder, see Adding a Placeholder Device.
Manually Assigning an Asset Tag
In Jamf School, navigate to Devices > Devices in the sidebar.
Click the device name in the device overview pane.
Click Edit Details in the upper-right corner of the device details pane.
Enter an asset tag in the Asset Tag field.
Click Save.
Related Information
For related information, see the following section in this guide:
Setting the Wallpaper on Mobile Devices
Find out how to add a device's asset tag to the wallpaper on the device. | https://docs.jamf.com/jamf-school/deploy-guide-docs/Assigning_Asset_Tags_to_Devices.html | 2021-04-10T19:39:00 | CC-MAIN-2021-17 | 1618038057476.6 | [] | docs.jamf.com |
Setting Up Shared iPads
Shared iPad allows you to deliver personalized experiences on an iPad shared by multiple users. You can set up Shared iPads using Apple School Manager and Jamf School.
General Requirements
To set up Shared iPad, you need:
Supervised iPads with Shared iPad enabled and iOS 9.3 or later enrolled via Automated Device Enrollment (formerly DEP) (For more information, see Automated Device Enrollment.)
Note: You can also configure the storage quota per user on Shared iPads with iOS or iPadOS 13.4 or later by using the Shared iPad User Storage Quota pop-up menu in the Automated Device Enrollment profile (DEP profile) settings. This overrides the Max. number of users setting when it is configured.
Teachers and classes synced to Apple School Manager (For more information, see Synchronization.)
Education profile installed on teacher devices
An iPad Pro, iPad Air 2, or iPad mini 4
iPads with at least 32 GB of storage
Note: On devices with 32 GB of storage, each user needs a minimum of 1 GB. On devices with 64 GB of storage, each user needs a minimum of 2 GB.
Configuring Shared iPad Settings
You can configure settings for iPads with Shared iPad enabled in Jamf School.
In Jamf School, navigate to Organization > Settings in the sidebar.
Click the Shared iPad payload.
Enter the name that you want to display on the sign in screen in the Organization name field.
To allow teachers to sign in to the iPad with their Managed Apple ID and populate the Classroom app with information from Apple School Manager, select the Allow teacher users to sign in on iPads with Shared iPad enabled checkbox.
Note: The Automatically configure Apple Classroom based on Classes and Users in Jamf School setting must be disabled in the Apple Classroom settings for this feature to work.
Click Save.
Related Information
For related information about how to prepare Shared iPads, see Prepare Shared iPad in Apple's Mobile Device Management Settings.
For related information about how to configure a Shared iPad passcode using Apple School Manager, see Create Shared iPad passcodes in Apple School Manager in the Apple School Manager User Guide. | https://docs.jamf.com/jamf-school/deploy-guide-docs/Setting_Up_Shared_iPads.html | 2021-04-10T18:22:53 | CC-MAIN-2021-17 | 1618038057476.6 | [] | docs.jamf.com |
>
Impersonating the Windows Identity is the identity of the ASP.NET process. On Microsoft Windows 2000 and Windows XP Professional, this is the identity of the ASP.NET worker process, which is the local ASPNET account. On Windows Server 2003, this is the identity of the IIS Application Pool that the ASP.NET application is part of..
Enabling Authorization using NTFS ACLs).
Note
You can also use ASP.NET roles to manage user authorization for pages and sections of your Web application. For more information, see Managing Authorization Using Roles.
See Also
Tasks
How to: Create a WindowsPrincipal Object
How to: Create GenericPrincipal and GenericIdentity Objects
Other Resources
ASP.NET Web Application Security | https://docs.microsoft.com/ko-kr/previous-versions/aspnet/907hb5w9(v=vs.100) | 2021-04-10T20:53:25 | CC-MAIN-2021-17 | 1618038057476.6 | [] | docs.microsoft.com |
- Replication >
- Replica Set Deployment Tutorials >
- Deploy a Replica Set
Deploy a Replica Set¶ production deployments, you should maintain as much separation between
members as possible by hosting the
mongod
instances on separate machines. When using virtual machines for
production deployments, you should place each
mongod
instance on a separate host server serviced by redundant power circuits
and redundant network paths.
Before you can deploy a replica set, you must install MongoDB on each system that will be part of your replica set. If you have not already installed MongoDB, see the installation tutorials.
Considerations When Deploying a Replica Set¶
Architecture¶
In production, deploy each member of the replica set to its own machine
and if possible bind to the standard MongoDB port of
27017.
See Replica Set Deployment Architectures for more information.
Hostnames¶
Tip
When possible, use a logical DNS hostname instead of an ip address, particularly when configuring replica set members or sharded cluster members. The use of logical DNS hostnames avoids configuration changes due to ip address changes.
IP Binding¶
Use the
bind_ip option to ensure that MongoDB listens for
connections from applications on configured addresses.:
Connectivity¶
Ensure that network traffic can pass securely between all members of the set and all clients in the network .
Consider the following:
- Establish a virtual private network. Ensure that your network topology routes all traffic between members within a single site over the local area network.
- Configure access control to prevent connections from unknown clients to the replica set.
- Configure networking and firewall rules so that incoming and outgoing packets are permitted only on the default MongoDB port and only from within your deployment. hostname/ip or a comma-delimited list of hostnames) for your
mongod instance that remote
clients (including the other members of the replica set) can use to
connect to the instance.
Alternatively, you can also specify the
replica set name and the
ip addresses in a configuration file:
To start
mongod with a configuration file, specify the
configuration file’s path with the
--config option:
In production deployments, you can configure a init script to manage this process. Init scripts are beyond the scope of this document.
Connect.
Tip
When possible, use a logical DNS hostname instead of an ip address, particularly when configuring replica set members or sharded cluster members. The use of logical DNS hostnames avoids configuration changes due to ip address changes.. | https://docs.mongodb.com/v4.0/tutorial/deploy-replica-set/ | 2021-04-10T18:25:45 | CC-MAIN-2021-17 | 1618038057476.6 | [] | docs.mongodb.com |
Drug. More information about DrugBank can be found here.
In its raw form, the DrugBank database is a single XML file. Users must create an account with DrugBank and request permission to download the database. Note that this may take a couple of days.
The
dbparser package parses the DrugBank XML database into
R tibbles that can be explored and analyzed by the user, check this tutorial for more details.
Also, the package offers the option to save these tibbles in databases including SQL Server DB and Maria DB just by enabling
save_table option, check this tutorial for more details.
If you are waiting for access to the DrugBank database, or do not intend to do a deep dive with the data, you may wish to use the
dbdataset package, which contains the DrugBank database already parsed into
R tibbles. Note that this is a large package that exceeds the limit set by CRAN. It is only available on GitHub.
dbparser is tested against DrugBank versions 5.1.0 through 5.1.6 successfully. If you find errors with these versions or any other version please submit an issue here.
You can install the released version of dbparser from CRAN with:
install.packages("dbparser")
or you can install the latest updates directly from the repo
library(devtools) devtools::install_github("ropensci/dbparser")
Please note that the ‘dbparser’ project is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.
👍🎉 First off, thanks for taking the time to contribute! 🎉👍 Please review our Contributing Guide. | https://docs.ropensci.org/dbparser/ | 2021-04-10T19:36:36 | CC-MAIN-2021-17 | 1618038057476.6 | [] | docs.ropensci.org |
Custom Materials in Core¶
Overview¶
Materials are the way to change the appearance of an object, using a complete package of shaders and textures. Core has a variety of different materials to chose from, and each can be modified in different ways to create custom materials.
Applying Materials¶
Find the Current Material¶
The material on an object can be found by in the Properties menu. Most shapes start out with the blue Grid Basic material, which is the default for all basic shapes.
Multiple Materials¶
Some objects have multiple material slots, such as the Cube - Arcade 04 object, allowing you to further customize the look of your game.
Change Materials by Dragging and Dropping from Core Content¶
The easiest way to change materials is to drag the material onto an object in the Main Viewport. You can also drag the material into a specific slot in the Properties window.
Change Materials Using the Properties Window¶
The Material Picker allows you to select a new material for an object from a list of all the available materials.
- Select the object and open the Properties window.
- Double click the image of the material to open the Material Picker.
- Select a material to be applied to the object.
Change the Color of a Material¶
The base color of a material can be changed using the Material Override property.
- Open the object's Properties window and scroll to the appropriate material section.
- Double click the colored box next to the Material Override.
- Select a color in the Color Picker window that pops open, and click OK to apply the color to the material.
Smart Material¶
Smart Materials are textures that align to the world, rather than the object. This makes it easy to seamlessly connect two objects using the same material.
Enable and Disable Smart Material¶
Use Smart Material is enabled by default on materials that have repeating patterns.
- Select an object and open the Properties window.
- Scroll down to the Material section.
- Check or uncheck Use Smart Material.
Smart Materials On¶
These two cubes both have the red brick material applied. Because they both have Use Smart Material checked, the brick pattern is projected the same way onto the two objects.
Smart Materials Off¶
These two cubes both have the red brick material applied but they do not have "Use Smart Material" checked. The texture is aligned to the object, so it is stretched and looks different on these two differently sized cubes.
Z Fighting¶
Objects with different materials or objects not using the Smart Materials feature may exhibit z-fighting. This flickering is caused when different materials are layered over one another. Z-fighting can be distracting when playing games, so it's best to avoid it.
U/V Tiling¶
When you uncheck Use Smart Material, two more customization options appear: U Tiling Override and V Tiling Override.
- U Tiling Factor controls how many times the pattern repeats on the X axis.
- V Tiling Factor controls how many times the pattern repeats on the Y axis.
U Tiling and V Tiling both set to 1¶
U Tiling set to 3 and V Tiling set to 1¶
U Tiling set to 1 and V Tiling set to 3¶
U Tiling and V Tiling both set to 3¶
Custom Materials¶
Custom materials allow you to finely tune any Core material beyond one color and the U/V tiling.
Create a Custom Material¶
There are two ways to create a custom material for your project.
From Core Content¶
- Find a material to customize in Core Content.
- Right-click and select New Custom Material.
- Open the Project Content window and select My Materials.
- Double click on the new custom material to open the Material Editor.
Your new custom material can be found in the My Content > Local Materials section under the Project Content tab. Edit your custom material by double clicking its name. It will be called "Custom -Name of Material-".
From an Object¶
- Select an object and open the Properties window.
- Scroll down to the material.
- Click the New Custom Material button.
- Click the
icon to open the Material Editor.
Use the Material Editor¶
The Material Editor allows you to customize values for each type of material.
Some materials have properties specific to them. For example, the ceramic materials have Damage Amount and Cracks, allowing for a more distressed look. Hover over any property name to read what it does.
Rename a Custom Material¶
Change the name of each custom material by editing the text field at the top of the Material Editor. This will allow you to easily find and re-use the material on different objects in the game.
Learn More¶
Environment Art | Modeling Basics | https://docs.coregames.com/tutorials/custom_materials/ | 2021-04-10T18:52:03 | CC-MAIN-2021-17 | 1618038057476.6 | [array(['https://d33wubrfki0l68.cloudfront.net/5514ca05d160e6ea2da0fb6a95e589f426e9241a/3d4d5/img/materials/samplematerials.png',
'Sample Materials'], dtype=object)
array(['https://d33wubrfki0l68.cloudfront.net/aed051a3f4ab1e1f2d58e74b23b141978a0b50c9/ac562/img/materials/image5.png',
'Materials Screenshot Materials'], dtype=object)
array(['https://d33wubrfki0l68.cloudfront.net/cfa5ee74135d5b570617d46b93c6f6ccc5b00d0a/128c8/img/materials/image13.png',
'Materials Screenshot Materials'], dtype=object)
array(['https://d33wubrfki0l68.cloudfront.net/f955c166174dddce6506b7fd09e6e48d5daf07bb/19179/img/materials/image18.png',
'Materials Screenshot Materials'], dtype=object)
array(['https://d33wubrfki0l68.cloudfront.net/bd94dada18c364d127ac40b08664680d9339a807/f38e9/img/materials/materialpicker.png',
'Materials Screenshot Materials'], dtype=object)
array(['https://d33wubrfki0l68.cloudfront.net/ab5caca9556c3f521dc8ee99aceb56bd7bf3c9c0/2cf5d/img/materials/colorpicker.png',
'Materials Screenshot Materials'], dtype=object)
array(['https://d33wubrfki0l68.cloudfront.net/0597e281dded8f0529e727864a6cd670b858368f/5ecc3/img/materials/image16.png',
'Materials Screenshot Materials'], dtype=object)
array(['https://d33wubrfki0l68.cloudfront.net/3bcaf02747abdbb56c4716a7b6b3aa851080a2c4/3dc8f/img/materials/image10.png',
'Materials Screenshot Materials'], dtype=object)
array(['https://d33wubrfki0l68.cloudfront.net/025f606f47d12db78661b8d830f1d4cd14737cdf/5fe1e/img/materials/image11.png',
'Materials Screenshot Materials'], dtype=object)
array(['https://d33wubrfki0l68.cloudfront.net/f3000123bb8b97e21d99a0b3d571bbbc68d60e94/17b1c/img/materials/image7.png',
'Materials Screenshot Materials'], dtype=object)
array(['https://d33wubrfki0l68.cloudfront.net/6f1fb6d8b6ff3578cb355a9d6746fd24ba39fc5c/1713a/img/materials/image8.png',
'Materials Screenshot Materials'], dtype=object)
array(['https://d33wubrfki0l68.cloudfront.net/bad557241af324a6e7374e83a3c441e92252aea3/d3f0e/img/materials/image3.png',
'Materials Screenshot Materials'], dtype=object)
array(['https://d33wubrfki0l68.cloudfront.net/0f0cf76e50cc90674faab136f6c4e6e3dbcabd36/743e8/img/materials/image12.png',
'Materials Screenshot Materials'], dtype=object)
array(['https://d33wubrfki0l68.cloudfront.net/ae0e7fe50c08c7f5165bdddff212318ec619c46b/256e8/img/materials/image17.png',
'Materials Screenshot Materials'], dtype=object)
array(['https://d33wubrfki0l68.cloudfront.net/3c8a244f7b8c6063631e13557cc9e578588f0796/f29d6/img/materials/materialeditor.png',
'Materials'], dtype=object)
array(['https://d33wubrfki0l68.cloudfront.net/caa3b081b7b1fde9a5445b9dde545efe15aa1c5d/410b2/img/materials/image6.png',
'Materials Screenshot Materials'], dtype=object) ] | docs.coregames.com |
Generating User Account Credentials
For manually created user accounts, you must generate sign-in credentials and send them to users via email.
Requirements
To send the sign-in credentials to a user, the user account must have an email address associated with the account in Jamf School.
Procedure
In Jamf School, navigate to Users > Users in the sidebar.
Select the user accounts you want to generate login credentials for and click More > Generate accounts.
Configure login credential settings, including the email message, password length, and password policy.
Click Save.
Users receive an email with their username and password. | https://docs.jamf.com/jamf-school/deploy-guide-docs/Generating_User_Account_Credentials.html | 2021-04-10T19:37:20 | CC-MAIN-2021-17 | 1618038057476.6 | [] | docs.jamf.com |
Hi In ECP console I don´t find My Organizations OU´s? ![75479-organization.png][1] [1]: /answers/storage/attachments/75479-organization.png
Hi In ECP console I don´t find My Organizations OU´s? ![75479-organization.png][1] [1]: /answers/storage/attachments/75479-organization.png
The CU over wrote the custom settings.
Reapply the setting:
Thanks this solution works.
8 people are following this question.
How can i permanently release an email ID from getting into quarantine
Helped with unblocking attachments in outlook emails
How to clear AutoComplete in Outlook (Office 365) via GPO and Powershell ?
The usage of Set-Mailbox cmdlet for Exchange Online via new preview module and certificate
Setting up Exchange 365 hybrid for .local domain | https://docs.microsoft.com/en-us/answers/questions/304043/after-install-cu23-and-kb5000871-in-exchange-2013.html | 2021-04-10T20:37:22 | CC-MAIN-2021-17 | 1618038057476.6 | [] | docs.microsoft.com |
How can I hide menu items for Windows apps on the Start Menu? For the old .lnk files, I could just do:
attrib +h "C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Adobe Reader DC.lnk"
Can I do a similar thing for a Windows app listed in the Start Menu? Like I would like to remove "Your Phone", Mobile Plan", "MaxxAudioPro", and maybe some other games. This would apply to new users logging into lab machines.
Also, once I remove them, how would I later add them back? | https://docs.microsoft.com/en-us/answers/questions/31411/hide-windows-apps-on-start-menu.html | 2021-04-10T18:44:52 | CC-MAIN-2021-17 | 1618038057476.6 | [] | docs.microsoft.com |
Hi There,
I have upgraded the AAD Connect version ( from 1.1.880 to the latest) on the staging server. Verify script is suggesting this the export from this server will delete 300 users. Any idea how can we identify the reason?
It also says that this will update 500 items (user, group and devices).
Cheers,
NG | https://docs.microsoft.com/en-us/answers/questions/3402/bulk-users-delete-after-upgrading-aad-connect-to-l.html | 2021-04-10T20:09:51 | CC-MAIN-2021-17 | 1618038057476.6 | [] | docs.microsoft.com |
How to enable SNMP Traps with Anycast.
Use Address Manager v8.2.0, customers might not be able to obtain SNMP Traps with Anycast BGP. To resolve this issue, click Update from the SNMP Service Configuration page in the Address Manager. For more information, including details on scenarios that can trigger these messages, refer to Knowledge Base article 06684 on BlueCat Customer Care. | https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/SNMP-Traps-with-Anycast/8.2.0 | 2021-04-10T18:15:43 | CC-MAIN-2021-17 | 1618038057476.6 | [] | docs.bluecatnetworks.com |
Closing a User-Initiated Batch
From time to time, all Batches should be closed.Closing a Batch stops all transactions within the Batch and prevents more transactions from being allocated to it.
Prior to closing a User-Initiated Batch, the User must confirm that:
a) All transaction activity to and from this Batch has ceased; and
b) The physical funds available in the Batch match the amount reported by the Batch total in Acorn.
Batch Center > Batches > Select Specific Batch > View > Close Batch
From the ‘View Batch Details’ window of the specific Batch:
- Click ‘Close Batch’ to open the ‘Close Batch’ window.
- Click ‘Close Batch’ to open the dialogue box, requiring confirmation of the Batch closure.
- Click ‘Yes’.
- Print the two reports automatically initiated by Acorn.
- Have the funds deposited to the school’s account.
- Post the GL transactions into the cash accounting system. | https://docs.glenbard.org/index.php/acorn/acorn-acorn/closing-a-user-initiated-batch/ | 2021-04-10T19:25:02 | CC-MAIN-2021-17 | 1618038057476.6 | [] | docs.glenbard.org |
Executor xref:data-structures:topic.adoc[
ITopic with the necessary execution parameters, and clients listening can act on the message. example also implements a Serializable interface, since it may be sent to another member to be processed. automatically gets involved in the executions started in
MasterMember and start processing.(); // ... } }
Canceling an Executing Task
A task in the code that you execute in a cluster might take longer than expected. If you cannot stop/cancel that task, it keeps, the
future.get() method throws). | https://docs.hazelcast.com/imdg/4.1/computing/executor-service.html | 2021-04-10T19:51:29 | CC-MAIN-2021-17 | 1618038057476.6 | [] | docs.hazelcast.com |
Motion Motion Sensor Kit includes everything required to build the perfect virtual security guard. These instructions provide a step-by-step overview to assemble and configure your motion sensor to alert you via SMS or email when motion is detected.
As you follow these instructions, if you run into any issues, please refer to the Losant Documentation and the Losant Forums for help.
Your kit should include the following items:
- 1 NodeMCU development board
- 1 PIR sensor
- 3 male-to-female jump wires
- 1 solderless breadboard
- 1 USB cable
1. Environment Setup
The NodeMCU included in this kit is programmed using the Mongoose OS toolchain.
Install USB Drivers
The NodeMCU requires the USB to UART driver to be installed to program it. Download and install the driver for your platform by following the instructions at:
On a Mac, the above link downloads a disk image. Double-click the file to mount it, open the disk image, then double-click the .pkg file to install the driver.
Install Mos Tool
Mongoose OS comes with a CLI tool called
mos. The
mos CLI tool provides a terminal interface to program and flash the NodeMCU.
To install the
mos tool, follow the instructions in the Mongoose OS documentation.
Once you download the
mos tool, make sure you have the latest version:
$ mos update latest
2. Losant Setup
In this section, you’ll register for a Losant account, create your application, and add the device for your motion sensor kit.
Create Account
If you don’t already have an account, navigate to to register.
Create Application
Create an application. You can name it whatever you want.
Add Device
The next step is to register the
movement.
The device attributes specify what state information the device reports. The firmware that you’ll flash in the following sections will report if there is moment every two seconds. select
Device Access Keys from the left navigation. Then click
Add Access Key..
3. Wiring
Disconnect the NodeMCU from USB before wiring.
In this step, we’re going to connect the PIR sensor to the NodeMCU. Below is the PIR sensor diagram and wiring diagram.
If you are not familiar with a breadboard, here is a primer.
- Push the NodeMCU into terminals 1-15 on either side of the center line, which are columns
band
i. The USB port should be facing away from the breadboard.
- Use a female-to-male jump wire to connect the Ground terminal on the PIR sensor to the GND pin on the NodeMCU
a9.
- Use a female-to-male jump wire to connect the Power terminal on the PIR sensor to the 3v3 pin on the NodeMCU
a10.
- Use a female-to-male jump wire to connect the Digital OUT terminal on the PIR sensor to the D1 pin on the NodeMCU
a14.
5. Flash the Firmware
In this step, we are going to program the NodeMCU.
Get Motion Sensor Firmware
Now let’s get the firmware you’ll be flashing to the device. Download and extract the following zip file to your computer.
If you’re familiar with Git, you can also clone the repository from here:
The main file of the application is located in
fs/init.js:
Every 2 seconds the firmware is publishing the state
{ "movement": <value> } to Losant. This value will be
1 or
true when movement is present and
0 or
false in the other cases.
Flashing
Connect the NodeMCU dev kit to USB.
The following commands should be pasted into the terminal; then press Enter to run them.
$ cd /location/to/losant-mqtt-mongoose-os
Build and flash the firmware:
$ mos build --arch esp8266 && mos flash
Configure WiFi:
$ mos wifi WIFI_SSID WIFI_PASSWORD
You must replace the following values:
- WIFI_SSID - Your WiFi SSID.
- WIFI_PASSWORD - Your WiFi password.
Configure MQTT connection to Losant:
$ mos config-set mqtt.client_id=LOSANT_DEVICE_ID \ mqtt.user=LOSANT_ACCESS_KEY \ mqtt.pass=LOSANT_ACCESS_SECRET \ device.id=LOSANT_DEVICE_ID
You have already obtained the
LOSANT_DEVICE_ID,
LOSANT_ACCESS_KEY, and
LOSANT_ACCESS_SECRET.
Now that we’ve configured WiFi and the Losant credentials, our device should be connected and ready to go. In the next section, we will talk about the many ways to debug and verify that your device is connected.
Verify
Mongoose OS has the ability stream the logs from the device via serial to the terminal. These logs will display all the
$ mos console
Mongoose OS also has a web UI where you can monitor logs, flash devices, and update the firmware with a web-based IDE.
To open up the mos web UI:
$ mos
On the other end, if you go to your application overview page in Losant, you’ll see the communication log. This gives you a ton of helpful information about what’s happening in your application. Here you will be able to see successful connections:
Lastly, you can use the data explorer to see the data that is stored in Losant. The Data Explorer allows you to easily explore, aggregate and analyze historical data across all of the devices in an application.
It’s now time to start making use of this data.
6. Set Up Alerts
Now that sensor data is flowing into Losant, we can set up our alerts to be notified by SMS and email whenever the sensor is triggered. For this, we’re going to use Losant Workflows.
First, create a new workflow and name it whatever you want.
The workflow will start with a Device Trigger. Every workflow starts with a workflow trigger. This workflow will execute every time the sensor reports state. The firmware that we flashed to it reports state every 2 seconds, so this workflow will be triggered every 2 seconds.
Next, add a Debug Node. Whenever a payload hits the debug node it is displayed in the Debug tab. This allows you to easily debug workflows as you are building them.
In the workflow, you also have the ability to see the payload flow, in real time, through the workflow. First, you must deploy the workflow. Then, select the debug icon in the top right of the workflow pallet.
Now, you can visualize the payloads in the workflow like so:
Next, we need to read the
movement and act on it. For this, we can use a Latch node, which is a type of conditional node.
Latches work very similarly to conditional nodes, but only allow the true path to be executed once (latched). The true path can only be executed again if a reset condition is met (unlatched).
We use a Latch node here because we don’t want to get an email or SMS every time the sensor reports a
movement - remember that the device is reporting every 2 seconds. We only want to be alerted once when movement happens and then only alerted again if new movement was discovered. We will only unlatch this node if the
movement goes back to false to indicate that we didn’t see movement anymore. At that point, it’s safe to alert us again when our sensor sees movement.
Then, after we deploy, we can see how this payload interacts with the workflow. Notice, because of the Latch node, the payload will only take the success path once.
Lastly, we need to notify ourselves. We can use the SMS node or the Email node. In the example, the SMS node is used. The
SMS and
smsNumber global configuration field and the email node sends a message to the addressed stored in the
8. Build a Dashboard
In this step, we’re going to build a dashboard to visualize the real-time and sensor readings. First, create a new dashboard from the
Dashboards menu.
You can name the dashboard anything you’d like. If you’d like to let other people see your dashboard, you can optionally modify the access control level after creating the dashboard.
Next, add a time series block.
Next, you’ll be able to configure the block settings. Give the block a header and choose your application. We’ll be displaying historical data.
We want to show the average movement, every 30 seconds, over the last hour. Set the
Time Range and
One Point Every appropriately. Next, select your device and the
movement attribute. As soon as all the fields are filled out, you should see a real-time preview on the top-right of the screen.
Click
Add Block to add the graph to your dashboard.
Now, you can add more blocks to your dashboard to complete your virtual security guard.
And with that, your motion sensor kit is now ready to go. If you need some extra challenges, here are some things you can try:
- Add a Streaming Gauge Block to your dashboard to see a real time feed.
- Configure an alert if movement happens between specific time intervals with the Time Range Node. | https://docs.losant.com/getting-started/losant-iot-dev-kits/motion-sensor-kit/ | 2019-07-15T21:10:54 | CC-MAIN-2019-30 | 1563195524111.50 | [array(['/images/getting-started/losant-iot-dev-kits/motion-sensor/motion-sensor-header.jpg',
'Motion Sensor Motion Sensor'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/environment-setup/uart-driver-windows.png',
'Windows Download Windows Driver Download'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/environment-setup/uart-driver-mac.png',
'Mac Driver Download Mac Driver Download'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/environment-setup/mac-driver-disk-image.png',
'Mac Driver Disk Image Mac Driver Disk Image'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/builder-kit/create-application.png',
'Create Application Create Application'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/motion-sensor/application-name.png',
'Application Name Application Name'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/motion-sensor/add-device.png',
'Add Device Menu Add Device Menu'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/motion-sensor/add-from-scratch.png',
'Create From Scratch Create From Scratch'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/motion-sensor/device-settings.png',
'Device Settings Device Settings'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/motion-sensor/device-attribute.png',
'Device Attribute Device Attribute'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/motion-sensor/device-id.png',
'Device ID Device ID'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/motion-sensor/access-keys.png',
'Access Keys Access Keys'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/builder-kit/token-restrictions.png',
'Token Restrictions Token Restrictions'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/builder-kit/access-token-popup.png',
'Access Token Popup Access Token Popup'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/motion-sensor/pir-diagram.png',
'PIR Diagram PIR Diagram'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/motion-sensor/wiring-diagram.png',
'Wiring Diagram Wiring Diagram'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/motion-sensor/wiring-image.jpg',
'Wiring Image Wiring Image'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/motion-sensor/mos-console.png',
'Mos Console Mos Console'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/motion-sensor/mos-ui.gif',
'Mos UI Mos UI'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/motion-sensor/losant-iot-communication-log-success.png',
'Communication log Communication log'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/motion-sensor/data-explorer.png',
'Losant Data Explorer Losant Data Explorer'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/motion-sensor/create-workflow.png',
'Create Workflow Create Workflow'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/motion-sensor/workflow-settings.png',
'Workflow Settings Workflow Settings'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/motion-sensor/device-trigger.png',
'Device Trigger Device Trigger'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/motion-sensor/debug-node.png',
'Debug Debug'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/motion-sensor/live-payload.png',
'Live Payload Live Payload'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/motion-sensor/debug.gif',
'Conditional Gif Debug Gif'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/motion-sensor/conditional.png',
'Conditional Conditional'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/motion-sensor/conditional.gif',
'Conditional Gif Conditional Gif'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/motion-sensor/sms.png',
'SMS SMS'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/motion-sensor/create-dashboard-menu.png',
'Create Dashboard Menu Create Dashboard Menu'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/motion-sensor/dashboard-settings.png',
'Dashboard Settings Dashboard Settings'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/motion-sensor/add-time.png',
'Add Time Add Time'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/motion-sensor/time-settings.png',
'Time Settings Time Settings'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/motion-sensor/time-settings-1.png',
'Time Settings Time Settings'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/motion-sensor/dashboard-with-time.png',
'Dashboard with Time Dashboard with Time'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/motion-sensor/dashboard.png',
'Full Dashboard Full Dashboard'], dtype=object) ] | docs.losant.com |
Fix modified default rules in Azure AD Connect
Azure Active Directory (Azure AD) Connect uses default rules for synchronization. Unfortunately, these rules don't apply universally to all organizations. Based on your requirements, you might need to modify them. This article discusses two examples of the most common customizations, and explains the correct way to achieve these customizations.
Note
Modifying existing default rules to achieve a needed customization isn't supported. If you do so, it prevents updating these rules to the latest version in future releases. You won't get the bug fixes you need, or new features. This document explains how to achieve the same result without modifying the existing default rules.
How to identify modified default rules
Starting with version 1.3.7.0 of Azure AD Connect, it's easy to identify the modified default rule. Go to Apps on Desktop, and select Synchronization Rules Editor.
In the Editor, any modified default rules are shown with a warning icon in front of the name.
A disabled rule with same name next to it also appears (this is the standard default rule).
Common customizations
The following are common customizations to the default rules:
- Change attribute flow
- Change scoping filter
- Change join condition
Before you change any rules:
Disable the sync scheduler. The scheduler runs every 30 minutes by default. Make sure it's not starting while you're making changes and troubleshooting your new rules. To temporarily disable the scheduler, start PowerShell, and run
Set-ADSyncScheduler -SyncCycleEnabled $false.
The change in scoping filter can result in deletion of objects in the target directory. Be careful before making any changes in the scoping of objects. We recommend that you make changes to a staging server before making changes on the active server.
Run a preview on a single object, as mentioned in the Validate Sync Rule section, after adding any new rule.
Run a full sync after adding a new rule or modifying any custom sync rule. This sync applies new rules to all the objects.
Change attribute flow
There are three different scenarios for changing the attribute flow:
- Adding a new attribute.
- Overriding the value of an existing attribute.
- Choosing not to sync an existing attribute.
You can do these without altering standard default rules.
Add a new attribute
If you find that an attribute is not flowing from your source directory to the target directory, use the Azure AD Connect sync: Directory extensions to fix this.
If the extensions don't work for you, try adding two new sync rules, described in the following sections.
Add an inbound sync rule
An inbound sync rule means the source for the attribute is a connector space, and the target is the metaverse. For example, to have a new attribute flow from on-premises Active Directory to Azure Active Directory, create a new inbound sync rule. Launch the Synchronization Rules Editor, select Inbound as the direction, and select Add new rule.
!Synchronization Rules Editor](media/how-to-connect-fix-default-rules/default3a.png)
Follow your own naming convention to name the rule. Here, we use Custom In from AD - User. This means that the rule is a custom rule, and is an inbound rule from the Active Directory connector space to the metaverse.
Provide your own description of the rule, so that future maintenance of the rule is easy. For example, the description can be based on what the objective of the rule is, and why it's needed.
Make your selections for the Connected System, Connected System Object Type, and Metaverse Object Type fields.
Specify the precedence value from 0 through 99 (the lower the number, the higher the precedence). For the Tag, Enable Password Sync, and Disabled fields, use the default selections.
Keep Scoping filter empty. This means that the rule applies to all the objects joined between the Active Directory Connected System and the metaverse.
Keep Join rules empty. This means this rule uses the join condition defined in the standard default rule. This is another reason not to disable or delete the standard default rule. If there is no join condition, the attribute won't flow.
Add appropriate transformations for your attribute. You can assign a constant, to make a constant value flow to your target attribute. You can use direct mapping between the source or target attribute. Or, you can use an expression for the attribute. Here are various expression functions you can use.
Add an outbound sync rule
To link the attribute to the target directory, you need to create an outbound rule. This means that the source is the metaverse, and the target is the connected system. To create an outbound rule, launch the Synchronization Rules Editor, change the Direction to Outbound, and select Add new rule.
As with the inbound rule, you can use your own naming convention to name the rule. Select the Connected System as the Azure AD tenant, and select the connected system object to which you want to set the attribute value. Set the precedence from 0 through 99.
Keep Scoping filter and Join rules empty. Fill in the transformation as constant, direct, or expression.
You now know how to make a new attribute for a user object flow from Active Directory to Azure Active Directory. You can use these steps to map any attribute from any object to source and target. For more information, see Creating custom sync rules and Prepare to provision users.
Override the value of an existing attribute
You might want to override the value of an attribute that has already been mapped. For example, if you always want to set a null value to an attribute in Azure AD, simply create an inbound rule only. Make the constant value,
AuthoritativeNull, flow to the target attribute.
Note
Use
AuthoritativeNull instead of
Null in this case. This is because the non-null value replaces the null value, even if it has lower precedence (a higher number value in the rule).
AuthoritativeNull, on the other hand, isn't replaced with a non-null value by other rules.
Don’t sync existing attribute
If you want to exclude an attribute from syncing, use the attribute filtering feature provided in Azure AD Connect. Launch Azure AD Connect from the desktop icon, and then select Customize synchronization options.
Make sure Azure AD app and attribute filtering is selected, and select Next.
Clear the attributes that you want to exclude from syncing.
Change scoping filter
Azure AD Sync takes care of most of the objects. You can reduce the scope of objects, and reduce the number of objects to be exported, without changing the standard default sync rules.
Use one of the following methods to reduce the scope of the objects you're syncing:
- cloudFiltered attribute
- Organization unit filtering
If you reduce the scope of the users being synced, the password hash syncing also stops for the filtered-out users. If the objects are already syncing, after you reduce scope, the filtered-out objects are deleted from the target directory. For this reason, ensure that you scope very carefully.
Important
Increasing the scope of objects configured by Azure AD Connect isn't recommended. Doing so makes it difficult for the Microsoft support team to understand the customizations. If you must increase the scope of objects, edit the existing rule, clone it, and disable the original rule.
cloudFiltered attribute
You can't set this attribute in Active Directory. Set the value of this attribute by adding a new inbound rule. You can then use Transformation and Expression to set this attribute in the metaverse. The following example shows that you don’t want to sync all the users whose department name starts with HRD (case-insensitive):
cloudFiltered <= IIF(Left(LCase([department]), 3) = "hrd", True, NULL)
We first converted the department from source (Active Directory) to lowercase. Then, using the
Left function, we took only the first three characters and compared it with
hrd. If it matched, the value is set to
True, otherwise
NULL. In setting the value to null, some other rule with lower precedence (a higher number value) can write to it with a different condition. Run preview on one object to validate sync rule, as mentioned in the Validate sync rule section.
Organizational unit filtering
You can create one or more organizational units (OUs), and move the objects you don’t want to sync to these OUs. Then, configure the OU filtering in Azure AD Connect. Launch Azure AD Connect from the desktop icon, and select the following options. You can also configure the OU filtering at the time of installation of Azure AD Connect.
Follow the wizard, and clear the OUs you don’t want to sync.
Change join condition
Use the default join conditions configured by Azure AD Connect. Changing default join conditions makes it difficult for Microsoft support to understand the customizations and support the product.
Validate sync rule
You can validate the newly added sync rule by using the preview feature, without running the full sync cycle. In Azure AD Connect, select Synchronization Service.
Select Metaverse Search. Select the scope object as person, select Add Clause, and mention your search criteria. Next, select Search, and double-click the object in the search results. Make sure that your data in Azure AD Connect is up-to-date for that object, by running import and sync on the forest before you run this step.
On Metaverse Object Properties, select Connectors, select the object in the corresponding connector (forest), and select Properties….
Select Preview…
In the Preview window, select Generate Preview and Import Attribute Flow in the left pane.
Here, notice that the newly added rule is run on the object, and has set the
cloudFiltered attribute to true.
To compare the modified rule with the default rule, export both of the rules separately, as text files. These rules are exported as a PowerShell script file. You can compare them by using any file comparison tool (for example, windiff) to see the changes.
Notice that in the modified rule, the
msExchMailboxGuid attribute is changed to the Expression type, instead of Direct. Also, the value is changed to NULL and ExecuteOnce option. You can ignore Identified and Precedence differences.
To fix your rules to change them back to default settings, delete the modified rule and enable the default rule. Ensure that you don't lose the customization you're trying to achieve. When you're ready, run Full Synchronization.
Next steps
Feedback | https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-fix-default-rules | 2019-07-15T20:14:29 | CC-MAIN-2019-30 | 1563195524111.50 | [array(['media/how-to-connect-fix-default-rules/default1.png',
'Azure AD Connect, with Synchronization Rules Editor highlighted'],
dtype=object)
array(['media/how-to-connect-fix-default-rules/default2.png',
'Warning icon'], dtype=object)
array(['media/how-to-connect-fix-default-rules/default2a.png',
'Synchronization Rules Editor, showing standard default rule and modified default rule'],
dtype=object)
array(['media/how-to-connect-fix-default-rules/default3b.png',
'Create inbound synchronization rule'], dtype=object)
array(['media/how-to-connect-fix-default-rules/default3c.png',
'Synchronization Rules Editor'], dtype=object)
array(['media/how-to-connect-fix-default-rules/default3d.png',
'Create outbound synchronization rule'], dtype=object)
array(['media/how-to-connect-fix-default-rules/default4.png',
'Azure AD Connect additional tasks options'], dtype=object)
array(['media/how-to-connect-fix-default-rules/default5.png',
'Azure AD Connect optional features'], dtype=object)
array(['media/how-to-connect-fix-default-rules/default6a.png',
'Azure AD Connect attributes'], dtype=object)
array(['media/how-to-connect-fix-default-rules/default7a.png',
'Create inbound synchronization rule options'], dtype=object)
array(['media/how-to-connect-fix-default-rules/default8.png',
'Azure AD Connect additional tasks'], dtype=object)
array(['media/how-to-connect-fix-default-rules/default9.png',
'Azure AD Connect Domain and OU filtering options'], dtype=object)
array(['media/how-to-connect-fix-default-rules/default10.png',
'Azure AD Connect, with Synchronization Service highlighted'],
dtype=object)
array(['media/how-to-connect-fix-default-rules/default11.png',
'Synchronization Service Manager'], dtype=object)
array(['media/how-to-connect-fix-default-rules/default12.png',
'Metaverse Object Properties'], dtype=object)
array(['media/how-to-connect-fix-default-rules/default13a.png',
'Connector Space Object Properties'], dtype=object)
array(['media/how-to-connect-fix-default-rules/default14.png', 'Preview'],
dtype=object)
array(['media/how-to-connect-fix-default-rules/default15a.png', 'Preview'],
dtype=object)
array(['media/how-to-connect-fix-default-rules/default17.png',
'windiff tool output'], dtype=object) ] | docs.microsoft.com |
ASP.NET Globalization and Localization.
In This Section
Localizing ASP.NET Web Pages By Using Resources
How to: Set the Culture and UI Culture for ASP.NET Web Page Globalization
How to: Select an Encoding for ASP.NET Web Page Globalization
HTML Layout Guidelines for ASP.NET Web Page Globalization
Bidirectional Support for ASP.NET Web Applications
Related Sections
Globalizing and Localizing .NET Framework Applications
Provides information about globalizing applications.
Ajax Script Globalization and Localization
Provides information about AJAX features in ASP.NET for globalizing and localizing client script. | https://docs.microsoft.com/en-us/previous-versions/c6zyy3s9(v%3Dvs.140) | 2019-07-15T21:42:55 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.microsoft.com |
Panel Placement Customization
In The Welkin Suite IDE, you can customize the visibility and placement of the panels using the drag&drop method.
In order to change the location of any panel, simply click on the title part of the panel and drag it to its target place. You can drop it:
- anywhere — it will be opened as another window, which you’ll be able to drag back into the main IDE window,
- into any dock place — dock places are highlighted dynamically, so all you need to do is to drag over these dock places and you’ll see an overlay of where the panel can be located.
You can dock each panel into a separate dock place, or you can group multiple panels in one dock and switch between them using the tabs. If you start dragging the panels in the group, you will drag the whole group, so if you want to drag out only one of the grouped panels, start dragging its tab.
In addition to the panel placement, you can select the way they will be displayed — either pinned or auto-hide. To change the display method, use the buttons in the top right corner of the panel. | https://docs.welkinsuite.com/?id=windows:what_is_this:customize_your_tws:panel_placement_customization | 2019-07-15T20:00:07 | CC-MAIN-2019-30 | 1563195524111.50 | [array(['/lib/exe/fetch.php?media=windows:what_is_this:customize_your_tws:2.7.2-panel-placement.gif',
"Panel's placement using the drag&drop method Panel's placement using the drag&drop method"],
dtype=object)
array(['/lib/exe/fetch.php?media=windows:what_is_this:customize_your_tws:2.7.1-panel-placement.png',
"Placement of the panel's tab from a dock Placement of the panel's tab from a dock"],
dtype=object) ] | docs.welkinsuite.com |
gpfdist [...] [-m max_length] [--ssl certificate_path]

gpfdist -? | --help

gpfdist --version
Description
gpfdist is Greenplum's parallel file distribution program. It is used by readable external tables and gpload to serve external table files to all Greenplum Database segments in parallel.
Options
- -d directory
- The directory from which gpfdist will serve files. If not specified, files are served from the current directory.
- -p http_port
- The HTTP port on which gpfdist will serve files. Defaults to 8080.
- -t timeout
- Sets the time allowed for Greenplum Database to establish a connection to a gpfdist process. Default is 5 seconds. Allowed values are 2 to 600 seconds.
- -S (use O_SYNC)
- Opens the file for synchronous I/O with the O_SYNC flag. Any writes to the resulting file descriptor block gpfdist until the data is physically written to the underlying hardware.
- -w time
- Sets the number of seconds that Greenplum Database delays before closing a target file such as a named pipe. The default value is 0, no delay. The maximum value is 600 seconds, 10 minutes.
- --ssl certificate_path
- Adds SSL encryption to data transferred with gpfdist. After executing gpfdist with the --ssl certificate_path option, the only way to load data from this file server is with the gpfdists:// protocol.
- -v (verbose)
- Verbose mode shows progress and status messages.
- -V (very verbose)
- Verbose mode shows all output messages generated by this utility.
- -? (help)
- Displays the online help.
- --version
- Displays the version of this utility.
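As an illustration only, the sketch below starts gpfdist from Python against an assumed staging directory, checks that a known file is reachable over HTTP on the default port 8080, and then stops the process. The directory and file name are assumptions, not part of this reference.

import subprocess
import time
import urllib.request

# Assumed staging directory and test file; adjust to your environment.
proc = subprocess.Popen(["gpfdist", "-d", "/var/load_files"])
time.sleep(2)  # give gpfdist a moment to start listening

try:
    # 8080 is the default HTTP port documented above.
    with urllib.request.urlopen("http://localhost:8080/example.txt") as resp:
        print("gpfdist is serving files, HTTP status:", resp.status)
finally:
    proc.terminate()  # same effect as killing the process id by hand
    proc.wait()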
--To stop gpfdist when it is running in the background, first find its process id:
ps ax | grep gpfdist
--OR on Solaris:
ps -ef | grep gpfdist
--Then kill the process, for example:
kill 3456
The Refraction Filter Render Element is a color render element that takes materials and their colors into account when rendering refraction, while the Raw Refraction Render Element (vrayRE_Raw_Refraction) stores the corresponding raw refraction information. Multiplying these two render elements together produces the Refraction Render Element (vrayRE_Refraction).
To properly calculate the Refraction Filter Render Element, the Refraction Render Element must also be added to the list of render elements being calculated during the rendering process to properly determine all the refraction information in the scene.
UI Path
||Render Settings window|| > Render Elements tab > Refraction Filter

Properties

Filename suffix – The text added to the end of the rendered file when it is saved as a separate file (e.g. myrender.refractionFilter.vrimg).
Denoise – Enables the render element's denoising, provided the Denoiser render element is present.
Common Uses
The Refraction Filter Render Element is useful for changing the appearance of refractive elements after rendering, using a compositing or image editing application. Below are examples of possible uses.
Refraction Filter Render Element
Original Beauty Composite
Refraction Filter Render Element with added contrast
Tinted Refraction Filter Render Element with added contrast
Refractions with added contrast
Tinted Refractions with added contrast
Underlying Compositing Equation
The Refraction Filter Render Element is multiplied by the Raw Refraction to produce the same information seen in the Refraction pass, but having them separated out allows them to be manipulated individually before combining them together.
vrayRE_Raw_Refraction x vrayRE_Refraction_Filter = vrayRE_Refraction
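As a rough illustration (the file names and the image loader are assumptions; any compositing package or float-image library behaves the same way), the per-pixel product can be reproduced in Python:

import numpy as np
import imageio.v3 as iio  # assumes an EXR-capable imageio plugin is installed

# Load the two elements as floating-point images (placeholder file names),
# assuming 3-channel RGB data.
raw_refraction = iio.imread("vrayRE_Raw_Refraction.exr").astype(np.float32)
refraction_filter = iio.imread("vrayRE_Refraction_Filter.exr").astype(np.float32)

# vrayRE_Raw_Refraction x vrayRE_Refraction_Filter = vrayRE_Refraction
refraction = raw_refraction * refraction_filter

# The filter can be graded before recombining, e.g. a simple warm tint.
tint = np.array([1.0, 0.9, 0.8], dtype=np.float32)
tinted_refraction = raw_refraction * (refraction_filter * tint)

iio.imwrite("vrayRE_Refraction_rebuilt.exr", refraction)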
Notes
- To properly calculate the refraction information in the scene, the Refraction Render Element (vrayRE_Refraction) must also be added to the list of render elements being calculated during rendering, as noted above.
Backhaul Internet
The NetScaler SD-WAN solution can backhaul Internet traffic to the MCN site or other SD-WAN client-node sites for access to the Internet. The term "backhaul" indicates traffic destined for the Internet is sent back to another predefined site which has access to the Internet via a WAN link. This may be the case for networks that do not allow Internet access directly at a branch office because of security concerns, or due to the underlay network's topology. An example would be a remote site that lacks an external firewall, where the on-board SD-WAN firewall does not meet the security requirements for that site. For some environments, backhauling all remote site internet traffic through the hardened DMZ at the Data Center may be the most desired approach to providing Internet access to users at remote offices. This approach does, however, have limitations to be aware of (listed below), and the underlay WAN links must be sized appropriately.
Backhaul of internet traffic adds latency to internet connectivity and is variable depending on the distance of the branch site for the data center.
Backhaul of internet traffic consumes bandwidth on the Virtual Path and should be accounted for in sizing of WAN links.
Backhaul of internet traffic may over-subscribe the Internet WAN link at the Data Center.
All NetScaler SD-WAN devices can terminate up to eight distinct Internet WAN links into a single device. Licensed throughput capabilities for the aggregated WAN links are listed per respective appliance on the NetScaler SD-WAN datasheet.
The NetScaler SD-WAN solution supports the backhaul of internet traffic with the following configuration.
Enable Internet Service at the MCN site node, or any other site note where Internet Service is desired.
On the branch nodes where internet traffic will be backhauled, manually add a 0.0.0.0/0 route to send all default traffic to the Virtual Path Service, with the next hop denoted as the MCN or intermediary site.
Verify that the route table of the branch site does not have any other lower cost routes that would steer traffic other than the desired backhaul route through the Virtual Path.
Backhaul Internet | https://docs.citrix.com/en-us/netscaler-sd-wan/10/internet-service/backhaul-internet.html | 2019-07-15T21:08:32 | CC-MAIN-2019-30 | 1563195524111.50 | [array(['/en-us/netscaler-sd-wan/10/media/back-haul-dc-mcn.png',
'localized image'], dtype=object) ] | docs.citrix.com |
Invalid PUBLIC_IP in CBD Profile
Invalid PUBLIC_IP error when starting Cloudbreak.
Error: Invalid PUBLIC_IP error when starting Cloudbreak.
Solution: The PUBLIC_IP property must be set in the Profile file or else you won't be able to log in to the Cloudbreak web UI. If you are migrating your instance, check the Profile file to make sure that after the start the value of the PUBLIC_IP property remains valid. If editing the IP, make sure to restart Cloudbreak by using cbd restart.
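A quick way to confirm the property before restarting is to parse the Profile file from a script. The sketch below is only an illustration; it assumes the default deployment path and the usual shell-style export line, and note that PUBLIC_IP may legitimately hold a hostname rather than an IP.

import ipaddress
import re

PROFILE_PATH = "/var/lib/cloudbreak-deployment/Profile"  # assumed default location

with open(PROFILE_PATH, encoding="utf-8") as f:
    profile = f.read()

match = re.search(r'^\s*(?:export\s+)?PUBLIC_IP=["\']?([^"\'\s]+)', profile, re.MULTILINE)
if not match:
    print("PUBLIC_IP is not set in", PROFILE_PATH)
else:
    value = match.group(1)
    try:
        ipaddress.ip_address(value)
        print("PUBLIC_IP looks valid:", value)
    except ValueError:
        # The value may be a hostname; treat this as a warning only.
        print("PUBLIC_IP is set but is not a plain IP address:", value)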
- Supports garbage collection of container images, either automatically or manually
In summary, using the UCR instead of the Docker Engine:
- Reduces service downtime
- Improves on-the-fly upgradability
- Increases cluster stability
Container Runtime FeaturesContainer Runtime Features
The tables below list the features available with each of the supported container runtimes, which products support the features, and where the feature can be configured. | https://docs.mesosphere.com/1.13/deploying-services/containerizers/ | 2019-07-15T19:59:45 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.mesosphere.com |
sentry repair
Attempt to repair any invalid data.
This by default will correct some common issues like projects missing DSNs or counters desynchronizing. Optionally it can also synchronize the current client documentation from the Sentry documentation server (–with-docs).
Options
--with-docs / --without-docs: Synchronize and repair embedded documentation. This is disabled by default.
--help: print this help page. | https://docs.sentry.io/server/cli/repair/ | 2019-07-15T20:00:08 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.sentry.io |
App Settings- Push Notifications
The Apple Push Notification Service is a platform notification service created by Apple Inc. that enables third-party application developers to send notification data to applications installed on Apple devices. The notification information submitted can include badges, sounds, newsstand updates, or custom text alerts.
The Sandbox Key and passphrase for iOS are used to configure notifications for the app in the development phase or for a pre-production app.
The Production Key and passphrase for iOS are used to configure notifications for a production app that is live for visitors.
The FCM Server Key is used to pass your token from the FCM server when configuring push notifications on Android.
None], dtype=object) ] | docs.acquire.io |
If you’re working on Ansible’s Core code, writing an Ansible module, or developing an action plugin, this deep dive helps you understand how Ansible’s program flow executes. If you’re just using Ansible Modules in playbooks, you can skip this section.
The normal action plugin
Ansible supports several different types of modules in its code base. Some of these are for backwards compatibility and others are to enable flexibility. For example, the template action plugin takes values from the user to construct a file in a temporary location on the controller using variables from the playbook environment. It then transfers the temporary file to a temporary file on the remote system. After that, it invokes the copy module which operates on the remote system to move the file into its final location, sets file permissions, and so on.
New-style PowerShell modules use the Module Replacer framework for constructing modules. These modules get a library of PowerShell code embedded in them before being sent to the managed node.
These modules are scripts that include the string
<<INCLUDE_ANSIBLE_MODULE_JSON_ARGS>> in their body.
This string is replaced with the JSON-formatted argument string. These modules typically set a variable to that value like this:
json_arguments = """<<INCLUDE_ANSIBLE_MODULE_JSON_ARGS>>"""
Which is expanded as:
json_arguments = """{"param1": "test's quotes", "param2": "\"To be or not to be\" - Hamlet"}"""
Note
Ansible outputs a JSON string with bare quotes. Double quotes are used to quote string values, double quotes inside of string values are backslash escaped, and single quotes may appear unescaped inside of a string value. To use JSONARGS, your scripting language must have a way to handle this type of string. The example uses Python’s triple quoted strings to do this. Other scripting languages may have a similar quote character that won’t be confused by any quotes in the JSON or it may allow you to define your own start-of-quote and end-of-quote characters. If the language doesn’t give you any of these then you’ll need to write a non-native JSON module or Old-style module instead.
These modules typically parse the contents of
json_arguments using a JSON
library and then use them as native variables throughout the code.
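For illustration, a minimal JSONARGS-style module written in Python could look like the sketch below. The name parameter is invented for the example, and the marker string is replaced by Ansible before the module runs.

#!/usr/bin/python
import json
import sys

# Ansible substitutes the marker below with the JSON-formatted parameters.
json_arguments = """<<INCLUDE_ANSIBLE_MODULE_JSON_ARGS>>"""

params = json.loads(json_arguments)
name = params.get("name", "world")  # invented example parameter

# A module prints a single JSON dictionary on stdout and exits.
print(json.dumps({"changed": False, "msg": "Hello, %s" % name}))
sys.exit(0)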
If a module has the string
WANT_JSON in it anywhere, Ansible treats
it as a non-native module that accepts a filename as its only command line
parameter. The filename is for a temporary file containing a JSON
string containing the module’s parameters. The module needs to open the file,
read and parse the parameters, operate on the data, and print its return data
as a JSON encoded dictionary to stdout before exiting.
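A minimal want-JSON module might look like the following Python sketch; the path parameter is invented for the example, and the WANT_JSON marker in the file is what tells Ansible to use this calling convention.

#!/usr/bin/python
# WANT_JSON
import json
import sys

def main():
    # Ansible passes one argument: a temporary file holding the JSON parameters.
    with open(sys.argv[1]) as f:
        params = json.load(f)

    path = params.get("path", "/tmp")  # invented example parameter

    # Return data is printed as a JSON dictionary on stdout.
    print(json.dumps({"changed": False, "examined_path": path}))
    sys.exit(0)

if __name__ == "__main__":
    main()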
These types of modules are self-contained entities. As of Ansible 2.1, Ansible only modifies them to change a shebang line if present.
See also
Examples of non-native modules written in Ruby are in the Ansible for Rubyists repository.
Old-style modules are similar to want JSON modules, except that the file that they take contains key=value pairs for their parameters instead of JSON. Ansible decides that a module is old-style when it doesn't have any of the markers that would show that it is one of the other types.
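For comparison, an old-style module has to split the key=value pairs itself. A minimal sketch of that parsing step (parameter handling is invented for the example) is:

#!/usr/bin/python
import json
import shlex
import sys

def main():
    # The single argument is a file containing key=value pairs, not JSON.
    with open(sys.argv[1]) as f:
        args_data = f.read()

    params = {}
    for token in shlex.split(args_data):
        if "=" in token:
            key, value = token.split("=", 1)
            params[key] = value

    print(json.dumps({"changed": False, "params_seen": sorted(params)}))
    sys.exit(0)

if __name__ == "__main__":
    main()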
When a user uses ansible or ansible-playbook, they specify a task to execute. The task is usually the name of a module along with several parameters to be passed to the module. Ansible takes these values and processes them in various ways before they are finally executed on the remote machine.
The TaskExecutor receives the module name and parameters that were parsed from the playbook (or from the command line in the case of /usr/bin/ansible). It uses the name to decide whether it’s looking at a module or an Action Plugin. If it’s a module, it loads the Normal Action Plugin and passes the name, variables, and other information about the task and play to that Action Plugin for further processing.
The normal action plugin¶
The
normal action plugin executes the module on the remote host. It is
the primary coordinator of much of the work to actually execute the module on
the managed machine.
It also adds any internal Ansible properties to the module's parameters (for instance, the ones that pass along no_log to the module).
Much of this functionality comes from the BaseAction class,
which lives in
plugins/action/__init__.py. It uses the
Connection and
Shell objects to do its work.
Code in
executor/module_common.py assembles the module
to be shipped to the managed node. The module is first read in, then examined
to determine its type:
After the assembling step, one final
modification is made to all modules that have a shebang line. Ansible checks
whether the interpreter in the shebang line has a specific path configured via
an
ansible_$X_interpreter inventory variable. If it does, Ansible
substitutes that path for the interpreter path given in the module. After
this, Ansible returns the complete module data and the module type to the
Normal Action which continues execution of
the module.
Ansible supports two assembler frameworks: Ansiballz and the older Module Replacer. The Module Replacer framework is essentially a preprocessor: it does straight substitutions of specific substring patterns in the module file. There are two types of substitutions:
from ansible.module_utils.MOD_LIB_NAME import * is replaced with the contents of ansible/module_utils/MOD_LIB_NAME.py. These should only be used with new-style Python modules.
#<<INCLUDE_ANSIBLE_MODULE_COMMON>> is equivalent to from ansible.module_utils.basic import * and should also only apply to new-style Python modules.
# POWERSHELL_COMMON substitutes the contents of ansible/module_utils/powershell.ps1. It should only be used with new-style Powershell modules.
The following patterns substitute ansible.module_utils code. These are internal replacement patterns. They may be used internally, in the above public replacements, but shouldn't be used directly by modules.
"<<ANSIBLE_VERSION>>" is substituted with the Ansible version. In new-style Python modules under the Ansiballz framework the proper way is to instead instantiate an AnsibleModule and then access the version from AnsibleModule.ansible_version.
"<<INCLUDE_ANSIBLE_MODULE_COMPLEX_ARGS>>" is substituted with a string which is the Python repr of the JSON encoded module parameters. Using repr on the JSON string makes it safe to embed in a Python file. In new-style Python modules under the Ansiballz framework this is better accessed by instantiating an AnsibleModule and then using AnsibleModule.params.
<<SELINUX_SPECIAL_FILESYSTEMS>> substitutes a string which is a comma separated list of file systems which have a file system dependent security context in SELinux. In new-style Python modules, if you really need this you should instantiate an AnsibleModule and then use AnsibleModule._selinux_special_fs. The variable has also changed from a comma separated string of file system names to an actual python list of filesystem names.
<<INCLUDE_ANSIBLE_MODULE_JSON_ARGS>> substitutes the module parameters as a JSON string. Care must be taken to properly quote the string as JSON data may contain quotes. This pattern is not substituted in new-style Python modules as they can get the module parameters another way.
syslog.LOG_USER is replaced wherever it occurs with the syslog_facility which was named in ansible.cfg or any ansible_syslog_facility inventory variable that applies to this host. In new-style Python modules this has changed slightly. If you really need to access it, you should instantiate an AnsibleModule and then use AnsibleModule._syslog_facility to access it. It is no longer the actual syslog facility and is now the name of the syslog facility. See the documentation on internal arguments for details.
The Ansiballz framework was adopted in Ansible 2.1 and is used for all new-style Python modules. Unlike the Module Replacer, Ansiballz uses real Python imports of things in
ansible/module_utils instead of merely preprocessing the module. It
does this by constructing a zipfile – which includes the module file, files
in
ansible/module_utils that are imported by the module, and some
boilerplate to pass in the module’s parameters. The zipfile is then Base64
encoded and wrapped in a small Python script which decodes the Base64 encoding
and places the zipfile into a temp directory on the managed node. It then
extracts just the Ansible module script from the zip file and places that in
the temporary directory as well. Then it sets the PYTHONPATH to find Python
modules inside of the zip file and then executes the Ansible module.
In Ansiballz, any imports of Python modules from the
ansible.module_utils package trigger inclusion of that Python file
into the zipfile. Instances of
#<<INCLUDE_ANSIBLE_MODULE_COMMON>> in
the module are turned into
from ansible.module_utils.basic import *
and
ansible/module-utils/basic.py is then included in the zipfile.
Files that are included from
module_utils are themselves scanned for
imports of other Python modules from
module_utils to be included in
the zipfile as well.
Warning
At present, the Ansiballz Framework cannot determine whether an import
should be included if it is a relative import. Always use an absolute
import that has
ansible.module_utils in it to allow Ansiballz to
determine that the file should be included.
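The packaging idea can be pictured with a small stand-alone sketch. This is not Ansible's actual implementation, and the file names are invented, but it shows the same pattern of zipping the module plus its module_utils files, Base64-encoding the archive, and embedding it in a thin wrapper script.

import base64
import io
import zipfile

def build_payload(module_path, module_utils_paths):
    # Bundle the module and its module_utils dependencies into one zip.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(module_path, arcname="ansible_module.py")
        for path in module_utils_paths:
            zf.write(path, arcname="ansible/module_utils/" + path.rsplit("/", 1)[-1])
    return base64.b64encode(buf.getvalue()).decode("ascii")

WRAPPER_TEMPLATE = '''\
import base64, os, tempfile, zipfile
ZIPDATA = "{zipdata}"
tmp = tempfile.mkdtemp()
zip_path = os.path.join(tmp, "payload.zip")
with open(zip_path, "wb") as f:
    f.write(base64.b64decode(ZIPDATA))
with zipfile.ZipFile(zip_path) as zf:
    zf.extractall(tmp)
# A real wrapper would now put the zip on PYTHONPATH and run the module.
print("extracted payload to", tmp)
'''

if __name__ == "__main__":
    # Invented input files for the sketch.
    wrapper = WRAPPER_TEMPLATE.format(
        zipdata=build_payload("my_module.py", ["module_utils/helpers.py"]))
    print(wrapper)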
Arguments are passed differently by the two frameworks:
In Module Replacer, module arguments are turned into a JSON-ified string and substituted into the combined module file. In Ansiballz, the JSON-ified string is part of the script which wraps the zipfile; just before the wrapper runs the module, it sets the private _ANSIBLE_ARGS variable in basic.py to those values.
Both Module Replacer framework and Ansiballz framework send additional arguments to
the module beyond those which the user specified in the playbook. These
additional arguments are internal parameters that help implement global
Ansible features. Modules often do not need to know about these explicitly as
the features are implemented in
ansible.module_utils.basic but certain
features need support from the module so it’s good to know about them.
The internal arguments listed here are global. If you need to add a local internal argument to a custom module, create an action plugin for that specific module - see
_original_basename in the copy action plugin for an example.
_ansible_diff
Boolean. If a module supports it, tells the module to show a unified diff of
changes to be made to templated files. To set, pass the
--diff command line
option. To access in a module, instantiate an AnsibleModule and access
AnsibleModule._diff.
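For instance, a module that supports diff mode might do something like the sketch below; the argument and the before value are invented, and the diff dictionary is only attached when the task ran with --diff.

from ansible.module_utils.basic import AnsibleModule

def main():
    module = AnsibleModule(argument_spec=dict(content=dict(type="str", required=True)))

    before = "old contents\n"          # e.g. read from the managed file
    after = module.params["content"]

    result = {"changed": before != after}
    if module._diff:                   # set when the task ran with --diff
        result["diff"] = {"before": before, "after": after}
    module.exit_json(**result)

if __name__ == "__main__":
    main()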
_ansible_verbosity
Unused. This value could be used for finer grained control over logging.
_ansible_syslog_facility
This parameter controls which syslog facility the Ansible module logs to. To set, change the
syslog_facility value in
ansible.cfg. Most
modules should just use
AnsibleModule.log() which will then make use of
this. If a module has to use this on its own, it should instantiate an
AnsibleModule and then retrieve the name of the syslog facility from
AnsibleModule._syslog_facility.
_ansible_version
This parameter passes the version of Ansible that runs the module. To access
it, a module should instantiate an AnsibleModule and then retrieve it
from
AnsibleModule.ansible_version. This replaces
ansible.module_utils.basic.ANSIBLE_VERSION from
Module Replacer framework.
New in version 2.1.
Ansible can transfer a module to a remote machine in one of two ways:
Pipelining only works with modules written in Python at this time because Ansible only knows that Python supports this mode of operation. Supporting pipelining means that whatever format the module payload takes before being sent over the wire must be executable by Python via stdin.
Passing arguments via stdin was chosen for the following reasons:
The
argument_spec provided to
AnsibleModule defines the supported arguments for a module, as well as their type, defaults and more.
Example
argument_spec:
module = AnsibleModule(argument_spec=dict( top_level=dict( type='dict', options=dict( second_level=dict( default=True, type='bool', ) ) ) ))
This section will discuss the behavioral attributes for arguments.
fallback accepts a
tuple where the first argument is a callable (function) that will be used to perform the lookup, based on the second argument. The second argument is a list of values to be accepted by the callable.
The most common callable used is
env_fallback which will allow an argument to optionally use an environment variable when the argument is not supplied.
Example:
username=dict(fallback=(env_fallback, ['ANSIBLE_NET_USERNAME']))
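Put together, a credential-style argument that falls back to environment variables could be declared as in this sketch; the argument and variable names are only illustrative.

from ansible.module_utils.basic import AnsibleModule, env_fallback

def main():
    module = AnsibleModule(
        argument_spec=dict(
            username=dict(type="str",
                          fallback=(env_fallback, ["ANSIBLE_NET_USERNAME"])),
            password=dict(type="str", no_log=True,
                          fallback=(env_fallback, ["ANSIBLE_NET_PASSWORD"])),
        )
    )
    # If the playbook omitted these values, the environment variables above
    # are consulted before any defaults.
    module.exit_json(changed=False, user=module.params["username"])

if __name__ == "__main__":
    main()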
choices accepts a list of choices that the argument will accept; the value supplied by the user is validated against this list.
options implements the ability to create a sub-argument_spec, where the sub options of the top level argument are also validated using the attributes discussed in this section. The example at the top of this section demonstrates use of
options.
type or
elements should be
dict in this case.
apply_defaults works alongside
options and allows the
default of the sub-options to be applied even when the top-level argument is not supplied.
In the example of the
argument_spec at the top of this section, it would allow
module.params['top_level']['second_level'] to be defined, even if the user does not provide
top_level when calling the module.
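Continuing the argument_spec shown at the top of this section, a module body could read the nested value like the following sketch (apply_defaults is enabled here to illustrate the behavior just described):

from ansible.module_utils.basic import AnsibleModule

def main():
    module = AnsibleModule(argument_spec=dict(
        top_level=dict(
            type="dict",
            apply_defaults=True,
            options=dict(
                second_level=dict(default=True, type="bool"),
            ),
        ),
    ))

    # With apply_defaults=True this key exists even when the user never
    # passed top_level at all.
    second_level = module.params["top_level"]["second_level"]
    module.exit_json(changed=False, second_level=second_level)

if __name__ == "__main__":
    main()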
removed_in_version indicates which version of Ansible a deprecated argument will be removed in. | https://docs.ansible.com/ansible/latest/dev_guide/developing_program_flow_modules.html | 2019-07-15T21:17:36 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.ansible.com |
Microsoft Trust Center.
We know that when you entrust your applications and data to Azure, you’re going to have questions. Where is it? Who can access it? What is Microsoft doing to protect it? How can you verify that Microsoft is doing what it says?
And we have answers. Because it’s your data, you decide who has access, and you work with us to decide where it is located. To safeguard your data, we use state-of-the-art security technology and world-class cryptography. Our compliance is independently audited, and we’re transparent on many levels—from how we handle legal demands for your customer data to the security of our code.
Here's what you find at the Microsoft Trust Center:
- Security – Learn how all the Microsoft Cloud services are secured.
- Privacy – Understand how Microsoft ensures privacy of your Data in the Microsoft cloud.
- Compliance – Discover how Microsoft helps organizations comply with national, regional, and industry-specific requirements governing the collection and use of individuals’ data.
- Transparency – View how Microsoft believes that you control your data in the cloud and how Microsoft helps you know as much as possible about how that data is handled.
- Products and Services – See all the Microsoft Cloud products and services in one place
- Service Trust Portal – Obtain copies of independent audit reports of Microsoft cloud services, risk assessments, security best practices, and related materials.
- What’s New – Find out what’s new in Microsoft Cloud Trust
- Resources – Investigate white papers, videos, and case studies on Microsoft Trusted Cloud
The Microsoft Trust Center has what you need to understand what we do to secure the Microsoft Cloud.
IOCTL_CDROM_READ_SG (Windows CE 5.0)
This IOCTL reads scatter buffers from the CD-ROM and the information is stored in the CDROM_READ structure. The DeviceIoControl function processes this IOCTL.
Parameters
- dwIoControlCode
[in] Set to IOCTL_CDROM_READ_SG to read scatter buffers from the CD-ROM and store the information in the CDROM_READ structure.
- lpInBuf
[in] Set to the address of an allocated SGX_BUF structure.
- nInBufSize
[in] Set to the size of the SGX_BUF.
- lpOutBuf
[in, out] On input, set to the address of an allocated CDROM_READ structure. This is the memory needed for the structure and info storage. On output, a filled CDROM_READ structure.
- nOutBufSize
[in] Set to the size of the CDROM_READ.
- lpBytesReturned
[in, out] On input, address of a DWORD that receives the size in bytes of the data returned. On output, set to the number of bytes written to the supplied buffer.
Return Values
If the function succeeds, the return value is nonzero.
If the function fails, the return value is zero. To obtain extended error information, call GetLastError.
Requirements
OS Versions: Windows CE .NET 4.0 and later.
Header: Cdioctl.h.
See Also
Block Drivers | CDROM_READ | DeviceIoControl | SGX_BUF
GDPR Policy Localization
The SDK is now able to support localization of our GDPR terms and conditions which a user must agree if covered by the GDPR. The language of the policy is set via the search options.
// Country Code searchParams.setCountryCode("BE"); // Language Code searchParams.setLanguageCode("fr");
More info on setting the search parameters can be found in the Slyce documentation. The full SlyceSDK uses native extensions to power some features. These native extensions add 2.4 MB and 3.2 MB for 32-bit and 64-bit architectures, respectively. If your application is not ABI split, the net impact of the full SlyceSDK to your application's
.aar is about 6 MB.
SlyceSDK Lite on Android eliminates all native extensions, which removes the need for ABI-splitting and reduces the net size impact to less than 900 KB.
Link to SDK Lite:
You can download the SDK Lite from our public Github repository:
- Public Github Repo.
To disable SlyceSearchTasks from being executed automatically when a barcode is detected, create an options HashMap and add a boolean flag for KEY_DISABLE_BARCODE_SEARCH_TASK:

void initiateSlyceUI(Context context) {
    Slyce slyce = Slyce.getInstance(context);

    HashMap<String, Object> options = new HashMap<>();
    options.put(SlyceOptions.KEY_DISABLE_BARCODE_SEARCH_TASK, true);

    try {
        new SlyceUI
                .ActivityLauncher(slyce, SlyceActivityMode.PICKER)
                .customClassName(CustomUIActivity.class.getName())
                .options(options) // Add the options map when launching Slyce UI
                .launch(context);
    } catch (ClassNotFoundException | SlyceNotOpenedException e) {
        showError();
    }
}
How does it work?
Since The Welkin Suite was created for Force.com developers by fellow developers, it aims to solve a number of problems you may come across in your coding experience. This implies the simplification of some processes to improve coding velocity, introducing alternative ways to solve conventional issues, and a lot of room for customization — all to show you how pleasant Salesforce development can get.
Last modified: 2018/03/14 10:39 | https://docs.welkinsuite.com/?id=mac:how_does_it_work | 2019-07-15T20:41:44 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.welkinsuite.com |
Jenkins is a continuous integration and deployment tool.
Create a new job which will run
gauge run specs.
- In
Source Code Management select
Git give the git repository url.
- In
Build select
Execute Shell and specify the command
gauge run specs.
A subset of the specs can be executed by configuring the job to use tags.
Eg.
gauge run --tags "tag1 & tag2" specs
Adding the flag -p runs the specs using Parallel Execution.
Run against specific environments by using environments in a Gauge project with the --env flag.
See the Manpage for a list of all the flags that can be used.
All content with label amazon+guide+import+infinispan+listener+setup+xaresource+xsd.
Related Labels:
podcast, publish, datagrid, coherence, interceptor, server, replication, recovery, transactionmanager, dist, query, deadlock, intro, archetype, jbossas, lock_striping, nexus, schema, cache,
s3, grid, test, jcache, api, ehcache, maven, documentation, youtube, userguide, write_behind, ec2, 缓存, hibernate, aws, custom_interceptor, clustering, eviction, gridfs, out_of_memory, jboss_cache, index, events, batch, configuration, hash_function, buddy_replication, loader, write_through, cloud, mvcc, notification, tutorial, presentation, xml, read_committed, jbosscache3x, distribution, cachestore, data_grid, cacheloader, hibernate_search, resteasy, cluster, development, permission, websocket, transaction, interactive, build, searchable, demo, installation,, - guide, - import, - infinispan, - listener, - setup, - xaresource, - xsd )
All content with label as5+cache+gridfs+infinispan+listener+pojo+store+write_through+xml.
Related Labels:
podcast, expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager,, cloud, mvcc, notification, tutorial, presentation, read_committed, jbosscache3x, distribution, cachestore, data_grid, cacheloader, resteasy, hibernate_search, cluster, development, websocket, async, transaction, interactive, xaresource, build, searchable, demo, installation, client, migration, non-blocking, jpa, filesystem, tx, user_guide, article, gui_demo, eventing, client_server, testng, infinispan_user_guide, standalone, repeatable_read, webdav, hotrod, docs, batching, consistent_hash, whitepaper, jta, faq, spring, 2lcache, jsr-107, jgroups, lucene, locking, rest
All content with label async+cache+consistent_hash+development+grid+hot_rod+infinispan+jboss_cache+listener+notification+release+repeatable_read+user_guide.
Related Labels:
podcast, expiration, publish, datagrid, coherence, interceptor, server, rehash, transactionmanager, dist, partitioning, query, deadlock, intro, archetype, pojo_cache, jbossas, lock_striping, nexus,
guide, schema, state_transfer, amazon, memcached, test, jcache, api, xsd, ehcache, maven, documentation, roadmap,, remoting, mvcc, tutorial, presentation, murmurhash2, xml, read_committed, distribution, jira, cachestore, data_grid, hibernate_search, resteasy, cluster, br, websocket, transaction, interactive, xaresource, build, searchable, demo, scala, cache_server, installation, client, migration, non-blocking, rebalance, jpa, filesystem, article, gui_demo, eventing, client_server, testng, infinispan_user_guide, murmurhash, standalone, snapshot, hotrod, webdav, docs, batching, store, whitepaper, jta, faq, as5, spring, 2lcache, jsr-107, lucene, jgroups, locking, rest
All content with label batching+buddy_replication+demo+gridfs+infinispan+installation+jcache+jsr-107+release+repeatable_read.
Related Labels:
expiration, publish, datagrid, coherence, interceptor, server, transactionmanager, dist, partitioning, query, deadlock, archetype, lock_striping, nexus, guide, schema, listener, cache, amazon,
grid, test, api, xsd, ehcache, maven, documentation, wcm, write_behind, ec2, 缓存, s, hibernate, getting, aws, interface, custom_interceptor, setup, clustering, eviction, out_of_memory, concurrency, jboss_cache, examples, import, index, events, batch, configuration, hash_function, loader, write_through, cloud, mvcc, tutorial, notification, read_committed, xml, distribution, started, cachestore, data_grid, resteasy, hibernate_search, cluster, br, websocket, transaction, async, interactive, xaresource, build, gatein, searchable, scala, client, non-blocking, migration, jpa, filesystem, gui_demo, eventing, client_server, testng, infinispan_user_guide, standalone, snapshot, hotrod, webdav, docs, consistent_hash, store, jta, faq, 2lcache, as5, jgroups, lucene, locking, rest, hot_rod
All content with label ec2+eviction+hibernate_search+hot_rod+infinispan+jboss_cache+listener+release+scala+transaction+xaresource.
Related Labels:
expiration, publish, datagrid, interceptor, server, recovery, transactionmanager, dist, partitioning, query, deadlock, archetype, jbossas, lock_striping, nexus, guide, schema, cache, s3,
amazon, grid, jcache, test, api, xsd, ehcache, maven, documentation, write_behind, 缓存, hibernate, aws, interface, custom_interceptor, setup, clustering, gridfs, out_of_memory, concurrency, import, index, events, configuration, hash_function, batch, buddy_replication, loader, xa, write_through, cloud, mvcc, tutorial, notification, jbosscache3x, read_committed, xml, distribution, meeting, cachestore, data_grid, cacheloader, resteasy, cluster, br, development, websocket, async, interactive, build, searchable, demo, installation, cache_server, ispn, »
public interface KeyPartitioner
default void init(HashConfiguration configuration)
The partitioner can also use injection to access other cache-level or global components. This method will be called before any other injection methods.
Does not need to be thread-safe (Infinispan safely publishes the instance after initialization).
configuration-
int getSegment(Object key)
NavigationWindow.ForwardStack Property
Definition
Gets an IEnumerable that you use to enumerate the entries in forward navigation history for a NavigationWindow.
public: property System::Collections::IEnumerable ^ ForwardStack { System::Collections::IEnumerable ^ get(); };
public System.Collections.IEnumerable ForwardStack { get; }
member this.ForwardStack : System.Collections.IEnumerable
Public ReadOnly Property ForwardStack As IEnumerable
Property Value
IEnumerable if at least one entry has been added to forward navigation history, or null if there are no entries or the NavigationWindow does not own its own navigation history.
Remarks
The entries that are returned by ForwardStack include whole content, fragment navigations, and custom state. | https://docs.microsoft.com/en-us/dotnet/api/system.windows.navigation.navigationwindow.forwardstack?view=netframework-4.8 | 2019-07-15T21:05:49 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.microsoft.com |
Create a Workspace to Work with your Team Project
A workspace maps version-controlled folders on the server to working folders on your computer.
If you want to have multiple copies of source-controlled items on your computer, you can create more than one workspace for a specific source control server.
Create and Work With Workspaces
Provides an overview of source control workspaces and provides procedures for using workspaces to get your team project files for the first time.
Add and Remove a Working Folder in a Workspace
Explains the steps that are used to add and remove working folders in a workspace.
Edit a Workspace
Explains the steps that are used to modify an existing workspace.
Cloak and Uncloak Folders in a Workspace
Explains the steps used to cloak and uncloak folders in a workspace.
Remove a Workspace
Describes the steps that are used to remove a workspace.
Work Offline when the Server is Unavailable
Describes how to work with a local copy of a version-controlled file when the server is offline.
Reference
Team Foundation Version Control Command-Line Reference
Related Sections
Getting a Local Copy of Files from the Version Control Server
Provides information about retrieving source control files and folders for a team project associated with a local workspace.
Administering Team Foundation Version Control
Lists topics that apply to administrators of Team Foundation version control.
See Also
Tasks
View Pending Changes in Other Workspaces | https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/ms181383%28v%3Dvs.100%29 | 2019-07-15T21:36:35 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.microsoft.com |
Common tasks for creating and deploying configuration baselines with System Center Configuration Manager
Applies to: System Center Configuration Manager (Current Branch)
This topic contains common scenarios to help you learn about how to create and deploy System Center Configuration Manager configuration baselines.
If you are already familiar with compliance settings, you can find detailed documentation about all the features you use in the Create configuration baselines and Deploy configuration baselines topics.
Before you start, read Get started with compliance settings in System Center Configuration Manager to learn some basics about compliance settings, and also read Plan for and configure compliance settings to implement any necessary prerequisites.
Create a configuration baseline
In this example, you've created a configuration item for only Windows 10 PCs that run the Configuration Manager client.
This configuration item enforces a required password of at least 6 characters on Windows 10 PCs. The configuration item is named Windows 10 Password Enforcement.
Use the following procedure to learn how to add this configuration item to a configuration baseline to prepare it for deployment.
In the Configuration Manager console, click Assets and Compliance > Compliance Settings > Configuration Baselines.
On the Home tab, in the Create group, click Create Configuration Baseline.
In the Create Configuration Baseline dialog box, configure the following settings:
- Name - Enter Windows 10 Passwords (or another name of your choice)
Click Add > Configuration Items.
In the Add Configuration Items dialog box, select the Windows 10 Password Enforcement configuration item that you previously created, then click Add.
Click OK to close the Add Configuration Items dialog box and return to the Create Configuration Baseline dialog box.
Click OK to close the Create Configuration Baseline dialog box.
You can now see the configuration baseline in the Configuration Baselines node of the Configuration Manager console.
Deploy the configuration baseline
In this example, you deploy the configuration baseline you created in the previous procedure to a collection of computers.
In the Configuration Manager console, click Assets and Compliance > Compliance Settings > Configuration Baselines.
From the list of configuration baselines, select Windows 10 Passwords.
On the Home tab, in the Deployment group, click Deploy.
In the Deploy Configuration Baselines dialog box, configure the following settings:
Selected configuration baselines - Ensure that the Windows 10 Passwords configuration baseline was automatically added to this list.
Remediate noncompliant rules when supported - Check this box to ensure that if the correct settings are not present on targeted devices, then they are remediated by Configuration Manager.
Collection - Click Browse to choose the collection of computers on which the configuration baseline is evaluated and remediated for compliance. In this example, the configuration baseline was deployed to the built-in All Desktop and Server Clients collection.
Tip
Don't worry if the collection you choose contains computers or devices that don't run Windows 10. As long as you configured supported platforms in the configuration item you created, only Windows 10 PCs are evaluated for compliance.
If necessary, configure the schedule by which the configuration baseline is evaluated. Otherwise, keep the default of 7 Days.
Click OK to close the Deploy Configuration Baselines dialog box and create the deployment.
If you want to take a quick look at compliance statistics for this deployment, in the Monitoring workspace, click Deployments. At the bottom of the screen, you see a Compliance Statistics chart.
Next steps
For more detailed information about how to monitor configuration baselines, see Monitor compliance settings.
Contribute to Third Party Extension ¶
This chapter addresses contributing to third party extension documentation.
For system extensions, see Contribute to System Extension .
You can contribute to the documentation of any publicly available extension, if the repository is public (e.g. hosted on GitHub). This does not mean, the extension author will be willing or is obligated to merge your change. But, most of the time, useful contributions are welcome.
You can add issues or make changes via patches (e.g. pull requests on GitHub).
Find the Source on GitHub ¶
You can also find the rendered documentation:
Method 1: Find Rendered Manual on docs.typo3.org ¶
Go to: Extensions by extension key
Method 2: Find Rendered Manual on ¶
Go to the Extension repository
In the search box, enter the name or extension key
Click on “Show Manual”
Note
You cannot find system extensions (extensions that are maintained in the core) on .
Follow Contribution Guide ¶. | https://docs.typo3.org/m/typo3/docs-how-to-document/master/en-us/WritingDocForExtension/ContributeToThirdPartyExtension.html | 2019-07-15T20:52:56 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.typo3.org |
How to Authorize Sync Engine in Corporate Office 365 / Azure Settings¶
Revenue Inbox Sync is ready to be connected to any supported email server out of the box. Similarly to RI Add-In installed for end users’ mail accounts, it is a server app that requires specific server-side permissions to run for individual users. Specifically, security policies configuration established in a company’s Office 365 / Azure infrastructure should explicitly allow the app to run; that can be ensured by the local Administrator via Microsoft 365 Admin center and Azure Active Directory.
This troubleshooting article addresses the three common issues which may prevent RI Sync engine’s functioning on server side.
Tip
Also see this RI FAQ entry to learn what data access permissions the solution requires to perform its functions.
I. Check your corporate firewall configuration¶
See this article for complete information on how to do that.
II. Adjust Azure server Enterprise Applications configuration¶
Steps how to do that:
1. Log in to the Azure management portal with Admin credentials
2. Click on All services in the Main menu
3. Select the directory you are using for the Revenue Inbox server app
4. Click on the Enterprise applications tab
5. Select the application from the list of applications associated with this directory
6. Click the Properties tab
7. Change the Enabled for users to sign-in? toggle to Yes
8. It is also recommended (but not required) to enable the User assignment required? toggle; this allows the end users to authorize Revenue Inbox sync independently from the Admin
9. Click the Save button at the top of the page
10. In addition, check whether the Revenue Inbox application with the ID indicated in the error notification you got is on the list of applications (added/allow-listed for the users to be assigned).
III. To resolve the "You can't access this application" error on user authentication via a service account¶
If you get an error notification containing the message “Revenue Inbox needs permission to access resources in your organization that only an admin can grant. Please ask an admin to grant permission to this app before you can use it” or a status code AADSTS90094, you need to adjust your Office 365 settings to allow the end users to sign in to apps like Revenue Inbox Sync.
Why does this error occur?¶
The most common cause is when the end users have no permission to confirm OAuth consent screens for an application, unless they have Admin rights within your Office 365 tenant. Enterprise apps like Revenue Inbox use OAuth as a more secure way to authorize scoped access to your Office 365 tenant email and calendar data with a username and password. Learn more about service principals and Enterprise app permissions here.
Additional Microsoft articles for your reference¶
- Assign a user or group to an enterprise app in Azure Active Directory
- How to assign users and groups to an application
- Apps, permissions, and consent in Azure Active Directory
- Assign a user or group to an enterprise app in Azure Active Directory
None], dtype=object)
array(['../../assets/images/Configuration-%26-Settings/Admin-Settings-%26-Actions/recommended.png',
None], dtype=object)
array(['../../assets/images/Configuration-%26-Settings/Admin-Settings-%26-Actions/authorization-admin.jpg',
None], dtype=object)
array(['../../assets/images/faq/fb.png', None], dtype=object)] | docs.revenuegrid.com |
Document.Path property (Word)
Returns the disk or Web path to the document. Read-only String.
Syntax
expression.Path
expression Required. A variable that represents a Document object.
Remarks
The path doesn't include a trailing character — for example, "C:\MSOffice" or "" — unless requesting a path of a document stored at a network drive root (e.g. for N:\file.docx "N:\" is returned provided N is a network drive, compared to "C:" for C:\file.docx where C is a local drive). ().
Example
This example displays the path and file name of the active document.
MsgBox ActiveDocument.Path & Application.PathSeparator & _ ActiveDocument.Name
This example changes the current folder to the path of the template attached to the active document.
ChDir ActiveDocument.AttachedTemplate.Path
See also
Support and feedback
Have questions or feedback about Office VBA or this documentation? Please see Office VBA support and feedback for guidance about the ways you can receive support and provide feedback. | https://docs.microsoft.com/en-us/office/vba/api/word.document.path | 2021-07-24T01:01:54 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.microsoft.com |
Using the Search by List Customization Setting¶
On Revenue Inbox Customization page, the central column Objects in Revenue Inbox lists all Salesforce record types displayed in the Add-In / Chrome Extension. Under every record type, there are relevant customization settings. This article specifically explains how to use the Search by field (under the Other settings category) and provides associated best use practices.
The Search by field allows to set the object’s fields to be used both when Revenue Inbox searches for existing associated records and when you are searching for a certain record of this type. Specifically, when you enter a value to search for in the Sidebar, the value will be matched against the contents of the specified fields of all records of this type in Salesforce.
Important
If this field is left blank, then all fields will be covered in the search. Additionally, take into consideration that if too many fields are listed in Search by, Salesforce search will take considerably more time to complete, so it is recommended to limit their number to as few as possible (10 fields being the tentative maximum)
Best use practice tips:¶
- To make records searchable, Revenue Inbox adds the following default Search by fields to its standard objects:
- Contact, Lead: Full Name, Email
- Account: Account Name, Website
- Opportunity: Name
- Case, Task, Event: Subject
If you create custom objects in Salesforce or modify existing objects’ field customizations or the Search by list, make sure to add to this list the field(s) which uniquely identify these objects, to be used in Revenue Inbox search.
- Many Revenue Inbox users find it convenient to set the First name and Last name fields separately in the Search by field instead of Full name. This simplifies search value entry – instead of entering both name and surname into the search box (exact value search is used) you will need to enter only either one of them.
- Another common best-use practice (requires Salesforce admin permissions to set up): create in Salesforce (if it did not exist in your customization) a custom 2nd Email field for your email correspondent record types that will store the interlocutor’s secondary (personal) email address, then include the field in Search by.
Secondary addresses are often used in communication besides the primary (business) ones and, since Search by also defines Revenue Inbox initial search process, this will allow messages received from the secondary address to be properly processed and associated in Salesforce. This is the most convenient way to deal with messages incoming from secondary addresses, however, if creating in Salesforce and populating the 2nd Email field for new records is not an option for you, you can find the relevant objects using Revenue Inbox search and link them manually.
None], dtype=object)
array(['../../assets/images/faq/fb.png', None], dtype=object)] | docs.revenuegrid.com |
The console provides operation interfaces for managing Jocloud services and searching usage data. This section briefly introduces functions provided by the console.
Provide Jocloud's functions Registration and Login.
This page lists all applications of the current user. You can click Edit button to go to the project modification page. For each registered service, you can go to its management page.
This page includes a usage searching page and a Data Cube page.
Audio/video usage includes: real-time audio/video usage, cloud recording, cloud screenshot, and pushing streams to CDN. Based on the selected billing mode, this service provides usage search accordingly. For details about a specific billing mode, see the billing description of this service mode.
Data Cube page provides full-lifecycle solutions covering audio/video quality monitoring, tracking, and analysis, helping to tackle problems and improve user experience.
This service provides usage search of instant messaging and service configuration.
The usage page provides usage of current package and statistics in dimensions of users, messages, chatrooms, and channels.
Service configuration allows users to set parameters.
The AI security service provides four functions: usage search, data search, debugging and configuration, and word bank management.
It provides usage search and data search for offline audios, images and texts, Jocloud interactive audios, Jocloud interactive videos, video real-time stream pulling, and audio real-time stream pulling.
API debugging is provided for offline audios, images, and texts. You can check the result by uploading audios, images, and texts.
For Jocloud interactive audio and video services, you can identify the usage by configuring the specific room ID, user ID, moderation category, and moderation callback.
Currently, Jocloud provides keyword-based moderation. If the moderation result is not as expected, you can customize the keywords.
You can set statuses and authentication statuses of the project, and get a temporary token to manage existing projects.
Set service authentication.
Get a temporary token.
Add new members and set the specified member roles and permissions. Each user can go to specified pages based on role and permission configuration.
Realtime Signaling (RTS) is a lightweight and high-reliable message transmission service developed on the low-latency and high-concurrency global real-time message system architecture.
The RTS SDK can be integrated to implement high-concurrency, low-latency, and stable message transmission channels. Interworking with the audio/video interaction SDK, RTS can help developers to create service scenarios, including interactive teaching, voice chat room, video live streaming, and calling.
The code of the RTS SDK specific to Jocloud mobile terminals is
Hummer. | https://docs.jocloud.com/cloud/en/product_category/rtm_service/instant_messaging/api/iOS/v3.2.0/category.html | 2021-02-24T20:09:32 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.jocloud.com |