content (string, 0-557k) | url (string, 16-1.78k) | timestamp (timestamp[ms]) | dump (string, 9-15) | segment (string, 13-17) | image_urls (string, 2-55.5k) | netloc (string, 7-77)
---|---|---|---|---|---|---
1.2. Installation on Windows¶
There are two ways to install CouchDB on Windows.
- Your installation is not complete. Be sure to complete the Setup steps for a single node or clustered installation.
- Open up Fauxton
- It’s time to Relax!. | http://docs.couchdb.com/en/3.0.0/install/windows.html | 2020-02-16T21:24:57 | CC-MAIN-2020-10 | 1581875141430.58 | [] | docs.couchdb.com |
Checklist Template DTO v16

This object represents a checklist template.

Primary Keys

A checklist template is considered unique by the following checklist template properties:
- Category
- Name
- Revision
- Version

In the application, when asked what to do with other checklist template versions, this applies to checklists that are the same with the only difference being the version. That means they have the same:
- Category
- Name
- Revision

Field Name | Type | Category | Constraint | Reference | Description
---|---|---|---|---|---
checklistCategory | Identifier | Optional | | ChecklistCategory: 8, 9, 10 | The category which the template belongs to.
content | byte | Required | | |
defaultLanguage | String | Required | length >= 1 & length <= 128 | | The default language.
description | String | Optional | length >= 1 & length <= 2147483647 | | Description of the checklist template.
name | String | Required | length >= 1 & length <= 128 | | The name.
status | ChecklistTemplateStatus | Required | Allowed values { ACTIVE, INACTIVE, DEVELOPMENT, TRANSLATION } | | The status of the checklist template. @See ChecklistTemplateStatus.
tag | String | Optional | length >= 1 & length <= 128 | | The tag used to tag a certain version of the template. In the knowledge management application, this field is referred to as Revision.
version | int | Required | | | The version.

Checklist Status Description
(Columns, in order: Status Name; Possible next Status within same template (incl. Version); Visible in List of Templates in Knowledge Management; Visible in the List of Templates when creating new Instance; Available for Mobile to view and edit existing Instances; Field Value "status"; Field Value "inactive"; Field Value "deleted")
Editing: Translation, Released, Archived, Deleted; x; DEVELOPMENT; FALSE; FALSE
Translation: Released, Archived, Deleted; x; TRANSLATION; FALSE; FALSE
Released: Archived, Deleted; x; x; x; ACTIVE; FALSE; FALSE
Archived: Deleted; x; x; INACTIVE; FALSE; FALSE
Deleted: INACTIVE; TRUE; FALSE
| https://docs.coresystems.net/api/dtos/checklisttemplatedto_v16.html | 2020-02-16T21:34:54 | CC-MAIN-2020-10 | 1581875141430.58 | [] | docs.coresystems.net |
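For illustration only (this payload is not taken from the official API reference, and the field values are made up), a checklist template that follows the field list above might be represented roughly as:

{
  "checklistCategory": 9,
  "defaultLanguage": "en",
  "description": "Pre-delivery inspection checklist",
  "name": "PDI Checklist",
  "status": "ACTIVE",
  "tag": "rev-B",
  "version": 3
}

The binary "content" field is omitted here for brevity.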
Which versions of Apache Cassandra does the driver support?
The driver supports any Apache Cassandra version from 2.0+.
Which versions of DSE does the driver support?
The driver supports any DataStax Enterprise version from 4.8+.
How can I upgrade from the DSE driver to the unified DataStax C# driver?
There is a section in the Upgrade Guide to help you in that process.
Should I create multiple
ISession instances in my client application?
Normally you should use one
ISession instance per application. You should share that instance between classes within your application. If you are using CQL and Graph workloads in a single application, it is recommended that you use different execution profiles on the same session.
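A minimal sketch of the single shared session pattern (the contact point and keyspace name are placeholders, not values from the official docs):

using Cassandra;

public static class Db
{
    // Build the cluster and session once, then share them across the application.
    private static readonly ICluster _cluster = Cluster.Builder()
        .AddContactPoint("127.0.0.1")   // placeholder contact point
        .Build();

    public static readonly ISession Session = _cluster.Connect("my_keyspace");  // placeholder keyspace
}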
Can I use a single
ICluster and
ISession instance for graph and CQL?
We recommend using a single session with different execution profiles for each workload, as different workloads should be distributed across different datacenters and the load balancing policy should select the appropriate coordinator for each workload.
Should I dispose or shut down
ICluster or
ISession instances after executing a query?
No, only call
cluster.Shutdown() once in your application’s lifetime, normally when you shutdown your application. Note that there is an async version, i.e.,
cluster.ShutDownAsync() which is like an async
Dispose. Shutting down the
cluster will automatically shutdown all session instances created by this
cluster.

To enable driver logging, you can set the trace switch level and add a standard .NET trace listener:

Cassandra.Diagnostics.CassandraTraceSwitch.Level = TraceLevel.Info;
// Add a standard .NET trace listener
Trace.Listeners.Add(new TextWriterTraceListener(Console.Out));
Disable the Motorola Q Camera Sound
I searched high and low for an option to disable the camera shutter noise, but as far as I can tell, there isn't one. After a bit of searching, I pieced together a solution. Please note that this is not a supported method of working on your phone and messing with the registry is a good way to brick your device. Play around with this at your own risk.
- Download PHM Registry Editor. I chose the desktop installer.
- Run the installer, but don't worry when it doesn't actually install to your Q.
- Browse to the folder on your PC where the installer dumped the CAB files. Manually copy them all to your device. I think you only need "regedit.Stngr_ARM.CAB" but they're small so I just copied them all.
- Open the File Manager on the Q, browse to the folder with the CAB files and run regedit.Stngr_ARM.CAB. After it installs you'll get a warning about the app not working because it was made for an earlier version... ignore that. You can now delete all the files you copied to the device.
- From the home screen on the Q, click Start. Start PHM Registry Editor.
- Navigate to \HKLM\System\Pictures\Camera\OEM
- Click the "Values" left soft button.
- Select SoundFile
- Change the string from "\windows\shuttersound_02_secs.wav" to "\windows\*none*\" (asterisks are required.)
Thanks to the guys at for pointing me in the right direction. | https://docs.microsoft.com/en-us/archive/blogs/msdn/ben/disable-the-motorola-q-camera-sound | 2020-02-16T23:38:20 | CC-MAIN-2020-10 | 1581875141430.58 | [] | docs.microsoft.com |
Exchange 2013/2016/2019 & ExO - a sample Get-Mailbox GUI
Here is a GUI to get mailboxes information in an Exchange 2010, 2013, 2016, 2019 and/or Exchange Online (O365) environments.
I initially created this GUI just to illustrate how we can use Windows Presentation Foundation (WPF) forms to ease up the execution of PowerShell commands.
Download link is at the very end of this article.
Principle
For this GUI, I designed the interface using Windows Presentation Foundation, which is the next generation of Windows Forms, and which enables anyone to design Windows interfaces for underlying programs, to help users to get the most from computer programs.
Here's the principle used to create this Powershell application using WPF:
- First, I designed the Windows Presentation Foundation forms using Visual Studio 2017 Community Edition (which is free), and pasted the generated XAML code from Visual Studio 2017 into a PowerShell script - in this script I pasted the XAML code directly into a PowerShell "here-string", but you can load the XAML code from a separate file using Get-Content for example; that way, if you want to change your Windows WPF form design, you just have to modify or paste the new code from Visual Studio directly into that separate file and leave your .ps1 script alone.
- then with a PowerShell code snippet I parsed the XAML code to get the form objects that PowerShell understands - buttons, check boxes, datagrids, labels, text blocks used as inputs as well as outputs sometimes (to put the PowerShell commands corresponding to the users input for example), and the form object itself, which is the container of all the other objects (buttons, checkboxes, etc...)
- and finally I wrote the functions behind the form that I want to run when the form loads, when we click the buttons, when we check or uncheck the checkboxes, when we change the text in text boxes, or when we select items from datagrids => for code to execute when the user interacts with Windows WPF forms, we must use the WPF form object's "events" (add_click, add_textChanged, add_loaded, add_closing, etc...) => you can retrieve Windows WPF form objects on the MSDN, or simply on Visual Studio 2017 when you design your form (switch from object properties to the object events to view all available events for a selected object)
I tried to make a synoptic view of the process in the below schema with small sample screenshots of my Visual Studio 2017 / Visual Studio Code parts - you'll find the WPF-to-PowerShell code snippet sample I'm referring to in the below schema in this GitHub repository...
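As a rough illustration of the second step above (this is the common XAML-in-PowerShell pattern rather than the exact code from this tool, and the XAML here is a trimmed placeholder):

Add-Type -AssemblyName PresentationFramework

# Minimal placeholder XAML pasted into a here-string
[xml]$xaml = @"
<Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:
    <Button Name="btnGo" Content="Go" Width="80" Height="30"/>
</Window>
"@

# Parse the XAML into a WPF window object that PowerShell can work with
$reader = New-Object System.Xml.XmlNodeReader $xaml
$window = [Windows.Markup.XamlReader]::Load($reader)

# Get a named control and attach code to one of its events
$btnGo = $window.FindName("btnGo")
$btnGo.Add_Click({ Write-Host "Button clicked" })

$window.ShowDialog() | Out-Null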
Important notes
This PowerShell app requires PowerShell V3, and must be run from a PowerShell console with the Exchange tools loaded, which can be an Exchange Management Shell window or a PowerShell window into which you imported an Exchange session; see my TechNet blog posts for a summary of how to do this (right-click => Open in a new tab, otherwise the sites below will load instead of this page):
-
How-to – Load Remote Exchange PowerShell Session on Exchange 2010, 2013, 2016, Exchange Online (O365) – which ports do you need
-
Connect to Exchange Online PowerShell
- Note: If you want to use multi-factor authentication (MFA) to connect to Exchange Online PowerShell, you need to download and use the Exchange Online Remote PowerShell Module. For more information, see Connect to Exchange Online PowerShell using multi-factor authentication
-
How To–Load Exchange Management Shell into PowerShell ISE
Screenshots - because a picture is worth 1000 words...
First window when launching the tool
After a sample Get-Mailbox which name includes "user" string
Note that for cloud mailbox, the "Location" column will tell you that the mailbox is hosted in the cloud:
If you select "Unlimited" under the Resultsize (max number of mailboxes to search), or a number that is greater than 1000, you get a warning asking you if you want to continue
Selecting mailboxes in the grid, notice the "Action on selected" button that becomes active
Action : After selecting some mailboxes in the grid, calling the "List Mailbox Features" action in the drop-down list
Action: Another possible action, calling the Single Item Recovery and mailbox dumpster limits for the selected mailboxes
Action: List mailbox quotas, including database quota for each mailbox
Note that the mailbox quotas list includes the Database info quota - that is useful when mailboxes are configured to use Mailbox Database Quotas
On most actions, you can copy the list to the Windows clipboard (it will be CSV formatted) for further analysis, reporting or documentation about your mailboxes
More to come...
You can also retrieve this project on this page of my GitHub site... | https://docs.microsoft.com/en-us/archive/blogs/samdrey/exchange-get-mailboxes-gui | 2020-02-16T23:52:47 | CC-MAIN-2020-10 | 1581875141430.58 | [array(['https://github.com/SammyKrosoft/Code-Snippet-WPF-and-PowerShell/raw/master/DocResources/How-o-CreatePowerShellWPFApp.jpg',
'WPF from Visual Studio to PowerShell'], dtype=object)
array(['https://github.com/SammyKrosoft/Exchange-Get-Mailboxes-GUI/raw/master/DocResources/image0.jpg',
'screenshot1'], dtype=object)
array(['https://github.com/SammyKrosoft/Exchange-Get-Mailboxes-GUI/raw/master/DocResources/image1.jpg',
'screenshot2'], dtype=object)
array(['https://github.com/SammyKrosoft/Exchange-Get-Mailboxes-GUI/raw/master/DocResources/image1-cloud_location.jpg',
'screenshot2.1'], dtype=object)
array(['https://github.com/SammyKrosoft/Exchange-Get-Mailboxes-GUI/raw/master/DocResources/image-Question-LotsOfItems.jpg',
'screenshot3'], dtype=object)
array(['https://github.com/SammyKrosoft/Exchange-Get-Mailboxes-GUI/raw/master/DocResources/image-Question-LotsOfItems2.jpg',
'screenshot3.1'], dtype=object)
array(['https://github.com/SammyKrosoft/Exchange-Get-Mailboxes-GUI/raw/master/DocResources/image-SelectForAction.jpg',
'screenshot4'], dtype=object)
array(['https://github.com/SammyKrosoft/Exchange-Get-Mailboxes-GUI/raw/master/DocResources/image-Action-ListMbxFeatures.jpg',
'screenshot5'], dtype=object)
array(['https://github.com/SammyKrosoft/Exchange-Get-Mailboxes-GUI/raw/master/DocResources/image-Action-SingleItemRecoveryStatus.jpg',
'screenshot6'], dtype=object)
array(['https://github.com/SammyKrosoft/Exchange-Get-Mailboxes-GUI/raw/master/DocResources/image-Action-ListMailboxQuotas.jpg',
'screenshot7'], dtype=object)
array(['https://github.com/SammyKrosoft/Exchange-Get-Mailboxes-GUI/raw/master/DocResources/image-copyToClipBoard.jpg',
'screenshot8'], dtype=object) ] | docs.microsoft.com |
...
Handlers
When an API is created, a file with its synapse configuration is added to the API Gateway. You can find it in the
<AP. the API Gateway. S/he is responsible for creating user roles in the system, assigning them roles, managing databases, security, etc. The Admin role is available by default. ... certain types of ... prevent DoS attacks. You should provide "roles" separated by commas in the UI or as a curl parameter when calling the REST API:
...
Subscription availability
The subscription availability option has three values as follows. You can set subscription availability to an API through the API Publisher's Manage tab.
...
The diagram below depicts the relationship between the API's visibility and subscription availability:
Refer the article Multi Tenant API Management with WSO2 API Manager for examples and real world usage of the above concepts.
...
API documentation visibility
...
Then, log in to the API Publisher, go to the Docs tab of an API and click Add New Document to see a new drop-down list added to select visibility from:
You set visibility in the following ways:
...
The diagram below shows a resource by the name
CheckPhoneNumber added with four HTTP methods.
When you add resources to an API, you define a URL pattern and HTTP methods. A resource can also have a list of OAuth scopes.
...:
...
...
Cross-origin resource sharing (CORS) is a mechanism that allows restricted resources (e.g., fonts, JavaScript) of a Web page to be requested from another domain outside the domain from which the resource originated. The configuration elements are described below:
...
...
OAuth scopes
Scopes enable fine-grained access control to API resources based on user roles. You define scopes for an API's resources. When a user invokes the API, his/her OAuth 2 bearer token cannot grant access to any API resource beyond its associated scopes.
How scopes work
To illustrate the functionality of scopes, assume you have the following scopes attached to resources of an API:
...
Scope whitelisting
A scope is not always used for controlling access to a resource. You can also use it to simply mark an access token. There are scopes that cannot be associated with roles (e.g., openid, device_). Such scopes do not have to have roles associated with them. Skipping role validation for scopes is called scope whitelisting.
....
... | https://docs.wso2.com/pages/diffpagesbyversion.action?pageId=50518328&selectedPageVersions=37&selectedPageVersions=38 | 2020-02-16T21:40:17 | CC-MAIN-2020-10 | 1581875141430.58 | [] | docs.wso2.com |
This section covers the following topics:
Changing the super admin password
See How do I change the default admin password and what files should I edit after changing it?
Do you have any special characters in passwords?
If you specify passwords inside XML files, wrap values that contain special characters in a CDATA section so they are parsed correctly, for example: <Password><![CDATA[xnvYh?@VHAkc?qZ%Jv855&A4a,%M8B@h]]></Password>
Recovering a password
See How can I recover the admin password used to log in to the management console?
Logging in via multiple user store attributes
See Authentication using Attributes in the WSO2 IS documentation.
Setting up an e-mail login
See Email Authentication in the WSO2 IS documentation.
Setting up a social media login
You can auto provision users based on a social network login by integrating the API Manager with WSO2 Identity Server. But, this is not supported in a multi-tenant environment.
In a multi-tenant environment, the system cannot identify the tenant domain in the login request that comes to API Manager's Publisher/Store. Therefore, the service provider is registered as a SaaS application within the super tenant's space. Configuring user provisioning is part of creating the service provider. In order to authenticate the user through a third party identity provider such as a social network login, you must enable identity federation. As the service provider is created in the super tenant's space, the provisioned user is also created within the super tenant's space. As a result, it is not possible to provision the user in the tenant's space.
To overcome this limitation, you can write a custom authenticator to retrieve the tenant domain of the user and write a custom login page where the user can enter the tenant domain, which is then added to the authenticator context. Then, write a custom provisioning handler to provision the user in the tenant domain that maintained in the context.
- For information on writing a custom authenticator, see Creating Custom Authenticators in the WSO2 IS documentation.
- For information on writing a custom login page, see Customizing Login Pages in the WSO2 IS documentation. | https://docs.wso2.com/pages/viewpage.action?pageId=45959656&navigatingVersions=true | 2020-02-16T22:36:02 | CC-MAIN-2020-10 | 1581875141430.58 | [] | docs.wso2.com |
Modifies an existing DB subnet group. DB subnet groups must contain at least one subnet in at least two AZs in the AWS Region.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
modify-db-subnet-group
--db-subnet-group-name <value>
[--db-subnet-group-description <value>]
--subnet-ids <value>
[--cli-input-json <value>]
[-...]

Output

DBSubnetGroup -> (structure)
The modified DB subnet group. Its Availability Zone information is also used as an element in the OrderableDBInstanceOption data type.
Name -> (string)The name of the Availability Zone.
SubnetStatus -> (string)Specifies the status of the subnet.
DBSubnetGroupArn -> (string)The Amazon Resource Name (ARN) for the DB subnet group. | https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-subnet-group.html | 2020-02-16T21:18:45 | CC-MAIN-2020-10 | 1581875141430.58 | [] | docs.aws.amazon.com |
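A typical invocation might look like the following; the group name and subnet IDs are made-up placeholders, and the description flag is optional:

aws rds modify-db-subnet-group \
    --db-subnet-group-name my-db-subnet-group \
    --db-subnet-group-description "Updated DB subnet group" \
    --subnet-ids subnet-0a1b2c3d subnet-4e5f6a7b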
From the Admin sidebar, go to “Store” → “Configuration” → “MAGE WORLD EXTENSION”. After this step, look for the “Enable Maximum Coupon Discount Amount” field and choose “Yes” to activate the extension.
To manage the maximum amount to be discounted, please go to “Marketing” → “Cart Price Rules” → “Add New Rules”
Insert the value you want it to be for the maximum discount amount.
When you apply the coupon code: | https://docs.mage-world.com/doku.php?id=magento_2:maximum_coupon_discount | 2020-02-16T21:45:30 | CC-MAIN-2020-10 | 1581875141430.58 | [] | docs.mage-world.com |
Test email from SharePoint using PowerShell
Summary:
The following PowerShell script was written for my post on configuring TLS between SharePoint and Exchange. However, since it was buried in process, I wanted to create a separate post just sharing the script, because it will be easier to maintain and use separately when needed.
Why use a script anyway?
I find this script very useful when testing mail flow from SharePoint since it uses the "SPUtility::SendEmail" API, sends mail, captures the correct logs and presents them by launching notepad, all from a single server.
The Script:
#Parameters
While ($web -eq $null){
    $web = Get-SPWeb (Read-Host "Input SPWeb URL using http://")
}
$email = (Read-Host "Input E-mail recipient")
$subject = (Read-Host "Input E-mail Subject")
$body = (Read-Host "Input E-mail Body")

#specify start time of action
$StartTime = (Get-Date).AddMinutes(-1).ToString()

# Try sending e-mail via SharePoint.
$send = [Microsoft.SharePoint.Utilities.SPUtility]::SendEmail($web,0,0,$email,$subject,$body)

#what to do if it fails
if ($send -eq $false -and $web -ne $null){
    write-host "It didn't work, checking ULS for errors. Please stand by..." -foregroundcolor Red -backgroundcolor Yellow
    #specify end time of action
    $EndTime = (Get-Date).AddMinutes(+1).ToString()
    #make dir if it does not exist
    $TARGETDIR = "c:\logs"
    if(!(Test-Path -Path c:\logs)){
        New-Item -ItemType directory -Path $TARGETDIR
    }
    #finding error and creating log
    start-sleep 5
    Get-SPLogEvent -StartTime $StartTime -EndTime $EndTime | Where-Object {$_.Category -eq "E-Mail"} | Export-Csv -LiteralPath "$TARGETDIR\log.csv"
    #starting notepad to open log
    start notepad.exe "$TARGETDIR\log.csv"
}
#what to do if it works
else{
    if ($send -eq $true -and $web -ne $null){
        write-host "It Worked..Congrats!" -foregroundcolor DarkGreen -backgroundcolor White
    }
}
$web.Dispose()
Example:
As you can see below the script will ask for input and you will specify the SPWeb url, E-Mail recipient, E-Mail Subject and E-Mail Body. If you enter the SP Web url incorrectly, it will keep asking. Also, if the e-mail is not sent, you will be notified on screen and NOTEPAD will pop-up with the associated ULS logs.
I hope you find this useful and thanks for reading!
-Mike | https://docs.microsoft.com/en-us/archive/blogs/mikelee/test-email-from-sharepoint-using-powershell | 2020-02-16T23:48:03 | CC-MAIN-2020-10 | 1581875141430.58 | [array(['https://msdnshared.blob.core.windows.net/media/2017/12/test-smtp.png',
None], dtype=object) ] | docs.microsoft.com |
Virtual Machine Manual Migration¶
If you have several Ravada servers you may want to copy a virtual machine from one to another.
In this example we copy the base for a virtual machine called Lubuntu-1704.
Temporary space in destination¶
At the destination server, create a temporary directory so you can store the volumes when you copy them. This directory must belong to a user that can do ssh from origin to destination:
root@destination:~# mkdir /var/lib/libvirt/images/tmp
root@destination:~# chown frankie /var/lib/libvirt/images/tmp
Import the Base¶
Copy the Base definition¶
First copy the base definition file from server origin to destination. You need an user in the destination machine and ssh connection from each other.
root@origin:~# virsh dumpxml Lubuntu1704 > Lubuntu1704.xml
root@origin:~# scp Lubuntu1704.xml [email protected]:
Copy the volumes¶
The volumes have a backing file, you must find out what it is so you can copy to destination.
root@origin:~# grep source Lubuntu1704.xml
<source file='/var/lib/libvirt/images/Lubuntu1704-vda-X18J.img'/>
root@origin:~# qemu-img info /var/lib/libvirt/images/Lubuntu1704-vda-X18J.img | grep -i backing
backing file: /var/lib/libvirt/images/Lubuntu1704-vda-X18J.ro.qcow2
root@origin:~# rsync -av /var/lib/libvirt/images/Lubuntu1704-vda-X18J.ro.qcow2 [email protected]:/var/lib/libvirt/images/tmp
root@origin:~# rsync -av /var/lib/libvirt/images/Lubuntu1704-vda-X18J.img [email protected]:/var/lib/libvirt/images/tmp
Move the volumes on destination¶
You just copied the data on a temporary directory available to the user. That must be copied to the actual storage pool as root. Make sure you don’t have similar volumes there because that procedure will overwrite them:
root@dst:/home/frankie# cd /var/lib/libvirt/images/tmp
root@dst:/var/lib/libvirt/images/tmp# mv Lubuntu1704-* ../
root@dst:/var/lib/libvirt/images/tmp# chown root ../Lubuntu1704-*
Define the base on destination¶
Go to the destination server and define the virtual machine base with the XML config you copied before
root@dst:~# cd ~frankie/
root@dst:/home/frankie# virsh define Lubuntu1704.xml
Domain Lubuntu1704 defined from Lubuntu1704.xml
Importing clones¶
Now if you want to import a clone too, first you have to ask the clone owner to start the machine on destination. Then you have to copy the volumes from origin and overwrite what has just been created on destination.
Create a clone¶
The owner of the original clone must create a clone in destination using Ravada. That will create a basic virtual machine with the same name owned by the correct user. Stop the domain on destination:
root@dst:~# virsh shutdown Lubuntu1704-juan-ramon
Make sure it is stopped:
root@dst:~# virsh dominfo Lubuntu1704-juan-ramon
Copy the clone volumes¶
Find out what are the clone volume files, and copy them to the temporary space in destination:
root@origin:~# virsh dumpxml Lubuntu1704-juan-ramon | grep "source file" | grep -v ".ro."
<source file='/var/lib/libvirt/images/Lubuntu1704-juan-ramon-vda-kg.qcow2'/>
root@origin:~# rsync -av /var/lib/libvirt/images/Lubuntu1704-juan-ramon-vda-kg.qcow2 frankie@dst:/var/lib/libvirt/images/tmp/
Start the clone on destination¶
First move the volumes to the right place, notice in destination the volumes have different names.
root@dst:~# virsh dumpxml Lubuntu1704-juan-ramon | grep source
<source file='/var/lib/libvirt/images.2/Lubuntu1704-juan-ramon-vda-nz.qcow2'/>
root@dst:~# cd /var/lib/libvirt/images/tmp/
root@dst:/var/lib/libvirt/images/tmp# mv Lubuntu1704-juan-ramon-vda-kg.qcow2 ../Lubuntu1704-juan-ramon-vda-nz.qcow2
root@dst:/var/lib/libvirt/images/tmp# chown root ../Lubuntu1704-juan-ramon-*
Hopefully then you can start the clone. It is a delicate procedure that must be followed carefully, please consider helping with this document if you have any suggestions. | https://ravada.readthedocs.io/en/latest/docs/migrate_manual.html | 2020-02-16T23:25:18 | CC-MAIN-2020-10 | 1581875141430.58 | [] | ravada.readthedocs.io |
All content with label amazon+async+buddy_replication+concurrency+custom_interceptor+data_grid+expiration+hibernate+infinispan+listener+notification+repeatable_read+server.
Related Labels:
datagrid, coherence, interceptor, replication, transactionmanager, dist, release, partitioning, deadlock, contributor_project, archetype, lock_striping, jbossas, nexus, guide, schema, cache, s3, memcached,
grid, jcache, test, api, xsd, ehcache, maven, documentation, userguide, write_behind, 缓存, ec2, streaming, aws, interface, clustering, setup, eviction, large_object, jboss_cache, import, index, events, configuration, hash_function, batch, loader, write_through, cloud, remoting, mvcc, tutorial, xml, read_committed, distribution, cachestore, resteasy, hibernate_search, cluster, br, development, websocket, transaction, xaresource, build, searchable, demo, cache_server, scala, installation, command-line, client, non-blocking, migration, filesystem, jpa, tx, user_guide, gui_demo, eventing, shell, student_project, client_server, testng, infinispan_user_guide, standalone, snapshot, hotrod, webdav, docs, batching, consistent_hash, store, jta, faq, 2lcache, as5, jsr-107, lucene, jgroups, locking, rest, hot_rod
more »
( - amazon, - async, - buddy_replication, - concurrency, - custom_interceptor, - data_grid, - expiration, - hibernate, - infinispan, - listener, - notification, - repeatable_read, - server )
All content with label amazon+aws+docbook+expiration+import+infinispan+jbosscache3x+jta+listener+read_committed+recovery+release+scala+snapshot+transactionmanager.
Related Labels:
publish, datagrid, coherence, interceptor, server, replication, dist,, installation, client, migration, non-blocking, jpa, tx, gui_demo, eventing, client_server, testng, infinispan_user_guide, standalone, hotrod, repeatable_read, webdav, docs, batching, consistent_hash, store, faq, 2lcache, as5, jsr-107, jgroups, lucene, locking, rest, hot_rod
more »
( - amazon, - aws, - docbook, - expiration, - import, - infinispan, - jbosscache3x, - jta, - listener, - read_committed, - recovery, - release, - scala, - snapshot, - transactionmanager )
All content with label batch+clustering+coherence+datagrid+deadlock+docs+gridfs+guide+hibernate_search+infinispan+loader+maven+replication+server+user_guide.
Related Labels:
podcast, expiration, publish, interceptor, recovery, transactionmanager, dist, release, partitioning, timer, query, intro, archetype, jbossas, lock_striping, nexus, schema, listener, cache,
s3, amazon, memcached, grid, high-availability, jcache, api, xsd, ehcache, documentation, jboss, youtube, userguide, write_behind, 缓存, ec2, hibernate, aws, interface, custom_interceptor, setup, eviction, out_of_memory, concurrency, jboss_cache, import, index, l, events, hash_function, configuration, buddy_replication, xa, write_through, cloud, jsr352, remoting, mvcc, tutorial, notification, presentation, jbosscache3x, read_committed, distribution, cachestore, data_grid, cacheloader, resteasy, cluster, development, br, websocket, transaction, async, interactive, xaresource, build, domain, searchable, subsystem, demo, cache_server, scala, installation, command-line, mod_cluster, client, jberet, migration, non-blocking, filesystem, jpa, tx, article, gui_demo, eventing, shell, client_server, testng, infinispan_user_guide, standalone, snapshot, ejb, hotrod, repeatable_read, webdav, consistent_hash, batching, store, whitepaper, jta, faq, as5, 2lcache, jsr-107, lucene, jgroups, locking, rest, hot_rod
more »
( - batch, - clustering, - coherence, - datagrid, - deadlock, - docs, - gridfs, - guide, - hibernate_search, - infinispan, - loader, - maven, - replication, - server, - user_guide )
All content with label client+custom_interceptor+distribution+expiration+grid+gridfs+import+infinispan+jta+mvcc+recovery+transaction.
Related Labels:
podcast, datagrid, coherence, interceptor, server, rehash, replication, transactionmanager, dist, release, partitioning, deadlock, intro, archetype, pojo_cache, jbossas, lock_striping, nexus, guide,
schema, listener, state_transfer, cache, amazon, s3, memcached, jcache, test, api, xsd, ehcache, maven, documentation, roadmap, youtube, userguide, write_behind, 缓存, ec2, hibernate, aws, interface, clustering, setup, eviction, concurrency, fine_grained, jboss_cache, index, events, batch, hash_function, configuration, buddy_replication, loader, colocation, xa, pojo, write_through, cloud, remoting, notification, tutorial, presentation, murmurhash2, xml, read_committed, jbosscache3x, meeting, jira, cachestore, data_grid, hibernate_search, resteasy, cluster, br, development, permission, websocket, async, xaresource, build, hinting, searchable, demo, scala, installation, ispn, command-line, migration, non-blocking, rebalance, filesystem, jpa, tx, article, user_guide, eventing, shell, client_server, testng, infinispan_user_guide, murmurhash, standalone, repeatable_read, snapshot, webdav, hotrod, docs, batching, consistent_hash, store, whitepaper, faq, 2lcache, as5, jsr-107, docbook, jgroups, lucene, locking, rest, hot_rod
more »
( - client, - custom_interceptor, - distribution, - expiration, - grid, - gridfs, - import, - infinispan, - jta, - mvcc, - recovery, - transaction )
Fluid Rheology¶
Basic Concepts¶
Fluids are gases and liquids that flow when subject to an applied shear stress. For single-phase fluids, momentum transport is governed by the fluid density and viscosity. For multi-phase systems, the surface tension must also be defined to describe dynamics at the interface between two phases.
M-Star CFD can handle both Newtonian and non-Newtonian fluid rheology. The available fluid models, along with the relevant simulation parameters, are described in the sections that follow.
Turbulent fluid flows in M-Star CFD are typically filtered using large eddy simulation (LES). The effects of the filtering are controlled by the user-defined Smagorinsky Coefficient. Additional theoretical details related to the LES model are provided in the Theory and Implementation section of this manual.
General Fluid Parameters¶
- Density
Density of fluid, [kg/m^3]
- Surface Tension, [N/m]
Surface tension of the fluid in air.
Only relevant for free surface or immiscible fluid simulations.
Turbulence Model, [Auto, DNS, ILES, LES]
DNS: Direct Numerical Simulation. DNS simulations attempt to capture all fluid motion across all eddy scales. DNS simulations will diverge if the eddy size approaches the lattice spacing.
ILES: Implicit Large Eddy Simulation. These models use a larger (27-vector) lattice stencil and a cumulant-based momentum integrator to maintain stability at higher Reynolds numbers.
LES: Large Eddy Simulation. These models compute a local eddy viscosity using the local shear rate to capture the effects of sub-grid turbulence. LES models tend to be stable at arbitrary Reynolds numbers.
Auto: If the maximum Reynolds number detected by the simulation is below 5000, the code runs a DNS simulation. Above that Reynolds number, an LES model with a static Smagorinsky coefficient of 0.10 is applied.
Newtonian Fluid¶
A Newtonian fluid has a constant viscosity, such that the viscous stresses arising from flow are linearly proportional to the local strain rate.
- Kinematic Viscosity
Kinematic viscosity of fluid, [m^2/s]
- Max User Shear Rate
Max allowable shear rate, [1/s]
Shear rates above and below these will use a constant viscosity equal to that realized at these maximum and minimum rates
- Min User Shear Rate
Min allowable shear rate, [1/s]
Shear rates above and below these will use a constant viscosity equal to that realized at these maximum and minimum rates
- Viscosity At Max Shear
Viscosity that corresponds to the MaxUserShearRate, [m^2/s]
- Viscosity At Min Shear
Viscosity that corresponds to the MinUserShearRate, [m^2/s]
Power Law Fluid¶
A power law fluid is a generalized Newtonian fluid where the shear stress, \(\tau\), is related to the shear rate, \(\dot{\gamma}\), such that:

\[\tau = \rho K \dot{\gamma}^{n}\]
where \(\rho\) is the fluid density, \(K\) is the flow consistency, and \(n\) is the fluid behavior index. The units on \(\rho\) are taken to be \(kg/m^3\) , the units on \(K\) are taken to be \(m^2/s^{2-n}\) and \(n\) is dimensionless.
From this constitutive relationship, the apparent viscosity \(\nu_a\) of a power-law fluid is then defined as:

\[\nu_a = K \dot{\gamma}^{\,n-1}\]
where the units \(\nu_a\) are \(m^2/s\).
This definition of apparent viscosity is used to calculate the spatiotemporal variation in viscosity across the fluid volume due to spatiotemporal variations in strain rate.
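As a quick numerical illustration of the formula above (a standalone sketch, not M-Star code; the K and n values are arbitrary):

# Apparent kinematic viscosity of a power-law fluid: nu_a = K * gamma_dot**(n - 1)
K = 0.05   # flow consistency index [m^2/s^(2-n)], arbitrary example value
n = 0.6    # flow behavior index [-], shear-thinning when n < 1

for gamma_dot in [0.1, 1.0, 10.0, 100.0]:  # shear rates [1/s]
    nu_a = K * gamma_dot ** (n - 1)
    print(f"shear rate {gamma_dot:7.1f} 1/s -> apparent viscosity {nu_a:.4e} m^2/s")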
- Power Law K
Flow consistency index, [\(m^2/s^{2-n}\)]
- Power Law N
Flow behavior index “n”, [-]
When a yield stress is added to a power law fluid, we have a Herschel-Bulkley fluid. The Herschel-Bulkley model describes the behavior of non-Newtonian yield stress fluids:

\[\tau = \tau_{0} + k \dot{\gamma}^{n}\]
where \(\tau\) is the shear stress, \(\dot{\gamma}\) the shear rate, \(\tau_{0}\) the yield stress, \(k\) the consistency index, and \(n\) the flow behavior index.
Like the power law expression, the effective viscosity is then defined as:

\[\mu_{\operatorname{eff}} = \frac{\tau_{0}}{\dot{\gamma}} + k \dot{\gamma}^{\,n-1}\]
- Yield Stress
Yield shear stress, [N / m^2]
- Power Law K
Flow consistency index, [\(m^2/s^{2-n}\)]
- Power Law N
Flow behavior index “n”, [-]
Important
Users should specify the fluid behavior index in anticipation that 1 will be subtracted from the specified value when evaluating the local viscosity.
Carreau Fluid¶
A Carreau fluid is a generalized Newtonian fluid with an effective viscosity, \(\mu_{\operatorname{eff}}\), defined by:

\[\mu_{\operatorname{eff}} = \mu_{\infty} + \left(\mu_{0} - \mu_{\infty}\right)\left(1 + (\lambda \dot{\gamma})^{2}\right)^{\frac{n-1}{2}}\]
- Carreau Vinf
Viscosity at infinite shear, [m^2/s]
- Carreau V0
Viscosity at zero shear, [m^2/s]
- Carreau Lambda
Relaxation time, [s]
- Carreau N
Power index, [-]
Herschel-Bulkley Fluid¶
The Herschel-Bulkley model describes the behavior of non-Newtonian yield stress fluids:

\[\tau = \tau_{0} + k \dot{\gamma}^{n}\]
where \(\tau\) is the shear stress, \(\dot{\gamma}\) the shear rate, \(\tau_{0}\) the yield stress, \(k\) the consistency index, and \(n\) the flow behavior index.
The effective viscosity is then defined as:

\[\mu_{\operatorname{eff}} = \frac{\tau_{0}}{\dot{\gamma}} + k \dot{\gamma}^{\,n-1}\]
- Yield Stress
Yield shear stress, [N / m^2]
- K
Flow consistency index, [m^2 / s]
- N
Flow behavior index ‘n’ [dimensionless]
Custom Fluid¶
- Custom Expression
Analytic expression F(s) for the kinematic viscosity in units [m^2/s]. Can be a function of local shear rate ‘s’ with units [1/s], global time ‘t’ with units [s], local temperature ‘T’ with units [K], and the local concentration of any user-defined scalar field [mol].
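For illustration only (an arbitrary made-up expression, not a recommended rheology model), a simple shear-thinning custom viscosity as a function of the local shear rate 's' could be entered as something like:

0.001 + 0.05/(1 + s)

which tends toward 0.051 m^2/s at very low shear rates and approaches 0.001 m^2/s at high shear rates.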
Additional examples of entering formulas are presented in - User Defined Expression Syntax | https://docs.mstarcfd.com/fluid/rheology.html | 2020-02-16T21:15:10 | CC-MAIN-2020-10 | 1581875141430.58 | [] | docs.mstarcfd.com |
ID
Slug: id Data field: sl_id Type: integer (auto-assigned)
Name
Slug: store Data field: sl_store Type: string up to 255 characters
Address
Slug: address Data field: sl_address Type: string up to 255 characters
Address Line 2
Slug: address2 Data field: sl_address2 Type: string up to 255 characters
City
Slug: city Data field: sl_city Type: string up to 255 characters
State
Slug: state Data field: sl_state Type: string up to 255 characters
Zip
Slug: zip Data field: sl_zip Type: string up to 255 characters
Country
Slug: country Data field: sl_country Type: string up to 255 characters
Latitude
Slug: latitude Data field: sl_latitude Type: string up to 255 characters
Longitude
Slug: longitude Data field: sl_longitude Type: string up to 255 characters
Tags
Slug: tags Data field: sl_tags Type: Text up to 4096 characters
The Pro Pack tags field. Requires Pro Pack for full functionality.
Description
Slug: description Data field: sl_description Type: Long text, as determined by your MySQL settings. More than 4096 characters.
Email
Slug: email Data field: sl_email Type: string up to 255 characters
Website
Slug: url Data field: sl_url Type: string up to 255 characters
Hours
Slug: hours Data field: sl_hours Type: string up to 255 characters
Phone
Slug: phone Data field: sl_phone Type: string up to 255 characters
Fax
Slug: fax Data field: sl_fax Type: string up to 255 characters
Image
Slug: image Data field: sl_image Type: string up to 255 characters
A fully qualified image URL (http://...) with a full domain.
Private
Slug: private Data field: sl_private Type: A single character string.
May be used in the future as a 1|0 value to determine if a location is to only appear on the admin interface and not front end searches.
Neat Title
Slug: neat_title Data field: sl_neat_title Type: string up to 255 characters
May be used in the future as an alternate store name or subtitle.
Linked Post ID
Slug: linked_postid Data field: sl_linked_postid Type: integer, auto-assigned
The ID of the related store_page entry where extra taxonomy data and other location data is stored. Used with Store Pages and Tagalong. Should not be modified.
Pages URL
Slug: pages_url Data field: sl_pages_url Type: string up to 255 characters
The relative URL for the Store Pages linked post ID. Provides a processing shortcut for the Store Pages add-on pack.
Pages On
Slug: pages_on Data field: sl_pages_on Type: a single 1|0 character
Used with Store Pages to determine which locations have been populated with Store Pages template content.
Option Value
Slug: option_value Data field: sl_option_value Type: text up to 4096 characters
A serialized JSON data object which can store extra location data. Slower and less direct than extended data fields.
Last Updated
Slug: lastupdated Data field: sl_lastupdated Type: timestamp
A date and time MySQL timestamp indicating the last time the main data for the location was updated.
Initial Distance
Slug: initial_distance Data field: sl_initial_distance Type: A float.
Table: slp_tagalong
field: sl_id = the store id
field: term_id = the WordPress taxonomy id
Power Add On : Pages Data
SEO pages follow a different format.
To add Social media data:
shortcode is: [storepage field=socialiconarray]
On SEO pages all data fields must start with storepage field = in the shortcode. | https://docs.storelocatorplus.com/locator-data-the-field-names/ | 2020-02-16T22:47:44 | CC-MAIN-2020-10 | 1581875141430.58 | [] | docs.storelocatorplus.com |
Types of additional IP addresses
First of all it is important to understand the concept of „IP Routing“ from which arises different types of additional IP addresses that can be ordered from Cherry Servers.
Routing is the mechanism that allows a system (in this case a server) to find the network path to another system. A route is a defined pair of addresses which represent the “destination” and a “gateway”. The route indicates that when trying to get to the specified destination, send the packets through the specified gateway.
For illustrative purposes let's take the „Dedicated IP address“, which is offered with each server at Cherry Servers, and explore its routing principles.
We have a dedicated 198.51.100.10 IP address that came with the server. Dedicated IP addresses come with an assigned „gateway“ address so that the server's operating system knows whom to send packets to by default. In this case the gateway IP is the 198.51.100.1 address.
When your server needs to send packets to 8.8.8.8 it looks up its routing table and sees that its default route is to 198.51.100.1, so the packet is sent to the 198.51.100.1 gateway. At the other end there is a router which owns this gateway address 198.51.100.1 and is able to forward your packet for 8.8.8.8 to the internet so it reaches its requested destination.
We can also examine what configuring a „Dedicated IP“ with its gateway IP looks like on Ubuntu 16 down below:
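A minimal sketch of what that configuration might look like in /etc/network/interfaces on Ubuntu 16.04, using the example addresses above (the interface name, netmask and DNS server are assumptions and will differ per server):

auto eth0
iface eth0 inet static
    address 198.51.100.10
    netmask 255.255.255.0
    gateway 198.51.100.1
    dns-nameservers 8.8.8.8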
| https://docs.cherryservers.com/knowledge/additional-ip-addresses | 2020-02-16T21:23:12 | CC-MAIN-2020-10 | 1581875141430.58 | [array(['https://docs.cherryservers.com/hubfs/IP.png',
'dedicated IP work scheme'], dtype=object)
array(['https://docs.cherryservers.com/hubfs/IP2.png',
'dedicated ip Ubuntu 16'], dtype=object) ] | docs.cherryservers.com |
4. Productivity¶
Writing system code in a high-level language such as Haskell should be much more productive than writing it in a low-level language like C.
High-level code is often more concise and most of the boilerplate code (e.g., error management, logging, memory management) can be abstracted away. Fully working code examples stay short and understandable. Writing system code is much more fun as we can quickly get enjoyable results (fewer irrelevant details to manage).
Many errors are caught during the compilation (type checking) which is especially useful with system programming because programs are harder to debug using standard methods (printf) and tools (gdb).
Code is easier to refactor thanks to type-checking, hence more maintainable.
LPT is the Modern Customization Tool for j-platform
LPT Basics
This section contains documentation for installing, upgrading your LPT Environment.
Fundamentals
Logo Platform Tailor is a development platform. You can examine the links here to conceptually recognize LPT
LPT Setup and Configuration
LPT Project Development Guide
LPT Advanced Level
LPT and j-guar provide a great deal of advanced functionality. This section contains some useful concepts for customization.
The link below contains some useful pieces of code that might be helpful while you develop an LPT Project.
You can easily develop an integrated HTML-based application with Jaguar. Here is a sample application developed with Vaadin.
J-GUAR Integrated Vaadin Project Development
Integration Interfaces
With j-platform, you can use the WS and Controller interfaces to integrate systems or applications.
j-platform Custom Web Service
XUI Emulating Controllers
Training and Certification
Before training, we suggest you read the documents.
Training and Certification
Sample Applications
Examples of applications that are described in the platform training and using the different technologies provided by the platform are presented.
The apps were created with different versions. They may need to be updated according to the installed version.
Others
New versions of LPT are announced here. You can find the changes made in each version.
Package socket
Overview ▹
Overview ▾
Package socket provides outbound network sockets.
This package is only required in the classic App Engine environment. Applications running only in App Engine "flexible environment" should use the standard library's net package.

func LookupIP ¶
func LookupIP(ctx context.Context, host string) (addrs []net.IP, err error)
LookupIP returns the given host's IP addresses.
type Conn ¶
Conn represents a socket connection. It implements net.Conn.
type Conn struct { net.Conn }
func Dial ¶

func Dial(ctx context.Context, protocol, addr string) (*Conn, error)

Dial connects to the address addr on the network protocol.

func DialTimeout ¶
func DialTimeout(ctx context.Context, protocol, addr string, timeout time.Duration) (*Conn, error)
DialTimeout is like Dial but takes a timeout. The timeout includes name resolution, if required.
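A minimal usage sketch (the host, port and timeout are placeholders; error handling is trimmed to the essentials):

package main

import (
	"fmt"
	"net/http"
	"time"

	"google.golang.org/appengine"
	"google.golang.org/appengine/socket"
)

func handler(w http.ResponseWriter, r *http.Request) {
	ctx := appengine.NewContext(r)

	// Open an outbound TCP connection with a 10 second dial timeout.
	conn, err := socket.DialTimeout(ctx, "tcp", "example.com:80", 10*time.Second)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	defer conn.Close()

	fmt.Fprintf(w, "connected to %s\n", conn.RemoteAddr())
}

func main() {
	http.HandleFunc("/", handler)
	appengine.Main()
}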
func (*Conn) KeepAlive ¶
func (cn *Conn) KeepAlive() error
KeepAlive signals that the connection is still in use. It may be called to prevent the socket being closed due to inactivity.
func (*Conn) SetContext ¶

func (cn *Conn) SetContext(ctx context.Context)

SetContext sets the context that is used by this Conn.
7.3.4 Z Linear Cubic Splines
There is one interesting variation on the cubic spline in use in our domain: the so-called Z Linear Cubic Spline. As shown in Figure 7.3.4-1, the resulting interpolant is bounded in Z where a cubic spline need not be.
This variation mixes a linear spline interpolant in Z with a natural cubic spline interpolant in (X,Y). It always appears as a natural cubic spline. The figure also includes two variations on the cubic splines. The green curve corresponds to the standard natural cubic spline interpolation from Numerical Recipes. The red curve is a cubic spline as a function of the Z coordinate. The Z Linear Cubic Spline is a function of the parameter P, and has continuous derivatives at the knots when expressed as a function of P, but not as a function of Z.
The Station ID Insertion feature is part of the Playlist Template Builder tool. The Playlist Template Builder enables you to create a basic Playlist Template that will, at playlist generation time, generate a playlist containing your radio station Station IDs or Spots.
Note: For an advanced approach to playlist generation containing Station IDs or Spots see the How to configure Scheduling & Logging section.
To find the Station ID Insertion feature, click the icon on the toolbar and click the Template Builder... button. The Station ID Insertions section is at the bottom of the Playlist Template Builder dialog box.
Create your Station ID (or Spots) album. We recommend that you compile an Ots Album file that contains your spot tracks using Ots Studio. This is not necessary but recommended for tidiness.
Create a Station ID category in the Media Library, see the How to create a new Category section if you are unfamiliar with creating categories. Give the category an identifying name like "Station ID" or "Spots".
Import and add your Station ID album to the Station ID category, see Importing Ots Album Files to the Media Library and How to add Albums to a category.
You are now ready to build a template that includes Station ID Insertion. For an example of this, click here.
Hot Tip: Scheduling provides a powerful approach to Station ID or Spot insertion. Click here for details. Also see the Scheduling & Logging reference topic.
How to build a playlist template containing Station IDs
Playlist Template Builder
Generate Playlist dialog box
Scheduling & Logging
How to configure the Scheduling & Logging | http://docs.otslabs.com/OtsAV/help/using_otsav/playlist_features/station_id_insertion.htm | 2019-03-18T18:28:10 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.otslabs.com |
.
But before deciding on embedded as your user interface of choice, be sure to consider the following factors and whether they apply to your use case. If users need to collaborate via the news feed, then Tempo may be the more appropriate user interface since it supports news. To achieve a seamless user experience, use the same external authentication system (i.e., single sign-on) for both the non-Appian web site and the Appian environment.
To enable these requests, you must enable cross-origin resource sharing from your Appian server. To do so, navigate to the Embedded Interfaces page of the Appian Administration Console and add the external web site's domain to the list of allowed origins:
<!DOCTYPE html>
<html>
<head>
  <!-- This script loads the Appian Web components; change it to your Appian server's domain -->
  <script src="" id="appianEmbedded"></script>
</head>
<body>
  <!-- This custom HTML element specifies an Appian report to embed on the HTML page -->
  <appian-report></appian-report>
</body>
</html>
<script src="" id="appianEmbedded"></script>:
<appian-report></appian-report>
The
<appian-action> element is used to embed an action. Any process model with a SAIL start form may be embedded. Activity-chaining is supported so long as the chain includes only SAIL tasks.
The attributes for the
<appian-action> element are listed below.
The example below is a static example of the
<appian-action> element.
<appian-action processModelUuid="[process model UUID]"></appian-action>
To use this static example for testing, simply replace the value of the
processModelUuid attribute with a value from your environment.
The
<appian-record-view> element is used to embed a record view. The Summary view and any custom record views may be embedded. Embedding is not supported for the News and Related Actions views, nor is it supported for any record views for the User record. Related action shortcuts within a record view are supported and activity-chaining from related actions is supported so long as the chain includes only SAIL tasks.
The attributes for the
<appian-record-view> element are listed below.
The example below is a static example of the
<appian-record-view> element.
<appian-record-view></appian-record-view>

The <appian-related-action> element is used to embed a related action. Activity-chaining is supported so long as the chain includes only SAIL tasks.
The attributes for the
<appian-related-action> element are listed below.
The example below is a static example of the
<appian-related-action> element.
<appian-related-action></appian-related-action>
<appian-report></appian-report>
The
<appian-task> element is used to embed a task. Any task that uses a SAIL interface can be embedded. Activity-chaining is supported for embedded tasks so long as the chain includes only SAIL tasks. Task management, including preview, saving, acceptance and unacceptance, reassignment, and rejection, are all supported for embedded tasks.
The attributes for the
<appian-task> element are listed below.
The example below is a static example of the
<appian-task> element.
<appian-task taskId="[task ID]"></appian-task>
The following troubleshooting tips will help you diagnose and resolve common issues. To be notified when a user submits an embedded SAIL interface, listen for the submit event:
<script>
  function submitted() {
    // Place appropriate submit handling here
    alert("The task has been submitted!");
  }
  window.onload = function() {
    document.getElementById("embeddedTask").addEventListener("submit", submitted, false);
  }
</script>
<appian-task id="embeddedTask" taskId="[task ID]"></appian-task>
For simplicity, the above example uses a static
<appian-task> element and basic JavaScript event handling mechanisms, but since these are normal JavaScript events, any JavaScript library's event handling mechanism should work..
To apply custom styling to the Appian interfaces embedded in a host web page, add a
data-themeidentifier attribute to the script tag specifying the theme to apply to the embedded interfaces. Here is an example of a script tag with a theme specified:
<script src="" id="appianEmbedded" data-</script>:
<script src="" id="appianEmbedded" data-</script>:
<script src="" id="appianEmbedded" data-</script>.
const container = document.getElementById('appianContainer');
const userAgent = navigator.userAgent;
const isSupportedBrowser = userAgent && userAgent.includes('Mozilla/') && (
  userAgent.includes('Chrome/') ||
  userAgent.includes('Firefox/') ||
  userAgent.includes('Safari/') ||
  userAgent.includes('Edge/') ||
  userAgent.includes('Trident/7') || // IE11
  userAgent.includes('Mobile/'));
if (isSupportedBrowser) {
  const appianBootstrap = document.createElement('script');
  appianBootstrap.setAttribute('type', 'text/javascript');
  appianBootstrap.setAttribute('src', '');
  appianBootstrap.setAttribute('id', 'appianEmbedded');
  container.appendChild(appianBootstrap);
  const appianReport = document.createElement('appian-report');
  appianReport.setAttribute('reportUrlStub', 'DtJN3Q');
  container.appendChild(appianReport);
} else {
  // Inject alternate behavior for unsupported browsers here
}
Because Appian Web components act like ordinary HTML elements, they can be created using normal DOM manipulation techniques. The following example code dynamically embeds a task using only JavaScript:
<!DOCTYPE html> <html> <head> <!-- This script loads the Appian Web components; change it to your Appian server's domain --> <script src="" id="appianEmbedded"></script> <script> /* This function dynamically inserts an <appian-task> element into the page with the user-specified taskId */ function addTask() { var taskId = document.getElementById('new-task').value; if (taskId) { var newTask = document.createElement('appian-task'); newTask.setAttribute("taskId", taskId); newTask.addEventListener("submit", handleSubmit, false); document.body.insertBefore(newTask, document.getElementById('insertPoint')); } } /* This function is called by the submit event listener */ function handleSubmit() { alert("The task has been submitted!"); } </script> </head> <body> <input id="new-task" /> <button onclick="return addTask();">Embed This Task</button> <div id="insertPoint"></div> </body> </html>:
<!DOCTYPE html> <html> <head> <script src="" id="appianEmbedded"></script> <script src=""></script> <script> /* This function dynamically inserts an <appian-task> element into the page with the user-specified taskId */ function addTask() { var taskId = $("#new-task").val(); if (taskId) { $('<appian-task />').attr("taskId", taskId).on("submit", handleSubmit).appendTo("#taskContent"); } } /* This function is called by the submit event listener */ function handleSubmit() { alert("The task has been submitted!"); } </script> </head> <body> <div id="inputs"> <input id="new-task" /><button onclick="return addTask();">New Task</button> </div> <div id="taskContent"></div> </body> </html>.
var dialog = document.getElementById('dialog');
var task = dialog.querySelector('appian-task');
task.destroy();
If your containing web site is a single-page application that allows users to repeatedly open embedded interfaces (e.g., opening and submitting a series of tasks), you must frequently trigger a page refresh to release browser memory. Failing to do so can eventually lead to poor browser performance on Google Chrome and Apple Safari or crashing on Mozilla Firefox and Internet Explorer 9, 10, and 11. Because memory usage varies by application, we recommend testing your application to determine how often to trigger a page refresh. One way to implement an auto-refresh is to implement a counter that increments each time an embedded interface is displayed and triggers a refresh once the counter reaches the specified threshold..
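One rough sketch of such a counter (the threshold value here is arbitrary and should be tuned from your own memory testing):

<script>
  var EMBED_REFRESH_THRESHOLD = 25; // arbitrary example value, tune per application
  var embedCount = 0;

  // Call this each time an embedded interface is displayed
  function trackEmbeddedInterfaceShown() {
    embedCount++;
    if (embedCount >= EMBED_REFRESH_THRESHOLD) {
      window.location.reload();
    }
  }
</script>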
<!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <title>Task List Demo</title> <script src=""></script> <script src="" id="appianEmbedded"></script> <script> function showTask(taskId) { $("#tasks").empty(); $('<appian-task />').attr("taskId", taskId).on("submit", handleSubmit).appendTo("#tasks"); } function handleSubmit() { $("#tasks").empty(); showTaskList(); } function showTaskList() { $.ajax({ type: 'GET', url: "", dataType:"json", contentType: 'application/json', xhrFields: { withCredentials: true } }) .done(function(data) { $("<h1>").text("Appian Tasks").appendTo("#tasks"); $.each(data, function(i, item) { $("<a>").attr("href", "#") .text(item.DisplayName) .click(function() { showTask(item.Id); }) .appendTo("#tasks"); }); }); } </script> <style> body { background-color: #fff; font-family: Sans-Serif; width: 1000px; margin: 0 auto !important; } h1 { color: #333; font-size: 24px; font-weight: bold; margin: 20px 0; } #tasks a { font-size: 14px; line-height: 18px; font-weight: bold; color: #285fab; text-decoration: none; display: block; margin: 0 0 8px; } #tasks a:hover, #tasks a:focus { color: #ee6615; } </style> </head> <body onload="showTaskList();"> <div id="tasks"></div> </body> </html>
On This Page | https://docs.appian.com/suite/help/17.1/Embedded_Interfaces.html | 2019-03-18T18:03:14 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.appian.com |
Anomaly Detection
Introduction
Step-by-Step Tutorial with Access Log data.
It detects anomalies in a time series data frame. It employs an algorithm referred to as Seasonal Hybrid ESD (S-H-ESD), which can detect both global and local anomalies in the time series data by taking seasonality and trend into account. The algorithm was built by a team at Twitter for monitoring their traffic.
How to Access?
How to Configure?
Column Selection
- Date/Time Column - Select a Date or POSIXct data type column that holds date/time information.
- Aggregation Level - When data type is Date, data is aggregated (e.g. summed, averaged, etc.) for each day. When data type is POSIXct, level of aggregation can be day, hour, minute, or second.
- Value Column - Select either 'Number of Rows' or a numeric column for which you want to detect anomalies.
- Aggregation Function - Select an aggregate function such as 'sum', 'mean', etc. to aggregate the values.
Parameters
- How to Fill NA - This algorithm requires NAs to be filled. The default is Fill with Previous Value. This can be...
- Fill with Previous Value
- Fill with Zero
- Linear Interpolation
- Spline Interpolation
- Direction of Anomaly (Optional) - The default is "both". Direction of anomaly. This can be...
- "both" - Both positive and negative direction.
- "pos" - Only positive direction.
- "neg" - Only negative direction.
- With Expected Values (Optional) - The default is TRUE. Whether expected_values should be returned.
- Maximum Ratio of Anomaly Data (Optional) - The default is 0.1. The maximum ratio of anomaly data compared to the total number of data points.
- Alpha (Sensitivity to Anomaly Data) (Optional) - The default is 0.05. The larger the value, the more anomaly data are captured.
- Report Only Last Values within (Optional) - The default is NULL. Find only last anomalies within a day or hour. This can be
- NULL - Find all anomalies.
- "day" - Find last anomalies within a day.
- "hr" - Find last anomalies within an hour.
- Threshold of Positive Anomaly (Optional) - The default is 'None'. If this is specified, only positive anomalies above the threshold are reported. This can be
- 'None' - No threshold.
- 'med_max' - Median of daily max values.
- 'p95' - 95th percentile of the daily max values.
- 'p99' - 99th percentile of the daily max values.
- Longer Time Span than a Month (Optional) - The default is FALSE. This should be TRUE if the time span is longer than a month.
- Piecewise Median Time Window (Optional) - The default is 2. The size of piecewise median time window (span of seasons). The unit is weeks.
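For reference, the defaults above line up with the arguments of the open-source AnomalyDetection R package that Twitter published for S-H-ESD. The following is only a rough sketch of an equivalent direct call, under the assumption that you are using that package; Exploratory's own wrapper may differ.

# devtools::install_github("twitter/AnomalyDetection")
library(AnomalyDetection)

# df has two columns: a POSIXct timestamp and a numeric value,
# already aggregated to the level chosen above (e.g. hourly counts).
res <- AnomalyDetectionTs(
  df,
  max_anoms = 0.1,                     # Maximum Ratio of Anomaly Data
  direction = "both",                  # Direction of Anomaly
  alpha     = 0.05,                    # Alpha (Sensitivity to Anomaly Data)
  only_last = NULL,                    # Report Only Last Values within (NULL, "day", "hr")
  threshold = "None",                  # Threshold of Positive Anomaly
  e_value   = TRUE,                    # With Expected Values
  longterm  = FALSE,                   # Longer Time Span than a Month
  piecewise_median_period_weeks = 2    # Piecewise Median Time Window
)
res$anoms   # timestamps, anomalous values and (optionally) expected values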
How to Read the Result?
- Date / Time Column
- Value Column
- pos_anomaly - Returns TRUE if an anomaly is detected in the positive direction for the row.
- pos_value - Anomaly values in the positive direction.
- neg_anomaly - Returns TRUE if an anomaly is detected in the negative direction for the row.
- neg_value - Anomaly values in the negative direction.
- expected_value - The values that the model would have expected based on the underlying trend. | https://docs.exploratory.io/ml/anomaly.html | 2019-03-18T17:49:05 | CC-MAIN-2019-13 | 1552912201521.60 | [array(['images/anomaly1.png', None], dtype=object)
array(['images/anomaly2.png', None], dtype=object)
array(['images/anomaly3.png', None], dtype=object)] | docs.exploratory.io |
Installing an HDP Cluster
If you are installing an HDF cluster that includes Stream Analytics Manager (SAM), you must have an existing HDP cluster with Druid installed. This section provides instructions for installing HDP for use with SAM. For complete HDP installation instructions, see the Ambari Installation for HDP
- Log in to the Apache Ambari UI and start the Cluster Installation wizard. The default Ambari user name and password are admin and admin.
- In the Select Version page, under public repositories, remove all Base URLs that do not apply to your operating system.
- Change the HDP Base URL to the URL appropriate for the HDP version you are installing, provided in the HDF Release Notes.
- On the Choose Services step, you must select the following services to run an HDF cluster with full SAM capabilities.
HDFS
YARN + MapReduce2
ZooKeeper
Ambari Infra
Ambari Metrics
SmartSense
Druid
You may further customize as required by your use case and operational objectives.
- In the Assign Masters page, distribute master services using the deployment diagrams available in Planning Your Deployment.
- On the Assign Slaves and Clients screen, distribute slave services using the deployment image as a guide. | https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.3.1/installing-hdf-and-hdp/content/installing_an_hdp_cluster.html | 2019-03-18T18:56:46 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.hortonworks.com |
Delete part of a string
Member of String Intrinsic Functions (PRIM_LIBI.ICommonStringIntrinsics)
DeleteSubstring deletes the characters in a string from the specified start position as far as the specified length. If a length is not specified, all characters after the start position will be deleted.
In this example, if #String contained 'abcd', the result would be 'acd'.
#Com_owner.Caption := #String.DeleteSubstring( 2 1)
All Component Classes
Technical Reference
Febuary 18 V14SP2 | https://docs.lansa.com/14/en/lansa016/prim_libi.icommonstringintrinsics_deletesubstring.htm | 2019-03-18T17:54:10 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.lansa.com |
Remote Authentication in SharePoint Online Using Claims-Based Authentication
Summary: Learn how to authenticate against Microsoft SharePoint Online in client applications using the SharePoint client-side object models.
Applies to: Business Connectivity Services | Open XML | SharePoint Designer 2010 | SharePoint Foundation 2010 | SharePoint Online | SharePoint Server 2010 | Visual Studio
Provided by: Robert Bogue, Thor Projects, LLC (Microsoft SharePoint MVP)
Contents
Introduction to Remote Authentication in SharePoint Online Using Claims-Based Authentication
Brief Overview of SharePoint Authentication
Evolution of Claims-Based Authentication
SharePoint Claims Authentication Sequence
Using the Client Object Models for Remote Authentication in SharePoint Online
Reviewing the SharePoint Online Remote Authentication Sample Code Project
Conclusion
Additional Resources
About the Author
Download the code: Remote Authentication in SharePoint Online Using the Client Object Model
Introduction to Remote Authentication in SharePoint Online Using Claims-Based Authentication
The decision to rely on cloud-based services, such as Microsoft SharePoint Online, is not made lightly and is often hampered by the concern about access to the organization's data for internal needs. In this article, I will address this key concern by providing a framework and sample code for building client applications that can remotely authenticate users against SharePoint Online by using the SharePoint 2010 client-side object model.
Note
Although this article focuses on SharePoint Online, the techniques discussed can be applied to any environment where the remote SharePoint 2010 server uses claims-based authentication.
I will review the SharePoint 2010 authentication methods, provide details for some of the operation of SharePoint 2010 with claims-mode authentication, and describe an approach for developing a set of tools to enable remote authentication to the server for use with the client-side object model.
Brief Overview of SharePoint Authentication
In Office SharePoint Server 2007, there were two authentication types: Windows authentication, which relied upon authentication information being transmitted via HTTP headers, and forms-based authentication. Forms-based authentication used the Microsoft ASP.NET membership and roles engines for managing users and roles (or groups). This was a great improvement in authentication over the 2003 version of the SharePoint technologies, which relied exclusively on Windows authentication. However, it still made it difficult to accomplish many scenarios, such as federated sign-on and single sign-on.
To demonstrate the shortcomings of relying solely on Windows authentication, consider an environment that uses only Windows authentication. In this environment, users whose computers are not joined to the domain, or whose configurations are not set to automatically transmit credentials, are prompted for credentials for each web application they access, and in each program they access it from. So, for example, if there is a SharePoint-based intranet on intranet.contoso.com, and My Sites are located on my.contoso.com, users are prompted twice for credentials. If they open a Microsoft Word document from each site, they are prompted two more times, and two more times for Microsoft Excel. Obviously, this is not the best user experience.
However, if the same network uses forms-based authentication, after users log in to SharePoint, they are not prompted for authentication in other applications such as Word and Excel. But they are prompted for authentication on each of the two web applications.
Federated login systems, such as Windows Live ID, existed, but integrating them into Office SharePoint Server 2007 was difficult. Fundamentally, the forms-based mechanism was designed to authenticate against local users, that is, it was not designed to authenticate identity based on a federated system. SharePoint 2010 addressed this by adding direct support for claims-based authentication. This enables SharePoint 2010 to rely on a third party to authenticate the user, and to provide information about the roles that the user has.
Evolution of Claims-Based Authentication
Because computers are designed to accommodate multiple users, authorization has always been a challenge. At first, the operating system validated the user. Then, the operating system told the application who the user was. This was the very first claim, made by the operating system to the application. The essence of this claim was the identity of the user. The operating system had determined who the user was, probably through a password that the user provided. The application did not have to ascertain who the user was; it simply trusted the operating system.
This authentication process worked well until it became necessary for a user to use applications on systems that they were not logged in to. At this point, the application had to perform its own authentication. Applications started with the same approach that operating systems had used—requiring the user's name and password. Authentication works well this way, but it requires that every application validate the user name and password. Maintaining multiple separate authentication databases became problematic, because users would want to use their same user names and passwords across applications.
Creating accounts with the same user name and password is a manageable problem. However, as the concern for security grew, the requirement for users to periodically change their passwords created an unmanageable situation, where the users would find it difficult to keep their passwords synchronized between different applications. Shared authentication mechanisms were developed to solve this problem. Once again, an application could rely upon a third party for authentication of the user.
The most popular approach for a shared authentication database is one that uses the Kerberos protocol. The Kerberos protocol provides mechanisms that enable a user to authenticate against a centralized server and then convey his or her identity through a ticket signed by that server. The advantage of this approach is that the centralized server is the only one that must know the user's password or other identifying information. This works well for providing authentication information, but it relies on a single store for user identities.
Today, some of the users of your application are using identities that you do not control. Consider an organization that provides payroll or retirement plan services for other companies. These organizations may need to accept users who belong to many other companies, and have no requirement to authenticate them individually. They have to know only that the organization that they have a contract with can identify the user. In these cases, the centralized server for authentication does not exist; there is no centralized entity that can validate every user.
Similarly, if you have an extranet website that is designed to work with multiple partners, you may not want to manage user accounts for all of your partners' employees, or even let the partners manage them. Instead, you just want to take the remote server's claim of the user's identity.
This is one of the great features of a claims-based login. Another system, which your system trusts, provides a claim of the user's identity.
There are many standards, such as WS-Federation, WS-Security, and WS-Trust that define how this sort of arrangement should work. WS-Federation is the most relevant because it describes a specific approach for the exchange of federated authentication. SharePoint 2010 implements the WS-Federation standard, as do many other Microsoft and non-Microsoft products. This means that the SharePoint claims implementation can talk to many other systems.
In addition to the authentication information described previously, there is the capability for the infrastructure to make other claims about the user, including profile properties such as name and email address. The claim can also contain the roles, or groups, that a user belongs to. This opens the door for applications, including SharePoint 2010, to use what the claims provider (known as an issuing party) trusts about the user. Then, applications can set authorization to do something to the roles that are conveyed in the claims token.
The ability for a claim to convey more than just simple identity necessitates rules about what types of claims an application will accept from a third party. The WS-Trust standard supports the idea that one party may rely on another party, as described previously. In addition, it also allows for a chain of trust, where the application, such as SharePoint 2010, trusts an internal provider such as Active Directory Federation Services (AD FS) 2.0, which in turn trusts another party or even multiple other parties. AD FS 2.0 creates its own claims token for SharePoint, based on the information that it received from the issuing party that it trusts. This is particularly useful because AD FS 2.0 is a claims transformation engine. That is, it can change one claim, such as a property, into another claim, such as role membership. AD FS 2.0 can also filter the claims made by a third party, so that the third party cannot, for example, pass a claim to the application asserting that a user is an administrator.
SharePoint Claims Authentication Sequence
Now that you have learned about the advantages of claims-based authentication, we can examine what actually happens when you work with claims-based security in SharePoint. When using classic authentication, you expect that SharePoint will issue an HTTP status code of 401 at the client, indicating the types of HTTP authentication the server supports. However, in claims mode a more complex interaction occurs. The following is a detailed account of the sequence that SharePoint performs when it is configured for both Windows authentication and Windows Live ID through claims.
The user selects a link on the secured site, and the client transmits the request.
The server responds with an HTTP status code of 302, indicating a temporary redirect. The target page is /_layouts/authenticate.aspx, with a query string parameter of Source that contains the server relative source URL that the user initially requested.
The client requests /_layouts/authenticate.aspx.
The server responds with a 302 temporary redirect to /_login/default.aspx with a query string parameter of ReturnUrl that includes the authentication page and its query string.
The client requests the /_login/default.aspx page.
The server responds with a page that prompts the user to select the authentication method. This happens because the server is configured to accept claims from multiple security token services (STSs), including the built-in SharePoint STS and the Windows Live ID STS.
The user selects the appropriate login provider from the drop-down list, and the client posts the response on /_login/default.aspx.
The server responds with a 302 temporary redirect to /_trust/default.aspx with a query string parameter of trust with the trust provider that the user selected, a ReturnUrl parameter that includes the authenticate.aspx page, and an additional query string parameter with the source again. Source is still a part of the ReturnUrl parameter.
The client follows the redirect and gets /_trust/default.aspx.
The server responds with a 302 temporary redirect to the URL of the identity provider. In the case of Windows Live ID, the URL is with a series of parameters that identify the site to Windows Live ID and a wctx parameter that matches the ReturnUrl query string provided previously.
The client and server iterate an exchange of information, based on the operation of Windows Live ID and then the user, eventually ending in a post to /_trust/default.aspx, which was configured in Windows Live ID. This post includes a Security Assertion Markup Language (SAML) token that includes the user's identity and Windows Live ID signature that specifies that the ID is correct.
The server responds with a redirect to /_layouts/authenticate.aspx, as was provided initially as the redirect URL in the ReturnUrl query string parameter. This value comes back from the claims provider as wctx in the form of a form post variable. During the redirect, the /_trust/default.aspx page writes two or more encrypted and encoded authentication cookies that are retransmitted on every request to the website. These cookies consist of one or more FedAuth cookies, and an rtFA cookie. The FedAuth cookies enable federated authorization, and the rtFA cookie enables signing out the user from all SharePoint sites, even if the sign-out process starts from a non-SharePoint site.
The client requests /_layouts/authenticate.aspx with a query string parameter of the source URL.
The server responds with a 302 temporary redirect to the source URL.
Note
If there is only one authentication mechanism for the zone on which the user is accessing the web application, the user is not prompted for which authentication to use (see step 6). Instead, /_login/default.aspx immediately redirects the user to the appropriate authentication provider—in this case, Windows Live ID.
SharePoint Online Authentication Cookies
An important aspect of this process, and the one that makes it difficult but not impossible to use remote authentication for SharePoint Online in client-side applications, is that the FedAuth cookies are written with an HTTPOnly flag. This flag is designed to prevent cross-site scripting (XSS) attacks. In a cross-site scripting attack, a malicious user injects script onto a page that transmits or uses cookies that are available on the current page for some nefarious purpose. The HTTPOnly flag on the cookie prevents Internet Explorer from allowing access to the cookie from client-side script. The Microsoft .NET Framework observes the HTTPOnly flag also, making it impossible to directly retrieve the cookie from the .NET Framework object model.
Note
For SharePoint Online, the FedAuth cookies are written with an HTTPOnly flag. However, for on-premises SharePoint 2010 installations, an administrator could modify the web.config file to render normal cookies without this flag.
Using the Client Object Models for Remote Authentication in SharePoint Online
The starting point for using the SharePoint client-side object model for remote authentication is getting a ClientContext object. It is this client context object that ties the other operations in the object model to the server and specified site. In a Windows-based HTTP authentication scenario, the client context behaves as Internet Explorer behaves; it automatically transmits credentials to the server if the server is in the Intranet zone. In most cases, this works just fine. The server processes the credentials and automatically authenticates the user.
In a forms-based authentication environment, it is also possible to use the FormsAuthenticationLoginInfo object to provide forms-based authentication to the server. However, this works only for forms-based authentication. It does not work for federated-based claims scenarios because SharePoint does not own the actual authentication process.
Creating an authenticated ClientContext object is a multistep process. First, the user must be able to sign into the remote system interactively. First, the user signs into SharePoint through the federated authentication provider, and SharePoint must issue its authentication cookies. Second, the code must retrieve the authentication cookies. Third, those cookies must be added to the ClientContext object.
Enabling User Login for Remote Authentication
The .NET Framework includes a System.Windows.Forms.WebBrowser object that is designed to enable the use of a web browser inside of an application. To enable the user to log in to the federated authentication provider, this object must be created and displayed. The goal, however, is to retrieve the authentication cookies issued by SharePoint Online. To determine when the login process is completed, you must register a handler on the Navigated event. This event fires after the browser has completed navigation. This event handler watches for the user to be returned to the URL that they started navigating from. When this occurs, the code knows that the user has completed the login sequence.
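In outline, the wiring looks something like the sketch below. This is not the sample's actual ClaimsWebAuth code; loginUrl and navigationEndUrl are placeholders for the values the sample obtains from the server, and the cookie-reading call is covered in the following sections.

using System;
using System.Windows.Forms;

// Sketch only: show a browser window, let the user log in, and detect completion.
static void ShowLoginBrowser(string loginUrl, string navigationEndUrl)
{
    var form = new Form();
    var browser = new WebBrowser { Dock = DockStyle.Fill };

    browser.Navigated += (sender, e) =>
    {
        // The login sequence is finished once the browser lands back on the
        // navigation-end URL; at that point SharePoint has written its cookies.
        if (e.Url.ToString().StartsWith(navigationEndUrl, StringComparison.OrdinalIgnoreCase))
        {
            form.Close();
        }
    };

    form.Controls.Add(browser);
    browser.Navigate(loginUrl);
    form.ShowDialog();
}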
Fetching the SharePoint Authentication Cookies
Because the FedAuth cookies are written with an HTTPOnly flag, they cannot be accessed from the .NET Framework. To retrieve the cookies, a call must be made to the WININET.dll. The .NET Framework can call COM-based DLL methods through PInvoke (platform invoke). In this case, the method to be called is InternetGetCookieEx. This can return regular cookies and those with the HTTPOnly flag. However, this method works only by starting with Internet Explorer 8. After the cookie is retrieved, it must be added to the client context.
For more information about PInvoke, see Calling Native Functions from Managed Code.
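A hedged sketch of that interop is shown below. It is not the downloadable sample's exact code; the INTERNET_COOKIE_HTTPONLY constant is the standard WinINET flag value, and the buffer sizing mirrors the double-call pattern described later.

using System;
using System.Runtime.InteropServices;
using System.Text;

internal static class CookieReaderSketch
{
    private const int INTERNET_COOKIE_HTTPONLY = 0x00002000;

    [DllImport("wininet.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    private static extern bool InternetGetCookieEx(
        string url, string cookieName, StringBuilder cookieData,
        ref int size, int flags, IntPtr reserved);

    public static string GetAuthCookies(string url)
    {
        int size = 512;
        var buffer = new StringBuilder(size);
        if (!InternetGetCookieEx(url, null, buffer, ref size, INTERNET_COOKIE_HTTPONLY, IntPtr.Zero))
        {
            if (size <= 0) return null;              // no cookies available for this URL
            buffer = new StringBuilder(size);        // buffer was too small; retry with the reported size
            if (!InternetGetCookieEx(url, null, buffer, ref size, INTERNET_COOKIE_HTTPONLY, IntPtr.Zero))
                return null;
        }
        return buffer.ToString();                    // e.g. "FedAuth=...; rtFa=..."
    }
}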
Adding the SharePoint Authentication Cookies to the ClientContext object
The final step is to retrieve the authentication cookies from the user's login and to attach them to the client context. Unfortunately, there is no direct way to add the cookies to the request. However, there is an event, ExecutingWebRequest, which is called before the ClientContext object makes a request to the server. By adding an event handler to this event, you can add a new request header with the authentication cookies. After this is complete, the ClientContext object can be used normally, and the rest of the code is completely unaware that the authentication is based on a federated authentication.
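A sketch of that last step follows; targetSite and authCookies are placeholders for the site URL and the cookie string captured above, and the event wiring is what keeps the rest of the client object model unaware of the federated login.

using Microsoft.SharePoint.Client;

ClientContext ctx = new ClientContext(targetSite);
ctx.ExecutingWebRequest += (sender, e) =>
{
    // Attach the captured FedAuth/rtFa cookies to every outgoing request.
    e.WebRequestExecutor.WebRequest.Headers.Add("Cookie", authCookies);
};

// From here on, ctx behaves like any other authenticated client context.
ctx.Load(ctx.Web);
ctx.ExecuteQuery();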
Reviewing the SharePoint Online Remote Authentication Sample Code Project
The code sample that accompanies this article demonstrates this technique of adding the SharePoint authentication cookies to the ClientContext object. It provides a set of classes that you can use to perform federated user authentication. You can start with the sample program to see what changes you must make when using this code compared to using an HTTP authenticated web server.
The Sp_Ctx SharePoint Authentication Sample Client Application
The Sp_Ctx project is a command-line program that uses a SharePoint ClientContext object to retrieve information about the web that the context is pointed to.
Note
When using the Sp_Ctx sample, you must specify the web URL as an https request. Specifying the web URL as an http request will result in an exception.
The project refers to the ClaimsAuth library but is otherwise the same. In fact, the main code looks almost identical to the code that you would find in a standard client application.
01: static void Main(string[] args)
02: {
03:     if (args.Length < 1) { Console.WriteLine("SP_Ctx <url>"); return; }
04:     string targetSite = args[0];
05:     using (ClientContext ctx = ClaimClientContext.GetAuthenticatedContext(targetSite))
06:     {
07:         if (ctx != null)
08:         {
09:             ctx.Load(ctx.Web); // Query for Web
10:             ctx.ExecuteQuery(); // Execute
11:             Console.WriteLine(ctx.Web.Title);
12:         }
13:     }
14:     Console.ReadLine();
15: }
In this code, the only difference from code you would find in a standard client application is in line 05. Instead of creating a new ClientContext, we call ClaimClientContext.GetAuthenticatedContext. The big difference is that calling ClaimClientContext.GetAuthenticatedContext displays a form to enable the user to supply his or her credentials to the remote system, captures the authentication cookies needed to authenticate requests, and adds that header to the client context.
ClaimClientContext in the SharePoint Online Authentication Sample
Examining the next layer of the code, notice the ClaimClientContext object that was being called. It exposes two methods, the one used in the sample, GetAuthenticatedContext and GetAuthenticatedCookie. GetAuthenticatedCookie is called by GetAuthenticatedContext. In fact, GetAuthenticatedContext calls GetAuthenticatedCookie to get the cookies and wraps those in a context.
GetAuthenticatedCookie opens the dialog box that displays the website, and watches for the authentication request to complete so that the authentication cookies can be retrieved.
ClaimsWebAuth in the SharePoint Online Authentication Sample
The ClaimsWebAuth class gets the authentication cookies. It encapsulates a form and a WebBrowser object. The first step, which is performed during object construction, is to gather the login and end navigation pages. This is done through the GetClaimParams method, which makes a web request with an HTTP OPTIONS method.
When the Show method is called, the WebBrowser object is created, and an event handler is added for the Navigated event. This event is called every time the WebBrowser object is finished navigating. The event handler detects when the web browser has reached the navigation end URL. The event receiver makes the call to CookieReader, which in turn reads from WinINET to get the HTTPOnly FedAuth cookie.
With the event receiver in place, the WebBrowser is navigated to the login URL and the authentication process occurs. When it is finished, the authentication cookies are returned to the caller.
CookieReader in the SharePoint Online Authentication Sample
The final piece of the sample code is the CookieReader class, which contains the call to WinINET.dll and a helper function to get the cookie. The helper function, named GetFedAuthCookie, returns a string that represents the cookie in the form of "Name=Value." GetFedAuthCookie calls the WinINET.dll method InternetGetCookieEx to fetch the cookie.
InternetGetCookieEx returns false if the size of the string buffer that is passed in is not large enough. It also sets the size parameter to the size of buffer it needs. As a result, GetFedAuthCookie may have to call InternetGetCookieEx twice to get the value of the cookie, if the initial buffer is too small.
Conclusion
This article describes how to perform claims-based authentication for Microsoft SharePoint Online in client applications by using the SharePoint 2010 client-side object models. SharePoint Online provides a compelling and flexible option for companies that want the powerful collaborative platform of SharePoint, without the operational costs that are associated with hosting software on-premises. And, with the techniques discussed in this article, developers can use the SharePoint client-side object models to create client applications that are capable of remotely authenticating against SharePoint Online.
About the Author
Robert Bogue has been a part of the Microsoft MVP program for 8 years. Robert’s latest book is The SharePoint Shepherd’s Guide for End Users. You can find out more about the book at. Robert blogs at. You can reach Robert at [email protected].
Additional Resources
For more information about remote authentication in SharePoint Online using claims-based authentication, see the following resources: | https://docs.microsoft.com/en-us/previous-versions/office/developer/sharepoint-2010/hh147177(v%3Doffice.14) | 2019-03-18T18:32:58 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.microsoft.com |
Enable cost rollup calculations

Enable rollup calculations from the project properties.

Before you begin: Role required: it_project_manager

Procedure:
1. Navigate to Project > Settings > Preferences.
2. Select Enable project cost rollup.
3. Click Save.

Result: Rollup values are read-only on forms. Point to the icon beside the field for a tooltip message.
OpenEBS Features and Benefits
OpenEBS Features
- Containerized Storage for Containers
- Synchronous replication
- Snapshots and clones
- Backup and Restore
- Prometheus metrics and Grafana graphs
OpenEBS Benefits
- Granular policies per stateful workload
- Reduced storage TCO up to 50%
- Native Hyperconvergence on Kubernetes
- High availability - No Blast Radius
- Free cross cloud visibility of stateful applications
For more information on how OpenEBS is used in cloud native environments, visit the use cases section.
OpenEBS Features
Containerized Storage for Containers
OpenEBS follows CAS architecture. Volumes provisioned through OpenEBS are always containerized. Each volume has a dedicated storage controller that increases the agility and granularity of persistent storage operations of the stateful applications. Benefits and more details on CAS architecture are found here
Synchronous Replication
OpenEBS synchronously replicates data volume replicas for high availability. Replication can happen across Kubernetes zones, making cloud native applications highly available in cross-AZ setups. This feature becomes especially useful for building highly available stateful applications using local disks on cloud provider services such as GKE, EKS, and AKS.
Snapshots and Clones
Copy-on-write snapshots are a key feature of OpenEBS. The snapshots are created instantaneously and there is no limit on the number of snapshots. The incremental snapshot capability enables data migration and portability services across Kubernetes clusters and across different cloud providers or data centers, enabling a true multi-cloud data plane for stateful applications. Operations on snapshots and clones are performed in a completely Kubernetes-native way using standard kubectl commands.
Backup and Restore
Backup and restore of OpenEBS volumes works with Kubernetes backup and restore solutions such as VMware Velero (formerly Heptio Ark). Data backup to object storage targets such as S3 or Minio can be built using the OpenEBS incremental snapshot capability. This storage-level snapshotting and backup saves significant bandwidth and storage space, as only incremental data is used for backup.
Prometheus Metrics for Workload Tuning
OpenEBS volumes are instrumented for granular data metrics such as volume IOPS, throughput, latency and data patterns. As OpenEBS follows CAS architecture, Stateful applications can be tuned for better performance by observing the traffic data patterns on Prometheus and tweaking the storage policy parameters without worrying about neighbouring workloads that are using OpenEBS
OpenEBS Benefits
Truly Cloud Native Storage for Kubernetes
With CAS architecture and being completely in user space, OpenEBS is a truly cloud native storage for stateful applications on Kubernetes. This greatly simplifies how persistent storage is used and managed by developers and DevOps architects. They use the standard Kubernetes skills and utilities to configure, use and manage the persistent storage needs.
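For example, an application requests an OpenEBS-backed volume with an ordinary PersistentVolumeClaim; the only OpenEBS-specific detail is the storage class name, which depends on how OpenEBS was installed (the class name below is a placeholder):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-vol-claim
spec:
  storageClassName: openebs-standard   # replace with a StorageClass created by your OpenEBS install
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi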
Avoid Cloud Lock-in
Even with Kubernetes, data gravity concerns exist on clouds. Kubernetes makes it possible to move stateful applications across clouds, but when their data is written directly to the cloud provider's storage infrastructure, the result is cloud lock-in. With OpenEBS, data is written to the OpenEBS layer, which acts as a data abstraction layer. Using this abstraction, data can be moved across Kubernetes clusters, eliminating the expensive cloud lock-in issue.
Granular Policies Per Stateful Workload
Containerizing the storage software and dedicating a controller to each volume brings maximum granularity to storage policies. Storage parameters can be monitored on a per-volume basis, and storage policies can be dynamically updated at run time to achieve the desired result for a given workload. The policies are tested and tuned with only the particular workload in mind, and neighbouring workloads are unaffected. The operations and maintenance burden of storage is greatly reduced because of this dedicated storage stack per workload.
Reduced Storage TCO up to 50%
On most clouds, block storage is charged based on how much is purchased, not on how much is used. The thin provisioning feature of OpenEBS is useful for pooling local or cloud storage and handing out data volumes to stateful applications in whatever size they need. Storage can be added on the fly without any disruption to the volumes exposed to the workloads or applications. This approach has shown cost savings of up to 50% in the medium to long term when running workloads on clouds.
Native Hyperconvergence on Kubernetes
Node Disk Manager in OpenEBS enables disk management in a Kubernetes way or by using Kubernetes constructs. Using OpenEBS, nodes in the Kubernetes cluster can be horizontally scaled without worrying about managing persistent storage needs of stateful applications. The storage needs (capacity planning, performance planning, and volume management) of a cluster can easily be automated using the volume and pool policies of OpenEBS.
High Availability
There is no blast radius effect. The CAS architecture does not have the blast radius issue typically observed in traditional storage systems. Volume metadata is not centralized and is kept local to the volume. Losing any node results in the loss of only the volume replicas present on that node. Because the volume data is synchronously replicated to at least two other nodes, in the event of a node failure the data continues to be available at the same performance levels.
Free Cross Cloud Visibility of Stateful Applications
MayaOnline is the SaaS service for OpenEBS enabled Kubernetes clusters that provide comprehensive monitoring and management of OpenEBS volumes. Logs of all OpenEBS volume pods are instantly uploaded to MayaOnline and available for users through Kibana dashboard. Topology view on MayaOnline used very often to understand the Kubernetes resources when they are deployed at scale. | https://staging-docs.openebs.io/docs/next/features.html | 2019-03-18T18:01:22 | CC-MAIN-2019-13 | 1552912201521.60 | [] | staging-docs.openebs.io |
Installed with Change Management - State Model

Several types of components are installed with the Change Management - State Model.

Related tasks: Update change request states

Properties installed with Change Management - State Model
Change Management - State Model adds the following properties. Note: To open the System Property [sys_properties] table, enter sys_properties.list in the navigation filter.

- glide.ui.change_request_activity.fields - Change request activity formatter fields. Type: string. Default value: assigned_to,cmdb_ci,state,impact,priority,opened_by,work_notes,comments,on_hold_reason. Location: System Property [sys_properties] table.
- com.snc.change_management.core.log - Controls the level at which logging should be displayed. Type: choice list. Default value: debug. Other possible values: info, warn, error. Location: System Property [sys_properties] table.

Business rules installed with Change Management - State Model
Change Management - State Model adds the following business rules.

- Scratchpad Variables from parent Change (Change Task [change_task]) - Sets a flag in the scratchpad variable to indicate if the change task has a change request that is on hold.
- mark_closed (Change Request [change_request]) - Sets a change request to inactive depending on the current state.
- Cancel approvals when Change is on hold (Change Request [change_request]) - Cancels all approvals if the change request is put on hold.

Client scripts installed with Change Management - State Model
Change Management - State Model adds the following client scripts.

- Hide On hold for certain states (Change Request [change_request]) - Hides the On hold field if the state was New, Closed, or Canceled when the Change Request form was loaded.
- Field message - state (Change Task [change_task]) - Adds a field message to the State field under certain conditions, such as when the change is on hold.
- Show valid states values (Change Request [change_request]) - Changes the State field to display only the current state and the next valid state for the change request. Adds a field message to the State field when the current state requires an approval or the change request is On hold.
- Show On hold reason when on hold ticked (Change Request [change_request]) - Makes the On hold reason field mandatory when the On hold check box is selected.

Table modified with Change Management - State Model
Change Management - State Model modifies the list view of the following table.

- Change Request [change_request] - Sets the column order in the list of change requests.

Script includes installed with Change Management - State Model
Change Management - State Model adds the following script includes.

- ChangeRequest - Change request API. Provides an abstraction from the legacy and new change types and state models.
- ChangeRequestStateHandlerSNC - Base state handler implementation extended by ChangeRequestStateHandler.
- ChangeRequestStateHandler - Transition between states. Uses one of the defined models to determine which transitions are allowed.
- ChangeRequestStateModelSNC_emergency - Extended by ChangeRequestStateModel_emergency.
- ChangeRequestStateModel_emergency - State model for emergency changes.
- ChangeRequestStateModelSNC_standard - Extended by ChangeRequestStateModel_standard.
- ChangeRequestStateModel_standard - State model for standard changes.
- ChangeRequestStateModelSNC_normal - Extended by ChangeRequestStateModel_normal.
- ChangeRequestStateModel_normal - State model for normal changes.
- ChangeRequestStateHandlerAjaxSNC - Base client API extended by ChangeRequestStateHandlerAjax.
- ChangeRequestStateHandlerAjax - Client-callable API for ChangeRequestStateHandler.
windowSize
This parameter specifies the size of a slot. A publisher returns its last message ID to the slot manager each time it publishes a number of messages equal to the number specified in this parameter.
Increasing the window size would increase the number of messages received per consumer. Thus the subscribers who are the earliest to be allocated slots would receive a higher number of messages. A lower window size would result in an even load of messages received by each subscriber. The default/recommended value is 1000.
workerThreadCount
deleteThreadCount
maxSubmitDelay
messageAccumulationTimeout
SlotDeleteQueueDepthWarningThreshold
thriftClientPoolSize
maxNumberOfReadButUndeliveredMessages
This parameter specifies the maximum number of undelivered messages that are allowed to be retained in the memory.
The default value for the maxNumberOfReadButUndeliveredMessages parameter is 1000. Increasing this value can cause out-of-memory exceptions, but performance will be improved because the number of database calls will be reduced. Therefore, your allocated server memory capacity should be considered when configuring this parameter.
ringBufferSize
This parameter specifies the thread pool size of the queue delivery workers.
The default value of 4096 for the ringBufferSize parameter can be increased if there are a lot of unique queues in the system. An increased ring buffer size is also required if the slot window size is large and therefore, there is a large number of messages to be delivered.
parallelContentReaders
This parameter specifies the number of parallel readers used to read content from the message store.
The default value for the parallelContentReaders parameter, the parallelDeliveryHandlers parameter and the parallelDecompressionHandlers parameter is 5. Increasing this value would increase the speed of the message sending mechanism, however, the load on the message store would also be increased. A higher number of cores is required to increase these values.
parallelDeliveryHandlers
parallelDecompressionHandlers
contentReadBatchSize
ackHandlerCount
This parameter specifies the number of message acknowledgement handlers to process acknowledgements concurrently.
The default value of 1 for the ackHandlerCount parameter should be increased in a high throughput scenario with a relatively high amount of messages being delivered to the consumers. The value for this parameter can be decreased when the value specified for the ackHandlerBatchSize parameter is high and as a result, each individual acknowledgement handler can handle a higher number of acknowledgements. Note that increasing the number of acknowledgement handlers when the number of messages being delivered and acknowledged is low, or when the value specified for the ackHandlerBatchSize parameter is high can result in idle acknowledgement handlers incurring an unnecessary system overhead.
ackHandlerBatchSize
This parameter specifies the maximum number of acknowledgements that can be handled by an acknowledgement handler.
The default value of 100 for the ackHandlerBatchSize parameter should be increased when there is an increase in the number of messages being delivered to consumers. If the number of acknowledgements that can be handled by each individual acknowledgement handler is too low in a high throughput scenario, it will be required to increase the number of acknowledgement handlers. This would increase the number of calls made to the database, thereby increasing the system overhead.
maxUnckedMessages
contentChunkHandlerCount
This parameter specifies the number of handlers that should be available to handle content chunks concurrently.
The default value of 3 for the contentChunkHandlerCount parameter should be increased when there is a significant number of large messages being published. A low value can be specified when the value for the maxContentChunkSize parameter is high; in such situations, each individual handler will be able to handle a higher number of content chunks. Note that increasing this value in a scenario where there are not many large messages published, or when the maximum content chunk size is high, can result in idle handlers causing an unnecessary system overhead.
maxContentChunkSize
allowCompression
This parameter specifies whether or not content compression is enabled. If enabled, messages published to the Message Broker profile will be compressed before storing in the DB, to reduce the content size.
contentCompressionThreshold
bufferSize
This parameter specifies the size of the Disruptor ring buffer for inbound event handling.
It is recommended to increase the value for the bufferSize parameter when there is an increase in the rate of publishing. The default/recommended value is 65536.
parallelMessageWriters
This parameter specifies the number of parallel writers used to write content to the message store.
Increasing the value for the parallelMessageWriters parameter increases the speed of the message receiving mechanism. However, it would also increase the load on the data store. A higher number of cores are required to increase this value. The default/recommended value is 1.
messageWriterBatchSize
This parameter specifies the maximum batch size of the batch write operation for inbound messages.
The messageWriterBatchSize parameter should be used in high throughput scenarios to avoid database requests with a high load. A higher number of cores are required to increase this value. The default/recommended value is 70.
purgedCountTimeout
vHostSyncTaskInterval
counterTaskInterval
This parameter specifies the delay which should occur between the end of one execution and the start of another, in milliseconds.
contentRemovalTaskInterval
This parameter specifies the task interval for the content removal task, which removes the actual message content from the store in the background.
If the publish/consumer rate is very high, a low value should be entered for the contentRemovalTaskInterval parameter to increase the number of delete requests per task.
Transactions
maxWaitTimeout
deepTools: tools for exploring deep sequencing data¶
deepTools is a suite of python tools particularly developed for the efficient analysis of high-throughput sequencing data, such as ChIP-seq, RNA-seq or MNase-seq.
There are 3 ways for using deepTools:
- Galaxy usage – our public deepTools Galaxy server lets you use the deepTools within the familiar Galaxy framework without the need to master the command line
- command line usage – simply download and install the tools (see Installation and The tools)
- API – make use of your favorite deepTools modules in your own python programs (see deepTools API)
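As a quick illustration of the command line route above, a single BAM file can be turned into a bigWig coverage track with the bamCoverage tool; this is only a minimal sketch, and the available options vary between deepTools versions (run bamCoverage --help to check yours):

# convert aligned reads to a coverage track using 4 processors
bamCoverage -b reads.bam -o coverage.bw -p 4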
The flow chart below depicts the different tool modules that are currently available.
If the file names in the figure mean nothing to you, please make sure to check our Glossary of NGS terms.
Contents:¶
While developing deepTools, we continuously strive to create software that fulfills the following criteria:
- efficiently extract reads from BAM files and perform various computations on them
- turn BAM files of aligned reads into bigWig files using different normalization strategies
- make use of multiple processors (speed!)
- generation of highly customizable images (change colours, size, labels, file format, etc.)
- enable customized down-stream analyses, meaning that every data set created can be stored by the user
- modular approach - compatibility, flexibility, scalability (i.e. we can add more and more modules and make use of established methods)
Tip
For support, questions, or feature requests contact: [email protected]
Please cite deepTools as follows: Ramírez, Fidel, et al. "deepTools2: a next generation web server for deep-sequencing data analysis." Nucleic Acids Research (2016): gkw257.
This tool suite is developed by the Bioinformatics Facility at the Max Planck Institute for Immunobiology and Epigenetics, Freiburg. | https://deeptools.readthedocs.io/en/stable/ | 2019-03-18T18:30:33 | CC-MAIN-2019-13 | 1552912201521.60 | [array(['_images/start_collage.png', '_images/start_collage.png'],
dtype=object)
array(['_images/start_workflow1.png', '_images/start_workflow1.png'],
dtype=object)
array(['_images/logo_mpi-ie1.jpg', '_images/logo_mpi-ie1.jpg'],
dtype=object) ] | deeptools.readthedocs.io |
about_ProvMachineCreationSnapIn
Topic
about_ProvMachineCreationSnapin
Short Description
The Machine Creation Service PowerShell snap-in provides administrative functions for the Machine Creation Service.
Command Prefix
All commands in this snap-in have the noun prefixed with 'Prov'.
Long Description
The Machine Creation Service PowerShell snap-in enables both local and remote administration of the Machine Creation Service. It provides facilities to create virtual machines and manage the associated disk images.
The snap-in provides two main entities:
Provisioning Scheme Specifies details of new virtual machines created by the Machine Creation Service. Provisioning schemes define the following information.
Hosting Unit Provides details of the hypervisor and storage on which new virtual machines will be created. Stored and maintained by the Host Service and PowerShell snap-in.
Identity Pool Lists the Active Directory computer accounts available for use by new virtual machines. Stored and maintained by the Active Directory Identity Service and PowerShell snap-in.
Master Image Specifies the disk image that will be used for new virtual machines. Accessed through the hosting provider in the Host Service snap-in.
Provisioned VM Defines the virtual machines created by the Machine Creation Services. These virtual machines are associated with the provisioning scheme from which they were created.
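A minimal sketch of creating a provisioning scheme from these entities and monitoring it asynchronously is shown below; the names and the master image path are placeholders, and the exact parameter set of New-ProvScheme depends on the SDK version in use.

$taskId = New-ProvScheme -ProvisioningSchemeName "Win10Scheme" `
    -HostingUnitName "MyHostingUnit" `
    -IdentityPoolName "Win10IdentityPool" `
    -MasterImageVM "XDHyp:\HostingUnits\MyHostingUnit\Base.vm\Gold.snapshot" `
    -RunAsynchronously

# Poll the long-running task until it completes.
Get-ProvTask -TaskId $taskId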
The processes of creating provisioning schemes and new virtual machines can take a significant amount of time to complete. For this reason, these long-running tasks can be run asynchronously so that other commands are accessible while the processes are running. Note, however, that only one long-running task can operate on a provisioning scheme at any one time. The processes are monitored using the Get-ProvTask command. For more information, see the help for Get-ProvTask. | https://developer-docs.citrix.com/projects/citrix-virtual-apps-desktops-sdk/en/latest/MachineCreation/about_ProvMachineCreationSnapin/ | 2019-03-18T18:25:12 | CC-MAIN-2019-13 | 1552912201521.60 | [] | developer-docs.citrix.com |
Installing additional components
Depending on your organization’s needs, you may have purchased additional features that add specialized functionality to your Dynamics GP system. A Dynamics GP feature can be a single function or a complete a range of related business and accounting tasks that use one or more modules. Several products that integrate with Dynamics GP are included on the Dynamics GP media.
Dynamics GP features
After you’ve installed Dynamics GP, you may decide to purchase an additional feature or remove a feature. Some features add a single function to your Dynamics GP system while some, such as Manufacturing, allow you to complete a range of related business and accounting tasks that use one or more modules. You can use the Select Features window to install or uninstall a feature. For more information about accessing this window, see Adding or removing additional features.
You can register Dynamics GP using the Registration window (Administration >> Setup >> System >> Registration) after you install. For more information about registration, see Registering Dynamics GP. All features are registered for the sample company, Fabrikam, Inc. For more information about the sample company, see Adding sample company data.
The following lists show the Dynamics GP features. The features available depend on the country or region you selected when installing Dynamics GP.
For all countries and regions:
- A4
- Analysis Cubes Client
- Analytical Accounting
- Date Effective Tax Rates
- Electronic Bank Reconcile
- Encumbrance Management
- Enhanced Intrastat
- Fixed Asset Management
- Grant Management
- Manufacturing
- Multilingual Checks
- Payment Document Management
- Professional Services Tools Library
- Project Accounting
- Revenue/Expense Deferrals
- Safe Pay
- Service Based Architecture
- VAT Daybook
- Web Client Runtime
For all countries and regions except Canada and the United States:
- Bank Management
- Direct Debit Refunds
- Scheduled Installments
For the United States:
- Human Resources and Payroll suite
For Belgium and France:
- Export Financial Data
Note
We recommend that you install each Dynamics GP feature and additional component that you are going to register on all client computers.
Be sure to follow the instructions in the Dynamics GP Utilities windows after installing a feature. Depending on the feature that you’re installing, you may have to update tables and update your companies.
After you install a feature, be sure that the feature is at the current version. You can't log in to Dynamics GP on a client computer if a product installed on the client has different version information than the server. You can use the GP_LoginErrors file to help troubleshoot the issue. To add or remove features, start from the Dynamics GP installation media: double-click the Setup.exe file to open the Dynamics GP installation window and work through the installation.
In the Installation Complete window, click Exit.
Start Dynamics GP Utilities. Choose Start >> All Programs >> Microsoft Dynamics >> GP >> GP Utilities. Follow the instructions in the Dynamics GP Utilities windows; depending on the feature that you're installing, you may have to update tables and update your companies.
After the processing is finished, the Additional Tasks window will open, where you can perform additional tasks, start Dynamics GP, or exit the installation.
Additional components
A smaller set of additional components are separate installations available on the Dynamics GP media. These additional components are listed on the main Dynamics GP installation window for media. For more information about accessing this window, see Installing an additional component.
There are some additional components that are released only on CustomerSource.
Installing an additional component
Use this procedure to install an additional component after you've installed Dynamics GP. From the installation media, double-click the Setup.exe file to open the Dynamics GP installation window.
See Also
Using Microsoft Dynamics Utilities
Our feedback system is built on GitHub Issues. Read more on our blog. | https://docs.microsoft.com/en-us/dynamics-gp/installation/installing-additional-components | 2019-03-18T18:13:32 | CC-MAIN-2019-13 | 1552912201521.60 | [array(['media/service-based-architecture-login.png',
'Login screen login screen for service based architecture service'],
dtype=object) ] | docs.microsoft.com |
Sample (DirectX HLSL Texture Object)
Samples a texture.
Parameters
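As a sketch, the call has the following general form; see the sampler and DXGI_FORMAT documentation for the full parameter details.

// General form (sketch):
//   Object.Sample(sampler_state S, float Location [, int Offset]);
// S        - the sampler state that defines filtering and addressing.
// Location - the texture coordinates; for texture arrays, the last component selects the array slice.
// Offset   - an optional integer texel offset applied to the location before sampling.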
Return Value
The texture format, which is one of the typed values listed in DXGI_FORMAT. The following partial code example is from the BasicHLSL10.fx file in the BasicHLSL10 Sample.
// Object Declarations
Texture2D g_MeshTexture;            // Color texture for mesh
SamplerState MeshTextureSampler
{
    Filter = MIN_MAG_MIP_LINEAR;
    AddressU = Wrap;
    AddressV = Wrap;
};

struct VS_OUTPUT
{
    float4 Position  : SV_POSITION; // vertex position
    float4 Diffuse   : COLOR0;      // vertex diffuse color (note that COLOR0 is clamped from 0..1)
    float2 TextureUV : TEXCOORD0;   // vertex texture coords
};

VS_OUTPUT In;

// Shader body calling the intrinsic function
...
Output.RGBColor = g_MeshTexture.Sample(MeshTextureSampler, In.TextureUV) * In.Diffuse;
Remarks
Texture sampling uses the texel position to look up a texel value. An offset can be applied to the position before lookup. The sampler state contains the sampling and filtering options. This method can be invoked within a pixel shader, but it is not supported in a vertex shader or a geometry shader.
Calculating Texel Positions
Texture coordinates are floating-point values that reference texture data, which is also known as normalized texture space. Address wrapping modes are applied in this order (texture coordinates + offsets + wrap mode) to modify texture coordinates outside the [0...1] range.
For texture arrays, an additional value in the location parameter specifies an index into a texture array. This index is treated as a scaled float value (instead of the normalized space for standard texture coordinates). The conversion to an integer index is done in the following order (float + round-to-nearest-even integer + clamp to the array range).
Applying Texture Coordinate Offsets
The offset parameter modifies the texture coordinates, in texel space. Even though texture coordinates are normalized floating-point numbers, the offset applies an integer offset.
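For illustration only (this snippet is not part of the original sample), a lookup with a compile-time texel offset can reuse the g_MeshTexture and MeshTextureSampler objects declared above; the offset values here are arbitrary:

// Hypothetical offset sample: shifts the lookup by +1 texel in U and -2 texels in V.
// The offset components must be compile-time constants in the range -8..7.
float4 shiftedColor = g_MeshTexture.Sample(MeshTextureSampler, In.TextureUV, int2(1, -2));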
The data format returned is determined by the texture format. For example, if the texture resource was defined with the DXGI_FORMAT_A8B8G8R8_UNORM_SRGB format, the sampling operation converts sampled texels from gamma 2.0 to 1.0, filter, and writes the result as a floating-point value in the range [0..1]. | https://docs.microsoft.com/en-us/windows/desktop/direct3dhlsl/dx-graphics-hlsl-to-sample | 2019-03-18T17:45:25 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.microsoft.com |
These Bootstrap 4 element properties can be set for any element on the page.
Responsive vs. non-responsive properties
Responsive properties can be set for a particular display size, while non-responsive properties define the element behavior in general, irrespective of the device size.
An example of a non-responsive property is the Text Color property. It lets us choose the color of the text, but we can’t make the text red on small screens and blue on others.
An example of a responsive property is the Text Align property. We can make the text left aligned on mobile devices, centered on tablets and right aligned on larger devices.
Responsive properties have a device size selector displayed at the top of the property section.
By default, XS (extra small) is selected. In Bootstrap, that means that the property value will affect XS and above, encompassing all screen sizes.
So, if you just leave XS setting, that’s how the element will behave on all screen sizes.
To display settings for another display size, toggle the size in the device size selector:
Then, if you set a different value for that size – for example, LG (large) – that setting will override the default XS value for large sizes and up.
Let’s look at an example:
Here we set the Text Align property for XS to left and to right for LG and above.
On XS, SM and MD sizes the text will be left aligned. On LG and XL, it will be right aligned.
Of course, we don’t have to set the XS value at all. We can make the text left aligned on SM, and right aligned on larger devices. That means that for the XS size, the value of the Text Align Bootstrap property is not defined. The text on XS will be aligned according to whatever CSS rules affect that element.
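Under the hood, these settings map to standard Bootstrap 4 responsive utility classes. As a rough illustration (this markup is not generated by Pinegrow itself, it just shows the idea), the Text Align example above corresponds to something like:

<!-- Left aligned on XS/SM/MD, right aligned on LG and XL -->
<p class="text-left text-lg-right">Responsive text alignment</p>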
Those sizes that are not visible in the active page view will be shown darker in the Device size selector. For example, XL is dimmed because it is not visible on the currently active page view that has the LG size.
Let’s list all general Bootstrap 4 sections in the Element properties. We won’t go into explaining how Bootstrap works here. We’ll only mention things specific to working with Bootstrap in Pinegrow that are not self-evident.
Layout
Responsive
Text & Context
Non-responsive
Spacing
Responsive
Display
Non-responsive
Border
To display borders only on certain sides (for example, top and bottom), select the Border checkbox and then use the Hide multi-select control to select the sides where the border is hidden – in our example, left and right.
Columns
Columns are a responsive control, but here we list the settings for different sizes in a table, instead of using Device size selector.
Why is the columns control shown for all elements, not just for divs in rows?
Bootstrap columns are handy for sizing and positioning various elements, not just main column divs.
You could set a column span on a button, for example.
Of course, according to Bootstrap rules, these settings only work properly if the element is positioned inside a row element. That’s why, if you set a column value on an element that is not located in a row, Pinegrow will offer to create a wrapping row element.
Visibility
Non-responsive
Flex Container and Flex child
Responsive
These controls are almost identical to Flex controls in the Style Visual editor. That’s because these Bootstrap helper classes directly correspond to different values of CSS Flex properties.
Flex Container settings apply to containers and Flex child settings to the items in Flex containers.
Tooltip
Check the Tooltip checkbox and set the parameters to display a tooltip on the selected element.
bamCoverage
If you are not familiar with BAM, bedGraph and bigWig formats, you can read up on that in our Glossary of NGS terms
This tool takes an alignment of reads or fragments as input (BAM file) and generates a coverage track (bigWig or bedGraph) as output. The coverage is calculated as the number of reads per bin, where bins are short consecutive counting windows of a defined size. It is possible to extended the length of the reads to better reflect the actual fragment length. bamCoverage offers normalization by scaling factor, Reads Per Kilobase per Million mapped reads (RPKM), counts per million (CPM), bins per million mapped reads (BPM) and 1x depth (reads per genome coverage, RPGC).
An example usage is:
$ bamCoverage -b reads.bam -o coverage.bw
Usage hints
- A smaller bin size value will result in a higher resolution of the coverage track but also in a larger file size.
- The 1x normalization (RPGC) requires the input of a value for the effective genome size, which is the mappable part of the reference genome. Of course, this value is species-specific. The command line help of this tool offers suggestions for a number of model species.
- It might be useful for some studies to exclude certain chromosomes in order to avoid biases, e.g. chromosome X, as male mice contain a pair of each autosome, but usually only a single X chromosome.
- By default, the read length is NOT extended! This is the preferred setting for spliced-read data like RNA-seq, where one usually wants to rely on the detected read locations only. A read extension would neglect potential splice sites in the unmapped part of the fragment. Other data, e.g. ChIP-seq, where fragments are known to map contiguously, should be processed with read extension (--extendReads [INTEGER]).
- For paired-end data, the fragment length is generally defined by the two read mates. The user provided fragment length is only used as a fallback for singletons or mate reads that map too far apart (with a distance greater than four times the fragment length or are located on different chromosomes).
Warning
If you already normalized for GC bias using correctGCbias, you should absolutely NOT set the parameter --ignoreDuplicates!
Note
Like BAM files, bigWig files are compressed, binary files. If you would like to see the coverage values, choose the bedGraph output via --outFileFormat.
Usage example for ChIP-seq
This is an example for ChIP-seq data using additional options (smaller bin size for higher resolution, normalizing coverage to 1x mouse genome size, excluding chromosome X during the normalization step, and extending reads):
bamCoverage --bam a.bam -o a.SeqDepthNorm.bw \
    --binSize 10 --normalizeUsing RPGC \
    --effectiveGenomeSize 2150570000 \
    --ignoreForNormalization chrX --extendReads
If you had run the command with --outFileFormat bedgraph, you could easily peek into the resulting file.
$ head SeqDepthNorm_chr19.bedgraph
19  60150  60250  9.32
19  60250  60450  18.65
19  60450  60650  27.97
19  60650  60950  37.29
19  60950  61000  27.97
19  61000  61050  18.65
19  61050  61150  27.97
19  61150  61200  18.65
19  61200  61300  9.32
19  61300  61350  18.65
As you can see, each row corresponds to one region. If consecutive bins have the same number of reads overlapping, they will be merged.
Usage examples for RNA-seq
Note that some BAM files are filtered based on SAM flags (Explain SAM flags).
Regular bigWig track
bamCoverage -b a.bam -o a.bw
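If you want tracks that are comparable between samples, the normalization options described above also apply to RNA-seq. A possible invocation (output file name and processor count are arbitrary choices, not prescribed by deepTools) using CPM normalization would be:

bamCoverage -b a.bam -o a.CPM.bw \
    --binSize 10 --normalizeUsing CPM \
    --numberOfProcessors 4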
Separate tracks for each strand
Sometimes it makes sense to generate two independent bigWig files for all reads on the forward and reverse strand, respectively.
As of deepTools version 2.2, one can simply use the --filterRNAstrand option, such as --filterRNAstrand forward or --filterRNAstrand reverse.
This handles paired-end and single-end datasets. For older versions of deepTools, please see the instructions below.
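For example (file names are placeholders), generating the two strand-specific tracks looks like this:

# reads that originated from the forward strand
bamCoverage -b a.bam -o a.fwd.bw --filterRNAstrand forward

# reads that originated from the reverse strand
bamCoverage -b a.bam -o a.rev.bw --filterRNAstrand reverse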
Note
The --filterRNAstrand option assumes that the sequencing library was generated using the ILLUMINA dUTP/NSR/NNSR methods, which are the most commonly used methods for library preparation, where Read 2 (R2) is in the direction of the RNA strand (reverse-stranded library). However, other methods exist which generate read R1 in the direction of the RNA strand (see this review). For these libraries, --filterRNAstrand will have the opposite behavior, i.e. --filterRNAstrand forward will give you the reverse strand signal and vice-versa.
Versions before 2.2
To follow the examples, you need to know that -f will tell samtools view to include reads with the indicated flag, while -F will lead to the exclusion of reads with the respective flag.
For a stranded `single-end` library
# Forward strand
bamCoverage -b a.bam -o a.fwd.bw --samFlagExclude 16

# Reverse strand
bamCoverage -b a.bam -o a.rev.bw --samFlagInclude 16
For a stranded `paired-end` library
Now, this gets a bit cumbersome, but future releases of deepTools will make this more straight-forward. For now, bear with us and perhaps read up on SAM flags, e.g. here.
For paired-end samples, we assume that a proper pair should have the mates on opposing strands, where the Illumina strand-specific protocol produces reads in a R2-R1 orientation. We basically follow the recipe given in this biostars tutorial.
To get the file for transcripts that originated from the forward strand:
# include reads that are 2nd in a pair (128);
# exclude reads that are mapped to the reverse strand (16)
$ samtools view -b -f 128 -F 16 a.bam > a.fwd1.bam

# include reads that are first in a pair (64) and
# map to the reverse strand (16): 64 + 16 = 80
$ samtools view -b -f 80 a.bam > a.fwd2.bam

# combine the temporary files
$ samtools merge -f fwd.bam a.fwd1.bam a.fwd2.bam

# index the filtered BAM file
$ samtools index fwd.bam

# run bamCoverage
$ bamCoverage -b fwd.bam -o a.fwd.bigWig

# remove the temporary files
$ rm a.fwd*.bam
To get the file for transcripts that originated from the reverse strand:
# include reads that map to the reverse strand (16)
# and are second in a pair (128): 128 + 16 = 144
$ samtools view -b -f 144 a.bam > a.rev1.bam

# include reads that are first in a pair (64), but
# exclude those ones that map to the reverse strand (16)
$ samtools view -b -f 64 -F 16 a.bam > a.rev2.bam

# merge the temporary files
$ samtools merge -f rev.bam a.rev1.bam a.rev2.bam

# index the merged, filtered BAM file
$ samtools index rev.bam

# run bamCoverage
$ bamCoverage -b rev.bam -o a.rev.bw

# remove temporary files
$ rm a.rev*.bam
You need to configure Networking so that the bare metal server can communicate with the Networking service for DHCP, PXE boot and other requirements. This section covers configuring Networking for a single flat network for bare metal provisioning.
It is recommended to use the baremetal ML2 mechanism driver and L2 agent for proper integration with the Networking service. Documentation regarding installation and configuration of the baremetal mechanism driver and L2 agent is available here.
For use with routed networks, the baremetal ML2 components are required.
You will also need to provide Bare Metal service with the MAC address(es) of each node that it is provisioning; Bare Metal service in turn will pass this information to Networking service for DHCP and PXE boot configuration. An example of this is shown in the Enrollment section.
Install the networking-baremetal ML2 mechanism driver and L2 agent in the Networking service.
Edit /etc/neutron/plugins/ml2/ml2_conf.ini and modify these:
[ml2]
type_drivers = flat
tenant_network_types = flat
mechanism_drivers = openvswitch,baremetal

[ml2_type_flat]
flat_networks = physnet1

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True

[ovs]
bridge_mappings = physnet1:br-eth2
# Replace eth2 with the interface on the neutron node which you
# are using to connect to the bare metal server
Restart the neutron-server service to load the new configuration.
Create and edit /etc/neutron/plugins/ml2/ironic_neutron_agent.ini and add the required configuration. For example:
[ironic]
project_domain_name = Default
project_name = service
user_domain_name = Default
password = password
username = ironic
auth_url =
auth_type = password
region_name = RegionOne
Make sure the ironic-neutron-agent service is started.
If neutron-openvswitch-agent runs with ovs_neutron_plugin.ini as the input config-file, edit ovs_neutron_plugin.ini to configure the bridge mappings by adding the [ovs] section described in the previous step, and restart the neutron-openvswitch-agent.
Add the integration bridge to Open vSwitch:
$ ovs-vsctl add-br br-int
Create the br-eth2 network bridge to handle communication between the OpenStack services (and the Bare Metal services) and the bare metal nodes using eth2. Replace eth2 with the interface on the network node which you are using to connect to the Bare Metal service:
$ ovs-vsctl add-br br-eth2
$ ovs-vsctl add-port br-eth2 eth2
Restart the Open vSwitch agent:
# service neutron-plugin-openvswitch-agent restart
On restarting the Networking service Open vSwitch agent, the veth pair between the bridges br-int and br-eth2 is automatically created.
Your Open vSwitch bridges should look something like this after following the above steps:
$ ovs-vsctl show
Bridge br-int
    fail_mode: secure
    Port "int-br-eth2"
        Interface "int-br-eth2"
            type: patch
            options: {peer="phy-br-eth2"}
    Port br-int
        Interface br-int
            type: internal
Bridge "br-eth2"
    Port "phy-br-eth2"
        Interface "phy-br-eth2"
            type: patch
            options: {peer="int-br-eth2"}
    Port "eth2"
        Interface "eth2"
    Port "br-eth2"
        Interface "br-eth2"
            type: internal
ovs_version: "2.3.0"
Create the flat network on which you are going to launch the instances:
$ openstack network create --project $TENANT_ID sharednet1 --share \
  --provider-network-type flat --provider-physical-network physnet1
Create the subnet on the newly created network:
$ openstack subnet create $SUBNET_NAME --network sharednet1 \
  --subnet-range $NETWORK_CIDR --ip-version 4 --gateway $GATEWAY_IP \
  --allocation-pool start=$START_IP,end=$END_IP --dhcp
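As an illustration only, with made-up values substituted for the placeholders above ($SUBNET_NAME, $NETWORK_CIDR, $GATEWAY_IP, $START_IP, $END_IP), the subnet command could look like this:

$ openstack subnet create baremetal-subnet --network sharednet1 \
  --subnet-range 192.168.2.0/24 --ip-version 4 --gateway 192.168.2.1 \
  --allocation-pool start=192.168.2.10,end=192.168.2.200 --dhcp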
In order to help you understand how WordPress themes created with Pinegrow and using Bootstrap Blocks works, we have created a sample content file for your upcoming experimentations.
The sample content will add numerous posts with pictures and meta data in your WordPress setup in order for you to quickly understand how to configure your new theme from the customizer.
IMPORTANT: Please import the Sample content ONLY AFTER you have created, exported and activated your first WordPress theme using Bootstrap Blocks. Otherwise, it will not work.
Step 1
Download the file pinegrowwpwebeditor.wordpress.2015-07-22-1.xml and save it to your desktop. (unzip the file)
Step 2
Log in to your WordPress installation. Move the mouse pointer to the “Tools” heading on the left side and select “Import” on the menu.
Step 3
Click the “WordPress” link. A prompt appears to ask if you would like to download the WordPress Importer tool.
Step 4
Click “Install Now.”
Step 5
Click the “Activate Plugin & Run Importer” link. This activates the WordPress Importer tool and brings you to a screen titled “Import WordPress.”
Step 6
Click the “Choose File” button and double-click the WordPress sample data file on the desktop.
Step 7
Click the “Upload File and Import” button.
Step 8
Click the “Download and import file attachments” check box to download the images attached to the sample posts.
Step 9
Click the “Submit” button to import the sample data into your website. Be patient, the process can take a few minutes until completion.
Now, you have all the Content “sources” (posts, images, meta content) that you need for testing your WordPress theme! | https://docs.pinegrow.com/docs/bootstrap-blocks/sample-content-for-wordpress-themes-using-bootstrap-blocks/ | 2019-03-18T17:57:23 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.pinegrow.com |
Imprint
© 2008-2009 Jonathan Wage, and contributors
ISBN-13: 978-2-918390-26-8
Sensio SA 92-98, boulevard Victor Hugo 92 115 Clichy France [email protected]
This work is licensed under the “Attribution-Share Alike 3.0 Unported” license. The information in this book is distributed on an “as is” basis, without warranty. Although every precaution has been taken in the preparation of this work, neither the author(s) nor Sensio shall have any liability to any person or entity with respect to any loss or damage caused or alleged to be caused directly or indirectly by the information contained in this work.
If you find typos or errors, feel free to report them by creating a ticket, or by writing to [email protected].
The i-Docs 2014 programme is available as a PDF download here and online at the i-Docs 2014 site (optimised for mobile & tablet)
The programme has been structured to reflect these current pressing themes within the field of interactive documentary:
Production models
What are the different models through which an interactive documentary can be produced, how do these relate to documentary intent and purpose – and is anyone making any money yet?
Engagement and evaluation
Who are the audiences for interactive documentary, when is user testing most appropriate, and how can we evaluate impact?
New territories
How are interactive documentaries evolving? Do emerging technologies and different cultural perspectives embrace new voices and new visions? | http://i-docs.org/programme/ | 2019-07-15T20:25:33 | CC-MAIN-2019-30 | 1563195524111.50 | [] | i-docs.org |
General Settings- Visitors
Here we can control how the dashboard behaves when there is heavy traffic, and we can also block users if deemed necessary.
Here we have two sections: the first one is named “Enable High Load Dashboard” and the second one is “Block Visitors”. We will start by having a look at the High Load Dashboard.
Enable High Load Dashboard:
This dashboard can be used when you have hundreds or thousands of visitors on your website; it will only keep the visitors with an active chat on the dashboard so that you can easily organize your tasks.
The High Load Dashboard is an alternative version of the Visitor List that only shows visitors in your Incoming Chats and Currently Served sections. All other visitors are hidden. Recommended for websites with high visitor traffic.
Block Visitors:
Here you can just copy the IP address of the user you want to block, paste it in the text box provided, and hit the lock button. This will allow them to browse through your website but will prevent them from initiating any chat on your website.
Server Monitor Agent Requirements & Compatibility
We wanted our Server Monitor Agent to be as light as possible, with a very tiny footprint on server resource usage. It is fully coded in bash language, and it requires no additional software to be installed for it to function. The following operating systems are compatible and can run the HetrixTools Server […]
Accessing Server Monitoring Data
Once you’ve installed our Server Monitoring Agent on one of your VPS or dedicated servers, you’ll be able to access the resource usage data from your Uptime Monitors list: Simply click the Uptime Monitor name that you’ve installed the agent on: A pop-up will open, containing all of the graphs from the collected data: You […]
Set Resource Usage Warnings
Being alerted whenever your server starts using up too many resources is sometimes critical for your business and infrastructure. Using our Server Monitoring Agent you can easily set up warnings, directly from your dashboard. Start by opening up the collected data from any of your Uptime Monitors that have a Server Agent attached to them: […]
Server Monitor Color Indicators
The […] | https://docs.hetrixtools.com/category/server-monitor/ | 2019-07-15T20:15:31 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.hetrixtools.com |
Whenever Hevo finds that an event cannot proceed ahead in the Pipeline, it is marked as Failed and made visible on the Hevo UI for inspection.
There can be several reasons for us to mark events as Failed.
All such failed events are visible on the Pipeline's Overview screen for inspection.
Replaying events
A majority of the failed events are replayed by Hevo as and when it finds that the underlying reason for those events to fail has been resolved. In some cases, however, you may have to confirm that you have taken the suggested action. Hevo will then try to Replay the events and if they fail again, you will find them again as Failed in a couple of minutes.
When replayed, events are fed back into the Transformations stage of the pipeline.
Note: In case the events were created through the Transformations Code they are fed back to the Schema Mapper stage rather than Transformations stage.
View Sample events
You may want to view a few sample events which were marked as Failed to get to the root cause. You can view Sample events by clicking the View Sample link.
Note
The failed events are held by Hevo for 30 days, after which they will be purged permanently. Purged events will not be replicated to your Destination. Hence, you have to resolve all of the failed events within 30 days.
Components Privacy Consents/de
Privacy Consents screen.
Contents
Description
In this screen you have the ability to look at the list of your privacy policy consents. You can sort them in different ways.
How to access
You can access the Privacy Consents by using the top menu bar Users → Privacy and then clicking on Consents in the left sidebar.
Column Headers
- Status. (Valid/Obsolete/Invalidated) The status of the item.
- Username. The Username of the user who consented to the Privacy Policy
- User ID. The ID of the user who consented to the Privacy Policy
- Subject. Consent to the Privacy Policy
- Body. Displays information about the user information stored and consented (User's IP, browser used ...)
- Created. Indicates when the consent has been given by the user
- ID. This is a unique identification number for the privacy consent assigned automatically by Joomla. It is used to identify the item internally, and you cannot change this number.
- ID Descending (default). Shows the ordering of the selected column, ascending or descending.
- Number of items to display. Shows the number of items shown in the list.
At the top right, you will see the toolbar:
The functions are:
- Help. Opens this help screen.
- Options. Opens the Options screen where settings can be edited.
List Filters
The List Filter, above on the left, lets you limit what items show in the Privacy Consents screen. You can filter by Username, Date of creation or ID. Only items that meet the filter conditions will show on the list.
Enable Access-Based Enumeration on a Namespace
Note
If you upgrade the domain functional level to Windows Server 2008 while there are existing domain-based namespaces, DFS Management will allow you to enable access-based enumeration on these namespaces. However, you will not be able to edit permissions to hide folders from any groups or users unless you migrate the namespaces to the Windows Server 2008 mode. For more information, see Migrate a Domain-based Namespace to Windows Server 2008 Mode.
To use access-based enumeration with DFS Namespaces to control which groups or users can view which DFS folders, you must follow these steps.
Enable access-based enumeration on a namespace
Control which users and groups can view individual DFS folders. For more information, see Using Inherited Permissions with Access-Based Enumeration.
Enabling access-based enumeration on a namespace
Using the Windows interface
Using a command line
Tip
To manage access-based enumeration on a namespace by using Windows PowerShell, use the Set-DfsnRoot, Grant-DfsnAccess, and Revoke-DfsnAccess cmdlets. The DFSN Windows PowerShell module was introduced in Windows Server 2012.
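For instance, using the example namespace paths from later in this topic (treat the exact paths and group name as placeholders), the PowerShell equivalents of the procedures below could look like this:

# Enable access-based enumeration on the namespace root
Set-DfsnRoot -Path '\\contoso.office\public' -EnableAccessBasedEnumeration $true

# Allow only the CONTOSO\Trainers group to view the training folder
Grant-DfsnAccess -Path '\\contoso.office\public\training' -AccountName 'CONTOSO\Trainers'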
Open a command prompt window on a server that has the Distributed File System role service or Distributed File System Tools feature installed.
Type the following command, where <namespace_root> is the root of the namespace:
dfsutil property abe enable \\<namespace_root>
Controlling which users and groups can view individual DFS folders
Using the Windows interface
Using a command line
dfsutil property sd grant <DFSPath> DOMAIN\Account:R (…) Protect Replace
For example, to replace existing permissions with permissions that allow the Domain Admins and CONTOSO\Trainers groups Read (R) access to the \\contoso.office\public\training folder, type the following command:
dfsutil property sd grant \\contoso.office\public\training "CONTOSO\Domain Admins":R CONTOSO\Trainers:R Protect Replace
To perform additional tasks from the command prompt, use the following commands | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd759150(v=ws.11) | 2019-07-15T20:07:30 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.microsoft.com |
GitLab-specific scopes
Scopes may be added to the GitLab OAuthenticator by overriding the scope list, like so:
c.GitLabOAuthenticator.scope = ['read_user']
The following scopes are implemented in GitLab 11.x:
api: Grants complete read/write access to the API, including all
groups and projects. If no other scope is requested, this is the default.
This is a very powerful set of permissions; it is recommended to limit
the scope of authentication to something other than API.
read_user: Grants read-only access to the authenticated user’s
profile through the /user API endpoint, which includes username,
public email, and full name. Also grants access to read-only
API endpoints under /users.
read_repository: Grants read-only access to repositories on
private projects using Git-over-HTTP (not using the API).
write_repository: Grants read-write access to repositories
on private projects using Git-over-HTTP (not using the API).
read_registry: Grants read-only access to container registry
images on private projects.
sudo: Grants permission to perform API actions as any user
in the system, when authenticated as an admin user.
openid: Grants permission to authenticate with GitLab using
OpenID Connect. Also gives read-only access to the user’s
profile and group memberships.
profile: Grants read-only access to the user’s profile data
using OpenID Connect. | https://oauthenticator.readthedocs.io/en/latest/gitlab.html | 2021-09-16T18:54:37 | CC-MAIN-2021-39 | 1631780053717.37 | [] | oauthenticator.readthedocs.io |
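As a minimal sketch (the callback URL, application ID, and secret below are placeholders to replace with your own GitLab application settings), a jupyterhub_config.py that requests only the read_user scope could look like:

# jupyterhub_config.py
c.JupyterHub.authenticator_class = "oauthenticator.gitlab.GitLabOAuthenticator"
c.GitLabOAuthenticator.oauth_callback_url = "https://hub.example.com/hub/oauth_callback"
c.GitLabOAuthenticator.client_id = "your-gitlab-application-id"
c.GitLabOAuthenticator.client_secret = "your-gitlab-application-secret"
# Avoid the powerful default `api` scope; read_user is enough to identify the user
c.GitLabOAuthenticator.scope = ["read_user"]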
This page provides step-by-step guidance on creating a record type object to display employee information.
If you are new to Appian and unfamiliar with Appian design objects and concepts, check out Appian Academy Online.
Appian Records enable you to do more with your data. Regardless of where your data lives, Appian allows you to organize your data into actionable records that allow users to access and update the information that they need.
For example, think about an employee record. What information about the employee would you want to see? You'd probably want to have the employee's name, title, department, and start date readily available. What might you want to do? At some point you would likely have to update the employee's information. You might also need to add a new employee. These common actions can be directly added to your record type and can be referenced in other interfaces and processes.
This tutorial will take about an hour and walk you through the steps to create a basic record type for the employee data in a data store entity (DSE).
Record types are made up of record data, the records themselves, and a record list. This tutorial will walk you through creating a record type in three phases: (1) adding the source data, (2) creating your record views and actions, and (3) configuring your record list.
In the first phase, we will bring together our data and record type to begin connecting our users with the data. To do this, we will:
In the second phase, we will define different ways for users to view each record and add buttons so they can act on the information in each record. To do this, we will:
In the third phase, we will structure the way that our users will interact with the set of records through the record list. To do this, we will:
After successfully completing this tutorial, you will be ready to create your own record types.
If you are interested in learning about record types that use a process model as their source, see the Process Model Tutorial.
To successfully complete this tutorial, you must have already completed the Application Building Tutorial and you should be familiar with building interfaces and creating process models.
Ensure you are familiar with the different design objects that will be referenced throughout this tutorial:
Before we can create our record type, we need to create an application and add the data for our record type. We will also create a group to set visibility, a folder to hold our rules and constants, and a constant.
To get started with this tutorial you must first create the Appian Tutorial application and complete the Application Building Tutorial. In this tutorial, we will be using the application, groups, and folders created in the application building tutorial.
Now that you have your application, groups, and a folder created, we will add a new group for Managers.
AT Managers.
AT Administrators.
While we are doing all the prep work for this tutorial, we will make a folder to save rules and constants.
To create a rule and constant folder:
AT Rules and Constants.
Give the AT Administrators group Administrator permissions.
We will be saving all our expression rules, constants, and interfaces in this folder throughout the tutorial.
Before we can create a record type, we will first create a custom data type (CDT), data store, and DSE to reference and hold the data that we will be adding for our records.
To populate the data for our records, we will be using the Write to Data Store Entity smart service function with an interface (see Use the Write to Data Store Entity Smart Service Function with an Interface).
No matter where your data lives, you can add it as a source for your record type. Before we configure the ways that users will view or act upon the records, we need to first point the record type to the data source. In the first phase of this tutorial we will be creating our record type, naming it, setting security, and using the data from our DSE as our data source.
By creating a record type, we will determine how users will interact with your data through record views, make your data actionable through the use of related actions, and configure the way that your data is displayed in the record list.
Employee.
Employees. This is the name that your users will see when they view the record type from Tempo.
Directly after creating your record type, you need to add the object security.
To set object security:
AT Administrators.
Once you have your record type created and the security set, we need to tell the record type where the data lives. Since all of the data that we will be using for this record type is referenced using a DSE, we will set that DSE as its source.
To set the data source for your record type:
Now that we have our record data, we will configure the records within the record type by creating a record view and a related action. A record view displays information for a single record, and you can create multiple views to display the record information in different ways.
In this tutorial, we will configure a summary view to show information about each employee. After we define the summary view, we will also create a related action so that users can update the information about an employee from the record view. To make the related action, we will create a reusable interface and a process model.
In this step, we will define the summary view of the record type. To do this, we'll first create an interface for our view. Then, we'll add the interface to the summary view in our record type. Lastly, we will configure the record header background color and the record title with the employee's name.
We will start by creating an interface so that we can display the data that we want to see in each record.
To create an employee summary interface:
AT_summaryView.
AT Rules and Constants.
With your interface created, we can now add a component for each field of the record type.
To add components to the interface:
Now we want to add our Employee record data type as a rule input in our interface. This will pass the data for all fields into each record without needing an expression rule to query the data.
To add your record data type as a rule input:
Employee.
With our record data type added as a rule input, it's time to configure our components and connect them to the rule input.
To configure the components:
Your interface should look like this:
Since this summary view is only for displaying record data, we set each field to read-only and made the component labels adjacent. The adjacent labels in this design pattern are a best practice for creating an interface that will be viewed but not edited, such as a summary view. For more information on read-only UX designs, see UX labels.
Record data types as rule inputs should only be used in interfaces that will be read-only views. They should not be used for interfaces that will be a part of a related action or process model.
We have created our employee summary interface, so we will now add the interface as the summary view for the record type. We need to use rv!record in our interface rule for the summary view to pull record data into the view.
To add the summary view to the record type:
Enter rule! and the name of your interface.
For the Employee rule input, enter rv!record. The expression should look like this: rule!AT_summaryView(Employee: rv!record).
rv!record calls the data for each individual employee record and displays it in each record's summary view.
When we combine record data type rule inputs and rv!record, we are able to easily call in the data for each record and display that data in the record's summary view. The rv!record reference knows the record that you're viewing and shows you the data for that record without you having to create an extra expression rule to query the data. This saves us time and simplifies the configuration of the record type.
While we are on the Views page, let's go ahead and configure a record header background and title. The record header background contains and displays the title, breadcrumbs, and related actions on every record view of your record.
You can opt for no background or you can set an image or color. We are going to configure a gray background:
Enter #666666 to select a dark gray color.
Now, we'll configure the record title so that it will show the employee's first and last name. We are adding the employee's name as a record title so that a user landing on this page will understand what they are looking at without having to navigate to the record list. To show the employee's name, concatenate the record fields of firstName and lastName.
rv!record[recordType!Employee.fields.firstName] & " " & rv!record[recordType!Employee.fields.lastName].
Now that we have a summary view set up to view employee information, we are going to add a related action to update the record. We don't just want to be able to view the employee information, we want to take action on it directly from the record. For this example, a manager, HR representative, or the specific employee looking at an employee's record would frequently need to make updates to that information.
We will create a related action for managers to update employee information that will be accessible from the summary view. We will create the related action in two steps. In the first, we will create a group constant, a department constant, an expression rule, and a reusable interface. In the second, we'll create a process model.
Before we make an expression rule and a new interface, we'll be making a constant to point to the manager group so that we can configure the visibility of the related action. Though there are multiple user roles that could need access to this action, we will be narrowing it down to just managers for this example. In the future, keep in mind which users will need to have access to view related actions.
Constants are a useful way to reference groups for setting conditional visibility on record views and related actions. We will set one up for our managers group so that we can make sure that they are the only users editing employee information.
To make a group constant:
AT_MANAGERS_GROUP.
Group.
AT_Rules and Constants.
Now we are going to create a constant for our department list so we can reference a set of pre-defined department names in our update employee interface. This constant allows users to only select from a pre-defined list of departments when updating the employee information, instead of entering a department in a field every time or entering a department name incorrectly.
To set up a new constant with a text array:
AT_DEPARTMENT_LIST.
In the Value box, enter the department options. Separate each department by a line break, but do not include spaces, commas, or quotations:
Now that we have two constants, we only have one more object to create before we can create our reusable interface.
In this step, we will create an expression rule that queries the DSE to get employee information by the record ID. In the next few steps, we will use this expression rule to help us test our reusable interface, pull in employee data to work with the logical expressions in that interface, and pass in the correct employee data to the process models that we'll be creating.
To create our expression rule:
AT_getEmployeeById.
AT Rules and Constants.
id.
Copy and paste the following expression into the expression editor:
With our expression rule created, we will now make a reusable interface for our related action that we will also use later on in our list action. The reusable interface will show the same employee information as the summary view, but the fields will be editable.
Reusable interfaces allow us to use the same interface multiple times instead of having to create multiple similar interfaces. To make this interface functional for both the related and list actions, we will write two expressions to conditionally show the different form labels and conditionally make components editable depending on the action.
To make a reusable employee interface:
AT_createEditEmployee.
AT Rules and Constants.
Once the interface with the fields from your data type has been generated:
Department.
---Select a Department---.
Enter cons!AT_DEPARTMENT_LIST to call in the department constant.
Before we move on to configuring some logical expressions, we're going to add our expression rule to the interface as a test value.
We are using the record ID 1 to pass the record data into the interface so that we can see the fields populated with employee data.
rule!AT_getEmployeeById(1).
Now we are going to configure two expressions for our form label and the start date field so that this form will work with our related action and list action. We need to add an expression for our form label so that it will show "Update Employee Information" when managers are using the related action or "Add New Employee" when managers are adding an employee record.
If we are updating the employee's information as part of the related action, we will be acting on a record. This means that the values for the record fields and in our rule input will not be null. If we are adding a new employee, these fields will be null. We will use if() in this expression.
In the Form Layout:
Enter the following expression:
This expression simply says that if the rule input or rule input ID field is null, the label will read "Add New Employee". If the rule input or the ri!id field is not null, then the label will read "Update Employee Information".
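A minimal sketch of the kind of expression being described here, assuming the interface has an id rule input (adjust the rule input names to match your own interface):

if(
  isnull(ri!id),
  "Add New Employee",
  "Update Employee Information"
)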
We will do something similar for the Start Date field, but using not() and isnull(). If we are updating the employee information, we won't be changing the start date of the employee.
To configure the Start Date field:
Enter the following expression:
The not function returns either true or false. This means that if the rule input ID field is not null, the field is disabled. If the rule input ID field is null, the field is not disabled and you can edit it.
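For reference, a minimal sketch of a matching Disabled expression, again assuming an id rule input:

not(isnull(ri!id))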
Now that you have set up your reusable interface, we can move on to creating a process model to update the employee's information. We will be going through the steps to create a simple process model for our related action.
The process model will consist of the reusable interface as a start form, a cancel flow to end the process if the user cancels the action, and a write to data store entity smart service to save the updated employee information to our employee DSE. Then, we will add the process model as a related action to our record type and make it available from the summary view.
If you need more information or if you haven't created a process model before, check out the Process Model Tutorial or Appian Academy Online's course for help.
When we are finished, we will have a process model that looks like this:
To create a process model with a start form:
Add or Update Employee.
AT Process Models.
AT Administrators.
AT Managers.
Once inside the process modeler:
AT_createEditEmployee.
Creating a cancel flow is a best practice because cancel and submit buttons are configured in the same way. This means that without a cancel flow, the information in the form will be submitted and written to the DSE even if the user wanted to cancel the action.
To create a cancel flow:
Update Canceled.
Cancel Update?
If cancel is true, then we want the process to go to the "Update Canceled" node.
To add our data in the first few steps in this tutorial, we used the Write to Data Store Entity Smart Service as a function in an interface. Here, we are using the same Write to Data Store Entity smart service as a node in the process model to save our data to the DSE from the start form.
To add and configure a Write to Data Store Entity smart service node:
Write to Employee DSE.
employee.
pv!AT_employee.
Now that we have our process model almost complete, we are going to add activity chaining. Activity chaining allows the process to move quicker between nodes by chaining them together. We are going to add activity chaining between the start form and the Write to Data Store Entity node so that after updating the employee information on the summary view, the fields will be updated without having to refresh the page.
Activity chaining is usually used between multiple user input tasks and between user input tasks and write to data store entity nodes. Overusing activity chaining outside of these cases could slow down process performance.
To add activity chaining:
You are now ready to save, publish, and test your process model.
To save and test your process:
For more information on testing and debugging process models, see Testing and Debugging Problems with Process Models.
Now that we have the interface and process model working, we will return to the record type to set up the related action.
To add your process model as a related action:
Update Employee.
Add or Update Employee.
Enter rule!AT_getEmployeeById(ID: rv!identifier) in place of null. The cancel parameter should still be null.
Since we don't want everyone to be able to update the employee information, we are going to make sure that only those in the manager group are able to access this related action.
To set visibility for the related action:
a!isUserMemberOfGroup(loggedinuser(),cons!AT_MANAGERS_GROUP).
The combination of these two functions with the group constant checks to make sure that the logged in user that is trying to access the related action is a member of the managers group.
While we are here, we will change the icon to a pencil to differentiate multiple related actions. You can learn more about icons in the UX design guide.
Now we will add our related action to the summary view so that users can also update employee information if they need to while viewing the employee summary.
To add the related action to the summary view:
Now you can access your update employee related action right from the summary view!
When you are setting up record views and related actions on your own, you will need to consider what information your users need to see, what actions your users will need to take to update the data, and which users will be able to access the actions and record views.
In the final phase of this tutorial, we will create the record list. The record list displays all of the records, shows a few fields of key information about each record, and allows users to easily filter records. Users can also create new records directly from the record list with a list action.
We will be configuring the record list by selecting the columns of data that the users will want to see displayed in the list. We will end the phase by using the department constant to create a user filter for users to be able to filter the records in the list.
We are going to configure the fields displayed in the record list so that they reflect the data that we want users to see at a glance before they drill into each record.
When looking at a list of employees, we really only want to see their name, title, and department. To configure the record list to show only these columns, we are going to remove the ID, phone number, and start date columns, and combine the first and last name columns into one.
To edit the record list:
Next we will combine the first and last name columns into one Name column. We will also add sorting to allow the users to easily sort data in a column, and set the display value to let the column know which record field to show.
To create the Name column:
Change the column label from First Name to Name.
fv!row[recordType!Employee.fields.firstName] & " " & fv!row[recordType!Employee.fields.lastName].
With the record list columns configured, we will add a user filter so that users can easily filter employee records by department from the record list. We will be using the department constant along with a!forEach() to make creating and maintaining our user filter easier.
To add a new user filter:
Department.
By using the department constant, we don't have to manually update the filter if the department list is changed. We are using an a!forEach function here for the same reason. For more information, see Expression-based user filters.
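As a rough sketch of what such an expression-based filter can look like (the parameters shown follow the standard pattern and are not copied from this tutorial):

a!forEach(
  items: cons!AT_DEPARTMENT_LIST,
  expression: a!recordFilterListOption(
    id: fv!index,
    name: fv!item,
    filter: a!queryFilter(
      field: recordType!Employee.fields.department,
      operator: "=",
      value: fv!item
    )
  )
)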
You can also set up default filters in your record type so that users can see filtered record list results by default. For more information on configuring and using default filters, see default filters.
You have successfully completed all the steps to create a record type with a DSE as its source! The next step walks you through how to add a list action to your record type. Many record types use both data store entities and processes to get their data, but it's not required for all record types.
The last step of this tutorial is to create a list action. This step will show you how to add a process model to your record type so that users are able to write to the DSE to create new records. You will use the process model that we just created to update employee records as a process to add new employee records, as well. Though the record type already has a process for adding data to create records, this step creates an action for users to be able to add new records directly from the record list.
While some records use only data store entities or only process models as the source of their data, many developers use both for their applications. This step will show you how to connect a process model to your record type as a list action. List actions are used to create new records and are accessible from the record list.
We will use the process model that we just created for our related action as our list action. We can use this process model to add new employees because the reusable AT_createEditEmployee interface is our start form. The interface will evaluate whether or not there is an existing record and determine if the process needs to update a record or add a new one to the employee record type.
Let's add our reusable interface and process model as a list action.
To add a list action:
Add New Employee.
Add or Update Employee.
Enter a!isUserMemberOfGroup(loggedinuser(), cons!AT_MANAGERS_GROUP) to set the visibility to managers only.
Select the plus icon.
Remember that the list action button will appear on your record list page and not on the record views like a related action would.
You did it! You made it through all of the phases and steps to successfully create a fully functioning record type with a summary view, related action, user filter, and a list action. You are now ready to create record types that enable your own unique business data to do more all on your own!
The plugin will add a preview of the uploaded resume in the applicant detail page of WP Job Openings Plugin. You need not download the uploaded resume anymore! Powered by Google Docs Viewer.
The plugin will allow you to view the uploaded applicant resume from the admin panel of the WP Job Openings Plugin.
Requires:
WP Job Openings 1.0+
Key Features
Supports Microsoft Word (DOC,DOCX) and PDF files
Previews the document along with the application view.
Log in to your WordPress admin panel
Navigate to the Plugins menu and click Add New
In the search field type “Docs Viewer Add-On for WP Job Openings" and click search plugins
Once you find it you can install it by clicking Install Now
Upload docs-viewer-add-on-for-wp-job-openings.zip to the wp-content/plugins directory on your web server.
Activate the plugin from the plugin menu within the WordPress admin. | https://docs.wpjobopenings.com/other-add-ons/docs-viewer-add-on-for-wp-job-openings | 2021-09-16T18:35:11 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.wpjobopenings.com |
Create a user template
Click on the Users tab or icon, in the Users and groups section in FusionDirectory.
Click Actions –> Create –> Template
Depending on which other plugins you have installed, you can configure your template as you need. Below, you can find some examples.
You can find the documentation on how to create a macro here Macros.
User
The User tab is the base of your template; click on the User tab.
Generic User tab: this is the base tab to create a user template.
In this example we set the following macros to create a user :
- Login : %alps[1]|givenName%%alp|sn% meaning that the login will be the first letter of the first name in lowercase followed by the last name in lowercase
- Password : %r[12]|% meaning that the password will contain 12 random characters
Unix
When you are creating or editing your template, click on the Unix tab.
Then click on Add Unix settings. A new dialog opens.
Fill-in Unix settings
- Home directory : the path to the home directory of this user (required).
You can use a macro to automatically build the home directory path for users.
For example : /home/%uid%
Mail
When you are creating or editing your template, click on Mail tab
Then click on Add Mail settings. A new dialog is opened
Fill-in Mail account settings
- Primary address : primary mail address (required)
You can use macros to automatically build the name of the mail user account.
In this example we set macro %uid%@acme.com meaning that the mail account will be [email protected]
Click on the Ok button at the bottom right.
Now, on the main page, on your template line, you will see the mail icon.
Convert to Geometry¶
Reference
- Mode
Object Mode
In the 3D Viewport, sketches on the active layer can be converted to geometry, based on the current view settings, by transforming the points recorded when drawing (which make up the strokes) into 3D space. Currently, all points will be used, so it may be necessary to simplify or subdivide parts of the created geometry for standard use. Sketches can currently be converted into curves in Object Mode.
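For scripted workflows, the same conversion can also be run from Python. The snippet below is only an illustrative sketch: it assumes a Grease Pencil object with strokes is active, and the exact operator parameters can vary between Blender versions.

import bpy

# Convert the active Grease Pencil object's strokes to a Bezier curve,
# keeping the recorded drawing timing (see the Timing options below).
bpy.ops.gpencil.convert(
    type='CURVE',           # 'PATH', 'CURVE' or 'POLY'
    use_link_strokes=True,  # single spline from all strokes (Link Strokes)
    use_timing_data=True,   # build the Evaluate Time F-curve from the timing
)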
Options¶
- Type
The type of object to convert to.
- Path
Create NURBS 3D curves of order 2 (i.e. behaving like polylines).
- Bézier Curve
Create Bézier curves, with free “aligned” handles (i.e. also behaving like polylines).
- Polygon Curve
Bézier curve with straight line segments (auto handles).
Note
Converting to Mesh
If you want to convert your sketch to a mesh, simply choose NURBS first, and then convert the created curve to a mesh.
- Bevel Depth
The Bevel Depth to use for the converted curve object.
- Bevel Resolution
The Bevel Resolution to use for the converted curve object.
- Normalize Weight
Will scale weights value so that they fit into the (0.0 to 1.0) range.
- Radius Factor
Multiplier for the points’ radii (set from the stroke’s width).
- Link Strokes
Will create a single spline, i.e. curve element, from all strokes in active Grease Pencil layer. This is especially useful if you want to use the curve as a path. All the strokes are linked in the curve by “zero weights/radii” sections.
Timing¶
Grease Pencil stores “dynamic” data, i.e. how fast strokes are drawn. When converting to curve, this data can be used to create an Evaluate Time F-curve (in other words, a path animation), that can be used e.g. to control another object’s position along that curve (Follow Path constraint, or, through a driver, Curve modifier). So this allows you to reproduce your drawing movements.
Link Strokes has to be enabled for all timing options.
- Timing Mode
This control lets you choose how timing data is used.
- No Timing
Just create the curve, without any animation data (hence all following options will be hidden).
- Linear
The path animation will be a linear one.
- Original
The path animation will reflect to original timing, including for the “gaps” (i.e. time between strokes drawing).
- Custom Gaps
The path animation will reflect to original timing, but the “gaps” will get custom values. This is especially useful if you want to shorten large pauses between some strokes.
- Frame Range
The “length” of the created path animation, in frames. In other words, the highest value of Evaluation Time.
- Start Frame
The starting frame of the path animation.
- Realtime
When enabled, the path animation will last exactly the same duration it has taken you to draw the strokes.
- End Frame
When Realtime is disabled, this defines the end frame of the path animation. This means that the drawing timing will be adjusted to fit into the specified range.
- Gap Duration
Custom Gaps only. The average duration (in frames) of each gap between strokes. Please note that, the value will only be exact if Realtime is enabled, otherwise it will be scaled, exactly as the strokes’ timing is. | https://docs.blender.org/manual/en/latest/grease_pencil/modes/object/convert_to_geometry.html | 2021-09-16T19:49:07 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.blender.org |
What is paid for in my account? What are Slots?
Do I pay for email accounts or slots?
How is my billing calculated?
Is this pre-paid or post-paid?
Where can I find the details of my next payment?
Where will I find my invoice?
Can I pause my account for time being?
Why isn't my invoice in English? »
What is paid for in my account? What are Slots?
Our pricing is based on Slots and Add-ons.
Slots are placeholders for connecting your email account, which you will use for sending your campaigns.
To have an active Woodpecker account you’ll need to pay for at least one slot.
Every additional one will increase your bill. You can have multiple slots active on your account.
Please remember that you’ll be charged for those to which email accounts are connected as well as those which are empty..
Remember, that you’re buying add-ons for all of your slots. If you're an Agency user, you can only buy add-ons from your main account and they will be applied to all the slots under your Agency’s one and its companies.
Do I pay for email accounts or slots?
Your payments will be based on the number of slots and add-ons on your account. You can have as many active email accounts to Woodpecker as many slots you have. If you want to use them for sending, remember to turn them on and connect them to slots.
Every slot equals one email address. You can have empty slots, however, you can not have more active email addresses than slots.
Woodpecker will only download incoming messages for active email addresses, which are currently connected to slots.
How is my billing calculated?
Your payment will be based on the number of Slots you have and the number of Add-ons which you bought from Marketplace.
The payment remains the same every billing cycle if you make no changes to your account (i.e. add or remove Slots or Add-ons).
Is this pre-paid or post-paid?
Every payment is prepaid, which means that if you add a new slot in the middle of your billing cycle you will pay a prorated amount for the time remaining in the current billing cycle for it.
If you delete add-ons or slots, your next payment will be reduced, since you've already pre-paid for them. The reduction makes up for the paid time you will not use them, and the reduced amount is proportional to the time left in your billing cycle without these slots or add-ons.
After this cost equalization, you'll have a constant monthly price set up. Discounts cannot be converted into refunds.
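As a rough illustration of how this proration works (an example with made-up numbers, not Woodpecker's actual billing code):

def prorated_charge(slot_price, days_left_in_cycle, days_in_cycle):
    # Adding a slot mid-cycle: you pre-pay only for the remaining days.
    return slot_price * days_left_in_cycle / days_in_cycle

# e.g. a 40 USD slot added halfway through a 30-day cycle
print(prorated_charge(40, 15, 30))  # 20.0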
Where can I find the details of my next payment?
Just navigate to Settings → Billing → Summary, and click to see the details.
How do I remove Slots?
Before removing a slot you need to empty it, which means deactivating the email address which is currently connected to this slot. You can only delete empty slots. Managing your email account »
If you remove a slot in the middle of your billing cycle your next payment will be reduced by the prorated amount left till the next cycle. Woodpecker premium account comes with at least one active slot which can not be removed.
If you’re planning on taking a break from cold emailing, remember that you can pause your account » for the time you’re not planning to run your campaigns in order to keep all your data and access to your account. This option will lower your payment to $10 per month for the whole account.
However, in case you wish to stop using Woodpecker and have your subscription canceled, see Deleting the account »
Can I pause my account for time being?
Yes, in order to keep your profile and all data for later use, you can have your account paused.
How to pause account: Go to your Billing → Billing & payment info → scroll down for 'DELETE ACCOUNT' → click 'Pause or cancel' → click Pause account $10/mo
Alternatively, contact support specifying your Woodpecker login.
Details:.
Pricing: When your account is paused, the monthly fee is reduced to $10 per account (regardless of the number of slots or add-ons you have).
How do I remove add-ons?
To remove any add-on, please go to its page and click Remove this add-on at the bottom of its description.
If you remove an add-on in the middle of your billing cycle your next payment will be reduced by the prorated amount left till the next cycle.
What is a Marketplace?
Marketplace is a place in the app where you can manage your purchasing process. You can manage all your slots, add-ons, and integrations there. Go there any time you want to add or remove slots or turn add-ons on or off.
Feel free to check out available integrations under Marketplace and if you decide to connect Woodpecker with another tool, go to the Add-ons tab and purchase API Key to do so.
If anything is still unclear feel free to reach out to us at [email protected] | https://docs.woodpecker.co/en/articles/5234070-billing-faq | 2021-09-16T19:06:03 | CC-MAIN-2021-39 | 1631780053717.37 | [...] | docs.woodpecker.co
This operation opens the messaging screen for the corresponding contact.
Show Record Messages operation at Property Panel
Properties
Description: A description related to the action. When you hover over the action with the mouse, the description info is displayed.
User: The user is specified.
To use this operation, the Chat Actions property in the User Options menu of the Client main page must be adjusted. Authorization from the Xpoda manager is required for these transactions to be active.
A button in the form has a Show Record Messages operation with a When clicked event. It depends on a list of users, which is chosen from the Linked List property. Thus, when the button is clicked, the record messages are opened.
| https://docs.xpoda.com/hc/en-us/articles/360011682939-Show-Record-Messages | 2021-09-16T18:40:13 | CC-MAIN-2021-39 | 1631780053717.37 | [...] | docs.xpoda.com
Table of Contents
Table of Contents
Pages by tag: mission portal
- Alerts and Notifications
- Best Practices
- Custom actions for Alerts
- Debugging Mission Portal
- Debugging Slow Queries
- Decommissioning hosts
- Enterprise Reporting
- Extending Mission Portal
- Extending Query Builder in Mission Portal
- Hosts and Health
- Measurements app
- Reporting UI
- Settings
- Unable to log into Mission Portal
- User Interface | https://docs.cfengine.com/docs/3.15/tags-mission-portal.html | 2021-09-16T19:34:18 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.cfengine.com |
You can start a collaborative session at any time by clicking the Collaborate icon at the top of the screen.
After starting a collaborative session, your url will change to a link you can share with anyone to join your session. Whenever a sample is saved, the dataset will update for everyone in the session and a notification will appear at the bottom right indicating a collaborative sample was saved. | https://docs.universaldatatool.com/collaborative-labeling | 2021-09-16T18:51:17 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.universaldatatool.com |
This is the documentation of the development version, check the Stable Version documentation.
JSON Web Token (JWT) is structured by RFC7515: JSON Web Signature or RFC7516: JSON Web Encryption with certain payload claims. The JWT implementation in Authlib has all built-in algorithms via RFC7518: JSON Web Algorithms, it can also load private/public keys of RFC7517: JSON Web Key:
>>> from authlib.jose import jwt
>>> header = {'alg': 'RS256'}
>>> payload = {'iss': 'Authlib', 'sub': '123', ...}
>>> key = read_file('private.pem')
>>> s = jwt.encode(header, payload, key)
>>> claims = jwt.decode(s, read_file('public.pem'))
>>> print(claims)
{'iss': 'Authlib', 'sub': '123', ...}
>>> print(claims.header)
{'alg': 'RS256', 'typ': 'JWT'}
>>> claims.validate()
The imported jwt is an instance of JsonWebToken. It has all supported JWS algorithms, and it can handle JWK automatically. When JsonWebToken.encode() encodes a payload, JWT will check the payload claims for security; if you really want to expose them, you can always turn this check off via check=False.
Important
JWT payload with JWS is not encrypted, it is just signed. Anyone can extract the payload without any private or public keys. Adding sensitive data like passwords, social security numbers in JWT payload is not safe if you are going to send them in a non-secure connection.
You can also use JWT with JWE which is encrypted. But this feature is not mature, documentation is not provided yet.
jwt.encode is the method to create a JSON Web Token string. It encodes the payload with the given alg in header:
>>> from authlib.jose import jwt
>>> header = {'alg': 'RS256'}
>>> payload = {'iss': 'Authlib', 'sub': '123', ...}
>>> key = read_file('private.pem')
>>> s = jwt.encode(header, payload, key)
The available keys in headers are defined by RFC7515: JSON Web Signature.
jwt.decode is the method to translate a JSON Web Token string into the dict of the payload:
>>> from authlib.jose import jwt
>>> claims = jwt.decode(s, read_file('public.pem'))
The returned value is a JWTClaims; check the next section on how to validate claims values.
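For example, the returned claims object can be read like a dict and then validated (a minimal illustration reusing the values from the Quick Start above; validation failures are raised as JoseError subclasses):

>>> claims = jwt.decode(s, read_file('public.pem'))
>>> claims['iss']
'Authlib'
>>> claims.validate()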
There are cases that we don’t want to support all the
alg values,
especially when decoding a token. In this case, we can pass a list
of supported
alg into
JsonWebToken:
>>> from authlib.jose import JsonWebToken
>>> jwt = JsonWebToken(['RS256'])
JsonWebToken.decode() accepts 3 claims-related parameters: claims_cls, claims_option and claims_params. The default claims_cls is JWTClaims. The decode method returns:
>>> JWTClaims(payload, header, options=claims_options, params=claims_params)
Claims validation is actually handled by JWTClaims.validate(), which validates payload claims with claims_option and claims_params. For standard JWTClaims, claims_params value is not used, but it is used in IDToken.
Here is an example of claims_option:
{ "iss": { "essential": True, "values": ["", ""] }, "sub": { "essential": True "value": "248289761001" }, "jti": { "validate": validate_jti } }
It is a dict configuration, the option key is the name of a claim. | https://docs.authlib.org/en/latest/jose/jwt.html | 2021-09-16T18:37:27 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.authlib.org |
Reporting zone information
You can access legacy reports from within Access Manager by selecting Access Manager, right-clicking, then selecting Report Center. You can also use command-line programs, PowerShell scripts, or ADEdit scripts to report zone information. In most cases, however, you should install and configure Report Services to generate and access reports about the Active Directory domain and your zones.
For details about installing and configuring Report Services, and how to customize and access the reports that generated, see the Report Administrator’s Guide. | https://docs.centrify.com/Content/auth-admin-unix/ZonesInformation.htm | 2021-09-16T18:46:11 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.centrify.com |
RecentLabelItem Class
A label item within the RecentItemControl.
Namespace: DevExpress.XtraBars.Ribbon
Assembly: DevExpress.XtraBars.v21.1.dll
Declaration
public class RecentLabelItem : RecentTextGlyphItemBase
Public Class RecentLabelItem Inherits RecentTextGlyphItemBase
Remarks
Objects of the RecentLabelItem class can serve as both static elements that display a caption and image, and interactive elements that can be selected and support hover and pressed visual states. This behavior depends on the RecentLabelItem.AllowSelect property value. In the figure below, the ‘Documents’ element is a label that supports selection.
Labels provide multiple pre-defined styles than can be selected by using the RecentLabelItem.Style property.
See the Recent Item Control topic for more info.
Inheritance
See Also
Feedback | https://docs.devexpress.com/WindowsForms/DevExpress.XtraBars.Ribbon.RecentLabelItem | 2021-09-16T19:50:44 | CC-MAIN-2021-39 | 1631780053717.37 | [...] | docs.devexpress.com
libroomba (community library)
Summary
Example Build Testing
Device OS Version:
This table is generated from an automated build. Success only indicates that the code compiled successfully.
Library Read Me
This content is provided by the library maintainer and has not been validated or approved.
Particle Photon library for communicating with iRobot Roomba vacuums.
This library has been tested on the following devices:
- Roomba 530
You will need to create a cable to connect your Photon to the Roomba's serial port.
Usage
Connect XYZ hardware, add the libroomba library to your project and follow this simple example:
#include "libroomba.h" LibRoomba roomba; void setup() { // Enable the USB serial port for debug messages. Serial.begin(115200); Serial.println("Starting Roomba!"); // Setup the Roomba. roomba.setDebug(true); roomba.begin(D6); // Enable user control. roomba.start(); roomba.setControl(); roomba.setSafe(); // Play a song to indicate setup is complete. roomba.writeSong(0, 8, 60, 32, 62, 32, 64, 32, 65, 32, 67, 32, 69, 32, 71, 32, 72, 32); roomba.playSong(0); } void loop() { roomba.updateSensors(); roomba.debugSensors(); delay(1000); }
See the examples folder for more details.
Contributing
Fork this library on GitHub Desktop IDE.
After your changes are done you can upload them with
particle library upload or
Upload command in the IDE. This will create a private (only visible by you) library that you can use in other projects. Do
particle library add libroomba_myname to add the library to a project on your machine or add the libroomb | https://docs.particle.io/cards/libraries/l/libroomba/ | 2021-09-16T19:26:27 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.particle.io |
Connect with Tableau
You can use Tableau to connect to a SingleStore cluster for exploring your data.
This guide provides guidelines and best practices for developing business Tableau dashboards using SingleStore data and covers the following:
Installing Tableau Desktop
To develop business Tableau dashboards using SingleStore DB data, you first need to install Tableau Desktop.
If you do not have Tableau Desktop already installed on your PC or Mac, you can download a 14-day trial version from the Tableau Download site.
(Optional) Install Tableau Reader
You can use Tableau Reader to package a dashboard, along with the supporting data, and share it with the recipients, even if they do not have Tableau Desktop (trial or licensed version) installed.
Tableau Reader is available as a free download.
Connecting Tableau to SingleStore
After installing Tableau Desktop, connect it to SingleStore through the following steps:
Download and install MySQL ODBC connector.
Configure MySQL ODBC connector.
Connect Tableau to SingleStore DB.
Download and Install ODBC Connector for MySQL
Navigate to the MYSQL ODBC Connector site.
Select the Product Version as 8.0 or higher, and then choose your Operating System and OS Version from the drop-down lists. Newer 8.x versions of the ODBC connector may require additional configuration changes when connecting to a SingleStore cluster. See Connect with Application Development Tools for more information.
Pick the appropriate installer option for your system and then click
Download. After the download finishes, install the driver.
Configure MySQL ODBC Connector
You need an existing ODBC connection before connecting Tableau with SingleStore.
If you are using Windows, open the search bar and type
ODBC.
Click Set up data sources (ODBC).
Configure a new ODBC data source for SingleStore.
Specify the TCP/IP Server, and the Port as 3306.
Specify the User, Password, and Database.
Click Test to test the connectivity and click OK.
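If you want to sanity-check the same connection settings outside the ODBC dialog, a short script can be used as well. The sketch below uses the mysql-connector-python package; the host, user, password, and database values are placeholders:

import mysql.connector

conn = mysql.connector.connect(
    host="your-cluster-host-or-admin-endpoint",  # placeholder
    port=3306,
    user="admin",       # placeholder
    password="secret",  # placeholder
    database="mydb",    # placeholder
)
cur = conn.cursor()
cur.execute("SELECT 1")
print(cur.fetchone())  # (1,) confirms the connection works
conn.close()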
Connect Tableau to SingleStore
Open Tableau Desktop and navigate to
Connect > To a Server > More… > SingleStore.
In the connection dialog, enter the Server, Username, and Password. For Server, enter the Admin endpoint (SingleStore Managed Service) or IP Address (SingleStore) for your cluster.
You can find the Admin endpoint on the SQL IDE tab when you click Connect in the SingleStore Portal.
Select the Require SSL check box if the SingleStore cluster has been configured for secure client connections.
Setting Up SingleStore Data Source in Tableau
On the data source page, perform the following steps:
The data source name defaults to the server value you entered previously. Enter a unique data source name to be used in Tableau. For example, use a data source naming convention that helps other users of the data source identify which data source to connect to.
On the left pane, navigate to the Database drop-down list and select a database or search by database name.
Under Table, select a table or search by table name.
Drag a table to the canvas, and then select the sheet tab to start your analysis.
You can also perform different joins, table appends, filters in the data source along with setting up live connections and extracts. More information is available on the Tableau Online Help.
After the data source set up, you are ready to visualize and analyze data.
Tableau Best Practices
Follow these strategies to build efficient Tableau dashboards.
Data Strategies
Keep simple data sources: The performance of visualization depends on the underlying data sources. To improve performance, extract only the data that is needed for the worksheet to perform its analysis.
Extract only the required data: Minimize joined tables. If the analysis requires data from joined tables, then edit the connection to remove unused data.
Execute data source filters before executing traditional filters for context filtering so that the extract is smaller and takes less time to refresh. Although context filtering creates a flat table with initial performance issues, performance improves for subsequent views and filters.
Use extracts filtered by context filters so that it contains only the data that is needed. Extracts are stored in an internal structure, which is easier for Tableau to query and access. Also, calculated fields are saved as actual data, saving further computation time. One of the drawback of using extracts is that the data is not real-time and a scheduled task is required to refresh the data.
Hide unused columns: Hiding unused columns (dimensions/measures) minimizes extract refresh time or custom SQL query time. You can hide fields in the data window or data source, or allow Tableau to hide all unused fields before creating the extract in the Extract Data box.
Use Extracts: Tableau uses different techniques to optimize the extract. You can also improve visualization by aggregating the data for a visible dimension, known as an aggregated extract. When users interact with an aggregated extract, all calculations and summations have already been compiled.
Extracts can be filtered with data source filters, which can help to control the size of the extract in two different ways, depending on when the filter is applied:
If a data source filter is in place prior to extract creation, the extract will contain filtered records.
If a data source filter is put in place after extract creation, the filter will be applied against the full extracted data set. So, your extract will contain all the data but will only show what the data source filter is allowing.
Filtering
Minimize quick filters: Quick filters require Tableau to run a query against the database to determine the values to display for the selected dimension. Therefore, the quick filters that do not require querying the database for values are custom value list, wildcard match, relative date filters, and the browse period date filter.
It is recommended that you avoid quick filters that require knowledge of the values in the database such as multiple value list, single value list, compact list, slider, measure filters, and ranged date filters.
Use “All values in database” option in quick filters: Use the default option “All values in database” for a quick filter and avoid using “Only relevant values” option. The default option makes all values in the database for that particular field available for user selection. In contrast, “Only relevant values” compares the values returned from the database with those in other quick filters to show only the values that apply, given the choice made on the other filters. This behavior can bring performance issues, especially if the dashboard contains more than two quick filters.
Avoid quick filters or actions that generate context filters: Context filters create a context TEMP table with the values that go through the filter. All other filters access this TEMP table to draw their values from the limited set of data. This improves the performance of the dashboard; however, if the context filter does not trim down the data to a more manageable set in the new context table then it may cause a performance issue with the visualization.
If you use a context filter, make the TEMP table as small as possible. For example, eliminate columns that are not needed for that particular visualization to reduce the size of the data set at least to one tenth of the original size. Also, the context filter should be used against slow-changing values or dimensions only.
Keep range quick filters simple: To display results for across large separated periods of time, use a visualization rather than a quick filter. This is applicable for the following date filters:
Relative date filters, which are used for a date range that is relative to a specific date.
Range of date filters, which are used for a defined range of discrete dates.
Discrete date filters, which are used for the individual dates selected from a list. It is recommended to avoid discrete date filters.
Replace quick filters with action filters: Instead of using multiple quick filters, “Only Relevant Values” option, or quick filters with too many values, use actions filters as they do not require Tableau to run additional queries. These filters work on the users actions, such as clicking on a mark. Action filters can also operate as cascading filters in a filter hierarchy, where values are filtered out as they traverse through the hierarchy.
Avoid using action filters from several sheets for a single dashboard layout as quick filters: Creating visualizations in different worksheets to use them later as action filters in a single dashboard generates extra load every time the dashboard is loaded, as the visualizations are refreshed with every action from the user. When this situation arises, quick filters may be a better solution as they are only loaded once when the dashboard is loaded, and then the filter is applied across all the sheets simultaneously.
SQL Code
Limit the use of custom SQL code in live connections: SQL connections are issued to the database inside a subquery, which can include other clauses from Tableau like GROUP BY, ORDER BY, WHERE and more. Even with efficient SQL code, the extra clauses issued to the database can slow down performance. Use custom SQL only if Tableau cannot generate the desired outcome.
If a SQL command is necessary, then create a view inside the database and connect to it from Tableau. If it is not possible to create a new view, build a data extract with the SQL code. It will run only once when the extract is built or refreshed, minimizing the effect on visualization performance.
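For example, the custom SQL can be created once as a view on the SingleStore side, and Tableau can then connect to that view like any other table. This is only a sketch; the table, column, and connection details are placeholders:

import mysql.connector

conn = mysql.connector.connect(host="your-cluster-host", port=3306,
                               user="admin", password="secret",
                               database="mydb")  # placeholders
cur = conn.cursor()
cur.execute("""
    CREATE VIEW sales_summary AS
    SELECT region, SUM(amount) AS total_amount
    FROM sales
    GROUP BY region
""")
conn.close()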
Remove extra clauses: Effective SQL code provides Tableau the required data to produce the desired outcome. Remove extra clauses in the SQL code for Tableau to organize and visualize data effectively.
Calculation
Consider data types for faster calculation: Tableau provides a massive list of functions, divided into different categories that assist in creating calculated fields. The performance impact from different data types may be unnoticeable on smaller data sets, but these differences are more pronounced as the number of records increase. In general, the fastest calculations involve Boolean or number data types, followed by dates, and finally string calculations. It is important to consider ways to achieve same calculated results using faster data types.
Avoid blended calculations: Blended calculations occur when you have to query different data sources to obtain a single calculated field in your visualization. In this case, Tableau needs to query each data source separately to retrieve the values. This can affect performance, especially in large data set. An alternative is to prepare a new view on the data layer on the database server to keep data processing outside Tableau.
Avoid row-level calculations involving parameters: Row-level or record-level calculations operate on every record in the underlying data. Every row calculation consumes time, but when the parameter contains a significant variety of values, for example a table as parameter, it increases time processing exponentially.
Rendering
Avoid high mark counts: Marks are the points, plots, or symbols on the visualizations. Each mark must be created and positioned before the report can be rendered. Use the capabilities of Tableau to drill down or interact with visualizations to obtain better results.
Minimize the file size of images or custom shapes: Big images or shapes result in slow loading and exporting process. Keep images below 50Kb, 32x32 pixels in dimensions, and use efficient image compression formats to reduce load time.
Implementing TDC
Tableau Datasource Customization (TDC) helps optimize the interaction of the Tableau dashboard with a SingleStore DB cluster. Since Tableau is designed to create TEMP tables, it can sometimes result in performance issues. TDC can stop creation of these tables, along with other configuration settings.
Setting up TDC File
Before you create a TDC file, ensure that you have the Tableau version and the driver name. The following are few sample codes that can be used to set up TDC files for SingleStore DB.
Sample 1
<?xml version='1.0' encoding='utf-8' ?>
<connection-customization class='memsql' enabled='true' version='10.0'>
    <vendor name='memsql'/>
    <driver name='memsql'/>
    ...
</connection-customization>
Sample 2
<?xml version='1.0' encoding='utf-8' ?>
<connection-customization class='mysql' version='2019.1' enabled='true'>
    <vendor name='mysql' />
    <driver name='mysql' />
    ...
</connection-customization>
Copy the code into a text editor and save the file with a .tdc file extension. On a Windows machine, place the .tdc file in the folder C:\Users\<user name>\Documents\My Tableau Repository\Datasources. Make sure there is only one .tdc file in this location.
Notice
Tableau will not create or customize a connector to work with a specific ODBC driver.
For more information on TDC and customization options, see the documentation on ODBC capabilities and customization.
TDC Configuration Properties
As per the need and capability of the data source, configure the following properties for your TDC file: | https://docs.singlestore.com/db/v7.5/en/query-data/connect-with-analytics-and-bi-tools/connect-with-tableau/connect-with-tableau.html | 2021-09-16T19:29:19 | CC-MAIN-2021-39 | 1631780053717.37 | [...] | docs.singlestore.com
Skuid Platform: Quickstart¶
Register for Skuid Platform¶
Go to to register your Skuid site.
Fill out the registration form:
First Name: First name of the user.
Last Name: Last name of the user.
Email Address: Must be an email account you have access to. If not, then you will not be able to access your verification link!
Company Name: A name you will use to identify the site. This is not used to create the URL or domain for your site, so you can feel free to be more descriptive or creative with your site name.
For example: Glacier Ice Cubes, Inc.
User Name: A name (does NOT have to be based on an email address format) that your users will remember and that meets your organization’s standards for a username.
Subdomain: A name that will be used to create a URL for your site. Choose something URL friendly! For example, if you choose ‘glacier’ for your subdomain, then the URL for your site would look like this:
Region: Select the region you where you want to create your Skuid site. For more information, see What are regions, and what do they mean for my Skuid site?
Click NEXT.
You’ll be redirected to a page that will prompt you to check the email address you entered into the registration form.
- Go check your inbox.
- Click on the verification link in your email inbox. The link will take you to the form where you will set your password and a security question/answer.
- Click NEXT when you’ve completed the form.
- You will be redirected to your site!
Log in to Skuid¶
To log in to your Skuid Platform site, navigate to your site’s URL, which will be a combination of your subdomain with -us-trial.skuidsite.com appended to it.
- For example, if you registered your subdomain as glacier, then your login link would be https://glacier-us-trial.skuidsite.com.
Skuid Platform FAQ¶
General Questions¶
What are regions, and what do they mean for my Skuid site?¶
When you create a Skuid site, you can select the region in which the site will be hosted. Choosing the correct region can help reduce data latency―the time it takes for requested data to be returned―and decrease page load time.
Additionally, some companies (as well as some countries) have specific regulatory requirements about data storage and security; selecting the correct region for your site makes it easier adhere to these regulations.
How it works
When you create a Skuid site by selecting a subdomain and a region, Skuid creates the site only in the specified region. The site is not replicated in other regions. The Skuid site only exists on the specified region’s servers and will not be automatically deployed to other sites that you create in other regions.
One site cannot be served on more than one region, and sites cannot be moved from one region to another.
Best practices
- When possible, choose the region closest to where your data source is hosted, even if the bulk of your end users will be in another region.
- For example, if 30% of your users are in the US and 70% are in EMEA, it might seem logical to host your Skuid site from the European region. But if your data source is hosted in the US, choosing the US region will reduce overall data latency.
- However, you may need to balance page load needs with security or regulatory requirements.
Note
If the current regional options do not meet your location requirements, please talk to your Skuid account representative.
How is security enforced on Skuid Platform?¶
Security in Skuid Platform is enforced at a per-user level. There are “Profiles” that can be given to an end user to limit / extend permissions to different functionality in Skuid Platform, such as access to data sources, apps, and more.
How does Skuid Platform allow users to access their data from other external systems?¶
Skuid Platform works with multiple external systems—Microsoft Dynamics, Salesforce, Google Calendar, Google Drive, Slack, Outlook, SAP, and others—as well as systems accessible through OData and REST APIs. All Skuid Platform data source types are read/write enabled.
Can I import themes created in Skuid on Salesforce into Skuid Platform?¶
Yes, you can import themes created in Skuid on Salesforce to Skuid Platform but cannot yet bulk-import pages or other asset types.
Can I use “static resources?”¶
Skuid Platform allows users to upload static file content (such as images, JavaScript or CSS resources, etc.) in the Files tab in the Site navigation. Files can be uploaded by clicking Upload new File.
Within the App Composer, uploaded files can be selected as an image source for Image and Wrapper components, as well as a source for JavaScript and CSS resources. Internally, the Skuid Platform stores themes and component packs within Files.
Deployment¶
Can I assign certain pages to specific profiles and end users in Skuid Platform? Does Skuid Platform recognize existing assignments from other systems?¶
While Skuid works with multiple external systems, there may be some features supported by the originating system that are not duplicated in Skuid Platform, but there are corresponding Skuid features that address the same issues.
For example, Skuid Platform doesn’t use Salesforce features such as page assignments, overrides, Salesforce apps, tabs, or communities. These features are replaced by apps and routes. Within Skuid Platform, an app is a collection of URL “routes” that allow end users to view different Skuid pages.
Data access permissions, however, are preserved on Skuid Platform. End users cannot circumvent data model permissions using Skuid.
Support/Troubleshooting¶
What kind of support tools are available on Skuid Platform?¶
Currently, there are no support tools (such as “Login as”) available, but we plan to provide that in a future release.
I cannot access data from a Salesforce org within my Skuid site¶
If you are an international user, it may be necessary to relax the Salesforce Connected App’s IP Restrictions within its OAuth settings. To do so:
- In the Salesforce sidebar, navigate to Build > Create > Apps.
- Click Manage beside the Connected App used for the data source.
- Click Edit Policies.
- Beside IP Relaxtion, select Relax IP restrictions with second factor or Relax IP restrictions.
- The Salesforce data source type in Skuid Platform will not work for source orgs with versions of Skuid lower than 8.15.4. Consider updating the version of Skuid within your Salesforce org.
Known Limitations¶
This release includes the following incomplete features, which will be resolved in future Skuid releases.
- New Page templates: Skuid Platform does not currently support creating a new page from a page template.
- Per-Profile and Per-User credentials: Used for Basic HTTP Authentication in Skuid data sources, this credential source option will be available in a future Skuid Platform update.
- The Shared by all Site Users and Per User, with optional shared Site-wide Defaults credential source options are available and work as intended.
Questions?¶
Visit community.skuid.com to ask questions, share ideas, and report problems. | https://docs.skuid.com/v12.0.8/v1/en/skuid-platform/index.html | 2021-09-16T18:33:33 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.skuid.com |
Using TIPBOT
Registering your Wallet
Go to the
#bots channel in the Discord server, and type
.registerwallet TRTL..., and replace
TRTL... with your wallet address.
For example, you would type-
.registerwallet TRTLv3pFrFm2yk4cYNtKf5fxV1b594tNrZfEV2CYWJsTSqr9BWoWMrUNpQaeD9StrzQrxpRQKPCdd1FfvT6D6dAg4pY6iB7sqs
Depositing Turtle
After your wallet address has been registered, type
.deposit in the
#bots channel, then:
- Check for a new direct message from TIPBOT
- Copy the line of code he gives (excluding the
Integrated Address:) and enter that as the address of the recipient
No Payment ID!
zedwallet
Follow the steps given here and replace the values of the address with the one provided.
- See Expected Results section below
WalletShell
Guide coming soon!
- See Expected Results section below
PLEASE ENTER YOUR OWN VALUES WHICH THE BOT SENDS YOU!
Expected Results
When the bot receives the payment, it will send you a PM letting you know. Now you can tip people!
Checking your Balance
Before you can tip, you need to know your balance. Your balance is the amount of TRTL you have in your tipjar wallet to tip to others.
To check your balance, type
.balance. TIPBOT will PM you with how much balance you have remaining in your tipjar wallet.
If it shows
0.00, then make sure you have deposited some TRTL and it has been received
Tipping People
The syntax for tipping someone is-
.tip 12345 @person
Adding a Message when Tipping
- Trying to add a message before it will not work.
For example,
hey .tip 1 @RockSteady#7588
will not send RockSteady 1 TRTL.
- Trying to add it on a separate,
.tip 1 @RockSteady#7588 hey
will send RockSteady 1 TRTL.
Tipping with Emojis
Reacting to a message with the plain emoji will not tip the original poster of the message 99 TRTL.
You can react with the designated tip emoji, however, to tip the person 99 TRTL.
Tipping Multiple People
Where Do These Tips Go?
If the person you tip has not registered their wallet, TIPBOT will react with :sos: and PM them with instructions on how to register their wallet and tip.
Other Commands
TIPBOT isn't just a tip bot, it's so much more! Here's a table of its other commands, what each of them does, and how to use them (for those which aren't explained above).
That's it! Enjoy tipping and getting tipped :) | https://docs.turtlecoin.lol/guides/using-tipbot | 2021-09-16T18:14:51 | CC-MAIN-2021-39 | 1631780053717.37 | [...] | docs.turtlecoin.lol
How to enable Catalog Mode
Go to Advanced → Catalog Mode to enable Catalog mode
TIP: Adding an Enquiry form to the product page
- First, you need to create a form with Ninja Forms (Forms in wp-admin)
- Then go to Advanced → Catalog Mode
- Add this code to "Add to cart replacement"-field
[accordion] [accordion-item title="Send us an enquiry"] [ninja_forms_display_form id=4] [/accordion-item] [/accordion]
TIP: Adding search or social icons to the header:
Go to Advanced → Catalog Mode and look for "Cart / Account replacement"
To get a search field enter this:
[search]
To get social icons enter this:
[follow twitter="" facebook="" email="[email protected]" pinterest=""]
CPU virtualization emphasizes performance and runs directly on the processor whenever possible. The underlying physical resources are used whenever possible and the virtualization layer runs instructions only as needed to make virtual machines operate as if they were running directly on a physical machine.
CPU virtualization is not the same thing as emulation. ESXi does not use emulation to run virtual CPUs. With emulation, all operations are run in software by an emulator. A software emulator allows programs to run on a computer system other than the one for which they were originally written. The emulator does this by emulating, or reproducing, the original computer’s behavior by accepting the same data or inputs and achieving the same results. Emulation provides portability and runs software designed for one platform across several platforms.
When CPU resources are overcommitted, the ESXi host time-slices the physical processors across all virtual machines so each virtual machine runs as if it has its specified number of virtual processors. When an ESXi host runs multiple virtual machines, it allocates to each virtual machine a share of the physical resources. With the default resource allocation settings, all virtual machines associated with the same host receive an equal share of CPU per virtual CPU. This means that a single-processor virtual machine is assigned only half of the resources of a dual-processor virtual machine.
Nontechnical Overview¶
This document provides a high level, nontechnical overview of Firefox’s address bar, with a focus on the different types of results it shows and how it chooses them.
Table of Contents
- Nontechnical Overview
- Terminology
- Maximum Result Count
- Search Strings
- Top Sites
- Searches
- The Heuristic Result
- Result Composition
- Result Composition Nuances
- Other Result Types
- Search Mode
Terminology¶
This document uses a small number of terms of art that would be helpful to understand up front.
- Input
The text box component of the address bar. In contrast, we use “address bar” to refer to the whole system comprising the input, the view, and the logic that determines the results that are shown in the view based on the text in the input.
- Result
An individual item that is shown in the view. There are many different types of results, including bookmarks, history, open tabs, and search suggestions.
- View
The panel that opens below the input when the input is focused. It contains the results.
Maximum Result Count¶
The view shows a maximum of 10 results by default. This number is controlled by a hidden preference, browser.urlbar.maxRichResults.
Search Strings¶
If the user has not modified the text in the input or the text in the input is empty, we say that the user’s search string is empty, or in other words, there is no search string. In contrast, when the user has modified the text in the input and the text is non-empty, then the search string is that non-empty text.
The distinction between empty and non-empty search strings is helpful to understand for the following sections.
Top Sites¶
When the search string is empty and the user focuses the input, the view opens and shows the user’s top sites. They are the same top sites that appear on the new-tab page except their number is capped to the maximum number of address bar results (10). If the user has fewer top sites than the maximum number of results (as is the case in a new profile), then only that number of results is shown.
This behavior can be turned off by going to about:preferences#privacy and unchecking “Shortcuts” in the “Address Bar” section. In that case, the view closes when the search string is empty.
Searches¶
When the search string is non-empty, the address bar performs a search and displays the matching results in the view. Multiple separate searches of different sources are actually performed, and the results from each source are combined, sorted, and capped to the maximum result count to display the final list of results. In address bar terminology, each source is called a provider.
Each provider produces one or more types of results based on the search string. The most common result types include search suggestions, bookmarks, history, open tabs, and remote tabs (the list is not exhaustive).
How the address bar combines and sorts results from different providers is discussed below in Result Composition.
The Heuristic Result¶
The first result in the view is special and is called the heuristic result. As the user types each character in their search string, the heuristic result is updated and automatically selected, and its purpose is to show the user what will happen when they press the enter key without first selecting a (non-heuristic) result. The heuristic result is so called because it shows Firefox’s best guess for what the user is trying to do based on their search string.
The heuristic result is determined by running through a number of different heuristics and picking the one that first matches the search string. The most important heuristics, in the order that Firefox runs through them, are listed below; a rough sketch of the fallback order follows the list.
Is the search string…
An omnibox extension keyword? Extensions using the omnibox API can register keywords by which they become activated.
A bookmark keyword? The user can associate a keyword with each bookmark. Typing a bookmark keyword plus an optional search string and pressing enter will visit the bookmark.
A domain name or URL that should be autofilled? Autofill is the name of the feature where the input completes the domain names and URLs of bookmarks and frequently visited sites as the user is typing them. (Firefox autofills “to the next slash”, meaning it first autofills domain names and then partial paths.)
A valid URL? If so, visit the URL. (This includes fixing common typos like “mozilla..org” and “mozilla.ogr”. Valid URLs are based on the Public Suffix List. The user can also specify an allow-list using hidden preferences to support domains like localhost.)
Ultimately fall back to performing a search using the default engine. (The user can opt out of this fallback by setting the hidden preference
keyword.enabledto false. In that case, Firefox stops at the previous step and attempts to visit the user’s search string as if it were a URL.)
Result Composition¶
For a given search string, the address bar performs multiple separate searches of different providers and then combines their results to display the final list. The way in which results are combined and sorted is called result composition. Result composition is based on the concept of result groups, one group after another, with different types of results in each group.
The default result composition is described next, starting with the first result.
2. Extension Omnibox Results¶
The next group of results is those provided by extensions using the omnibox API. Most users never encounter these results because they are provided only by extensions that use this feature, and even then the user must type certain extension-defined keywords to trigger them. There are at most 6 results in this group.
3. Search Suggestions¶
The next group is search suggestions. Typically this group contains 6 results, but the exact number depends on certain factors described later in Result Composition Nuances. There are actually three types of search suggestions:
Previous searches the user has performed from the address bar and search bar (denoted with a clock icon):
This is the only type of search suggestion that is generated by Firefox alone, without the help of a search engine. When the user performs a search using an engine from the address bar or search bar (and only the address bar and search bar), Firefox stores the search string, and then when the user starts to type it again, Firefox includes it as a result to make it easy to perform past searches. (Firefox does not store search strings used within web pages like google.com.)
Suggestions from the user’s default engine (denoted with a magnifying glass icon):
These are fetched from the engine if the engine provides the necessary access point. The ordering and total number of these suggestions is determined by the engine.
Google-specific “tail” suggestions, which look like “… foo” and are provided for long and/or specific queries to help the user narrow their search:
These are fetched from Google when Google is the user’s default engine. The ordering and total number of these suggestions is determined by Google.
The search suggestions group typically contains two previous searches followed by four engine suggestions, but the exact numbers depend on the number of matching previous searches and engine suggestions. Previous searches are limited in number so that they don’t dominate this group, allowing remote suggestions to provide content discovery benefits. Tail suggestions are shown only when there are no other suggestions.
The user can opt out of showing search suggestions in the address bar by visiting about:preferences#search and unchecking “Provide search suggestions” or “Show search suggestions in address bar results”.
4. General Results¶
The final group of results is a general group that includes the following types:
Bookmarks
History
Open tabs (switch to tab)
Remote tabs (via Sync)
Sponsored and Firefox Suggest results (part of the Firefox Suggest feature)
This general group is labeled “Firefox Suggest” in the Firefox Suggest feature.
Typically this group contains 3 results, but as with search suggestions, the exact number depends on certain factors (see Result Composition Nuances).
Most results within this group are first matched against the search string on their titles and URLs and then sorted by a metric called frecency, a combination of how frequently and how recently a page is visited. The top three results are shown regardless of their specific types.
This is the only group that is sorted by frecency.
A few important complexities of this group are discussed in the next subsections. The final subsection describes frecency in more detail.
Adaptive History¶
The first few bookmark and history results in the general group may come from adaptive history, a system that associates specific user search strings with URLs. (It’s also known as input history.) When the user types a search string and picks a result, Firefox stores a database record that associates the string with the result’s URL. When the user types the string or a part of it again, Firefox will try to show the URL they picked last time. This allows Firefox to adapt to a user’s habit of visiting certain pages via specific search strings.
This mechanism is mostly independent of frecency. URLs in the adaptive history database have their own sorting score based on how many times they have been used in the past. The score decays daily so that infrequently used search strings and URLs aren’t retained forever. (If two adaptive history results have the same score, they are secondarily sorted by frecency.)
Within the general group, the number of adaptive history results is not limited, but typically there aren’t many of them for a given search string.
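A simplified model of how adaptive history behaves (illustrative only; the real implementation uses a database table and different decay constants):

input_history = {}  # (search_string, url) -> use count

def record_pick(search_string, url):
    key = (search_string, url)
    input_history[key] = input_history.get(key, 0) + 1

def decay_daily():
    for key in input_history:
        input_history[key] *= 0.975  # scores slowly fade for unused entries

def adaptive_matches(typed, frecency_of):
    hits = [(count, frecency_of(url), url)
            for (s, url), count in input_history.items() if typed in s]
    # higher use count wins; frecency only breaks ties
    return [url for count, frec, url in sorted(hits, reverse=True)]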
Open and Remote Tabs¶
Unlike bookmarks and history, open and remote tabs don’t have a “natural” frecency, meaning a frecency that’s updated in response to user actions as described below in Frecency. Tabs that match the search string are assigned constant frecencies so they can participate in the sorting within the general group. Open tabs are assigned a frecency of 1000, and remote tabs are assigned a frecency of 1001. Picking appropriate frecencies is a bit of an art, but Firefox has used these values for some time.
Sponsored and Firefox Suggest Results¶
Sponsored and Firefox Suggest results are an exception within this group. They are matched on predetermined keywords, and when present, they always appear last in the general group. Frecency isn’t involved at all.
Frecency¶
Frecency is a complex topic on its own, but in summary, each URL stored in Firefox’s internal history database has a numeric score, the frecency, associated with it. Larger numbers mean higher frecencies, and URLs with higher frecencies are more likely to be surfaced to the user via the address bar. Each time the user visits a URL, Firefox increases its frecency by a certain “boost” amount that depends on how the visit is performed – whether the user picked it in the address bar, clicked its link on a page, clicked it in the history sidebar, etc. In order to prevent frecencies from growing unbounded and to penalize URLs that haven’t been visited in a while, Firefox decays the frecencies of all URLs over time.
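In very rough terms, the bookkeeping looks like this (the bonus values and decay rate below are invented for the illustration; Firefox's real numbers and visit types differ):

VISIT_BONUS = {
    "typed_in_address_bar": 2000,
    "followed_link": 100,
    "clicked_in_history_sidebar": 75,
}

def on_visit(frecency, visit_type):
    return frecency + VISIT_BONUS[visit_type]

def periodic_decay(frecency, decay_rate=0.975):
    # applied to every URL so pages that stop being visited slowly sink
    return frecency * decay_rate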
For details on frecency, see The Frecency Algorithm.
Preferences that Affect Result Composition¶
There are a number of options in about:preferences that affect result composition.
The user can opt out of showing search suggestions in the address bar by unchecking “Provide search suggestions” or “Show search suggestions in address bar results” in about:preferences#search. (The first checkbox applies to both the address bar and search bar, so it acts as a global toggle.)
By default, the search suggestions group is shown before the general results group, but unchecking “Show search suggestions ahead of browsing history in address bar results” in about:preferences#search does the opposite. In that case, typically the general results group will contain at most 6 results and the search suggestions group will contain at most 3. In other words, regardless of which group comes first, typically the first will contain 6 results and the second will contain 3.
The “Address Bar” section in about:preferences#privacy has several checkboxes that allow for finer control over the types of results that appear in the view. The top sites feature can be turned off by unchecking “Shortcuts” in this section.
Result Composition Nuances¶
Among the search suggestions and general results groups, the group that's shown first typically contains 6 results and the other group contains 3 results. The exact number in each group depends on several factors, and a simplified sketch of the sizing logic follows the list:
The total maximum result count (controlled by the browser.urlbar.maxRichResults hidden preference).
The total number of results in the two groups scales up and down to accommodate this number so that the view is always full of results.
The number of extension results.
The extension results group comes before both groups, so if there are any extension results, there are fewer available slots for search suggestions and general results.
The number of matching results.
The search string may match only one or two search suggestions or general results, for example.
The number of results in the other group.
The first group will try to contain 6 results and the second will try to contain 3, but if either one is unable to fill up, then the other group will be allowed to grow to make up the difference.
Other Result Types¶
The most common result types are discussed above. This section walks through the other types.
An important trait these types have in common is that they do not belong to any group. Most of them appear at specific positions within the view.
Search Interventions¶
Search interventions help the user perform a task based on their search string. There are three kinds of interventions, and each is triggered by typing a certain set of phrases in the input. They always appear as the second result, after the heuristic result.
The three kinds of interventions are:
Currently this feature is limited to English-speaking locales, but work is ongoing to build a more sophisticated intent-matching platform to support other locales, more complex search strings, and more kinds of interventions.
Search Tips¶
Search tips inform the user they can perform searches directly from the address bar. There are two kinds of search tips:
Redirect search tip: Appears on the home page of the user’s default engine (only for Google, Bing, and DuckDuckGo)¶
In each case, the view automatically opens and shows the tip even if the user is not interacting with the address bar. Each tip is shown at most four times, and the user can stop them from appearing altogether by interacting with the address bar or clicking the “Okay, Got It” button.
Tab to Search¶
Tab to search allows the user to press the tab key to enter search mode while typing the domain name of a search engine. There are two kinds of tab-to-search results, and they always appear as the second result:
The onboarding type is shown until the user has interacted with it three times over a period of at least 15 minutes, and after that the regular type is shown.
Search Engine Offers¶
Typing a single “@” shows a list of search engines. Selecting an engine enters search mode.
Search Mode¶
Search mode is a feature that transforms the address bar into a search-only access point for a particular engine. During search mode, search suggestions are the only results shown in the view, and for that reason its result composition differs from the usual composition.
Firefox shows suggestions in search mode even when the user has otherwise opted out of them. Our rationale is that by entering search mode, the user has taken an action that overrides their usual opt out. This allows the user to opt out generally but opt back in at specific times.
Search mode is an effective replacement for the legacy search bar and may provide a good path forward for deprecating it.
The user can enter search mode in many ways:
Picking a search shortcut button at the bottom of the view
Typing an engine’s keyword (which can be set in about:preferences#search, and built-in engines have default keywords)
Typing a single “?” followed by a space (to enter search mode with the default engine)
Typing a single “@” to list all engines and then picking one
If the search bar is not also shown, pressing Ctrl+K (to enter search mode with the default engine)
To exit search mode, the user can backspace over the engine chiclet or click its close button. | https://firefox-source-docs.mozilla.org/browser/urlbar/nontechnical-overview.html | 2021-09-16T19:35:33 | CC-MAIN-2021-39 | 1631780053717.37 | [array(['../../_images/search-tip-redirect.png',
'Image of the redirect search tip with text "Start your search in the address bar to see suggestions from Google and your browsing history"'],
dtype=object) ] | firefox-source-docs.mozilla.org |
Skylight Portals
If the option Use as Skylight Portal is on, the entity will be used as a portal geometry for skylight luminances. The option is 'Off' by default.
Portals are useful for interior scenes for example, where sky light is entering by small openings which are difficult to sample efficiently. By marking explicitely the openings using portals, the engine can render difficult situations at a much higher quality in less time.
For all Luminance: If On the portal will be used for all luminances with skylight in scene, no matter how this sky light was created (drawing luminance or graphic luminance). If Off the portal will be used for this objects luminance only. The option is enabled if option only if Use as Skylight Portal is On. The option is 'On' by default.
Using portals
Portals are objects (surfaces) that control the flow of photons from the light source. More precisely: the spatial configuration of the photon's flow.
The simplified algorithm is the following: if the photon has not passed through the portals, it is not considered in the calculation of illumination and the CPU time is not spent on it. Instead of it the next photon is launched (generally at random). This continues until the number of photons passing through the portals will not be equal to the target parameters of the light source. In our case it is a sky light (RedSDK supports only sky lights for the portals). In the case of absence of portals all launched photons are involved in the calculation of the illumination.
So, portals allow you to concentrate the quality and the intensity of the light in the right places of the scene. This facilitates the creation of a scene with proper lighting and improves the speed of calculation.
The main purpose of the portals is using them for the openings through which the outer sky light penetrates. Portals need not to be rectangular necessarily. The portal may consist of any set of surfaces. At the same time you should not make the portal very complicated (for example as an ADT window), because of the additional time is required to clarify whether the photon passes through a portal or not. Ideally it should be 1 rectangular facet.
In TurboCAD any 3D primitive can be a portal. This simplifies creation of portals. It should be noted that in this case the geometry of this primitive is not loaded into the scene. In other words an option 'Use as skylight portal' is similar to the option "Load to render luminance only".
Below is shown the effect of the portal's size on the lighting quality and intensity. By increasing the size of the portal 2 times the quality worsens 4 times (at the square).
Below is shown how it is possible to highlight the places in the scene by using the portal's position.
Below is shown how it is possible to highlight the places in the scene by turning the option For all luminance Off.
| http://docs.imsidesign.com/projects/Turbocad-2018-User-Guide-Publication/TurboCAD-2018-User-Guide-Publication/Creating-3D-Objects/3D-Properties/Skylight-Portals/ | 2021-09-16T18:33:16 | CC-MAIN-2021-39 | 1631780053717.37 | [array(['../../Storage/turbocad-2018-user-guide-publication/skylight-portals-img0004.png',
'img'], dtype=object) ] | docs.imsidesign.com |
2021r2 Release Notes
From Xojo Documentation
Xojo 2021 Release 2 is now available with over 240 changes and improvements.
Notable changes include:
- iOS PDFDocument support.
- Xojo Cloud Remote Notification server for iOS notifications.
- Binary Enumeration Editor
- ColorGroups in desktop and web projects.
- Faster text project saving with fewer files marked as having changes.
- PDFDocument additions: Rotate, Translate, Scale.
- Code Editor improvements, including IDE line number settings, better row highlighting and improved drawing and performance.
- Linux IDE layout improvements.
- Lots of bug fixes. | http://docs.xojo.com/Resources:2021r2_Release_Notes | 2021-09-16T19:06:12 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.xojo.com |
This topic discusses a focused program that we internally call the "Appian Developer Co-Pilot" program. This program is about taking the large knowledge that Appian has built, how our customers build applications, how we internally build applications at Appian, mining that anonymized data and feed it into Artificial Intelligent (AI) agents capable of understanding the patterns of development. And then, building AI assistance into the development process.
One of the most powerful tools of the Appian Environment is the Appian Process Modeler. There is an infinite number of possibilities for orchestration and collaboration to build new business patterns.
Furthermore, as a developer, you have a large number of smart services available to design your applications.
Inside the Process Modeler, there's an area in the upper-left corner called "Recommendations."
The AI within the Process Modeler is going to automatically analyze the active process that you are working on and suggest to you the most likely pattern that you are going to design next.
It's going to predict all your design patterns, using all that in-depth knowledge of past behavior in years of Process Modeling.
As you design a step, it's automatically going to refresh the recommendations and know what to create next. It's going to introduce a new smart service for you to draw. It's pretty simple.
In a more elaborated example, one with multiple branches and decision logic, the AI-engine is going to take all that data and identify the best possible node for individual paths.
When you right-click on the flow, it's going to give you recommendations about what to add next based on the design pattern.
It's predicting what you need.
And if you don't see what you need, you can go ahead and search very quickly using the text search and find that specific smart service that you are looking for.
This is a very powerful feature that, without ever leaving your browser, it's always looking at the design of your process, trying to understand what the goal of your process is, and recommending the next smart service to add to your process model.
On This Page | https://docs.appian.com/suite/help/20.2/process-model-ai-assisted-development.html | 2021-09-16T19:15:26 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.appian.com |
- Storage Based Authorization (SBA)
- Tez View
- WebHCat
You can use Hue in lieu of Hive View..
Unsupported Connector Use
CDP does not support the Sqoop exports using the Hadoop
jar command (the
Java API) that Teradata documents. | https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/upgrade-cdh/topics/hive-unsupported.html | 2021-09-16T19:01:13 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.cloudera.com |
Date: Tue, 22 Sep 2009 19:06:19 -0400 From: Brent Bloxam <[email protected]> To: Dan Nelson <[email protected]> Cc: [email protected] Subject: Re: Device naming on scbus using isp Message-ID: <[email protected]> In-Reply-To: <[email protected]> References: <[email protected]> <[email protected]> <[email protected]>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help
Dan Nelson wrote: > If you're mounting UFS filesystems, you can label them and mount them by > label (see the tunefs and glabel manpages for more info). ZFS should find > its pool devices automatically, but you can always manually label devices > with glabel and refer to the label instead of the da## name. > Thanks Dan, I'm using UFS so looks like labeling will be the solution to this issue
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=591380+0+archive/2009/freebsd-questions/20090927.freebsd-questions | 2021-09-16T19:04:03 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.freebsd.org |
.
//.
In our example we've taken the inbound attachment, built a payload and passed it to Unifi to process. More detail on how Unifi processes inbound attachments can be read in our section on Attachment Handling. | https://docs.sharelogic.com/unifi/v/3.0/polling/large-response-payloads | 2021-09-16T19:32:06 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.sharelogic.com |
This is an old revision of the document!
It's quite common situation when the queries generated with EasyQuery components must have an additional condition(s) not visible to end-users. For example you may need to limit the result set by user ID, department ID or some time frames. EasyQuery provides you with 2 possible ways of resolving this task:
To insert extra conditions into generated SQL statements you can use
BuildSQLEx() function of
SqlQueryBuilder class (instead of
BuildSQL you are using by default) and pass necessary condition(s) in its second parameter. The value of that parameter will be added into result SQL statement at the end of WHERE clause with AND conjunction to conditions, defined by end-users through visual controls.
The only trick here - you may also need to list all tables which take part in that extra condition.
AddSimpleCondition method will take care about everything.
Discussion
Actually, adding table into ExtraTables list is just a guarantee that the table will be included into generated SQL.
protected override string GenerateQueryStatement(Query query) {
SqlQueryBuilder builder = new SqlQueryBuilder((DbQuery)query);
if (builder.CanBuild) {
builder.BuildSQL();
return builder.Result.SQL;
}
else
return string.Empty;
}
Just change one call
''builder.BuildSQL()''
to something similar to the code listed in this article.
protected override string GenerateQueryStatement(Query query) {
SqlQueryBuilder builder = new SqlQueryBuilder((DbQuery)query);
if (builder.CanBuild) {
Korzh.EasyQuery.Db.Table table = Model.Tables.FindByName("policies");
query.ExtraTables.Add(table);
builder.BuildSQLEx("", "policies.PolicyID = 5");
return builder.Result.SQL;
}
else
return string.Empty;
}
I get the following error:
'Korzh.EasyQuery.Query' does not contain a definition for 'ExtraTables' and no extension method 'ExtraTables' accepting a first argument of type 'Korzh.EasyQuery.Query' could be found (are you missing a using directive or an assembly reference?)
((DbQuery)query).ExtraTables.Add(table);
When I don't include any field of table2 on the query it will rise the error "Cannot find a path between tables", if I include in the query a field of table2 the error won't be rise.
What is missing?
I execute builder.BuildSQLEx("", "Customers.CustomerID = 'ALFKI'");
then I execute builder.BuildSQL();
and there es where the error rise.
Just by executing builder.BuildSQLEx and erasing builder.BuildSQL() does the work.
Send them both to techsupport{at}korzh.com
doesnt work well.
if i use it to add a condition it adds an extra "AND" after condition due to which query execution fails.
This function is used to add "extra" conditions which are not visible to users but it your query must have at least one "normal" condition to use this method.
such as Model Specified in Korzh.EasyQuery.Db.Table table = Model.Tables.FindByName("policies")
Can you please paste the code to get Model, Whenloading entities/Data Model from xml file
DbModel model = (DbModel)query.Model;
Also I have one more query as, How to add extra condition in "ExecuteQuery(string queryJson, string optionsJson)" method of EQMvcDemoEF.NET45 Demo MVC project, Which works on sending json format query to the ExceuteQuery action method. Also can please provide us any idea regrading adding custom paging in the result grid of the tool.
After that you can use this object to add extra conditions. | http://docs.korzh.com/easyquery/how-to/add-extra-condition?rev=1402945492 | 2021-09-16T18:59:04 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.korzh.com |
The Message we shall be configuring for the Resolve Scenario is:
ResolveIncident
We will define which Field records require configuring for that Message at the appropriate time.
The scenario will need to be successfully tested before we can say it is complete.
We shall look in detail at the Message and its respective Fields in turn over the next few pages, before moving on to Test. | https://docs.sharelogic.com/unifi/integration-guides/outbound-incident-guide/resolve-scenario | 2021-09-16T18:33:03 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.sharelogic.com |
4. Dendrites and the (passive) cable equation¶
Book chapters
In Chapter 3 Section 2 the cable equation is derived and compartmental models are introduced.
Python classes
The
cable_equation.passive_cable module implements a passive cable using a Brian2 multicompartment model. To get started, import the module and call the demo function:
import brian2 as b2 import matplotlib.pyplot as plt from neurodynex3.cable_equation import passive_cable from neurodynex3.tools import input_factory passive_cable.getting_started()
The function
passive_cable.getting_started() injects a very short pulse current at (t=500ms, x=100um) into a finite length cable and then lets Brian evolve the dynamics for 2ms. This simulation produces a time x location matrix whose entries are the membrane voltage at each (time,space)-index. The result is visualized using
pyplot.imshow.
Note
The axes in the figure above are not scaled to the physical units but show the raw matrix indices. These indices depend on the spatial resolution (number of compartments) and the temporal resolution (
brian2.defaultclock.dt). For the exercises make sure you correctly scale the units using Brian’s unit system . As an example, to plot voltage vs. time you call
pyplot.plot(voltage_monitor.t / b2.ms, voltage_monitor[0].v / b2.mV)
This way, your plot shows voltage in mV and time in ms, which is useful for visualizations. Note that this scaling (to physical units) is different from the scaling in the theoretical derivation (e.g. chapter 3.2.1 where the quantities are rescaled to a unit-free characteristic length scale
Using the module
cable_equation.passive_cable, we study some properties of the passive cable. Note: if you do not specify the cable parameters, the function
cable_equation.passive_cable.simulate_passive_cable() uses the following default values:
CABLE_LENGTH = 500. * b2.um # length of dendrite CABLE_DIAMETER = 2. * b2.um # diameter of dendrite R_LONGITUDINAL = 0.5 * b2.kohm * b2.mm # Intracellular medium resistance R_TRANSVERSAL = 1.25 * b2.Mohm * b2.mm ** 2 # cell membrane resistance (->leak current) E_LEAK = -70. * b2.mV # reversal potential of the leak current (-> resting potential) CAPACITANCE = 0.8 * b2.uF / b2.cm ** 2 # membrane capacitance
You can easily access those values in your code:
from neurodynex3.cable_equation import passive_cable print(passive_cable.R_TRANSVERSAL)
4.1. Exercise: spatial and temporal evolution of a pulse input¶
Create a cable of length 800um and inject a 0.1ms long step current of amplitude 0.8nA at (t=1ms, x=200um). Run Brian for 3ms.
You can use the function
cable_equation.passive_cable.simulate_passive_cable() to implement this task. For the parameters not specified here (e.g. dentrite diameter) you can rely on the default values. Have a look at the documentation of
simulate_passive_cable() and the source code of
passive_cable.getting_started() to learn how to efficiently solve this exercise.
From the specification of
simulate_passive_cable() you should also note, that it returns two objects which are helpful to access the values of interest using spatial indexing:
voltage_monitor, cable_model = passive_cable.simulate_passive_cable(...) probe_location = 0.123 * b2.mm v = voltage_monitor[cable_model.morphology[probe_location]].v
4.1.1. Question:¶
- What is the maximum depolarization you observe? Where and when does it occur?
- Plot the temporal evolution (t in [0ms, 3ms]) of the membrane voltage at the locations 0um, 100um, … , 600 um in one figure.
- Plot the spatial evolution (x in [0um, 800um]) of the membrane voltage at the time points 1.0ms, 1.1ms, … , 1.6ms in one plot
- Discuss the figures.
4.2. Exercise: Spatio-temporal input pattern¶
While the passive cable used here is a very simplified model of a real dendrite, we can still get an idea of how input spikes would look to the soma. Imagine a dendrite of some length and the soma at x=0um. What is the depolarization at x=0 if the dendrite receives multiple spikes at different time/space locations? This is what we study in this exercise:
- Create a cable of length 800uM and inject three short pulses A, B, and C at different time/space locations:
- A: (t=1.0ms, x=100um)B: (t=1.5ms, x=200um)C: (t=2.0ms, x=300um)Pulse input: 100us duration, 0.8nA amplitude
Make use of the function
input_factory.get_spikes_current() to easily create such an input pattern:
t_spikes = [10, 15, 20] l_spikes = [100. * b2.um, 200. * b2.um, 300. * b2.um] current = input_factory.get_spikes_current(t_spikes, 100*b2.us, 0.8*b2.namp, append_zero=True) voltage_monitor_ABC, cable_model = passive_cable.simulate_passive_cable(..., current_injection_location=l_spikes, input_current=current, ...)
Run Brian for 5ms. Your simulation for this input pattern should look similar to this figure:
4.2.1. Question¶
- Plot the temporal evolution (t in [0ms, 5ms]) of the membrane voltage at the soma (x=0). What is the maximal depolarization?
- Reverse the order of the three input spikes:
C: (t=1.0ms, x=300um)B: (t=1.5ms, x=200um)A: (t=2.0ms, x=100um)
Again, let Brian simulate 5ms. In the same figure as before, plot the temporal evolution (t in [0ms, 5ms]) of the membrane voltage at the soma (x=0). What is the maximal depolarization? Discuss the result.
4.3. Exercise: Effect of cable parameters¶
So far, you have called the function
simulate_passive_cable() without specifying the cable parameters. That means, the model was run with the default values. Look at the documentation of
simulate_passive_cable() to see which parameters you can change.
Keep in mind that our cable model is very simple compared to what happens in dendrites or axons. But we can still observe the impact of a parameter change on the current flow. As an example, think of a myelinated fiber: it has a much lower membrane capacitance and higher membrane resistance. Let’s compare these two parameter-sets:
4.3.1. Question¶
Inject a very brief pulse current at (t=.05ms, x=400um). Run Brian twice for 0.2 ms with two different parameter sets (see example below). Plot the temporal evolution of the membrane voltage at x=500um for the two parameter sets. Discuss your observations.
Note
To better see some of the effects, plot only a short time window and increase the temporal resolution of the numerical approximation (
b2.defaultclock.dt = 0.005 * b2.ms).
# set 1: (same as defaults) membrane_resistance_1 = 1.25 * b2.Mohm * b2.mm ** 2 membrane_capacitance_1 = 0.8 * b2.uF / b2.cm ** 2 # set 2: (you can think of a myelinated "cable") membrane_resistance_2 = 5.0 * b2.Mohm * b2.mm ** 2 membrane_capacitance_2 = 0.2 * b2.uF / b2.cm ** 2
4.4. Exercise: stationary solution and comparison with theoretical result¶
Create a cable of length 500um and inject a constant current of amplitude 0.1nA at x=0um. You can use the
input_factory to create that current. Note the parameter
append_zero=False. As we are not interested in the exact values of the transients, we can speed up the simulation increase the width of a timestep dt:
b2.defaultclock.dt = 0.1 * b2.ms.
b2.defaultclock.dt = 0.1 * b2.ms current = input_factory.get_step_current(0, 0, unit_time=b2.ms, amplitude=0.1 * b2.namp, append_zero=False) voltage_monitor, cable_model = passive_cable.simulate_passive_cable( length=0.5 * b2.mm, current_injection_location = [0*b2.um], input_current=current, simulation_time=sim_time, nr_compartments=N_comp) v_X0 = voltage_monitor.v[0,:] # access the first compartment v_Xend = voltage_monitor.v[-1,:] # access the last compartment v_Tend = voltage_monitor.v[:, -1] # access the last time step
4.4.1. Question¶
Before running a simulation, sketch two curves, one for x=0um and one for x=500um, of the membrane potential \(V_m\) versus time. What steady state \(V_m\) do you expect?
Now run the Brian simulator for 100 milliseconds.
- Plot \(V_m\) vs. time (t in [0ms, 100ms]) at x=0um and x=500um and compare the curves to your sketch.
- Plot \(V_m\) vs location (x in [0um, 500um]) at t=100ms.
4.4.2. Question¶
- Compute the characteristic length \(\lambda\) (= length scale = lenght constant) of the cable. Compare your value with the previous figure.
\(\lambda=\sqrt{\frac{r_{Membrane}}{r_{Longitudinal}}}\)
4.4.3. Question (Bonus)¶
You observed that the membrane voltage reaches a location dependent steady-state value. Here we compare those simulation results to the analytical solution.
- Derive the analytical steady-state solution (finite cable length \(L\), constant current \(I_0\) at \(x=0\), sealed end: no longitudinal current at \(x=L\)).
- Plot the analytical solution and the simulation result in one figure.
- Run the simulation with different resolution parameters (change
b2.defaultclock.dtand/or the number of compartments). Compare the simulation with the analytical solution.
- If you need help to get started, or if you’re not sure about the analytical solution, you can find a solution in the Brian2 docs. | https://neuronaldynamics-exercises.readthedocs.io/en/latest/exercises/passive-cable.html | 2021-09-16T19:09:36 | CC-MAIN-2021-39 | 1631780053717.37 | [array(['../_images/cable_equation_pulse.png',
'../_images/cable_equation_pulse.png'], dtype=object)] | neuronaldynamics-exercises.readthedocs.io |
AgoraRtcEngineKit as any value beyond 30.
bitrate: The video encoding bitrate (Kbps). The default value is
AgoraVideoBitrateStandard,:
Adaptive,
FixedLandscape, and
FixedPortrait.
In the:
In the
FixedLandscape):
In the
FixedPortrait weak network conditions, Agora provides the
degradationPreference on video sharpness or smoothness, you can also use the following parameters to set the minimum frame rate or bitrate:
minFrameRate: The minimum video frame rate (fps). You can use
minFrameRateand
AgoraDegradationMaintainQualityto balance the video sharpness and video smoothness under unreliable connections. When minFrameRate is relatively low, the frame rate degrades significantly, fo:
//))
setVideoEncoderConfigurationare the maximum values under ideal network conditions. The SDK adapts (most often downwards) these parameters according to the network conditions in real-time.
setVideoEncoderConfigurationaffects your bill. In case network adaptation occurs, the unit price is calculated based on the actual video dimensions. For more information, see Billing for Real-time Communication. | https://docs.agora.io/en/Video/video_profile_apple?platform=iOS | 2021-09-16T18:08:19 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.agora.io |
smartcard.allow.noeku
This configuration parameter allows the use of certificates that do not have the Extended Key Usage (EKU) attribute. Normally, smart card use requires certificates with the EKU attribute. The value of this parameter can be true or false.
If you set this parameter to true, certificates without an EKU attribute can be used for SmartCard logon, and certificates with the following attributes can also be used to log on with a smart card:
- Certificates with no EKU
- Certificates with an All Purpose EKU
- Certificates with a Client Authentication EKU
If you set this parameter to false, only certificates that contain the smart card logon object identifier can be used to log on with a smart card. The default value of this parameter is false.
After changing the value of this parameter, you must re-enable smart card support by running the following sctool command as root:
[root]$ sctool -E
When you run sctool with the -E option, you must also specify the -a or -k option. You can also control this feature using group policy. | https://docs.centrify.com/Content/config-unix/smartcard_allow_noeku.htm | 2021-09-16T18:02:39 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.centrify.com |
Date: Fri, 10 Feb 1995 11:23:26 -0900 From: [email protected] (Bill Allison) To: [email protected] Subject: mounting floppy drive -- errors Message-ID: <[email protected]>
Next in thread | Raw E-Mail | Index | Archive | Help
I am trying to get my floppy drive to mount, and get errors. I issue the command: mount -t ufs /dev/fd0 /floppy and get the following error message: >fd0: Operation timeout >fd0d: hard error reading fsbn 16 of 16-31 > (ST0 ffffffff<invld,abnrml,seek_cmplt,equ_chck,drive_notrdy,top_head> ST1 ffffffff<end_of_cyl,bad_crc,data_overrun,sec_not_fnd,write_protect,no_am> ST2 ffffffff<ctrl_mrk,bad_crc,wrong_cyl,scn_eq,scn_not_fnd,bad_cyl,no_dam> cyl -1 hd -1 sec -1) > (The disk, incidentally was made using rawrite, and the cpio.flp image) What gives? FYI: Here is output from dmesg >FreeBSD 1.1.5.1(RELEASE) (GENERICAH) #0: Sun Jul 3 09:04:47 See zic(8) or link /etc/localtime directly 1994 > [email protected]:/usr/src/sys/compile/GENERICAH >CPU: i586 (586-class CPU) Id = 0x513real memory = 16384000 (4000 pages) >avail memory = 15216640 (3715 pages) >using 348 buffers containing 2854912 bytes of memory >Probing for devices on the ISA bus: >sc0 at 0x60-0x6f irq 1 on motherboard >sc0: VGA color <4 virtual consoles> >sio2 not found at 0x3e8 >sio3 not found at 0x2e8 >lpt0 at 0x378-0x37f irq 7 on isa >lpt0: Interrupt-driven port >lpt1 at 0x3bc-0x3c3 on isa >lpt2 not found at 0xffffffff >fdc0 at 0x3f0-0x3f7 irq 6 drq 2 on isa >fdc0: [0: fd0: 1.44MB 3.5in] >wdc0 at 0x1f0-0x1f7 irq 14 on isa >wdc0: unit 0 (wd0): <WDC AC2540H> >wd0: 515MB (1056384 total sec), 1048 cyl, 16 head, 63 sec, bytes/sec 512 >wdc1 not found at 0x170 >ahb0 not found >aha0 not found at 0x330 >sea: Board type unknown at address 0xf00c8000 >sea0 not found >wt0 not found at 0x300 >mcd0 not found at 0x300 >mcd1: version information is 10 D 2 >mcd1: Adjusted for newer drive model >mcd1 at 0x340-0x343 irq 11 on isa >ed0 not found at 0x280 >ed1 at 0x300-0x31f irq 5 on isa >ed1: address 00:40:05:12:44:a7, type NE2000 (16 bit) >ie0 not probed due to irq conflict with lpt0 at 7 >is0 not found at 0x280 >npx0 on motherboard >ISA strayintr 7 >wd0: can't handle 256 heads from partition table (controller value 16 restored) >fd0: Operation timeout > > --------------------- William Allison Ian Freed Consulting, Inc. Seattle, WA 98104 Tel: 206.583.8919 FAX: 206.583.8941
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=384765+0+archive/1995/freebsd-questions/19950205.freebsd-questions | 2021-09-16T20:03:10 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.freebsd.org |
Date: Tue, 5 Apr 2011 17:30:00 +1000 (EST) From: Ian Smith <[email protected]> To: Sebastian Ramadan <[email protected]> Cc: [email protected] Subject: Re: ipdivert.ko Message-ID: <[email protected]> In-Reply-To: <[email protected]> References: <[email protected]>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help
In freebsd-questions Digest, Vol 357, Issue 3, Message: 8 On Tue, 5 Apr 2011 00:58:50 +0930 Sebastian Ramadan <[email protected]> wrote: > I wish to cause ipdivert.ko to load at boot time. Currently, ipfw.ko loads > correctly at boot time with domU-12-31-39-02-15-3A# kldstat > Id Refs Address Size Name > 1 8 0xc0000000 40000000 kernel > 2 1 0xc2bb3000 10000 ext2fs.ko > 3 1 0xc2d1f000 11000 ipfw.ko > 4 1 0xc2d30000 d000 libalias.ko Hmm, I'm a bit curious as to why libalias.ko was loaded. You don't have 'firewall_nat_enable="YES"' in rc.conf, do you? Anyway, loader.conf isn't the way to go for loading ipfw or ipdivert (presumably for use by natd?) these days. Instead you want these in /etc/rc.conf: ipfw_enable="YES" natd_enable="YES" plus any required ipfw_ and natd_ variables (see /etc/defaults/rc.conf) Then /etc/rc.d/ipfw will load ipfw.ko, and if natd_enable is set, will invoke /etc/rc.d/natd, which loads ipdivert.ko at the right# kldload ipdivert > domU-12-31-39-02-15-3A# kldstat > Id Refs Address Size Name > 1 10 0xc0000000 40000000 kernel > 2 1 0xc2bb3000 10000 ext2fs.ko > 3 2 0xc2d1f000 11000 ipfw.ko > 4 1 0xc2d30000 d000 libalias.ko > 5 1 0xc3cc7000 4000 ipdivert.ko > > My dmesg: > domU-12-31-39-02-15-3A# dmesg > 8.2-RELEASE #13: Mon Feb 21 20:13:46 UTC 2011 > [email protected]:/usr/obj/i386/usr/src/sys/XEN i386 [..] > start_init: trying /sbin/init > ipfw2 (+ipv6) initialized, divert loadable, nat loadable, rule-based > forwarding disabled, default to deny, logging disabled > ipfw0: bpf attached There are a number of outstanding PRs regarding module loading by natd and (if used) firewall_nat, and the use of these by /etc/rc.firewall. If enabling natd in rc.conf instead doesn't fix your issue, write to me privately and I'll put you onto some patches - but unless you're also (or instead) using kernel NAT (ipfirewall_nat - which needs to load libalias.ko) then the above settings should do you. cheers, Ian
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=390302+0+archive/2011/freebsd-questions/20110410.freebsd-questions | 2021-09-16T20:03:04 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.freebsd.org |
Command line (based on gdb) with graphic frontend. Remote debugging is possible via Ethernet or serial line.
WindView is timing debugger which displays detailed information for each event (such as the action that occurred, the context in which the action occurred, and the object associated with the action). In addition, WindView tags certain events with either high-resolution timestamps or event sequence numbers.
At the default logging level WindView shows only the context switches. You can configure WindView to show all task state transitions so that, for example, when a task goes from pended to active state, that event is logged and displayed. Or you can configure WindView to show details of selected objects in instrumented libraries. Instrumented objects include semaphore gives and takes, message queue sends and receives, timer expirations, and signals, as well as task and memory activities.
System trace and event record - yes, using VxView | https://docs.huihoo.com/os/rtos/rtos-state-of-the-art-analysis/x3414.html | 2021-09-16T18:41:22 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.huihoo.com |
The retry functionality is like a first line of defence when it comes to error handling. It builds in time to allow the system to deal with potential issues in the first instance and saves the analyst or administrator having to step in at the first sign of any problems. This can be useful in scenarios where, perhaps the external system is temporarily preoccupied with other tasks and is unable to respond in sufficient time.
Rather than fail or error a Transaction at the first unsuccessful attempt, Unifi will automatically retry and process it again. The number of times it attempts to do so and how long it waits (both for a response and before attempting to retry again) are configurable parameters.
Although the retry logic itself is applied at the HTTP Request level, the settings to configure it can be found on the Integration. This means that they can be configured uniquely and specifically for each Integration. Unifi will automatically retry errored Requests according to those settings.
The fields that can be configured are as follows:
In Unifi Integration Designer, navigate to and open < The Integration you wish to configure >.
Click the ‘Integration’ icon (this will open the Details page).
Navigate to Error Handling > Timeouts.
The Timeout fields that can be configured for the Integration are as follows:
Navigate to Error Handling > Retry.
The Retry fields that can be configured for the Integration are as follows:
Retry is automated in Unifi. Should the number of retries be exhausted, the Transaction will be errored and any subsequent Transactions are queued. This prevents Transactions from being sent out of sync and updates being made to bonded records in the wrong sequence.
It will require a user with the Unifi Manager role to intervene, investigate and correct the error before manually restarting the queue.
There are a number of UI Actions available to help and subsequent sections will look at each of those in turn.
In the next section, we'll look at the first of those UI Actions, the Replay feature. | https://docs.sharelogic.com/unifi/feature-guides/error-handling-tools/retry | 2021-09-16T19:14:06 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.sharelogic.com |
Contents:
Registered users of this product or Trifacta Wrangler Enterprise should login to Product Docs through the application.
Contents:
When data is imported from another system, you might discover that some values are missing in it. In some cases, these values simply contain no content. In other cases, these values are non-existent. Depending on how the missing values entered the data, you may end up processing them in different ways. This section describes how to identify and manage missing data in your datasets. Wrangler transform:
set col:column1 value:TRIM(column1)
You can paste Wrangle steps into the Transformer Page. dropping
When you discover mismatched data in your dataset, you have the following basic methods of fixing it:
Identify if the column values are required.
Check the target system to determine if the field must have a value. If values are not required, don't worry about it. Consider dropping the column.
Remember that null values imported into Trifacta Wrangler.
Insert a constant value. You can replace a missing value with a constant, which may make it easier to locate more important issues in the application.
Use a function. Particularly if the missing data can be computed, you can use one of the available functions to populate the missing values.
Copy values from another column. If a value from another column or a modified form of it can be used for the missing value, you can use the
settransform Modify.
You might seem something like the following:
set col: country value: NULL() row:ISMISSING([country])
The missing data is identified using the
row:ISMISSINGreference. To apply a constant, replace the
NULL()reference with a constant value, as in the following:
set col: country value: 'USA' row:ISMISSING([country])
Note that the single quotes around the value are required, since it identifies the value as a constant.
Click Add to Recipe.
Copy values from another column
You can populate missing values with values from another column. In the following example, the
nickname column is populated with the value of
first_name if it is missing:
set col: nickname value: first_name row:ISMISSING([nickname]).
set col: unit_price value: (price / weight_kg) row:ISMISSING([unit_price]) Wrangler.
- Wrangler.
After you have added back missing elements, you can change the data type to Date/Time through the data type drop-down for the column.
Before you begin reformatting your data, you should identify the target date format to which you want to match your timestamps. From the data type drop-down, select Date/Time. The dialog shows the following supported date formats:
The easiest way to handle the insertion of year information is to split out the timestamp data into separate components and then to merge back the content together with the inserted year information. Since the above timestamp data essentially contains three separate fields (Day of Month, Month, and Time), you can use a split command to break this information into three separate columns. Highlight one of the spaces between Day of Month and Month and select the
split suggestion. The Wrangle step should look similar to the following:
split col: column1 on: ' ' limit: 2
Now, your data should be stored in three separate columns.
Tip: You may notice that new data types have been applied to the generated columns. The data may be easier to handle if all column types are converted to String type for now.
The next step involves merging all of these columns back into a single field, augmented with the appropriate year information. Select the columns in the order in which you would like to see them in the new timestamp field. In this case, you can select them in the order that they were originally listed. When all three columns are selected, choose the
merge suggestion.
You may notice that the data has been formatted without spaces (
19May02:45:38), and there is no year information yet. Click Modify.
In the Transform Builder, you should see a command like the following:
merge col: column2,column3,column4
You need to modify the list of columns to insert spaces and the year identifier back into the data. It should look similar to the following:
merge col: column2,' 2015 ',column3,' ',column4
After you have inserted the year information and merged the columns, you should be able to change the column data type to the appropriate version of Date/Time.. | https://docs.trifacta.com/display/PE/Find+Missing+Data | 2018-03-17T14:31:19 | CC-MAIN-2018-13 | 1521257645177.12 | [array(['/download/resources/com.adaptavist.confluence.rate:rate/resources/themes/v2/gfx/loading_mini.gif',
None], dtype=object)
array(['/download/resources/com.adaptavist.confluence.rate:rate/resources/themes/v2/gfx/rater.gif',
None], dtype=object) ] | docs.trifacta.com |
If the installation or boot of ESXi from a software FCoE LUN fails, you can use several troubleshooting methods.
Problem
When you install or boot ESXi from FCoE storage using a VMware software FCoE adapter and a network adapter with partial FCoE offload capabilities, the installation or the boot process fails.
Results
Make sure that you correctly configured boot parameters in the option ROM of the FCoE network adapter.
During installation, monitor the BIOS of the FCoE network adapter for any errors.
If possible, check the VMkernel log for errors.
Use the esxcli command to verify whether the boot LUN is present.
esxcli conn_options hardware bootdevice list | https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.storage.doc/GUID-DF5C23C3-685B-4FE3-BB87-DD869E2D586A.html | 2018-03-17T14:20:01 | CC-MAIN-2018-13 | 1521257645177.12 | [] | docs.vmware.com |
You can uninstall I/O filters deployed in an ESXi host cluster.
Prerequisites
Required privileges: Host.Config.Patch.
Procedure
- Uninstall the I/O filter by running the installer that your vendor provides.
During uninstallation, vSphere ESX Agent Manager automatically places the hosts into maintenance mode.
If the uninstallation is successful, the filter and any related components are removed from the hosts.
- Verify that the I/O filter components are properly uninstalled from your ESXi hosts:
esxcli --server=server_name software vib list
The uninstalled filter no longer appears on the list. | https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.storage.doc/GUID-3AB3323A-7C30-4422-8A7B-C7C85E747C9A.html | 2018-03-17T14:20:59 | CC-MAIN-2018-13 | 1521257645177.12 | [] | docs.vmware.com |
shmulateSeq - Simulate mutations in a single sequence
Description¶
Generates random mutations in a sequence iteratively using a targeting model. Targeting probabilities at each position are updated after each iteration.
Usage¶
shmulateSeq(sequence, mutations, targetingModel = HH_S5F)
Arguments¶
- sequence
- sequence string in which mutations are to be introduced.
- mutations
- number of mutations to be introduced into
sequence.
- targetingModel
- 5-mer TargetingModel object to be used for computing probabilities of mutations at each position. Defaults to HH_S5F.
Value¶
A string defining the mutated sequence.
Examples¶
# Define example input sequence sequence <- "NGATCTGACGACACGGCCGTGTATTACTGTGCGAGAGATAGTTTA" # Simulate using the default human 5-mer targeting model shmulateSeq(sequence, mutations=6)
[1] "NGATCTGACGGCACAGCCGTATATTCTTGTGCGAGCGATAGTTTA"
See also¶
See shmulateTree for imposing mutations on a lineage tree. See HH_S5F and MK_RS5NF for predefined TargetingModel objects. | http://shazam.readthedocs.io/en/version-0.1.8---mutation-profiling-enhancements/topics/shmulateSeq/ | 2018-03-17T14:23:19 | CC-MAIN-2018-13 | 1521257645177.12 | [] | shazam.readthedocs.io |
AWS X-Ray Sample Application
The AWS X-Ray eb-java-scorekeep sample app, available on GitHub, shows the use of the AWS X-Ray SDK to instrument incoming HTTP calls, DynamoDB SDK clients, and HTTP clients. The sample app uses AWS Elastic Beanstalk features to create DynamoDB tables, compile Java code on instance, and run the X-Ray daemon without any additional configuration.

The sample is an instrumented version of the Scorekeep project on AWSLabs. It includes a front-end web app, the API that it calls, and the DynamoDB tables that it uses to store data. All the components are hosted in an Elastic Beanstalk environment for portability and ease of deployment.
Basic instrumentation with filters, plugins, and instrumented AWS SDK clients is shown in the
project's
xray-gettingstarted branch. This is the branch that you deploy in the
getting started tutorial. Because this branch only
includes the basics, you can diff it against the
master branch to quickly
understand the basics.
The sample application shows basic instrumentation in these files:
HTTP request filter –
WebConfig.java
AWS SDK client instrumentation –
build.gradle
The
xray branch of the application adds the use of HTTPClient, Annotations, SQL queries, custom subsegments, an instrumented AWS Lambda function, and instrumented initialization code and scripts. This service
map shows the
xray branch running without a connected SQL database:
To support user log-in and AWS SDK for JavaScript use in the browser, the
xray-cognito
branch adds Amazon Cognito to support user authentication and authorization. With
credentials retrieved
from Amazon Cognito, the web app also sends trace data to X-Ray to record request
information from the
client's point of view. The browser client appears as its own node on the service
map, and
records additional information, including the URL of the page that the user is viewing,
and the
user's ID.
Finally, the
xray-worker branch adds an instrumented Python Lambda function that
runs independently, processing items from an Amazon SQS queue. Scorekeep adds an item
to the queue
each time a game ends. The Lambda worker, triggered by CloudWatch Events, pulls items
from the queue every
few minutes and processes them to store game records in Amazon S3 for analysis.
With all features enabled, Scorekeep's service map looks like this:
For instructions on using the sample application with X-Ray, see the getting started tutorial. In addition to the basic use of the X-Ray SDK for Java discussed in the tutorial, the sample also shows how to use the following features.
Advanced Features
- Manually Instrumenting AWS SDK Clients
- Creating Additional Subsegments
- Recording Annotations, Metadata, and User IDs
- Instrumenting Outgoing HTTP Calls
- Instrumenting Calls to a PostgreSQL Database
- Instrumenting AWS Lambda Functions
- Instrumenting Amazon ECS Applications
- Instrumenting Startup Code
- Instrumenting Scripts
- Instrumenting a Web App Client
- Using Instrumented Clients in Worker Threads
- Deep Linking to the X-Ray Console | https://docs.aws.amazon.com/xray/latest/devguide/xray-scorekeep.html | 2018-03-17T14:48:35 | CC-MAIN-2018-13 | 1521257645177.12 | [array(['images/scorekeep-gettingstarted-servicemap-after-github.png',
None], dtype=object)
array(['images/scorekeep-servicemap.png', None], dtype=object)
array(['images/scorekeep-servicemap-allfeatures.png', None], dtype=object)] | docs.aws.amazon.com |
Configuration¶
Configuration Basics¶
Configuration for pynsot consists of a single INI with two possible locations:
/etc/pynsotrc
~/.pynsotrc
The files are discovered and loaded in order, with the settings found in each location being merged together. The home directory takes precedence.
Configuration elements must be under the
pynsot section.
If you don’t create this file, running
nsot will prompt you to create one
interactively.
Like so:
$ nsot sites list /home/jathan/.pynsotrc not found; would you like to create it? [Y/n]: y Please enter URL: Please enter SECRET_KEY: qONJrNpTX0_9v7H_LN1JlA0u4gdTs4rRMQklmQF9WF4= Please enter EMAIL: jathan@localhost
Example Configuration¶
[pynsot] auth_header = X-NSoT-Email auth_method = auth_header default_site = 1 default_domain = company.com url = | http://pynsot.readthedocs.io/en/latest/config.html | 2017-07-20T16:20:27 | CC-MAIN-2017-30 | 1500549423269.5 | [] | pynsot.readthedocs.io |
Code Completion - FeatureSiteTemplateAssociation Id attribute
Description
Feature stapling is an approach used to associate custom features with a site definition. This feature will be activated automatically when a new site is created from the associated site definition. The Id attribute contains Ids of "Staplee" feature.
"Staplee" – the feature associated with a site definition or the feature that applies the customization.
reSP allows you to select from the drop-down list.
Just use Ctrl+Space shortcut.
Note
Feature stapling concept itself is fairly easy to use, but an important aspect in feature stapling is the order in which a site is provisioned:
- | http://docs.subpointsolutions.com/resp/code-completion/featuresitetemplateassociationfeatureid.html | 2017-07-20T16:44:02 | CC-MAIN-2017-30 | 1500549423269.5 | [array(['_img/featuresitetemplateassociationfeatureid.gif', None],
dtype=object) ] | docs.subpointsolutions.com |
ManyWho works seamlessly with Salesforce. You can build a flow using the ManyWho Drawing Tool, and run it as an app inside Salesforce.
What you need:
- ManyWho username/password
- Salesforce username/password
- The Salesforce service integration configured in ManyWho
Here is how you can run your flow inside our standard Visualforce page:
- Click LOG IN to login to the ManyWho Drawing Tool, and open your flow.
- Click Run or Publish in the right-hand-side navigation.
- Copy the Flow ID, including the term flow-id and the question mark preceding it.
If you have published the flow:
If you have clicked Run (The URL will contain the Version ID additionally):
- In the address bar in your browser, edit the URL to: …salesforce.com/apex/flow and paste the Flow ID details you have copied. The URL will now look like this (if you are using a published flow, the Version ID will be missing):
The format for the URL is …salesforce.com/apex/flow?flow-id={my flow id}&flow-version-id={only provided if running rather than publishing}
- Press Enter to load the page in your browser.
This will load the flow inside Salesforce:. | https://docs.manywho.com/running-flow-salesforce-visualforce-page/ | 2017-07-20T16:38:42 | CC-MAIN-2017-30 | 1500549423269.5 | [] | docs.manywho.com |
Mass hire projects allow human resources specialists to create multiple positions and efficiently hire workers into those positions.
Overview
Use mass hire projects when you hire multiple workers at one time, such as when you hire to meet a seasonal demand. Creating a mass hire project is useful because you can create position records, worker records, and worker assignments for positions at the same time. When you create positions for a mass hire project, you can specify the following information:
- The number of positions to create
- The worker type of the people that you will hire for the positions
- The department and the job that are associated with the positions
- The full-time equivalent value of the position
Example
In the summer, you usually hire 15-20 part-time college students to fill available internships in your company. This year, you want to hire five accountants, five order processors, and five cashiers. Instead of creating each position record and worker record separately, you create one mass hire project called “SummerInterns”. The project start and end dates correlate with the start and end dates of the position durations for the positions you create for the mass hire project.
In the Mass hire projects page, select the “SummerInterns” project and then click Open project. In the open mass hire project, click Create positions and enter information about the accountant position. You can indicate that five accountant positions should be created using the same information for each one, and then click OK. Repeat this process for the order processor and cashier positions.
After selecting students to hire for the internship positions, you'll enter each student’s information in the Position details for the position that you're hiring them for. When you have entered all of the position details, select the position in the Mass hire projects page, and then click Hire. A position record will be created for each position and a worker record will be created and assigned to the correct position for each person who you hire.
Mass hire project statuses
A mass hire project can have the following statuses.
- Created
- Open
- Closed
On the Mass hire project page, click Open project or Close project to change the status of a mass hire project. The following table describes what you can do with a project according to its status. | https://docs.microsoft.com/en-us/dynamics365/unified-operations/fin-and-ops/hr/mass-hire-projects | 2017-07-20T16:25:27 | CC-MAIN-2017-30 | 1500549423269.5 | [] | docs.microsoft.com |
HTTP/1.1 301 Moved Permanently Date: Thu, 20 Apr 2017 12:42:03 GMT Server: Apache Location: Vary: Accept-Encoding Content-Length: 235 Connection: close Content-Type: text/html; charset=iso-8859-1 HTTP/1.1 200 OK Date: Thu, 20 Apr 2017 12:42:05 GMT Server: Apache X-Powered-By: PHP/5.4.36-1+deb.sury.org~precise+2 Vary: Accept-Encoding Connection: close Transfer-Encoding: chunked Content-Type: text/html
[Querying whois.verisign-grs.com] [Redirected to whois.name.com] [Querying whois.name.com] [whois.name.com] Domain Name: DOCS-ENGINE.COM Registry Domain ID: 1734878778_DOMAIN_COM-VRSN Registrar WHOIS Server: whois.name.com Registrar URL: Updated Date: 2016-06-26T08:21:16Z Creation Date: 2012-07-20T13:43:20Z Registrar Registration Expiration Date: 2017-07-20T13:43: docs: docs: [email protected] Name Server: ns1vwx.name.com Name Server: ns2fw-20T06:38. | http://docs-engine.com.websiteoutlook.com/ | 2017-07-20T16:48:14 | CC-MAIN-2017-30 | 1500549423269.5 | [] | docs-engine.com.websiteoutlook.com |
. For more information about the AWS Tools for Windows PowerShell, see the Tools for Windows PowerShell User Guide.
The AWS Toolkit for Visual Studio. For more information about the AWS Toolkit for Visual Studio, see the Toolkit for Visual Studio User Guide.
As an alternative to installing the AWS Tools for Windows, you can use NuGet to download the AWSSDK assembly for a specific application project. For more information, see Install AWS Assemblies with NuGet.
Note
We recommend using Visual Studio Professional 2010 or higher to implement your applications. It is possible to use Visual Studio Express to implement applications with the the SDK, including installing the Toolkit for Visual Studio. However, the installation includes only the AWS project templates and the Standalone Deployment Tool. In particular, Toolkit for Visual Studio on Visual Studio Express does not support AWS Explorer.
How to Use This Guide
The AWS SDK for .NET Developer Guide describes how to implement applications for AWS using the the SDK, and includes the following:
- Getting Started with the AWS SDK for .NET
How to install and configure the the SDK. If you have not used the the SDK before or are having trouble with its configuration, you should start here.
- Programming with the AWS SDK for .NET
The basics of how to implement applications with the the SDK that applies to all AWS services. This chapter also includes information about how to migrate code to the latest version of the the SDK, and describes the differences between the last version and this one.
- Programming AWS Services with the AWS SDK for .NET
A set of tutorials, walkthroughs, and examples of how to use the the SDK to create applications for particular AWS services.
- Additional Resources
Additional resources outside of this guide that provide more information about AWS and the the SDK.
Note
A related document, AWS SDK for .NET API Reference, provides a detailed description of each namespace and class.
Supported Services and Revision History
The AWS SDK for .NET supports most AWS infrastructure products, and we regularly release updates to the the SDK to support new services and new service features. To see what changed with a given release, see the the SDK README file.
To see what changed in a given release, see the the SDK change log. go to Test-Driving AWS in the Free Usage Tier.
To obtain an AWS account, go to the AWS home page and click Sign Up Now. | http://docs.aws.amazon.com/sdk-for-net/v2/developer-guide/welcome.html | 2017-07-20T16:27:47 | CC-MAIN-2017-30 | 1500549423269.5 | [] | docs.aws.amazon.com |
Grocery List
Grocery List
Another cool feature added in this update is the Grocery List. For each recipe you can assign woocommerce products that are present in recipes. That way, If a customer wishes to make a recipe and doesn’t have the ingredients, they can buy it from you! Great marketing strategy!
Obtains the number of currently open windows
int sm_wcount(void);
- 1 The number of windows open.
- 0 The base window is the only open screen.
- -1 There is no current screen.
sm_wcount returns the number of windows currently open. The number is equivalent to the number of windows in the window stack, excluding the base window.
Use this function with sm_wselect to activate another window from the window stack. For example, the following statement selects the screen beneath the current window:
sm_wselect(sm_wcount()-1);
See also: sm_wselect
Kubernetes Troubleshooting
Available Documentation
Log Files
Based on the error message that you see in the UI, you could perform basic troubleshooting steps if you have access to both the Kubernetes setup and to the CloudCenter platform:
Failure to Deploy a New Container
If you are unable to deploy a new container, revisit the following steps to ensure that you follow the prescribed process:
See Configure a Kubernetes Cloud for additional details.
Check your clusterrole assignment and ensure that it is set to cluster-admin:
Role binding the service account to the admin is essential to access the dropdown in the Cloud Defaults page.
kubectl create clusterrolebinding <name> --clusterrole=cluster-admin
If the details in the previous bullet did not address the issue, then create a dedicated service account for CloudCenter. The following example walks you through the required steps for this process:
kubectl create serviceaccount cloudcenterSA
kubectl create clusterrolebinding cloudcentersabinding --clusterrole=cluster-admin --serviceaccount=default:cloudcenterSA

# The following commands use jq. If not installed, you can install it using this command: sudo apt-get install jq
kubectl get serviceaccount cloudcenterSA -o json | jq -Mr '.secrets[].name'

# The cloudcenterSA-token-XXXXX name is unique and is gathered from the previous command -- be sure to replace the token in the following command
kubectl get secrets cloudcenterSA-token-XXXXX -o json | jq -Mr '.data.token' | base64 -d
If the above two workarounds did not address the issue, verify the Kubernetes settings for the Default API Version and the API Version Override. (The API version setting is optional and not required.)
To verify the version, access the Kubernetes Region UI > Kubernetes Settings.
If you have configured a specific API version in your environment, try leaving it blank and retrying the deployment.
Insufficient Permission
Issue: Your deployment fails with a forbidden error (networkpolicies.extensions is forbidden) or Code 403 (Received status: Status(apiVersion=v1, code=403)) in the containerblade.log.
Reason: The Service Account is associated with a cluster role that has insufficient permissions, or with a non-existent cluster role.
Solution: Update the Service Account to map to the right cluster role. See Configure a Kubernetes Cloud for additional details.
Incorrect API Version
Issue: Your deployment fails with a Code 400 error (no kind Network Policy is registered for v1 version. Received status: Status(apiVersion=v1, code=400)) in the containerblade.log.
Reason: The API version for the object (Network Policy) in this case is not sent correctly. Either the user specified a wrong version or the CloudCenter platform could not auto-detect the version.
Solution: Try one of the following solutions:
Leave the Default and Override API versions blank – this often corrects the issue.
Alternatively, find the right version by examining an existing object instance in the Kubernetes dashboard or by using the kubectl GET API. In the CloudCenter Kubernetes region settings, set the API Version Override field to the identified version, for example “NetworkPolicy:v1beta1” (see the sample commands below).
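For example, the following kubectl commands (the object and namespace names are placeholders) can help confirm which API versions the cluster serves and which version an existing object reports:

# List every API group/version the cluster serves
kubectl api-versions

# Show the apiVersion reported by an existing NetworkPolicy
kubectl get networkpolicy <policy-name> -n <namespace> -o jsonpath='{.apiVersion}'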
Infrastructure as Code
Kaleido is built to empower modern DevOps practices, allowing you to make your blockchain infrastructure part of your application development and promotion pipeline.
Jump straight to the API docs at api.kaleido.io
Everything you do in the Kaleido console is available on the API, and within Kaleido we use those same APIs and automation tools as part of our own pipeline, making tools we developed internally available to you. Here is a summary to give you insight into how we deliver our enterprise-grade managed platform.
[Figure: Kaleido Continuous Delivery Pipeline (kaleido_pipeline_summary.png)]
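As a rough sketch of what API-first usage looks like (the endpoint path, resource name, and authorization header below are assumptions, not taken from this page; consult api.kaleido.io for the actual API reference):

# List your consortia via the REST API (path and resource are illustrative)
curl -s -H "Authorization: Bearer $KALEIDO_API_KEY" https://console.kaleido.io/api/v1/consortia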
'Kaleido Continuous Delivery Pipeline'], dtype=object)] | docs.kaleido.io |
IEnumerator Interface
Definition
Supports a simple iteration over a non-generic collection.
public interface class IEnumerator
public interface IEnumerator
[System.Runtime.InteropServices.Guid("496B0ABF-CDEE-11d3-88E8-00902754C43A")] public interface IEnumerator
[System.Runtime.InteropServices.Guid("496B0ABF-CDEE-11d3-88E8-00902754C43A")] [System.Runtime.InteropServices.ComVisible(true)] public interface IEnumerator
type IEnumerator = interface
[<System.Runtime.InteropServices.Guid("496B0ABF-CDEE-11d3-88E8-00902754C43A")>] type IEnumerator = interface
[<System.Runtime.InteropServices.Guid("496B0ABF-CDEE-11d3-88E8-00902754C43A")>] [<System.Runtime.InteropServices.ComVisible(true)>] type IEnumerator = interface
Public Interface IEnumerator
- Derived
- Attributes: GuidAttribute, ComVisibleAttribute
Examples
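The code example from the original page is not reproduced here; the following is a minimal, self-contained sketch (type and member names are illustrative, not from the original) of a custom collection whose enumerator implements the MoveNext, Current, and Reset members described in the Remarks below:

using System;
using System.Collections;

// Illustrative custom collection that exposes its items through a non-generic enumerator.
public class DayCollection : IEnumerable
{
    private readonly string[] _days = { "Mon", "Tue", "Wed" };

    public IEnumerator GetEnumerator() => new DayEnumerator(_days);

    private sealed class DayEnumerator : IEnumerator
    {
        private readonly string[] _items;
        private int _position = -1; // positioned before the first element initially

        public DayEnumerator(string[] items) => _items = items;

        // Advances to the next element; returns false once the end of the collection is passed.
        public bool MoveNext() => ++_position < _items.Length;

        // Undefined (here: throws) before the first MoveNext and after the last element.
        public object Current
        {
            get
            {
                if (_position < 0 || _position >= _items.Length)
                    throw new InvalidOperationException();
                return _items[_position];
            }
        }

        // Optional for COM interop; returns the enumerator to its initial position.
        public void Reset() => _position = -1;
    }
}

// Usage: foreach (object day in new DayCollection()) Console.WriteLine(day);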
Remarks
IEnumerator is the base interface for all non-generic enumerators. Its generic equivalent is the System.Collections.Generic.IEnumerator<T> interface.
The Reset method is provided for COM interoperability and does not need to be fully implemented; instead, the implementer can throw a NotSupportedException.
Initially, the enumerator is positioned before the first element in the collection. You must call the MoveNext method to advance the enumerator to the first element of the collection before reading the value of Current; otherwise, Current is undefined. To return to the first element of the collection, you can call Reset, if it's implemented, followed by MoveNext. If Reset is not implemented, you must create a new enumerator instance to return to the first element of the collection.
If changes are made to the collection, such as adding, modifying, or deleting elements, the behavior of the enumerator is undefined.. | https://docs.microsoft.com/en-gb/dotnet/api/system.collections.ienumerator?view=netframework-4.8 | 2020-09-18T18:26:06 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.microsoft.com |
public class SimpleRemoteStatelessSessionProxyFactoryBean extends SimpleRemoteSlsbInvokerInterceptor implements FactoryBean<java.lang.Object>, BeanClassLoaderAware
FactoryBean for remote Stateless Session Bean (SLSB) proxies, based on SimpleRemoteSlsbInvokerInterceptor (which this class extends).
This proxy factory is typically used with an RMI business interface, which serves as super-interface of the EJB component interface. Alternatively, this factory can also proxy a remote SLSB with a matching non-RMI business interface, i.e. an interface that mirrors the EJB business methods but does not declare RemoteExceptions. In the latter case, RemoteExceptions thrown by the EJB stub will automatically get converted to Spring's unchecked RemoteAccessException.
RemoteAccessException,
AbstractSlsbInvokerInterceptor.setLookupHomeOnStartup(boolean),
AbstractSlsbInvokerInterceptor.setCacheHome(boolean),
AbstractRemoteSlsbInvokerInterceptor.setRefreshHomeOnConnectFailure(boolean)
CONTAINER_PREFIX
logger
destroy, doInvoke, getSessionBeanInstance, refreshHome, releaseSessionBeanInstance, setCacheSessionBean
getCreateMethod, invokeInContext, isConnectFailure, isHomeRefreshable, newSessionBeanInstance, refreshAndRetry, removeSessionBeanInstance, setRefreshHomeOnConnectFailure
create, getHome, invoke
SimpleRemoteStatelessSessionProxyFactoryBean()
public void setBusinessInterface(@Nullable java.lang.Class<?> businessInterface)
You can also specify a matching non-RMI business interface, i.e. an interface that mirrors the EJB business methods but does not declare RemoteExceptions. In this case, RemoteExceptions thrown by the EJB stub will automatically get converted to Spring's generic RemoteAccessException.
businessInterface - the business interface of the EJB
@Nullable<java.lang.Object>
FactoryBean.getObject(),
SmartFactoryBean.isPrototype() | https://docs.spring.io/spring-framework/docs/5.1.0.RELEASE/javadoc-api/org/springframework/ejb/access/SimpleRemoteStatelessSessionProxyFactoryBean.html | 2020-09-18T16:16:25 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.spring.io |
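As a rough illustration only (not taken from the Javadoc), such a proxy is commonly wired up in XML bean configuration along these lines; the bean id, JNDI name, and business interface below are placeholders:

<bean id="orderService"
      class="org.springframework.ejb.access.SimpleRemoteStatelessSessionProxyFactoryBean">
  <property name="jndiName" value="ejb/OrderServiceRemote"/>
  <property name="businessInterface" value="com.example.OrderService"/>
  <property name="refreshHomeOnConnectFailure" value="true"/>
</bean>

The resulting bean can then be injected wherever the business interface is expected; if a non-RMI business interface is used, RemoteExceptions thrown by the EJB stub surface as Spring's unchecked RemoteAccessException, as described above.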
To start up the iMETOS you will need a valid GSM contract that allows at least 5 MB of GPRS data per month and that can send and receive SMS messages. This contract has to be activated up front. Please make sure that the PIN code of the SIM card is deactivated; to deactivate it you will need a mobile phone from the same provider.

The iMETOS housing is closed with six M3 screws. To open them you will need a 2.5 mm Allen key, which is part of the delivery. When you open the housing, the lower part will still hang on the upper part because of the short battery connection. Pull the battery out of the top with care and do not use too much force; it may help to knock on the side of the top and pull softly. Open the iMETOS and insert the SIM card. If the system has been shipped by a parcel service, the power will be disconnected, so reconnect the power after inserting the SIM card. Please check that the solar panel is connected as well.

Once the power is connected, press the little black button to initialise an internet connection. You can follow its progress through the blinking code of the iMETOS LED and the modem PCB LED.
If the SIM card has been inserted successfully, your iMETOS registers on the internet and sends data to the web server. To access this data, use the iMETOS web platform; all settings for data upload times, time zone, position and much more can be set up on this web site. To use it you will have to register as a user first.

To close the iMETOS again, insert the battery and fit the top onto the bottom part. When everything fits together, insert the six screws and tighten them carefully.
What's New¶
Last Updated: August 2020
Refer to this article for information about each new release of Tethys Platform.
Release 3.2¶
App Permissions Assigned to Groups¶
App permissions can now be assigned directly to permission Groups in the admin pages.
Adding an app to the permission Group will reveal a dialog to select which of the app permissions to also assign.
Assign users to the permission Group to grant them those permissions within the app.
This is an opt-in feature: set ENABLE_RESTRICTED_APP_ACCESS to True in portal_config.yml to use this feature.
See: Assign App Permission Groups
App Access Permissions¶
Apps now have an access permission that can be used to grant access to specific users or groups.
When a user does not have access to an app, it will be hidden in the apps library.
If a user without permission to access an app enters one of the app URLs, they will see a 404 page.
Configure access to apps by creating a permission Group with access to the App and then assign any number of users to that group.
The access permission is automatically enforced for all views of apps.
This is an opt-in feature: set ENABLE_RESTRICTED_APP_ACCESS to True in portal_config.yml to use this feature (a configuration sketch follows below).
See: Assign App Permission Groups
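A minimal sketch of the opt-in setting (the exact nesting inside portal_config.yml is an assumption here; check the Tethys Portal Configuration documentation for your version):

# portal_config.yml (excerpt)
settings:
  TETHYS_PORTAL_CONFIG:
    ENABLE_RESTRICTED_APP_ACCESS: True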
Custom Home and App Library Styles and Templates¶
Portal admins can now customize the Home or App library pages with custom CSS or HTML.
Two new settings groups in the Site Settings section of the admin pages.
Specify CSS directly into the setting or reference a file in a discoverable Static path.
Specify the path to custom templates in a discoverable Templates path.
See: Customize Portal Theme (Recommended)
Additional Base Templates for Apps¶
There are 9 new base templates for Apps that simplify implementing common layouts.
Specify the desired base template in the extends tag of the app template (see the sketch below).
See: Additional Base Templates
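For instance, an app page might opt into one of the new layouts roughly like this (the base template file name below is a placeholder; the Additional Base Templates page lists the actual names):

{% extends "tethys_apps/app_no_nav.html" %}

{% block app_content %}
  <p>Page content goes here.</p>
{% endblock %}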
New Features for Jobs Table¶
Logs action that displays job logs while job is running.
Monitor job action that can be implemented to display live results of job as it runs.
Jobs can be grouped and filtered by permission Groups in addition to User.
Resubmit job action.
See: Jobs Table
Official Docker Image Improvements¶
Additional options added to run.sh to allow it to be run in different modes (daemon, test, etc.) to facilitate testing.
Adds two new Salt Scripts to make it easier to extend without duplicating steps.
pre_tethys.sls: prepares static and workspace directories in persistent volume location.
a second script runs collectstatic and collectall and syncs configuration files to the persistent volume location.
New documentation for using the Official Docker Image
See: Official Docker Image
Tethys Portal Configuration¶
Fixed inconsistencies with documentation and behavior.
Fixed issues with some of the groups that weren't working.
The way logging settings are specified is more straight-forward now.
See: Tethys Portal Configuration
Install Command¶
New --no-db-sync option for the tethys install command that allows installing the app code without the database sync portion (example below).
This is helpful in contexts where the database is unavailable during installation such as in a Docker build.
See: install command
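A sketch of how this might be used, for example in a Docker build step where the database is not reachable:

# Install the app code only; skip the database sync portion
tethys install --no-db-sync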
Collectstatic Command¶
Behavior of the collectstatic command changed to copy the static directory instead of linking it, to be more consistent with how other static files are handled.
Alleviates a workaround that was necessary in SE Linux environments (the links couldn't be followed).
Old linking behavior is still available via the --link option (see below).
See: manage command
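A sketch of both behaviors (this assumes you invoke collectstatic through the tethys manage wrapper; adjust to how you normally run it):

# New default: copy app static directories into the static root
tethys manage collectstatic

# Opt back into the old symlink behavior
tethys manage collectstatic --link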
Expanded Earth Engine Tutorials¶
Two additional follow-on tutorials to the Earth Engine tutorial.
Part 2 - add additional pages to app, layout with Bootstrap grid system, upload files, add REST API.
Part 3 - prepare app for production deployment and publishing on GitHub, deploy to production server.
See: Google Earth Engine
All New Production Installation Guide¶
Near complete rewrite of the production installation documentation.
Examples shown for both Ubuntu and CentOS.
Expanded from a 1 page document to 25+ documents.
Moved many documents that were in Tethys Portal to configuration section of production installation docs.
All existing documentation was updated.
See: Production Installation Guide
Docs Fixes¶
Added example for SSL firewall configuration.
Various fixes to make THREDDS and GEE tutorials more clear.
Tethys Portal Configuration documentation fixed.
Bug Fixes¶
Fixed bug with scaffolding extensions.
Compatibility changes for Bokeh 2.0.0.
Fixes broken URIs for password reset capability. | http://docs.tethysplatform.org/en/latest/whats_new.html | 2020-09-18T16:24:27 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.tethysplatform.org |