DIN Rail and Wall Mounting Plates
Every DS110x device is shipped with two mounting plates — one for installation on a DIN rail, and one for mounting on a wall.
Both plates are secured onto the device using two supplied screws.
The wall mounting plate can be used to affix the DS110x to a wall in a semi-permanent or permanent manner. The diagram below shows the important dimensions.
Attributes
Attributes are property fields that your Agents can create to provide additional information about your Contacts. They can also be used to segment your Contacts. Custom Attributes are fields that are uniquely relevant to your company and can be used to keep a record of Contact details. Custom Attributes will differ for every company, but some examples include: industry, phone number, product model number, subscription plan, and more.
Adding a Custom Attribute
You'll need to add an Attribute so that the property field appears on all of your Contacts. After that, you'll be able to manually add values to the Attributes on Contacts, or you can set up an Experience step to map the Contact's response to an Attribute's value. To add a Custom Attribute to your Contacts, follow the steps below:
- On the navigation sidebar, select the Contacts section.
- Click on the name of any Contact to see their User Details.
- Click the blue "+ Add new user attribute" button.
- Enter a title for the Custom Attribute. This is the name of the property field that will appear in the User Details.
- The Identifier field defaults to the Attribute title. You can edit it if needed for backend development purposes; otherwise, leave it as the default.
- Select the corresponding Data Type that makes sense for the Attribute's value. The Data Types are as follows:
- Text: The value for this Attribute will be text.
- Number: The value for this Attribute will be a number.
- Date: The value for this Attribute is an exact date. To enter a value, select the date from a calendar popup.
- True/False: You will be able to choose either True or False for this Attribute's value.
- List: This option allows you to have multiple values for this Attribute.
- Once you select a Data Type, a field will appear for you to enter the value that you would like to appear for this specific Contact. If you selected the "List" data type, you'll be able to enter multiple values here by typing each one and pressing the enter/return key between entries.
- Click Save and the Attribute will appear on the User Details of every Contact. The value of the Attribute will also be saved for this specific Contact.
To learn how to add values for this Attribute on other Contacts, follow the steps in the following sections.
Adding an Attribute Value Manually
After you add an Attribute it will appear on every one of your Contacts' details. You can add an Attribute value to a Contact manually by following the steps below.
- On the navigation sidebar, select the Contacts section.
- Click on the name of the Contact that you would like to add the Attribute value to.
- On the Contact's User Details, find the Attribute and click on the blue "Add" button.
- The Attribute fields will appear and you'll be able to add a value for this Contact.
- Click Save and the Attribute value will now show up on the Contact's details.
Mapping Experience Responses to Attributes
Attribute values can also be added to Contacts through their Experiences responses for Free Form Text and Multiple Choice response types. Follow the steps below to set up mapping Experience responses to Attributes.
- On the navigation sidebar, select the Experiences section.
- Open the Experiences Manager by adding a new Experience or editing an existing one.
- Add a step, enter your Bot's message, and select Free Form Text or Multiple Choice as the Response Type.
- A field titled "Map Response to Attribute" will appear and you will be able to select the Attribute that you would like to be connected with this step.
When a Contact responds to this step, their answer will be added as the value to the selected Attribute.
HTTP¶
Scapy supports the sending / receiving of HTTP packets natively.
HTTP 1.X¶
Note
Support for HTTP 1.X was added in 2.4.3, whereas HTTP 2.X was already in 2.4.0.
About HTTP 1.X¶
HTTP 1.X is a text protocol. Those are pretty unusual nowadays (HTTP 2.X is binary), therefore its implementation is very different.
For transmission purposes, HTTP 1.X frames are split into various fragments during the connection, which may or may not have been encoded. This is explained in the HTTP/1.1 specification (RFC 7230).
To summarize, the frames can be split in 3 different ways:
chunks: split in fragments called chunks that are preceded by their length. The end of a frame is marked by an empty chunk
using Content-Length: the header of the HTTP frame announces the total length of the frame
None of the above: the HTTP frame ends when the TCP stream ends / when a TCP push happens.
Moreover, each frame may be additionally compressed, depending on the algorithm specified in the HTTP header:
compress: compressed using LZW
deflate: compressed using ZLIB
br: compressed using Brotli
gzip: compressed using GZIP (DEFLATE with a gzip header)
Let’s have a look at what happens when you perform an HTTPRequest using Scapy’s TCP_client (explained below):
Once the first SYN/ACK is done, the connection is established. Scapy will send the HTTPRequest(), and the host will answer with HTTP fragments. Scapy will ACK each of those, and recompile them using TCPSession, like Wireshark does when it displays the answer frame.
HTTP 1.X in Scapy¶
Let’s list the module’s content:
>>> explore(scapy.layers.http)
Packets contained in scapy.layers.http:
Class       |Name
------------|-------------
HTTP        |HTTP 1
HTTPRequest |HTTP Request
HTTPResponse|HTTP Response
There are two frames available: HTTPRequest and HTTPResponse. The HTTP frame is only used during dissection, as a util to choose between the two.
All common header fields should be supported.
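To see this dispatch in action, here is a minimal sketch (the host name is a placeholder and the exact show() output depends on your Scapy version): building a request or response, serializing it, and re-dissecting the bytes lets HTTP pick the right sub-layer.
load_layer("http")
# Serialize a request and let HTTP() re-dissect the raw bytes
req = HTTP(bytes(HTTP()/HTTPRequest(Host=b'www.example.com')))
req.show()    # dissected as an HTTP Request
# Same thing with a response carrying a small payload
resp = HTTP(bytes(HTTP()/HTTPResponse()/b"<html>hello</html>"))
resp.show()   # dissected as an HTTP Response followed by Raw data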
Default HTTPRequest:
>>> HTTPRequest().show()
###[ HTTP Request ]###
  Method= 'GET'
  Path= '/'
  Http_Version= 'HTTP/1.1'
  A_IM= None
  Accept= None
  Accept_Charset= None
  Accept_Datetime= None
  Accept_Encoding= None
  [...]
Default HTTPResponse:
>>> HTTPResponse().show()
###[ HTTP Response ]###
  Http_Version= 'HTTP/1.1'
  Status_Code= '200'
  Reason_Phrase= 'OK'
  Accept_Patch= None
  Accept_Ranges= None
  [...]
Use Scapy to send/receive HTTP 1.X¶
To handle this decompression, Scapy uses Sessions classes, more specifically the TCPSession class.
You have several ways of using it:
Examples:
TCP_client.tcplink:
Send an HTTPRequest to a web server and write the result to a file:
load_layer("http") req = HTTP()/HTTPRequest( Accept_Encoding=b'gzip, deflate', Cache_Control=b'no-cache', Connection=b'keep-alive', Host=b'', Pragma=b'no-cache' ) a = TCP_client.tcplink(HTTP, "", 80) answser = a.sr1(req) a.close() with open("", "wb") as file: file.write(answser.load)
TCP_client.tcplink makes it feel like it only received one packet, but in reality it was recombined in TCPSession. If you performed a plain sniff(), you would have seen those packets.
This code is implemented in a utility function: http_request(), usable as so:
load_layer("http") http_request("", "/", display=True)
This will open the webpage in your default browser thanks to display=True.
sniff():
Dissect a pcap which contains a JPEG image that was sent over HTTP using chunks.
Note
The http_chunk.pcap.gz file is available in scapy/test/pcaps
load_layer("http") pkts = sniff(offline="http_chunk.pcap.gz", session=TCPSession) # a[29] is the HTTPResponse with open("image.jpg", "wb") as file: file.write(pkts[29].load)
HTTP 2.X¶
The HTTP 2 documentation is available as a Jupyter notebook: HTTP 2 Tutorial.
1 runtime.exec
<runtime.exec>
    <string/> !
</runtime.exec>
Exceptions
requires 1 arguments, received: [...]
An incorrect number of arguments has been specified.
access to [...] forbiden, check security [...]
The file or command is in a prohibited-access directory.
command [...] not found
The command to execute is not found in any of the authorized folders.
Remarks
For security reasons, before using <runtime.exec> you must configure the Axional Studio server node that will execute these commands. In the Axional Studio Nodes maintenance of wic_conf, the Command directories field of the XSQL-Script section must list the directories from which command execution is allowed, for example /bin, /usr/bin, /usr/sbin (for the execution of commands such as ls, mkdir, ...).
You must also indicate which file paths are accessible from the Axional Studio server. In the same Axional Studio Nodes maintenance of wic_conf, the File directory field of the XSQL-Script section must list the directories whose files are allowed to be used.
The different paths are separated by the ',' (comma) character and must respect the notation of the platform: the drive letter in uppercase on Windows systems, and the '/' (forward slash) separator character on Unix/Linux systems.
The standard output of the executed process is returned as the result, and can be captured in a variable for further processing.
To obtain the status of the execution, use the <runtime.status> tag.
List the content of the directory /tmp.
<xsql-script>
    <body>
        <set name="m_result">
            <runtime.exec>
                <string>/bin/ls -l /tmp</string>
            </runtime.exec>
        </set>
        <println><m_result/></println>
    </body>
</xsql-script>
Execute the operating system's dd command to transform a file from EBCDIC format to ASCII.
<xsql-script>
    <body>
        <set name="m_result">
            <runtime.exec>
                <string>/bin/dd conv=ascii,unblock cbs=128 obs=128 if=/home/jas/temp/536500070514204810_00029418.dat of=/home/jas/temp/536500070514204810_00029418.txt</string>
            </runtime.exec>
        </set>
    </body>
</xsql-script>
Subcommittee on Energy (Committee on Science, Space, and Technology)
Thursday, September 27, 2018 (10:00 AM)
2318 RHOB Washington, D.C.
Mr. Edward McGinnis Principal Deputy Assistant Secretary for Nuclear Energy, U.S. Department of Energy
Mr. Harlan Bowers President, X-energy
Dr. John Parsons Co-Chair, MIT Study on the Future of Nuclear Energy in a Carbon-Constrained World
Dr. John Wagner Associate Laboratory Director, Nuclear Science & Technology, Idaho National Laboratory
First Published:
September 20, 2018 at 02:27 PM
Last Updated:
September 27, 2018 at 10:24 AM
Hills Hoist Covers
Can I Find A Cover For My Hills Hoist?
Yes, you can find Hills Hoist Covers here. We have a range of different hoist covers available to choose from in different sizes.
You'll find these products under the Covers Category on the website. We have three cover styles available for purchase, including the Shade Cloth Rotary Cover, the Folding Frame Clothesline Cover and the Rotary Clothesline Cover.
You'll just need to measure your clothesline first and then work out which size will best suit your current clothesline.
If you're not sure about Hills Hoist Covers though, just give us a call on 1300 798 779 and we can help you out, or visit our website.
Questions About Hills Hoist Covers?
To see photos, videos and reviews, please visit the Rotary Clothesline Cover product page here!
The Sales Channel API is deprecated and will be removed with 6.4.0.0. Consider using the Store-API
The SalesChannel-API is part of our API family. It allows access to all sales channel operations, such as creating new customers, customer login and logout, various cart operations and a lot more.
It's ideal if you want to build your own storefront. You could create a mobile app based on the Sales Channel API or just embed it into your existing application to have a solid base for payment and transaction handling. You find more information about the concept behind the Sales Channel here.
The Storefront API has no authentication since it is designed to be a public API. Some user related endpoints require a logged in user.
The access key for the SalesChannel-API can be found in the administration.
The access key must always be included in your API request. The custom header
sw-access-key is used for this purpose.
The Storefront API supports a simple JSON formatted response similar to the Shopware 5 API.
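For illustration, here is a hedged sketch of calling the API with Python's requests library. The host, access key, and endpoint path are placeholders, and the exact route version depends on your Shopware release — check your installation for the routes it exposes.
import requests

BASE_URL = "https://shop.example.com"   # placeholder shop URL
ACCESS_KEY = "SWSC..."                  # sales channel access key from the administration

# The access key is passed in the custom sw-access-key header
headers = {"sw-access-key": ACCESS_KEY}
response = requests.get(f"{BASE_URL}/sales-channel-api/v1/product", headers=headers)
response.raise_for_status()
print(response.json())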
SMTPSecureSocket.ServerError
From Xojo Documentation
Event
SMTPSecureSocket.ServerError(ErrorID as Integer, ErrorMessage as String, Email as EmailMessage)
Supported for all project types and targets.
Executes when an error occurred while sending a message.
Notes
ErrorID and ErrorMessage are sent by the mail server and Email is the message that was being sent when the error occurred. Email is removed from the queue at the time the event executes.
Subcommittee on Health (Committee on Energy and Commerce)
Thursday, September 27, 2018 (10:00 AM)
2123 RHOB Washington, D.C.
The Honorable Jaime Herrera Beutler Member, U.S. House of Representatives
Dr. Lynne Coslett-Charlton Pennsylvania District Legislative Chair, The American College of Obstetricians and Gynecologists
Dr. Joia Crear Perry Founder and President, National Birth Equity Collaborative
Mr. Charles Johnson Founder, 4Kira4Moms
Ms. Stacey Stewart President, March of Dimes
First Published:
September 20, 2018 at 02:46 PM
Last Updated:
November 2, 2018 at 10:42 AM
Custom map styles in Unity
Familiarity with Unity.
Mapbox allows you to create completely custom maps that can be used across platforms. This tutorial will walk you through how to add a Mapbox designer map to your account as a custom style and use that custom style with the Mapbox Maps SDK for Unity.
Getting started
Here are some resources you'll need to get started:
- Mapbox account and access token. Sign up for an account at mapbox.com/signup. You can find your access tokens on your Account page.
- A Unity scene including the Mapbox Maps SDK for Unity. You can follow our tutorial, Mesh generation with the Maps SDK for Unity Part 1, to add a map to a scene in Unity.
- Style URL. You will use the style URL associated with your custom style to add the style to your Unity project.
A map style is a document that defines the visual appearance of a map. The style document states which data sources to use and creates style layers that specify how that data should be styled. With the Mapbox Maps SDK for Unity, you can use one of our core Mapbox styles (like Mapbox Streets, Mapbox Outdoors, and Mapbox Satellite) or a custom map style from your Mapbox account.
You can create a custom style in the Mapbox Studio style editor. For more information on creating custom styles, explore the following resources:
- Our Create a custom style tutorial.
- The Styles section of the Mapbox Studio manual.
- Our Map design guide.
You can also add one of our designer map styles to your account and use it in your Unity application. In this tutorial, you'll use the Whaam! designer map style. You can browse all designer maps and add them to your account.
Add a designer style to your account
Start by signing into your Mapbox account. Once you are signed in, visit the designer map page to add a new style to your account:
- Visit mapbox.com/designer-maps.
- Find the Whaam! style and click Add this style.
- The style will automatically be added to your Mapbox account.
When the style is added to your account, it will appear on the Styles page in Mapbox Studio. From here, you can find the style URL for the style. You'll use the style URL to add this style to your Unity application:
- Find the style on your Styles page.
- Click on the Menu next to the style name to uncover options for altering and using that style.
- Find the Style URL. You can use the clipboard icon to copy the style URL, which you'll paste into your Unity project in a later step.
Add a custom style in Unity
Next, you'll add this style in Unity using the style URL.
Set up a map in Unity
First, you'll need to set up a new scene displaying a map. You can follow our tutorial for how to add a map in Unity. After completing the tutorial, you should see a map using a default Mapbox style within the Unity interface.
Change the map style in Unity
Next, change the style of the map. In Unity, open the inspector window of your Map object. Look for the IMAGES section of the Abstract Map component. Change the Style Name to Custom. Copy and paste the style URL from the Mapbox Studio Styles page into the Map Id field.
Final product
When you run your scene with the new style, you'll see how your map has changed.
Next steps
There are many other ways for you to customize your Unity application. Explore the following resources to continue building your application:
- Create your own custom map style in the Mapbox Studio style editor following the Create a custom style tutorial.
- Work through our Unity tutorial series to learn how to create 3D buildings and style those building in a Unity application.
- Try out other designer styles to find the right one for your project.
Access the OpenShift Container Platform dashboard, which captures high-level information about the cluster, by clicking Home > Dashboards >
Virtual machines (available if container-native virtualization is installed)
Bare metal hosts in the cluster, listed according to their state (only available in metal3 environment).
Cluster Health summarizes the current health of the cluster as a whole, including relevant alerts and descriptions. If container-native virtualization is installed, the overall health of container-native virtualization is diagnosed as well. If more than one subsystem is present, click See All to view the status of each subsystem.
API Reference¶
- class mailmanclient.Client(baseurl, name=None, password=None)[source]¶
Access the Mailman REST API root.
create_domain(mail_host, base_url=<object object>, description=None, owner=None, alias_domain=None)[source]¶
Create a new Domain.
find_lists(subscriber, role=None, count=50, page=1, mail_host=None)[source]¶
Given a subscriber and a role, return all the lists they are subscribed to with the given role.
If no role is specified all the related mailing lists are returned without duplicates, even though there can potentially be multiple memberships of a user in a single mailing list.
get_list_page(count=50, page=1, advertised=None, mail_host=None)[source]¶
Get a list of all MailingList with pagination.
get_member(fqdn_listname, subscriber_address)[source]¶
Get the Member object for a given MailingList and Subscriber's Email Address.
get_nonmember(fqdn_listname, nonmember_address)[source]¶
Get the Member object for a given MailingList and Non-member’s Email.
set_template(template_name, url, username=None, password=None)[source]¶
Set template in site-context.
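A hedged usage sketch tying these Client methods together — the REST URL, credentials, domain, and addresses below are placeholders for whatever is configured in your Mailman Core instance:
from mailmanclient import Client

# Connect to Mailman Core's REST API (placeholder URL and credentials)
client = Client('http://localhost:8001/3.1', 'restadmin', 'restpass')

# Create a domain and a mailing list under it
domain = client.create_domain('example.com')
mlist = domain.create_list('announce')

# Find the lists an address owns (see find_lists above)
owned = client.find_lists('anne@example.com', role='owner')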
- class mailmanclient.Domain(connection, url, data=None)[source]¶
- class mailmanclient.MailingList(connection, url, data=None)[source]¶
get_requests(token_owner=None)[source]¶
Return a list of dicts with subscription requests.
This is the new API for requests which allows filtering via token_owner since it isn’t possible to do so via the property requests.
get_requests_count(token_owner=None)[source]¶
Return a total count of pending subscription requests.
This should be a faster query when all the requests aren’t needed and only a count is needed to display on the badge in List’s settings page.
is_member(address)[source]¶
Given an address, checks if the given address is subscribed to this mailing list.
is_moderator(address)[source]¶
Given an address, checks if the given address is a moderator of this mailing list.
is_owner(address)[source]¶
Given an address, checks if the given address is an owner of this mailing list.
is_owner_or_mod(address)[source]¶
Given an address, checks if the given address is either a owner or a moderator of this list.
It is possible for them to be both owner and moderator.
mass_unsubscribe(email_list)[source]¶
Unsubscribe a list of emails from a mailing list.
This function returns a JSON mapping of emails to booleans based on whether they were unsubscribed or not, for whatever reason.
requests¶
See get_requests().
subscribe(address, display_name=None, pre_verified=False, pre_confirmed=False, pre_approved=False)[source]¶
Subscribe an email address to a mailing list.
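For example, continuing with the client from the sketch above, subscribing an address and checking membership might look like this (addresses are placeholders; the pre_* flags skip verification, confirmation, and moderator approval, so use them only where that is appropriate):
mlist = client.get_list('announce@example.com')
member = mlist.subscribe('bart@example.com',
                         display_name='Bart',
                         pre_verified=True,
                         pre_confirmed=True,
                         pre_approved=True)

if mlist.is_member('bart@example.com'):
    print('subscribed')

# Remove several addresses at once; returns a mapping of address -> result
mlist.mass_unsubscribe(['bart@example.com'])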
- class mailmanclient.ListArchivers(connection, url, mlist)[source]¶
Represents the activation status for each site-wide available archiver for a given list.
- class mailmanclient.Bans(connection, url, data=None, mlist=None)[source]¶
The list of banned addresses from a mailing-list or from the whole site.
- class mailmanclient.HeaderMatches(connection, url, mlist)[source]¶
The list of header matches for a mailing-list.
- class mailmanclient.Member(connection, url, data=None)[source]¶
- class mailmanclient.User(connection, url, data=None)[source]¶
add_address(email, absorb_existing=False)[source]¶
Adds another email address to the user record and returns an _Address object.
- class mailmanclient.Addresses(connection, url, data=None)[source]¶
- class mailmanclient.Address(connection, url, data=None)[source]¶
- class mailmanclient.HeldMessage(connection, url, data=None)[source]¶
Uninstalling encryption security
These instructions assume that you have installed or upgraded to BMC Remedy Action Request System.
Note
By design, if more than one BMC Remedy AR System component was encrypted using the BMC Remedy Encryption Security installer, and you attempt to uninstall encryption from only one component, the encryption uninstaller will be available to uninstall the remaining components.
To uninstall encryption on Microsoft Windows
Note
You cannot uninstall standard security, because it is built into the BMC Remedy AR System application programming interface (API). To disable it, see Configuring the data key in the BMC Remedy AR System documentation.
- Shut down any BMC Remedy AR System processes that are running.
- From the Start menu, select Settings > Control Panel.
- Double-click the Add or Remove Programs icon.
- In the Add or Remove Programs dialog box, select the appropriate encryption product:
- BMC Remedy Encryption Performance Security
- BMC Remedy Encryption Premium Security
- Click Change/Remove.
In the uninstaller Welcome screen, click Next.
Note
At any time during setup, you can click Cancel to exit the uninstaller. However, your settings up to that point in the uninstallation process are not saved.
- In the Select Features screen, select the encryption product to uninstall.
- Click Next.
- In the Uninstallation Preview screen, review the information and perform one of these tasks:
- To change the uninstallation setup, click the Previous buttons and return to the screens that require editing.
- To start the uninstallation, click Uninstall .
- When the uninstallation is finished:
- (Optional) Click View Log to review the uninstall log file.
Click Done to exit the wizard.
- To remove the encryption libraries for a third-party or user-developed application, delete the arencrypt75.dll file from the folder that contains the application's arapi75.dll file.
- Restart the AR System server.
To uninstall encryption on UNIX
Note
You cannot uninstall standard security, because it is built into the BMC Remedy AR System API. To disable it, see Configuring the data key in the BMC Remedy AR System documentation.
- Shut down any BMC Remedy Action Request (AR) System processes that are running.
- Go to <RemedyEncryptionInstallDir>/uninstallRemedyEncryption.
- Run ./uninstall.bin.
Follow the wizard's prompts.
Note
By default, Remedy components such as AR and Mid Tier are marked for uninstallation. You can either proceed with the default selections or opt to remove only Remedy Encryption Security.
Module Name: Governance Module
Type/Category: Governance —> Chief.sol, Pause.sol, Spell.sol
The Governance Module contains the contracts that facilitate MKR voting, proposal execution, and voting security of the Maker Protocol.
The Governance Module has 3 core components, consisting of the Chief, Pause, and Spell contracts.
Chief - The DS-Chief smart contract provides a method to elect a "chief" contract via an approval voting system. This may be combined with another contract, such as DSAuthority, to elect a ruleset for a smart contract system.
Pause - The ds-pause is a delegatecall based proxy with an enforced delay. This allows authorized users to schedule function calls that can only be executed once a predetermined waiting period has elapsed. The configurable delay attribute sets the minimum wait time that will be used during the governance of the system.
Spell - A DS-Spell is an un-owned object that performs one action or series of atomic actions (multiple transactions) one time only. This can be thought of as a one-off DSProxy with no owner (no DSAuth mixing; it is not a DSThing).
Chief
In general, when we refer to the "chief", it can be both addresses or people that represent contracts. Thus, ds-chief can work well as a method for selecting code for execution just as well as it can for realizing political processes.
IOU Token: The purpose of the IOU token is to allow for the chaining of governance contracts. In other words, this allows you to have a number of DSChief, DSPrism, or other similar contracts use the same governance token by means of accepting the IOU token of the DSChief contract before it is a governance token.
Pause
Identity & Trust: In order to protect the internal storage of the pause from malicious writes during plan execution, a delegatecall operation is performed in a separate contract with an isolated storage context (DSPauseProxy), where each pause has its own individual proxy. This means that plans are executed with the identity of the proxy. Thus, when integrating the pause into some auth scheme, you will want to trust the pause's proxy and not the pause itself.
Spell
The spell is only marked as "done" if the CALL it makes succeeds, meaning it did not end in an exceptional condition and it did not revert. Conversely, contracts that use return values instead of exceptions to signal errors could be successfully called without having the effect you might desire. "Approving" spells to take action on a system after the spell is deployed generally requires the system to use exception-based error handling to avoid griefing.
Chief
MKR users moving their votes from one spell to another: One of the biggest potential failure modes occurs when people are moving their votes from one spell to another. This opens up a gap/period of time when only a small amount of MKR is needed to lift a random hat.
Spell
The main failure mode of the spell arises when there is an instance of the spell remaining uncast when it has an amount of MKR voting for it that later becomes a target.
New-NAVCompany
Syntax
New-NAVCompany [-Tenant <TenantId>] [-CompanyName] <String> [-EvaluationCompany] [[-CompanyDisplayName] <String>] [-ServerInstance] <String> [-Force] [-WhatIf] [-Confirm] [<CommonParameters>]
Description
Use the New-NAVCompany cmdlet to create a new company in a Business Central database. The company that the New-NAVCompany cmdlet creates is empty. To create a company that includes the data from an existing company, use the Copy-NAVCompany cmdlet.
Examples
EXAMPLE 1
New-NAVCompany -ServerInstance BC -Tenant CRONUS -CompanyName 'CRONUS Subsidiary'
This example creates the company CRONUS Subsidiary in the CRONUS tenant database, which is mounted against the BC server instance.
Parameters
-CompanyDisplayName: Specifies a name that can be displayed for the company in the application (UI) instead of the name specified by the -CompanyName parameter.
-CompanyName: Specifies the name of the company that you want to create. If a company with that name already exists in the Business Central database, the cmdlet fails.
-Confirm: Prompts you for confirmation before executing the command.
-EvaluationCompany: Specifies whether the company that you want to create is an evaluation company. This parameter is only relevant for Business Central online; it does not apply to on-premise deployments.
-Force: Forces the command to run without asking for user confirmation.
-Tenant: Specifies the ID of the tenant that the company must be created in, such as Tenant1. This parameter is required unless the specified service instance is not configured to run multiple tenants.
-WhatIf: Describes what would happen if you executed the command without actually executing the command.
Notes
Because cmdlets do not execute application code, if there is any logic on application objects that are associated with creating or modifying companies from the client, be aware that the logic will not be executed when you run the cmdlet.
Modality Teams Usage
Thank you for installing the Modality Teams Usage Power BI app.
This application reports on information from the Modality Teamwork Analytics data engine. It comes pre-installed with demo data, allowing you to navigate the app and explore how the app visualises Microsoft Teams information to monitor and improve user adoption of Microsoft Teams. Clicking on the app will take you to the Summary page; try changing the Country dropdown to United Kingdom. All other visuals on the page will change to show only information about Teams activity in the United Kingdom.
Visual Filtering
You can also click on visuals to set filters. Try clicking the Top Departments by Messages in Last 30 Days ring to only show Sales (UK). The other visuals on the page will refresh to only show information related to that specific department.
Drillthrough
Some pages support Drillthrough, enabling you to move between different reports to gain more detail. Start with the Interactions page. Choose a country from the top-left visual, for instance Australia. Let’s say you are interested in knowing more about the message usage for Australia. Right-click on the Australia slice and choose Drillthrough > Month on Month Graph. This shows more detail about usage in Australia.
Insights
When looking at graphs, you can also use insights to attempt to explain changes. Right-click on a point in a graph and choose Analyze > Explain.
Questions about the report content?
You’ve now deployed the reports, but have questions about the content. Follow the link below for details about the deployed reports.
Usage Report Guidance
Team Champions
Team Champions focuses on Users ranked by an Activity score. By default, the Top Champion per Department is listed but it’s possible to see the Top 1, 2, 3, etc Team Champions per Department by using the Top X Champions drop down.
Page Visuals
1. Team Champions
Team Champion details including Department, Country, Location and Team activity contributing to their score
2. Top X Champions
Dropdown list for selecting which Team Champion level to display
Draft: PureEngage
How Things Work in PureEngage Cloud
Find all How It Works and Getting Started articles for the various PureEngage Cloud applications and features.
Release Notes
See all PureEngage Release Notes.
PureEngage Cloud
Find content for Genesys PureEngage Cloud agents, supervisors, and administrators.
PureEngage On-Premises
Find all information for all users, administrators, and installers of PureEngage On-Premises.
PureEngage Cloud Developers
Learn about the Developer role in PureEngage. If you're interested in all Genesys services and APIs, check out Developer Resources.
Provisioning PureEngage Hybrid Integrations
Learn the essential provisioning steps to enable a hybrid integration between PureEngage On-Prem deployments and Genesys PureCloud services.
Sending and Scheduling Push Notifications
This guide explains how to send and schedule push notifications in the Appcelerator Dashboard.
Sending push notifications
To send a push notification, you must provide the Dashboard with the following information:
- Notification recipients and channel – If your application users are.
Note: A warning will be displayed if you attempt to perform a push with an expired or disabled iOS certificate.
To send push notifications from the Dashboard:
- In the Dashboard, select your application from the Dashboard home page Projects tab.
- From the left-side navigation, select Push Notifications.
- Select Send to open the Send Push Notification form.
- Select either.
The notification payload is JSON. A payload using top-level fields:
{
  "alert": "Sample alert",
  "badge": "+2",
  "category": "sampleCategory",
  "icon": "little_star",
  "sound": "door_bell",
  "title": "Example",
  "vibrate": true,
  "custom_field_1": "Arrow Push Rocks!",
  "custom_field_2": "Hi Push"
}
A payload with an android block:
{
  "android": {
    "title": "Example",
    "alert": "Sample alert",
    "icon": "little_star",
    "badge": "+2",
    "sound": "door_bell",
    "vibrate": true
  },
  "category": "sampleCategory",
  "custom_field_1": "Arrow Push Services Rocks!",
  "custom_field_2": "Hi Push"
}
A payload with an aps block (iOS):
{
  "aps": {
    "alert": "Sample alert",
    "badge": "+2",
    "category": "sampleCategory",
    "sound": "door_bell"
  },
  "title": "Example",
  "icon": "little_star",
  "vibrate": true,
  "custom_field_1": "Arrow Push Rocks!",
  "custom_field_2": "Hi Push"
}
Notification Features
Rich Notifications (iOS 10 and later)
Since Titanium SDK 7.3.0, you can create rich notifications for users running iOS 10 or later. Rich notifications can include additional meta-data like a subtitle, location-based triggers, and attachments.
While most of the new properties can be configured in existing UserNotificationAction instances, there is one special case to remember when working with rich notifications: if you want to display an attachment, you have to distinguish between local and remote images:
- Local images: Can be specified when scheduling a local notification from your application, for example using the attachments property inside the creation dictionary of the notification.
- Remote images: Can be specified when scheduling a remote notification using an UNNotificationServiceExtension. App extensions in Titanium can be written in both Objective-C and Swift. Learn more about them here.
Remote attachments example:
{
  "aps": {
    "alert": {
      "title": "Weather Update",
      "body": "The weather out here is getting serious, remember to bring an umbrella!"
    },
    "mutable-content": 1
  },
  "attachment-url": "",
  "attachment-name": "example.gif"
}
Important: Make sure to include the mutable-content flag in your JSON payload, which is used to trigger the notification extension. Also, the attachment-url is downloaded and persisted in your local filesystem using the attachment-name key. The developer is responsible for structuring the extension and the way it deals with remote content. See our example Swift extension that can be used as part of the App Extensions guide.
In addition to that, iOS 10 also introduces a NotificationCenter API that is made available in Titanium via the Ti.App.iOS.UserNotificationCenter API. It represents a powerful binding to manage notifications by being able to change or cancel notifications that are currently pending.
While most of its APIs are made for iOS 10 and later, the changes have been made in a way to be backward compatible with iOS 8, so you don't need to call multiple methods to manage your push notifications.
Some useful links to get started:
- Apple: WWDC 2017: Rich Notifications
- Apple: Local and Remote Notifications
- Titanium: iOS Push Notifications Sample App
- Titanium: App Extensions Guide
Interactive Notifications (iOS 8 and later)
You can create interactive notifications that users running iOS 8 or later can respond to without launching the application to the foreground. Your Titanium application defines one or more notification categories, each of which consists of one or more notification actions. When you create a push notification in the Dashboard, the Category form field lets you specify the category of interactive notification to display when the push notification arrives.
To create an interactive notification:
- In your Titanium application:
  - Create and configure notification actions.
  - Create notification categories and assign notification actions to them.
  - Register the application for the desired notification categories, and to receive push notifications.
  - Register an event listener for the remotenotificationaction event, to respond to user actions when they interact with the notification.
- In the Dashboard, send a new push notification and set the Category field to the desired notification category.
When the notification arrives, the device displays the set of actions defined by the category. The remotenotificationaction event fires when the user interacts with the notification.
In addition, you can set the behavior property of the Ti.App.iOS.NotificationAction to Ti.App.iOS.USER_NOTIFICATION_BEHAVIOR_TEXTINPUT, which will show a text field that can be used to respond to actions without opening the app.
Silent Push Notifications
The Content-Available form field lets you silently notify a Titanium or native iOS/Android application, without alerting the user at all. A silent push is often used to alert the application that new content is available to download. Once the download (or another task) initiated by the silent push is complete, the application can display a notification to the user that new content is available. For detailed steps on enabling silent push notifications in your Titanium application, see Silent Push in the Titanium SDK guides.
Notification Badges
A badge is a number displayed on the application icon (on iOS), or in the notification area (on Android). You can specify a specific badge value to display (2 or 10, for example), or a number prefixed by a plus (+) or minus (-) symbol (+3 or -6, for example). When prefixed, the currently displayed badge number is incremented or decremented by the specified amount, respectively. To remove an application badge on iOS, specify a badge value of 0 (zero).
Notification Sounds
The sound field in a notification payload specifies the name (minus the extension) of a local sound file resource to play when the notification arrives. When a push notification arrives, you can specify a custom sound to play, the default system sound, or no sound.
- For Android applications built with Titanium, place the file in the /Resources/sound directory.
- For iOS applications built with Titanium, place the file in the /Resources directory.
- For native Android applications, place the file in the /assets/sound directory.
- For native iOS applications, place the file in the main bundle.
Android-specific payload fields
In addition to the standard notification fields (alert, badge, and sound), Android devices support the following fields:
- title
- icon
- vibrate
The Titanium application may also specify any of the properties in Titanium.Android.Notification, except for contentIntent or deleteIntent. For instance, you can add a tickerText field to the notification payload that scrolls the specified text across the notification area.
Title field
Title – A string to display above the alert message in the notification area. If not specified in the payload, the application's name is displayed, as specified by the <name> element in your project's tiapp.xml file.
Icon field
The icon payload field specifies an image to display with the notification on Android devices. (For image specifications, see Icons and Splash Screens: Notification Icons.) Its value is the name of a local image file, minus the extension of the icon to display. The file must be placed in your project's /res/drawable folder for native Android applications or the /Resources folder for Titanium applications. By default, the application's icon is displayed with the notification.
{
  "alert": "You're a star!",
  "icon": "little_star"
}
Vibrate field
A Boolean that specifies whether the device should vibrate when the notification arrives.
Troubleshooting common errors
This section lists errors that may occur when sending push notifications.
'Subscription not found' error
For geo-based push notifications, this error can also indicate that no devices were found in the selected geographic area. Try the following:
- Make sure your application is sending its current location to API Builder. See Updating Subscriptions with Device Location. Devices must report their location to API Builder to enable geo-based push.
- Try using a larger Radius value to encompass a larger geographic area.
Installing additional language packs
During the BMC Atrium Core installation, the English language pack is installed by default. You can choose to install more language packs after you install BMC Atrium Core.
In an AR System server group environment, install the language packs only on the primary server.
To install additional language packs
- Start the BMC Atrium Core installer.
The goal of the Progress OpenEdge RDBMS Database Detail pattern is to obtain the list of Databases being managed by Progress OpenEdge RDBMS.
The information is then stored within the Atrium Discovery Model as DatabaseDetail Nodes. In Atrium Discovery 8.3 and later, the DatabaseDetail node has additional attributes added which permit easy mapping by the CMDB sync mechanism to the BMC_Database CIs in Atrium CMDB.
UNIX platform:
The pattern obtains the list of databases running on the server by running the /usr/sysop/dbbin/prodbchk command, after which the following regular expression is used to retrieve the database names:
db/(.*)\.db
Windows platform:
The pattern obtains the list of databases from the configuration file specified in the "-properties" argument of the "_mprosrv" process. Names are stored as [database.<db_name>] keys.
A DatabaseDetail node of type "Progress OpenEdge Database" is created for all the databases. The key of the node is a combination of the Progress OpenEdge RDBMS SI key, the DatabaseDetail type, and the Database name. A containment relationship is created between the list of databases and the Progress OpenEdge RDBMS SI. This ensures that for every run the DatabaseDetail nodes which are absent since the previous run are deleted.
Where two or more installations exist on a single host, the exact same prodbchk command will be executed and the exact same set of databases returned. Any assistance you can provide to resolve this matter will be greatly appreciated.
Logon duration is measured from the time a user connects from Citrix Workspace app to the time when the desktop is ready to use. Logon duration measurements are based on the default Windows shell (explorer.exe) and not on custom shells.
- Logon duration for Remote PC Access is available only when Citrix User Profile Manager and the Citrix User Profile Manager WMI Plugin are installed as additional components during Remote PC installation. For more information, see Step 4 in Remote PC Access configuration sequence.
Steps to troubleshoot user logon issues
For more information, see the Microsoft TechNet article Configuring the Event Logs.
Logon scripts
If logon scripts are configured for the session, this is the time taken for the logon scripts to be executed. If the user has a large profile, an increased duration is displayed by the Profile Load bar.
- Certain hidden files in the AppData folder are not included in Profile drilldown.
- Number of files and profile size data may not match with the data in the Personalization panel due to certain Windows limitations.. upgrade, no error message is displayed.. | https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/director/troubleshoot-deployments/user-issues/user-logon.html | 2020-05-25T04:04:57 | CC-MAIN-2020-24 | 1590347387219.0 | [array(['/en-us/citrix-virtual-apps-desktops/media/Profile_drilldown_1.png',
'Profile Drilldown'], dtype=object)
array(['/en-us/citrix-virtual-apps-desktops/media/Profile_drilldown_2.png',
'Detailed Drilldown'], dtype=object) ] | docs.citrix.com |
Removing a cluster node
When a node is removed from the cluster, the cluster configurations are cleared from the node (by internally executing the clear ns config -extended command). The SNIP addresses, MTU settings of the backplane interface, and all VLAN configurations (except the default VLAN and NSVLAN) are also cleared from the appliance.
Note
- If the deleted node was the cluster configuration coordinator, another node is automatically selected as the cluster configuration coordinator, and the cluster IP address is assigned to that node. All the current cluster IP address sessions will be invalid and you will have to start a new session.
- To delete the whole cluster, you must remove each node individually. When you remove the last node, the cluster IP address(es) are deleted.
- When an active node is removed, the traffic serving capability of the cluster is reduced by one node. Existing connections on this node are terminated.
To remove a cluster node by using the command line interface
For NetScaler 10.1 and later versions
Logon to the cluster IP address and at the command prompt, type:
rm cluster node <nodeId>
Note
If the cluster IP address is unreachable from the node, execute the rm cluster instance command on the NSIP address of that node itself.
For NetScaler 10
Log on to the node that you want to remove from the cluster and remove the reference to the cluster instance.
rm cluster instance <clId>
save ns config
Log on to the cluster IP address and remove the node from which you removed the cluster instance.
rm cluster node <nodeId>
save ns config
Make sure you do not run the rm cluster node command from the local node, as this results in inconsistent configurations between the configuration coordinator and the node.
To remove a cluster node by using the configuration utility
On the cluster IP address, navigate to System > Cluster > Nodes, select the node you want to remove and click Remove.
Ownership Tab
FlexNet Manager Suite 2019 R1 (On-Premises Edition)
The Ownership tab displays the details about the ownership of this VDI template by enterprise groups, including up to one each from location, corporate unit, and cost center. This tab also displays the assigned user for the template.
For more information about using lists, filters, and other UI options, see the topics under Using Lists in FlexNet Manager Suite. The following table displays the ownership and user properties in an alphabetical order:
For more information about using lists, filters, and other UI options, see the topics under Using Lists in FlexNet Manager Suite. The following table displays the ownership and user properties in alphabetical order:
Remember: All properties of a remote device are read only.
Responsibilities Tab
The Responsibilities tab helps you keep track of the players involved with the contract, perhaps during its development and negotiation and perhaps during its fulfillment. Managers, approval gatekeepers, and other interested parties can all be listed.
Responsibilities can be assigned to any person whose details are stored in the list of users in the system. User information is imported during the inventory import, or records can be created manually (navigate to Enterprise > Create a User). This means that, should you wish to use the Responsibilities tab to track involvement for people from outside your company (other parties to the contract), you will first need to manually create user records to represent them.
- Choose users to identify with this contract, to whom you will assign responsibilities
- Assign a responsibility to each linked user
- Add a brief comment about your assignment
- Navigate to review the user's recorded details
- Remove the link between a user and this contract, so that the user no longer bears any responsibilities in relation to it.
The list of responsibilities is directly editable, and has the following columns available:
Utility functions and includes for autopilots. More...
#include "std.h"
#include "subsystems/commands.h"
Go to the source code of this file.
Utility functions and includes for autopilots.
Definition in file autopilot_utils.h.
Set descent speed in failsafe mode.
Definition at line 37 of file autopilot_utils.h.
Referenced by autopilot_static_set_mode().
Definition at line 64 of file autopilot_utils.h.
Referenced by autopilot_static_periodic().
Display descent speed in failsafe mode if needed.
Definition at line 41 of file autopilot_utils.c.
get autopilot mode as set by RADIO_MODE 3-way switch
Definition at line 90 of file autopilot_utils.c.
Set Rotorcraft commands.
Limit thrust and/or yaw depending on the in_flight and motors_on flag status.
A default implementation is provided, but the function can be redefined.
Set Rotorcraft commands.
RADIO_MODE switch just selects between MANUAL and AUTO. If not MANUAL, the RADIO_AUTO_MODE switch selects between AUTO1 and AUTO2.
This is mainly a kludge for entry-level radios with no three-way switch, but two available two-way switches which can be used.
Set Rotorcraft commands. Limit thrust and/or yaw depending on the in_flight and motors_on flag status.
Definition at line 135 of file autopilot_utils.c.
Observer nodes.
Caveats
- Nodes which act as both observers and direct participants in the ledger are not supported at this time. In particular, coin selection may return states which you do not have the private keys to be able to sign for. Future versions of Corda may address this issue, but for now, if you wish to both participate in the ledger and also observe transactions that you can’t sign for, you will need to run two nodes and have two separate identities.
- Nodes only record each transaction once. If a node has already recorded a transaction in non-observer mode, it cannot later re-record the same transaction as an observer. This issue is tracked in the Corda issue tracker.
Best practices from the field: Build dynamic, lean, and universal packages for Microsoft 365 Apps
Important
We’re making some changes to the update channels for Microsoft 365 Apps, including adding a new update channel (Monthly Enterprise Channel) and changing the names of the existing update channels. To learn more, read this article.
Note
This article was written by Microsoft experts in the field who work with enterprise customers to deploy Office.
As an admin, you might have to deploy Microsoft 365 Apps (previously named Office 365 Business or Office 365 ProPlus) in your organization. But such a deployment is more than just Office: After the initial migration to Microsoft 365 Apps, you might have to provide ways for your users to automatically install additional language packs, proofing tools, products like Visio and Project, or other components. We often refer to these scenarios as 2nd installs, while the initial upgrade to Microsoft 365 Apps from a legacy Office is called 1st install.
This article shows you how to build dynamic, lean, and universal packages for Microsoft 365 Apps. This method can greatly reduce long-term maintenance costs and effort in managed environments.
The challenge
When you plan your upgrade to Microsoft 365 Apps, the actual upgrade from a legacy version to the always-current Microsoft 365 Apps is front and center (1st install scenario). But looking beyond the initial deployment, there are other scenarios you’ll need to cover as an admin (2nd install). Sometimes, after you upgrade your users, they might need any of the following components:
- Additional language packs
- Proofing tools
- Visio
- Project
Historically, each of these scenarios was addressed by creating a dedicated installation package for automatic, controlled installation for users. Usually, an admin would combine the necessary source files (of ~2.5 gigabytes) and a copy of the Office Deployment Tool (ODT) together with a configuration file into a package for each of these components.
But, especially in larger organizations, you often don't have a single configuration set of Microsoft 365 Apps. You might have a mix of update channels (often SAC and SAC-T). And maybe you're currently transitioning from 32-bit to 64-bit, and maybe you'll have to support both architectures for quite some time.
So in the end, you wouldn't have 1 package per component but 4, covering each possible permutation of SAC/SAC-T and x86/x64. The end result would be:
- A large number of packages. The 4 listed components would result in 16 or more packages.
- High-bandwidth consumption, as a client might get the full 2.5-GB package pushed down before installation.
- High maintenance costs to keep embedded source files current.
- High user impact, if you haven’t kept the source files current and installing a component will perform a downgrade just to perform an update to the current version soon after.
- Low satisfaction for users who have to pick the matching package from many options presented in the software portal.
While the initial upgrade to Microsoft 365 Apps is a one-time activity, the scenarios described previously will be applicable over a longer period. Users might need additional components days, weeks, or even years after the initial deployment.
So, how do you build packages that are less costly to maintain over a long time frame and avoid the downsides?
The solution: Dynamic, lean, and universal packages
You can resolve these issues by implementing self-adjusting, small, and universal packages. Let's cover the basic concepts before we dive into sample scenarios.
Build dynamic packages where you don’t hard-code anything. Use features of the Office Deployment Tool (ODT) to enable the packages to self-adjust to the requirements:
- Use Version=MatchInstalled to prevent unexpected updates and stay in control of the version installed on a client. No hard coding of a build number, which gets outdated quickly, is required.
- Use Language=MatchInstalled to instruct e.g. Visio or Project to install with the same set of languages as Office is already using. No need to list them or build a script that injects the required languages.
Build lean packages by removing the source files from the packages. This has multiple benefits:
- Package size is much smaller, from 2.5 GB down to less than 10 megabytes for the ODT and its configuration file.
- Instead of pushing a 2.5-GB install package to clients, you let clients pull what they need on demand from Office Content Delivery Network (CDN), which saves bandwidth.
- When you add Project to an existing Microsoft 365 Apps installation, you need to download less than 50 megabytes, as Office shared components are already installed.
- Visio installs are typically 100-200 megabytes, based on the number of languages, as the templates/stencils are a substantial part of the download.
- Installing proofing tools is typically 30-50 megabytes, versus a full language pack, which is 200-300 megabytes.
- A second install scenario is often less frequent, which lowers the internet traffic burden, ultimately reducing the impact.
- You don’t have to update the source files every time Microsoft releases new features or security and quality fixes.
Build universal packages by not hard coding things like the architecture or update channel. ODT will dynamically match the existing install, so your packages work across all update channels and architectures. Instead of having 4 packages to install Visio, for example, you'll have a single, universal package that will work across all permutations of update channels and architectures.
- Leaving out OfficeClientEdition makes your package universal for mixed x86/x64 environments.
- Leaving out Channel makes your package universal across update channels.
How to build and benefit from building dynamic, lean, and universal packages
The idea is to not hard code everything in the configuration file, but to instead utilize the cleverness of the Office Deployment Tool as much as possible.
Let’s have a look at a "classic" package that was built to add Project to an existing install of Microsoft 365 Apps. We have the source files (of ~2.5 gigabytes) and a configuration file, which explicitly states what we want to achieve:
<Configuration> <Add OfficeClientEdition="64" Channel="Broad"> <Product ID="ProjectProRetail"> <Language ID="en-us" /> </Product> </Add> <Display Level="None" /> </Configuration>
When we apply the concepts of dynamic, lean, and universal packages, the result would look like this:
<Configuration> <Add Version="MatchInstalled"> <Product ID="ProjectProRetail"> <Language ID="MatchInstalled" TargetProduct="O365ProPlusRetail" /> </Product> </Add> <Display Level="None" /> </Configuration>
So what have we changed, and what are the benefits?
- We removed the OfficeClientEdition-attribute, as the ODT will automatically match the installed version.
- Benefit: The configuration file now works for both x86 and x64 scenarios.
- We removed the channel for the same reason. ODT will automatically match the already-assigned update channel.
- Benefit I: The package works for all update channels (Monthly, Semi-Annual, SAC-T, and others).
- Benefit II: It also works for update channels you don’t offer as central IT. Some users are running Monthly Channel, some are on Insider builds? Don’t worry, it just works.
- We added Version=MatchInstalled, which ensures that ODT will install the same version that's already installed.
- Benefit: You're in control of versions deployed, with no unexpected updates.
- We added Language ID="MatchInstalled" and TargetProduct to match the currently installed languages, replacing a hard-coded list of languages to install.
- Benefit I: The user will have the same languages for Project as were already installed for Office.
- Benefit II: No need to re-request language pack installs.
- Benefit III: Also works for rarely used languages that you as the central IT admin don’t offer, which makes users happy.
- We removed the source files. The ODT will fetch the correct set of source files from the Office CDN just in time.
- Benefit I: The package never gets outdated. No maintenance of source files is needed.
- Benefit II: The download is about 50 megabytes instead of about 2.5 GB.
Another example: Add language packs and proofing tools the dynamic, lean, and universal way
Let’s have a brief look at other scenarios as well, like adding language packs and proofing tools. The classic configuration file to install the German Language Pack might look like this:
<Configuration> <Add OfficeClientEdition="64" Channel="Broad"> <Product ID="LanguagePack"> <Language ID="de-de" /> </Product> </Add> <Display Level="None" /> </Configuration>
If you’re running SAC as well as SAC-T and have an x86/x64 mixed environment, you'd need three additional files to cover the remaining configuration permutations. Or, you just go the dynamic, lean, and universal way:
<Configuration> <Add Version="MatchInstalled"> <Product ID="LanguagePack"> <Language ID="de-de" /> </Product> </Add> <Display Level="None" /> </Configuration>
This single configuration file will work across x86/x64 and all update channels (Insider Fast, Monthly Targeted, Monthly, SAC-T, SAC, and others). So, if you want to offer five additional languages in your environment, just build five of these "config file + ODT" packages. For proofing tools, you just change the ProductID to "ProofingTools".
Prerequisites
There are some prerequisites you must meet to make this concept work in your environment:
- Use Office Deployment Tool 16.0.11615.33602 or later to enable Version=MatchInstalled to work.
- The ODT must be able to locate the matching source files on the Office CDN.
- Make sure that the context you're using for running the install can traverse the proxy. For details, see Office 365 ProPlus Deployment and Proxy Server Guidance.
- Make sure that the account (user or system) that's used to install the apps can connect to the internet. | https://docs.microsoft.com/en-us/deployoffice/fieldnotes/build-dynamic-lean-universal-packages | 2020-05-25T06:23:37 | CC-MAIN-2020-24 | 1590347387219.0 | [array(['../images/lean5-pic1.jpg', 'Sample package'], dtype=object)
array(['../images/lean5-pic2.jpg', 'Lean sample package'], dtype=object)] | docs.microsoft.com |
Basic
- How to Copy a Local Website
- How to Remove a Local Website
- How to Move a Local Website
- Where to Find a List of My Created Websites
- How to Transfer a Website From One Computer to Another
- Creating a Website Archive Manually
- Description of the Scrubbing Process
- How to Backup Your DesktopServer Installation
- How to Restore Your DesktopServer Installation
- Database tools | https://docs.serverpress.com/category/118-basic | 2020-05-25T05:39:13 | CC-MAIN-2020-24 | 1590347387219.0 | [] | docs.serverpress.com |
Create a new NIC Team on a host computer or VM
Applies to: Windows Server (Semi-Annual Channel), Windows Server 2016
In this topic, you create a new NIC Team on a host computer or in a Hyper-V virtual machine (VM) running Windows Server 2016.
Network configuration requirements
Before you can create a new NIC Team, you must deploy a Hyper-V host with two network adapters that connect to different physical switches. You must also configure the network adapters with IP addresses that are from the same IP address range.
The physical switch, Hyper-V Virtual Switch, local area network (LAN), and NIC Teaming requirements for creating a NIC Team in a VM are:
The computer running Hyper-V must have two or more network adapters.
If connecting the network adapters to multiple physical switches, the physical switches must be on the same Layer 2 subnet.
You must use Hyper-V Manager or Windows PowerShell to create two external Hyper-V Virtual Switches, each connected to a different physical network adapter.
The VM must connect to both external virtual switches you create.
NIC Teaming, in Windows Server 2016, supports teams with two members in VMs. You can create larger teams, but there is no support.
If you are configuring a NIC Team in a VM, you must select a Teaming mode of Switch Independent and a Load balancing mode of Address Hash.
Step 1. Configure the physical and virtual network
In this procedure, you create two external Hyper-V Virtual Switches, connect a VM to the switches, and then configure the VM connections to the switches.
Prerequisites
You must have membership in Administrators, or equivalent.
Procedure
- On the Hyper-V host, open Hyper-V Manager, and under Actions, click Virtual Switch Manager.
- In Virtual Switch Manager, make sure External is selected, and then click Create Virtual Switch.
In Virtual Switch Properties, type a Name for the virtual switch, and add Notes as needed.
In Connection type, in External network, select the physical network adapter to which you want to attach the virtual switch.
Configure additional switch properties for your deployment, and then click OK.
Create a second external virtual switch by repeating the previous steps. Connect the second external switch to a different network adapter.
In Hyper-V Manager, under Virtual Machines, right-click the VM that you want to configure, and then click Settings.
The VM Settings dialog box opens.
Ensure that the VM is not started. If it is started, perform a shutdown before configuring the VM.
In Hardware, click Network Adapter.
In Network Adapter properties, select one of the virtual switches that you created in previous steps, and then click Apply.
In Hardware, click to expand the plus sign (+) next to Network Adapter.
Click Advanced Features to enable NIC Teaming by using the graphical user interface.
Tip
You can also enable NIC Teaming with a Windows PowerShell command:
Set-VMNetworkAdapter -VMName <VMname> -AllowTeaming On
a. Select Dynamic for MAC address.
b. Click to select Protected network.
c. Click to select Enable this network adapter to be part of a team in the guest operating system.
d. Click OK.
To add a second network adapter, in Hyper-V Manager, in Virtual Machines, right-click the same VM, and then click Settings.
The VM Settings dialog box opens.
In Add Hardware, click Network Adapter, and then click Add.
- In Network Adapter properties, select the second virtual switch that you created in previous steps, and then click Apply.
In Hardware, click to expand the plus sign (+) next to Network Adapter.
Click Advanced Features, scroll down to NIC Teaming, and click to select Enable this network adapter to be part of a team in the guest operating system.
Click OK.
Congratulations! You have configured the physical and virtual network. Now you can proceed to creating a new NIC Team.
Step 2. Create a new NIC Team
When you create a new NIC Team, you must configure the NIC Team properties:
Team name
Member adapters
Teaming mode
Load balancing mode
Standby adapter
You can also optionally configure the primary team interface and configure a virtual LAN (VLAN) number.
For more details on these settings, see NIC Teaming settings.
Prerequisites
You must have membership in Administrators, or equivalent.
Procedure
In Server Manager, click Local Server.
In the Properties pane, in the first column, locate NIC Teaming, and then click the Disabled link.
The NIC Teaming dialog box opens.
In Adapters and Interfaces, select the one or more network adapters that you want to add to a NIC Team.
Click TASKS, and click Add to New Team.
The New team dialog box opens and displays network adapters and team members.
In Team name, type a name for the new NIC Team, and then click Additional properties.
In Additional properties, select values for:
Teaming mode. The options for Teaming mode are Switch Independent and Switch Dependent. The Switch Dependent mode includes Static Teaming and Link Aggregation Control Protocol (LACP).
Switch Independent. With Switch Independent mode, the switch or switches to which the NIC Team members are connected are unaware of the presence of the NIC team and do not determine how to distribute network traffic to NIC Team members - instead, the NIC Team distributes inbound network traffic across the NIC Team members.
Switch Dependent. With Switch Dependent modes, the switch to which the NIC Team members are connected determines how to distribute the inbound network traffic among the NIC Team members. The switch has complete independence to determine how to distribute the network traffic across the NIC Team members.
Load balancing mode. The options for Load Balancing distribution mode are Address Hash, Hyper-V Port, and Dynamic.
Address Hash. With Address Hash, this mode creates a hash based on address components of the packet, which then get assigned to one of the available adapters. Usually, this mechanism alone is sufficient to create a reasonable balance across the available adapters.
Hyper-V Port. With Hyper-V Port, NIC Teams configured on Hyper-V hosts give VMs independent MAC addresses. The VMs MAC address or the VM ported connected to the Hyper-V switch, can be used to divide network traffic between NIC Team members. You cannot configure NIC Teams that you create within VMs with the Hyper-V Port load balancing mode. Instead, use the Address Hash mode.
Dynamic. With Dynamic, outbound loads are distributed based on a hash of the TCP ports and IP addresses. Dynamic mode also rebalances loads in real time so that a given outbound flow may move back and forth between team members. Inbound loads, on the other hand, get distributed the same way as Hyper-V Port. In a nutshell, Dynamic mode utilizes the best aspects of both Address Hash and Hyper-V Port and is the highest performing load balancing mode.
Standby adapter. The options for Standby Adapter are None (all adapters Active) or your selection of a specific network adapter in the NIC Team that acts as a Standby adapter.
Tip, do one of the following:
Provide a tNIC interface name.
Configure VLAN membership: click Specific VLAN and type the VLAN information. For example, if you want to add this NIC Team to the accounting VLAN number 44, Type Accounting 44 - VLAN.
Click OK.
Congratulations! You've created a new NIC Team on a host computer or VM.
Related topics
NIC Teaming: In this topic, we give you an overview of Network Interface Card (NIC) Teaming in Windows Server 2016. NIC Teaming allows you to group between one and 32 physical Ethernet network adapters into one or more software-based virtual network adapters. These virtual network adapters provide fast performance and fault tolerance in the event of a network adapter failure.
NIC Teaming MAC address use and management:.
NIC Teaming settings: In this topic, we give you an overview of the NIC Team properties such as teaming and load balancing modes. We also give you details about the Standby adapter setting and the Primary team interface property. If you have at least two network adapters in a NIC Team, you do not need to designate a Standby adapter for fault tolerance.
Troubleshooting NIC Teaming: In this topic, we discuss ways to troubleshoot NIC Teaming, such as hardware, physical switch securities, and disabling or enabling network adapters using Windows PowerShell.
Feedback | https://docs.microsoft.com/en-us/windows-server/networking/technologies/nic-teaming/create-a-new-nic-team-on-a-host-computer-or-vm?redirectedfrom=MSDN | 2019-11-12T09:22:13 | CC-MAIN-2019-47 | 1573496664808.68 | [array(['../../media/create-a-new-nic-team-in-a-vm/nict_hv.jpg',
'Virtual Switch Manager'], dtype=object)
array(['../../media/create-a-new-nic-team-in-a-vm/nict_hv_02.jpg',
'Create Virtual Switch'], dtype=object)
array(['../../media/create-a-new-nic-team-in-a-vm/nict_hvs_01.jpg',
'Network Adapter'], dtype=object)
array(['../../media/create-a-new-nic-team-in-a-vm/nict_hvs_06.jpg',
'Add a network adapter'], dtype=object)
array(['../../media/create-a-new-nic-team-in-a-vm/nict_hvs_07.jpg',
'Apply a virtual switch'], dtype=object) ] | docs.microsoft.com |
Windows performance monitoring - remote
Splunk is the simple, web-based alternative to Performance Monitor. Whether you want to watch disk I/O, memory metrics such as free pages or commit charge, or network statistics, Splunk's collection, charting and reporting utilities increase its extensibility. And, like Performance Monitor, you can monitor machines remotely.
Here's how to get your performance metrics with Splunk:
1. Go to the Windows performance data page in Splunk Web.
2. From there, locate Windows event logs from another machine and click Next.
3. Under Collection name, enter a unique name for this collection that you'll remember.
4. In the Select target host field, enter the hostname for a machine on your Windows network.
You can specify a short hostname, the server's fully qualified domain name, or its IP address.
5. Click Query… to get a list of the available performanc objects on the remote machine.
6. In the Available objects drop-down box, select a performance object that you would like for Splunk to monitor.
The Available counters window appears, containing counters that are specific to the object you just selected.
7. From the Available counters listbox, click once on each counter that you would like for Splunk to collect performance data.
The desired performance counters appear in the Selected counters window.
8. Next, from the Available instances listbox, click once on each of the desired instances for the counters selected above, that you would like for Splunk to track.
The desired instances will appear in the Selected instances list box.
9. Optionally, you can specify additional servers from which to collect the same set of performance metrics. Type in each of the hostnames into the field, separating them with commas.
You can usually leave the other settings as they are, though if you want to change the polling interval, you can do so by specifying it in the "Polling interval" field. Look here for detailed information on those settings. performance monitor data from remote machines, see "Monitor WMI data"! | https://docs.splunk.com/Documentation/Splunk/6.1/Data/Windowsperformanceremote | 2019-11-12T07:51:46 | CC-MAIN-2019-47 | 1573496664808.68 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
>> software. search job properties in the Search Manual.
Configure batch mode search in limits.conf
If you have a Splunk Enterprise deployment (as opposed to Splunk.
! | https://docs.splunk.com/Documentation/Splunk/6.3.2/Knowledge/Configurebatchmodesearch | 2019-11-12T08:33:00 | CC-MAIN-2019-47 | 1573496664808.68 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
WSO2 App Manager facilitates Web application authorization for reliability and security of Web applications. Users can enable different levels of access rights and authorization for a single Web application resource. When authorization is enabled, users can access that resource based on the authorization policies or granted permissions. WSO2 App Manager has two types of authorization mechanisms as follows.
Role-based resource authorization
In WSO2 App Manager, the Web application invocation requests are authorized and access is granted based on the role assigned to the user. This is called role-based resource authorization. In the Step 2 - Policies of creating a Web application in the App Publisher, you can associate roles for Web application resources, by defining Accessible User Roles in the resource policy as shown below.
After defining the accessible user roles in the resource policy as shown above, you can associate that policy to the HTTP verbs of URL patterns in the Step 3 - Web Application Resources section. For example, if you are adding the resource policy created above to the GET HTTP verb of the
/{context}/{version}/timeTables URL pattern as shown below, then a HTTP GET request sent to
/{context}/{version}/timeTables is authorized only for a users of member and admin roles.
XACML policy based resource authorization
XACML is a widely used authorization mechanism for Web resources. XACML provides fine grained policy-based access control. WSO2 App Manager provides Web application resource authorization facility with the use of XACML policies associated with resources.
Defining the XACML policy conditions
Follow the below steps to define the conditions of a XACML-based entitlement policy.
- Log in to the admin dashboard of WSO2 App Manager using admin/admin credentials and the following URL:
- Click Entitlement Policies, and then click Add New.
- Enter a name for the entitlement policy.
- Enter a description for the entitlement policy.
Define the conditions of the entitlement policy in the provided editor as shown below.
For more information on defining XACML policies, see OASIS XACML Version 3.0 documentation.
- Select Permit or Deny under Effect section to create a new resource policy by enabling the defined XACML policy. If you select Permit, the user will be permitted to access, and if you select Deny, the Web app resource access will be denied.
- Click Validate to check the validity of the policy. It checks for syntax errors and verifies whether the condition adheres to XACML policy language specifications.
Click Save to save the policy condition details.
Only the author of the policy can edit shared policies.
Click Entitlement Policies in the left menu, and then click View All. You view the saved policy under the list of XACML policies as shown below.
You can edit and delete defined XACML policies using the provided buttons under the Action column as shown above.
Associating XACML policies with Web application resources
Follow the steps below to associate the defined XACML policies with the HTTP verbs of the URL Pattern of Web application resources when creating a Web application.
In the Step 2 - Policies of creating a Web application, select the Entitlement Policy as shown below.
Associate the XACML policy defined above to a HTTP Verb of a specific URL Pattern of a Web app resource in Step 3 - Web Application Resources section as shown below. | https://docs.wso2.com/display/APPM120/Web+Application+Resource+Authorization | 2019-11-12T08:31:52 | CC-MAIN-2019-47 | 1573496664808.68 | [] | docs.wso2.com |
Signature rule patterns
You can add a new pattern to a signature rule or modify an existing pattern of a signature rule to specify a string or expression that characterizes an aspect of the attack that the signature matches. To determine which patterns an attack exhibits, you can examine the logs on your web server, use a tool to observe connection data in real time, or obtain the string or expression from a third-party report about the attack.
Caution:
Any new pattern that you add to a signature rule is in an AND relationship with the existing patterns. Do not add a new pattern to an existing signature rule if you do not want a potential attack to have to match all of the patterns in order to match the signature.
Each pattern can consist of a simple string, a PCRE-format regular expression, or the built-in SQL injection or cross-site scripting pattern. Before you attempt to add a pattern that is based on a regular expression, you should make sure that you understand PCRE-format regular expressions. PCRE expressions are complex and powerful; if you do not understand how they work, you can unintentionally create a pattern that matches something that you did not want (a false positive) or that fails to match something that you did want (a false negative).
If you are not already familiar with PCRE-format regular expressions, you can use the following resources to learn the basics, or for help with some specific issue:
- “Mastering Regular Expressions”, Third Edition. Copyright (c) 2006 by Jeffrey Friedl. O’Reilly Media, ISBN: 9780596528126.
- “Regular Expressions Cookbook”. Copyright (c) 2009 by Jan Goyvaerts and Steven Levithan. O’Reilly Media, ISBN: 9780596520687
- PCRE Man page/Specification (text/official):
- PCRE Man Page/Specification (html/gammon.edu.au):
- Wikipedia PCRE entry: “”
- PCRE Mailing List (run by exim.org):
If you need to encode non-ASCII characters in a PCRE-format regular expression, the NetScaler platform supports encoding of hexadecimal UTF-8 codes. For more information, see “PCRE Character Encoding Format.”
To configure a signature rule patternTo configure a signature rule pattern
Navigate to Security > Application Firewall > Signatures.
In the details pane, select that signatures object that you want to configure, and then click Open.
In the Modify Signatures Object dialog box, in the middle of the screen beneath the Filtered Results window, either click Add to create a signature rule, or select an existing signature rule and click Open.
Note:
You can modify only signature rules that you added. You cannot modify the default signature rules.
Depending on your action, either the Add Local Signature Rule or the Modify Local Signature Rule dialog box appears. Both dialog boxes have the same contents.
Under the Patterns window in the dialog box, either click Add to add a new pattern, or select an existing pattern from the list beneath the Add button and click Open. Depending on your action, either the Create New Signature Rule Pattern or the Edit Signature Rule Pattern dialog box appears. Both dialog boxes have the same contents.
In the Pattern Type drop-down list, choose the type of connection that the pattern is intended to match.
- If the pattern is intended to match request elements or features, such as injected SQL code, attacks on web forms, cross-site scripts, or inappropriate URLs, choose Request.
- If the pattern is intended to match response elements or features, such as credit card numbers or safe objects, choose Response.
In the Location area, define the elements to examine with this pattern.
The Location area describes what elements of the HTTP request or response to examine for this pattern. The choices that appear in the Location area depend upon the chosen pattern type. If you chose Request as the pattern type, items relevant to HTTP requests appear; if you chose Response, items relevant to HTTP responses appear.
In addition, as you choose a value from the Area drop-down list, the remaining parts of the Location area change interactively. Following are all configuration items that might appear in this section.
- Area Drop-down list of elements that describe a particular portion of the HTTP connection. The choices are as follows:
- HTTP_ANY. All parts of the HTTP connection.
- HTTP_COOKIE. All cookies in the HTTP request headers after any cookie transformations are performed. Note: Does not search HTTP response “Set-Cookie:” headers.
- HTTP_FORM_FIELD. Form fields and their contents, after URL decoding, percent decoding, and removal of excess whitespace. You can use the
<Location>tag to further restrict the list of form field names to be searched.
- HTTP_HEADER. The value portions of the HTTP header after any cross-site scripting or URL decoding transformations.
- HTTP_METHOD. The HTTP request method.
- HTTP_ORIGIN_URL. The origin URL of a web form.
- HTTP_POST_BODY. The HTTP post body and the web form data that it contains.
- HTTP_RAW_COOKIE. All HTTP request cookie, including the “Cookie:” name portion. Note: Does not search HTTP response “Set-Cookie:” headers.
- HTTP_RAW_HEADER. The entire HTTP header, with individual headers separated by linefeed characters (\n) or carriage return/line-feed strings (\r\n).
- HTTP_RAW_RESP_HEADER. The entire response header, including the name and value parts of the response header after URL transformation has been done, and the complete response status. As with HTTP_RAW_HEADER, individual headers are separated by linefeed characters (\n) or carriage return/line-feed strings (\r\n).
- HTTP_RAW_SET_COOKIE. The entire Set-Cookie header after any URL transformations have been performed Note: URL transformation can change both the domain and path parts of the Set-Cookie header.
- HTTP_RAW_URL. The entire request URL before any URL transformations are performed, including any query or fragment parts.
- HTTP_RESP_HEADER. The value part of the complete response headers after any URL transformations have been performed.
- HTTP_RESP_BODY. The HTTP response body
- HTTP_SET_COOKIE. All “Set-Cookie” headers in the HTTP response headers.
- HTTP_STATUS_CODE. The HTTP status code.
- HTTP_STATUS_MESSAGE. The HTTP status message.
- HTTP_URL. The value portion of the URL in the HTTP headers, excluding any query or fragment ports, after conversion to the UTF-* character set, URL decoding, stripping of whitespace, and conversion of relative URLs to absolute. Does not include HTML entity decoding.
- URL Examines any URLs found in the elements specified by the Area setting. Select one of the following settingss.
-.
- Field Name Examines any form field names found in the elements specified by the Area selection.
-.
In the Pattern area, define the pattern. A pattern is a literal string or PCRE-format regular expression that defines the pattern that you want to match. The Pattern area contains the following elements:
- Match A drop-down list of search methods that you can use for the signature. This list differs depending on whether the pattern type is Request or Response.
Request Match Types PCRE. A PCRE-format regular expression. NOTE: When you choose PCRE, the regular expression tools beneath the Pattern window are enabled. These tools are not useful for most other types of patterns.
Injection. Directs the App Firewall to look for injected SQL in the specified location. The Pattern window disappears, because the App Firewall already has the patterns for SQL injection.
CrossSiteScripting. Directs the App Firewall to look for cross-site scripts in the specified location. The Pattern window disappears, because the App Firewall already has the patterns for cross-site scripts.
Expression. An expression in the NetScaler default expressions language. This is the same expressions language that is used to create App Firewall policies and other policies on the NetScaler appliance. Although the NetScaler expressions language was originally developed for policy rules, it is a highly flexible general purpose language that can also be used to define a signature pattern.
When you choose Expression, the NetScaler Expression Editor appears beneath Pattern window. For more information about the Expression Editor and instructions on how to use it, see “To add a firewall rule (expression) by using the Add Expression dialog box
Response Match Types:
Literal. A literal string
PCRE. A PCRE-format regular expression.
NOTE:
When you choose PCRE, the regular expression tools beneath the Pattern window are enabled. These tools are not useful for most other types of patterns.
- Credit Card. A built-in pattern to match one of the six supported types of credit card number.
Note: The Expression match type is not available for Response-side signatures.
- Pattern Window (unlabeled)
In this window, type the pattern that you want to match, and fill in any additional data.
- Literal. Type the string you want to search for in the text area. - PCRE. Type the regular expression in the text area. Use the **Regex Editor** for more assistance in constructing the regular expression that you want, or the Regex Tokens to insert common regular expression elements at the cursor. To enable UTF-8 characters, click UTF-8. - Expression. Type the NetScaler advanced expression in the text area. Use Prefix to choose the first term in your expression, or Operator to insert common operators at the cursor. Click **Add** to open the Add Expression dialog box for more assistance in constructing the regular expression that you want. Click Evaluate to open the Advanced Expression Evaluator to help determine what effect your expression has. - Offset. The number of characters to skip over before starting to match on this pattern. You use this field to start examining a string at some point other than the first character. - Depth. How many characters from the starting point to examine for matches. You use this field to limit searches of a large string to a specific number of characters. - Min-Length. The string to be searched must be at least the specified number of bytes in length. Shorter strings are not matched. - Max-Length. The string to be searched must be no longer than the specified number of bytes in length. Longer strings are not matched. - Search method. A check box labeled fastmatch. You can enable fastmatch only for a literal pattern, to improve performance.
- Click OK.
- Repeat the previous four steps to add or modify additional patterns.
When finished adding or modifying patterns, click OK to save your changes and return to the Signatures pane.
Caution:
Until you click OK in the Add Local Signature Rule or Modify Local Signature Rule dialog box, your changes are not saved. Do not close either of these dialog boxes without clicking OK unless you want to discard your changes. | https://docs.citrix.com/en-us/netscaler/12/application-firewall/signatures/editing-signatures/add-signature-rule-patterns.html | 2019-11-12T09:46:32 | CC-MAIN-2019-47 | 1573496664808.68 | [] | docs.citrix.com |
Message-ID: <1431657353.581328.1573547608746.JavaMail.confluence@docs-node.wso2.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_581327_1218376858.1573547608745" ------=_Part_581327_1218376858.1573547608745 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
API creation is the process of linking an exist= ing backend API implementation to the API Publisher so that you can manage = and monitor the API's lifecycle, documentation, security, community and sub= scriptions. Alternatively, you can provide the API implementation in-line i= n the API Publisher itself.
Click the following topics for a description of the concepts that you ne= ed to know when creating an API:
The steps below show how to create a new API.
Click the Add link and provide the information give=
n in the table below.
=
Click Add New Resource. After the resource is =
added, expand its
GET method, add the following parameters to =
it and click Implement.
You add these parameters as th= ey are required to invoke the API using our integrated API Console in later= tutorials.
The
Implement tab opens. Provide the information given =
in the table below. Click the Show More Options link to se=
e the options that are not visible by default.
Click Manage to go to the
Manage tab a=
nd provide the following information.
You have created an API. | https://docs.wso2.com/exportword?pageId=41747113 | 2019-11-12T08:33:28 | CC-MAIN-2019-47 | 1573496664808.68 | [] | docs.wso2.com |
Counters Module
The
Counters module defines a general purpose counter service that allows to
associate counters to database entities. For example it can be used to track the number
of times a blog post or a wiki page is accessed. The
Counters module maintains the
counters in a table on a per-day and per-entity basis. It allows to update the full counter
in the target database entity table.
Counter Module
The
Counter_Module manages the counters associated with database entities.
To avoid having to update the database each time a counter is incremented, counters
are kept temporarily in a
Counter_Table protected type. The table contains
only the partial increments and not the real counter values. Counters are flushed
when the table reaches some limit, or, when the table is oldest than some limit.
Counters are associated with a day so that it becomes possible to gather per-day counters.
The table is also flushed when a counter is incremented in a different day.
Integration
An instance of the
Counter_Module must be declared and registered in the
AWA application. The module instance can be defined as follows:
with AWA.Counters.Modules; ... type Application is new AWA.Applications.Application with record Counter_Module : aliased AWA.Counters.Modules.Counter_Module; end record;
And registered in the
Initialize_Modules procedure by using:
Register (App => App.Self.all'Access, Name => AWA.Counters.Modules.NAME, Module => App.Counter_Module'Access);
Configuration
Counter Declaration
Each counter must be declared by instantiating the
Definition package.
This instantiation serves as identification of the counter and it defines the database
table as well as the column in that table that will hold the total counter. The following
definition is used for the read counter of a wiki page. The wiki page table contains a
read_count column and it will be incremented each time the counter is incremented.
with AWA.Counters.Definition; ... package Read_Counter is new AWA.Counters.Definition (AWA.Wikis.Models.WIKI_PAGE_TABLE, "read_count");
When the database table does not contain any counter column, the column field name is not given and the counter definition is defined as follows:
with AWA.Counters.Definition; ... package Login_Counter is new AWA.Counters.Definition (AWA.Users.Models.USER_PAGE_TABLE);
Sometimes a counter is not associated with any database entity. Such counters are global and they are assigned a unique name.
with AWA.Counters.Definition; ... package Start_Counter is new AWA.Counters.Definition (null, "startup_counter");
Incrementing the counter
Incrementing the counter is done by calling the
Increment operation.
When the counter is associated with a database entity, the entity primary key must be given.
The counter is not immediately incremented in the database so that several calls to the
Increment operation will not trigger a database update.
with AWA.Counters; ... AWA.Counters.Increment (Counter => Read_Counter.Counter, Key => Id);
A global counter is also incremented by using the
Increment operation.
with AWA.Counters; ... AWA.Counters.Increment (Counter => Start_Counter.Counter);
Ada Bean
The Counter_Bean allows to represent a counter associated with some database
entity and allows its control by the
HTML components
The counter component is an Ada Server Faces component that allows to increment
and display easily the counter. The component works by using the
Counter_Bean
Ada bean object which describes the counter in terms of counter definition, the
associated database entity, and the current counter value.
<awa:counter
When the component is included in a page the
Counter_Bean instance associated
with the EL
value attribute is used to increment the counter. This is similar
to calling the
AWA.Counters.Increment operation from the Ada code.
Data model
The
Counters module has a simple database model which needs two tables.
The
Counter_Definition table is used to keep track of the different counters
used by the application. A row in that table is created for each counter declared by
instantiating the
Definition package. The
Counter table holds the counters
for each database entity and for each day. By looking at that table, it becomes possible
to look at the daily access or usage of the counter.
| https://ada-awa.readthedocs.io/en/latest/AWA_Counters/ | 2019-11-12T09:02:59 | CC-MAIN-2019-47 | 1573496664808.68 | [array(['../images/awa_counters_model.png', None], dtype=object)] | ada-awa.readthedocs.io |
.
- -^.
Remarks¶ remove a planar trend from | https://docs.generic-mapping-tools.org/latest/grdtrend.html | 2019-11-12T08:53:46 | CC-MAIN-2019-47 | 1573496664808.68 | [] | docs.generic-mapping-tools.org |
Web Farm Framework 2.0 for IIS 7 cmdlets for Windows PowerShell
by Randall DuBois
You can use the Web Farm Framework cmdlets for Windows PowerShell to configure and manage your server farm. You must have Windows PowerShell installed on the web farm controller.
To use the Web Farm Framework cmdlets for Windows PowerShell
- On the controller server, open a command prompt.
To start the PowerShell console, enter the following command:
PowerShell
At the PowerShell prompt, enter the following command:
Add-PSSnapin WebFarmSnapin
The Web Farm Framework snapin for Windows PowerShell is loaded.
At the PowerShell prompt, type the following command to display a list of the available WFF cmdlets:
Get-Command WebFarmSnapin\*
The list is displayed as in the following image:
Viewing cmdlet Syntax and Help
To get help for each cmdlet, enter get-Help <cmdletName> -full. For example, to get help for the Get-ActiveOperation cmdlet, enter the following command.
Get-Help Get-ActiveOperation -full
Managing Servers Using the cmdlets
You can perform management tasks for the server farm or a specific server in the farm using the cmdlets. The following table lists the cmdlets for these tasks.
Creating a Server Farm Using the cmdlets
To create a server farm, at the PowerShell prompt, enter the following cmdlet:
New-WebFarm
Provide the name of the new web farm and your credentials as prompted.
To verify the server farm was created, use the Get-WebFarm cmdlet, as follows:
Get-WebFarm
Adding a Server to a Server Farm Using the cmdlets
To add a server to an existing server farm, at the PowerShell prompt, enter the following command:
New-Server
Provide the name of the web farm and server address as prompted.
To verify that the server was added, use the Get-Server Nmdlet as follows:
Get-Server
The servers in the farm are displayed.
Adding Credentials to Windows Credential Store
Using Window's credential store, users can store passwords and access credential stored password information via the command-line. This is very useful if you want keep your password from showing up in any log files that may capture command line input. The credential store saves a target along with your user name and password. The target is a string that is used to identify the credential information.
To add a new target to an existing server farm, at the PowerShell prompt, enter the following command:
New-CredentialStoreTarget
Provide the target and your credentials as prompted.
Removing Credentials from Windows Credential Store
To remove an existing target (and therefore the associated credentials), at the PowerShell prompt, enter the following command:
Remove-CredentialStoreTarget
Provide the target to be removed as prompted.
| https://docs.microsoft.com/en-us/iis/web-hosting/microsoft-web-farm-framework-20-for-iis-7/web-farm-framework-20-for-iis-cmdlets-for-windows-powershell | 2019-11-12T09:39:10 | CC-MAIN-2019-47 | 1573496664808.68 | [array(['web-farm-framework-20-for-iis-cmdlets-for-windows-powershell/_static/image1.png',
None], dtype=object)
array(['web-farm-framework-20-for-iis-cmdlets-for-windows-powershell/_static/image3.png',
None], dtype=object)
array(['web-farm-framework-20-for-iis-cmdlets-for-windows-powershell/_static/image5.png',
None], dtype=object)
array(['web-farm-framework-20-for-iis-cmdlets-for-windows-powershell/_static/image7.png',
None], dtype=object)
array(['web-farm-framework-20-for-iis-cmdlets-for-windows-powershell/_static/image9.png',
None], dtype=object)
array(['web-farm-framework-20-for-iis-cmdlets-for-windows-powershell/_static/image11.png',
None], dtype=object)
array(['web-farm-framework-20-for-iis-cmdlets-for-windows-powershell/_static/image13.png',
None], dtype=object) ] | docs.microsoft.com |
Version statusVersion status
About the version "active" flagAbout the version "active" flag
By default, any version of a package you upload to Packagr is marked as
active. Only versions marked as active will be made available via
pip. You can see the version's active flag in the package view list -
see the green ticks on the right hand side below:
Packagr lets you mark a specific version as inactive. Doing this means that the specific version will no longer be made available via pip. This provides a convenient way of making older versions of a package unavailable, without the need for deleting them
Manually editing the version's "active" flagManually editing the version's "active" flag
To toggle the active status of a particular version, click on the green tick shown in the above screenshot. It will turn grey, meaning the the version is inactive
Automaticaly managing the active versions of a packageAutomaticaly managing the active versions of a package
Packagr also givens you the option to automatically manage the active flag of a package's versions, by specifying how many active versions that you want to mark as active. This can be done via the Package settings section of the package view.
By default, all versions are marked as active, but by changing the settings as shown above, you can ensure that Packagr only display the most recent N versions of a given package.
If you do this, every time a version is added or deleted, the logic is applies and the number of available versions is updated. The ordering of versions in Packagr is based on the version number.
Keep mind that if you use this setting, any manual changes to this flag will be overwritten | https://docs.packagr.app/guide/version-status.html | 2019-11-12T08:05:43 | CC-MAIN-2019-47 | 1573496664808.68 | [array(['/active-flag.png', 'Changing the version active status'],
dtype=object)
array(['/package-settings.png', 'Changing the version active status'],
dtype=object) ] | docs.packagr.app |
Set-MsolDirSyncFeature
Syntax
Set-MsolDirSyncFeature -Feature <String> -Enable <Boolean> [-TenantId <Guid>] [-Force] [<CommonParameters>]
Description
The Set-MsolDirSyncFeature cmdlet sets identity synchronization features for a tenant.
Synchronization features that can be used with this cmdlet include the following:
- EnableSoftMatchOnUpn. Soft Match is the process used to link an object being synced from on-premises for the first time with one that already exists in the cloud. When this feature is enabled Soft Match will first be attempted using the standard logic, based on primary SMTP address. If a match is not found based on primary SMTP, then a match will be attempted based on UserPrincipalName. Once this feature is enabled it cannot be disabled.
- PasswordSync
- SynchronizeUpnForManagedUsers. allows for the synchronization of UserPrincipalName updates from on-premises for managed (non-federated) users that have been assigned a license. These updates will be blocked if this feature is not enabled. Once this feature is enabled it cannot be disabled.
Enabling some of these features, such as EnableSoftMatchOnUpn and SynchronizationUpnForManagedUsers is a permanent operation. These features cannot be disabled once they are enabled.
Examples
Example 1: Enable a feature for the tenant
PS C:\> Set-MsolDirSyncFeature -Feature EnableSoftMatchOnUpn -Enable $True
This command enables the SoftMatchOnUpn feature for the tenant.
Required Parameters
Indicates whether the specified feature will be turned on for the company.
Specifies the directory synchronization features to turn on or off.
Optional Parameters
Forces the command to run without asking for user confirmation.
Specifies the unique ID of the tenant to perform the operation on. If you do not specify this parameter the cmdlet will use the ID of the current user. This parameter is only applicable to partner users. | https://docs.microsoft.com/en-us/powershell/module/msonline/set-msoldirsyncfeature?view=azureadps-1.0 | 2017-07-20T14:43:52 | CC-MAIN-2017-30 | 1500549423222.65 | [] | docs.microsoft.com |
FCKeditor, The Name
The FCK letters in FCKeditor are the initials of Frederico Caldeira Knabben, the project starter and lead developer of FCKeditor.
Frederico was used to use "fckVarName" while developing to indicate temporary things he introduced in the code. While living in Rome, his Italian co-workers were used to play with him point things like the "FCK Thing" because of it. So, in the late 2002, he decided to call his editor project "FCKeditor", as it fitted well with his friends jokes. The name sounded good in any case.
The Problem
Well, "FCKeditor" doesn't always sound good really. It depends on the person reading it.
For native English speakers (Frederico is Brazilian), the FCK letters combined together are a shortcut for a bad word (probably the most used bad word in English). This information came to Frederico too late, after 2 years of FCKeditor, and the project was already too diffuse and mature to think about changing its name.
Many will say that the name is not a problem. The important thing is the quality of the software. This is true, but many others will feel the overall editor quality lower just because of that name fact. People may not understand how serious we are about FCKeditor, not taking us seriously because of it.
Rebranding
Changing the editor name right now is an extremely complex task. In the marketing point of view, it may be a bad decision, but it depends on the benefits it could bring.
We feel that we need to face this change. We want to make our editor perfect in all senses, so why not work on its name?
The New Name: CKEditor
After long discussions at our forums, polls, and in-depth thoughts, we have defined the new name for the editor: CKEditor. The "F" has been dropped from "FCK", and the "E" is now uppercased to avoid the confusion we had in the past.
Being this new name a perfect solution for a new editor product is still discussible, but it's proving to be the best compromise for a successful name changing in our case.
This change would also allow us following a product like with the "CK" prefix: CKEditor, CKFinder, CKPackager, etc. The CK letters stand for "Content and Knowledge". | http://docs.cksource.com/FCKeditor_3.x/Design_and_Architecture/Rebranding | 2017-07-20T14:45:58 | CC-MAIN-2017-30 | 1500549423222.65 | [] | docs.cksource.com |
Creating an Alarm
Alarms you create in the Amazon Redshift console are Amazon CloudWatch alarms. They
are useful
because they help you make proactive decisions about your cluster and its databases.
You can set one or more alarms on any of the metrics listed in Amazon Redshift CloudWatch Metrics. For example, setting an alarm for high
CPUUtilization on a cluster
node will help indicate when the node is over-utilized. Likewise, setting an alarm
for low
CPUUtilization on a cluster node, will help indicate when the
node is underutilized.
This section explains how to create an alarm using the Amazon Redshift console. You can create an alarm using the Amazon CloudWatch console or any other way you typically work with metrics such as with the Amazon CloudWatch Command Line Interface (CLI) or one of the Amazon Software Development Kits (SDKs). To delete an alarm, you must use the Amazon CloudWatch console.
To create an alarm on a cluster metric in the Amazon Redshift console
Sign in to the AWS Management Console and open the Amazon Redshift console at.
In the left navigation, click Clusters.
In the Cluster list, select the cluster for which you want to view cluster performance during query execution.
Select the Events+Alarms tab.
Click Create Alarm.
In the Create Alarm dialog box, configure an alarm, and click Create.
Note
The notifications that are displayed the Send a notification to box are your Amazon Simple Notification Service (Amazon SNS) topics. To learn more about Amazon SNS and creating topics, go to Create a Topic in the Amazon Simple Notification Service Getting Started Guide. If you don't have any topics in Amazon SNS, you can create a topic in the Create Alarm dialog by clicking the create topic link.
The details of your alarm will vary with your circumstance. In the following example, the average CPU utilization of a node (Compute-0) has an alarm set so that if the CPU goes above 80 percent for four consecutive five minute periods, a notification is sent to the topic redshift-example-cluster-alarms.
In the list of alarms, find your new alarm.
You may need to wait a few moments as sufficient data is collected to determine the state of the alarm as shown in the following example.
After a few moments the state will turn to OK.
(Optional) Click the Name of the alarm to change the configuration of the alarm or click the view link under More Options to go to this alarm in the Amazon CloudWatch console. | http://docs.aws.amazon.com/redshift/latest/mgmt/performance-metrics-alarms.html | 2017-07-20T14:44:26 | CC-MAIN-2017-30 | 1500549423222.65 | [] | docs.aws.amazon.com |
Writing Client Applications
- Add Self-Signed SSL Certificate to JVM Truststore
- Use Configuration Values
- Vary Configurations Based on Profiles
- View Client Application Configuration
- Refresh Client Application Configuration
- Use Client-Side Decryption
- Use a HashiCorp Vault Server
- Disable HTTP Basic Authentication
Refer to the “Cook” sample application to follow along with the code in this topic.
To use a Spring Boot application as a client for a Config Server instance, you must add the dependencies listed in the Client Dependencies topic to your application’s build file. Be sure to include the dependencies for Config Server as well.
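For reference, a Gradle declaration for the Config Server client starter looks like the following. The exact group, artifact, and version to use are listed in the Client Dependencies topic, so treat these coordinates as illustrative; the version is normally supplied by the Spring Cloud Services dependencies BOM rather than declared inline.

compile("io.pivotal.spring.cloud:spring-cloud-services-starter-config-client")
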
Important: Because of a dependency on Spring Security, the Spring Cloud® Config Client starter will by default cause all application endpoints to be protected by HTTP Basic authentication. If you wish to disable this, please see Disable HTTP Basic Authentication below.
Add Self-Signed SSL Certificate to JVM Truststore
Spring Cloud Services uses HTTPS for all client-to-service communication. If your Pivotal Cloud Foundry installation is using a self-signed SSL certificate, the certificate will need to be added to the JVM truststore before your client application can consume properties from a Config Server service instance.
Spring Cloud Services can add the certificate for you automatically. For this to work, you must set the
TRUST_CERTS environment variable on your client application to the API endpoint of your Elastic Runtime instance:
$ cf set-env cook TRUST_CERTS api.cf.wise.com
Setting env variable 'TRUST_CERTS' to 'api.cf.wise.com' for app cook in org myorg / space development as user...
OK
TIP: Use 'cf restage' to ensure your env variable changes take effect

$ cf restage cook
Note: The
CF_TARGET environment variable was formerly recommended for configuring Spring Cloud Services to add a certificate to the truststore.
CF_TARGET is still supported for this purpose, but
TRUST_CERTS is more flexible and is now recommended instead.
As the output from the
cf set-env command suggests, restage the application after setting the environment variable.
Use Configuration Values
When the application requests a configuration from the Config Server, it will use a path containing the application name (as described in the Configuration Clients topic). You can declare the application name in
bootstrap.properties,
bootstrap.yml,
application.properties, or
application.yml.
In
bootstrap.yml:
spring:
  application:
    name: cook
This application will use a path with the application name
cook, so the Config Server will look in its configuration source for files whose names begin with
cook, and return configuration properties from those files.
Now you can (for example) inject a configuration property value using the
@Value annotation. The Menu class reads the value of
special from the
cook.special configuration property.
@RefreshScope
@Component
public class Menu {

  @Value("${cook.special}")
  String special;

  //...

  public String getSpecial() {
    return special;
  }

  //...

}
The
Application class is a
@RestController. It has an injected
menu and returns the
special (the value of which will be supplied by the Config Server) in its
restaurant() method, which it maps to
/restaurant.
@RestController
@SpringBootApplication
public class Application {

  @Autowired
  private Menu menu;

  @RequestMapping("/restaurant")
  public String restaurant() {
    return String.format("Today's special is: %s", menu.getSpecial());
  }

  //...
Vary Configurations Based on Profiles
You can provide configurations for multiple profiles by including appropriately-named
.yml or
.properties files in the Config Server instance’s configuration source (the Git repository). Filenames follow the format
{application}-{profile}.{extension}, as in
cook-production.yml. (See the The Config Server topic.)
The application will request configurations for any active profiles. To set profiles as active, you can use the
SPRING_PROFILES_ACTIVE environment variable, set for example in
manifest.yml.
applications:
- name: cook
  host: cookie
  services:
  - config-server
  env:
    SPRING_PROFILES_ACTIVE: production
The sample configuration source cook-config contains the files
cook.properties and
cook-production.properties. With the active profile set to
production as in
manifest.yml above, the application will make a request of the Config Server using the path
/cook/production, and the Config Server will return properties from both
cook-production.properties (the profile-specific configuration) and
cook.properties (the default configuration); for example:
{ "name":"cook", "profiles":[ "production" ], "label":"master", "propertySources":[ { "name":"", "source": { "cook.special":"Cake a la mode" } }, { "name":"", "source": { "cook.special":"Pickled Cactus" } } ] }
As noted in the Configuration Clients topic, the application must decide what to do when the server returns multiple values for a configuration property, but a Spring application will take the first value for each property. In the example response above, the configuration for the specified profile (
production) is first in the list, so the Boot sample application will use values from that configuration.
View Client Application Configuration

Spring Boot Actuator adds an env endpoint to the application that exposes the application environment's properties. To use it, include the Spring Boot Actuator dependency in your project.
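If using Maven, the equivalent dependency can be added to pom.xml (shown here to mirror the Gradle instructions below; these are the standard Spring Boot Actuator starter coordinates):

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>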
If using Gradle, add to
build.gradle:
compile("org.springframework.boot:spring-boot-starter-actuator")
You can now visit
/env to see the application environment’s properties (the following shows an excerpt of an example response):
$ curl { "profiles":[ "dev","cloud" ], "configService:":{ "cook.special":"Pickled Cactus" }, "vcap":{ "vcap.application.limits.mem":"512", "vcap.application.application_uris":"cookie.apps.wise.com", "vcap.services.config-server.name":"config-server", "vcap.application.uris":"cookie.apps.wise.com", "vcap.application.application_version":"179de3f9-38b6-4939-bff5-41a14ce4e700", "vcap.services.config-server.tags[0]":"configuration", "vcap.application.space_name":"development", "vcap.services.config-server.plan":"standard", //...
Refresh Client Application Configuration
Spring Boot Actuator also adds a
refresh endpoint to the application. This endpoint is mapped to
/refresh, and a POST request to the
refresh endpoint refreshes any beans which are annotated with
@RefreshScope. You can thus use
@RefreshScope to refresh properties which were initialized with values provided by the Config Server.
The
Menu.java class is marked as a
@Component and also annotated with
@RefreshScope.
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.stereotype.Component;

@RefreshScope
@Component
public class Menu {

  @Value("${cook.special}")
  String special;

  //...
This means that after you change values in the configuration source repository, you can update the
special on the
Application class’s
menu with a refresh event triggered on the application:
$ curl
Today's special is: Pickled Cactus

$ git commit -am "new special"
[master 3c9ff23] new special
 1 file changed, 1 insertion(+), 1 deletion(-)

$ git push

$ curl -X POST
["cook.special"]

$ curl
Today's special is: Birdfeather Tea
Use Client-Side Decryption
On the Config Server, the decryption features are disabled, so encrypted property values from a configuration source are delivered to client applications unmodified. You can use the decryption features of Spring Cloud Config Client to perform client-side decryption of encrypted values.
To use the decryption features in a client application, you must use a Java buildpack which contains the Java Cryptography Extension (JCE) Unlimited Strength policy files. These files are contained in the Cloud Foundry Java buildpack from version 3.7.1.
If you cannot use version 3.7.1 or later, you can add the JCE Unlimited Strength policy files to an earlier version of the Cloud Foundry Java buildpack. Fork the buildpack on GitHub, then download the policy files from Oracle and place them in the buildpack’s
resources/open_jdk_jre/lib/security directory. Follow the instructions in the Managing Custom Buildpacks topic to add this buildpack to Pivotal Cloud Foundry. Be sure that it has the lowest position of all enabled Java buildpacks.
You must also include Spring Security RSA as a dependency.
If using Maven, include in
pom.xml:
<dependency>
  <groupId>org.springframework.security</groupId>
  <artifactId>spring-security-rsa</artifactId>
</dependency>
If using Gradle, include in
build.gradle:
compile("org.springframework.security:spring-security-rsa")
Encrypted values must be prefixed with the string
{cipher}. If using YAML, enclose the entire value in single quotes, as in
'{cipher}vALuE'; if using a properties file, do not use quotes. The configuration source for the Cook application has a
secretMenu property in its
cook-encryption.properties:
secretMenu={cipher}AQA90Q3GIRAMu6ToMqwS++En2iFzMXIWX99G66yaZFRHrQNq64CntqOzWymd3xE7uJpZKQc9XBIkfyRz/HUGhXRdf3KZQ9bqclwmR5vkiLmN9DHlAxS+6biT+7f8ptKo3fzQ0gGOBaR4kTnWLBxmVaIkjq1Qze4aIgsgUWuhbEek+3znkH9+Mc+5zNPvwN8hhgDMDVzgZLB+4YnvWJAq3Au4wEevakAHHxVY0mXcxj1Ro+H+ZelIzfF8K2AvC3vmvlmxy9Y49Zjx0RhMzUx17eh3mAB8UMMRJZyUG2a2uGCXmz+UunTA5n/dWWOvR3VcZyzXPFSFkhNekw3db9XZ7goceJSPrRN+5s+GjLCPr+KSnhLmUt1XAScMeqTieNCHT5I=
Put the key on the client application classpath. You can either add the keystore to the buildpack’s
resources/open_jdk_jre/lib/security directory (as described above for the JCE policy files) or include it with the source for the application. In the Cook application, the key is placed in
src/main/resources.
cook
└── src
    └── main
        └── resources
            ├── application.yml
            ├── bootstrap.yml
            └── server.jks
Important: Cook is an example application, and the key is packaged with the application source for example purposes. If at all possible, use the buildpack for keystore distribution.
Specify the key location and credentials in
bootstrap.properties or
bootstrap.yml using the
encrypt.keyStore properties:
encrypt:
  keyStore:
    location: classpath:/server.jks
    password: letmein
    alias: mytestkey
    secret: changeme
The Menu class has a String property
secretMenu.
@Value("${secretMenu}") String secretMenu; //... public String getSecretMenu() { return secretMenu; }
A default value for
secretMenu is in
bootstrap.yml:
secretMenu: Animal Crackers
In the Application class, the method
secretMenu() is mapped to
/restaurant/secret-menu. It returns the value of the
secretMenu property.
@RequestMapping("/restaurant/secret-menu") public String secretMenu() { return menu.getSecretMenu(); }
After making the key available to the application and installing the JCE policy files in the Java buildpack, you can cause the Config Server to serve properties from the
cook-encryption.properties file by activating the
encryption profile on the application, e.g. by running
cf set-env to set the
SPRING_PROFILES_ACTIVE environment variable:
$ cf set-env cook SPRING_PROFILES_ACTIVE dev,encryption
Setting env variable 'SPRING_PROFILES_ACTIVE' to 'dev,encryption' for app cook in org myorg / space development as user...
OK
TIP: Use 'cf restage' to ensure your env variable changes take effect

$ cf restage cook
The application will decrypt the encrypted property after receiving it from the Config Server. You can view the property value by visiting
/restaurant/secret-menu on the application.
Use a HashiCorp Vault Server
You can configure the Config Server to use a HashiCorp Vault server as a configuration source, as described in the Configuring with Vault topic. To consume configuration from the Vault server via the service instance, your client application must be given a Vault token. You can give the token to an application by setting the
SPRING_CLOUD_CONFIG_TOKEN environment variable on the application. The Spring Cloud Services Connectors for Config Server will automatically renew the application’s token for as long as the application is running.
Important: If the application is entirely stopped (i.e., no instances continue to run) and its Vault token expires, you will need to create a new token for the application and re-set the
SPRING_CLOUD_CONFIG_TOKEN environment variable.
To generate a token for use in the application, you can run the
vault token-create command, providing a Time To Live (TTL) that is long enough for the application to be restaged after you have set the environment variable. The following command creates a token with a TTL of one hour:
$ vault token-create -ttl="1h"
After generating the token, set the environment variable on your client application and then restage the application for the environment variable setting to take effect:
$ cf set-env cook SPRING_CLOUD_CONFIG_TOKEN c3432ef5-6a78-8673-ea23-5528c26849e4
Setting env variable 'SPRING_CLOUD_CONFIG_TOKEN' to 'c3432ef5-6a78-8673-ea23-5528c26849e4' for app cook in org myorg / space development as user...
OK
TIP: Use 'cf restage cook' to ensure your env variable changes take effect

$ cf restage cook
The Spring Cloud Services Connectors for Config Server renew the application token for as long as the application continues to run. For more information about the token renewal performed by the connectors, see the HashiCorp Vault Token Renewal section of the Spring Cloud Connectors topic.
Renew Vault Token Manually
After creating a Vault token for an application, you can renew the token manually using the Config Server service instance bound to the application.
Note: The following procedure uses the jq command-line JSON processing tool.
Run
cf env, giving the name of an application that is bound to the service instance:
$ cf services
Getting services in org myorg / space development as admin...
OK

name            service           plan       bound apps   last operation
config-server   p-config-server   standard   vault-app    create succeeded

$ cf env vault-app
Getting env variables for app vault-app in org myorg / space development as admin...
OK

System-Provided:
{
 "VCAP_SERVICES": {
  "p-config-server": [
   {
    "credentials": {
     "access_token_uri": "",
     "client_id": "p-config-server-876cd13b-1564-4a9a-9d44-c7c8a6257b73",
     "client_secret": "rU7dMUw6bQjR",
     "uri": ""
    },
[...]
Then create a Bash script that accesses the Vault token renewal endpoint on the service instance's backing application. In the following two commands:
TOKEN=$(curl -k [ACCESS_TOKEN_URI] -u [CLIENT_ID]:[CLIENT_SECRET] -d grant_type=client_credentials | jq -r .access_token);

curl -H "Authorization: bearer $TOKEN" -H "X-VAULT-Token: [VAULT_TOKEN]" -H "Content-Type: application/json" -X POST [URI]/vault/v1/auth/token/renew-self -d '{"increment": [INTERVAL]}'
Replace the following placeholders using values from the
cf env command above:
[ACCESS_TOKEN_URI]with the value of
credentials.access_token_uri
[CLIENT_ID]with the value of
credentials.client_id
[CLIENT_SECRET]with the value of
credentials.client_secret
[URI]with the value of
credentials.uri
Replace the following placeholders with the relevant values:
[VAULT_TOKEN]with the Vault token string
[INTERVAL]with the number of seconds to set as the Vault token’s Time To Live (TTL)
After renewing the token, you can view its TTL by looking it up using the Vault command line. Run
vault token-lookup [TOKEN], replacing
[TOKEN] with the Vault token string:
$ vault token-lookup 72ec7ca0-de41-b2dc-8fe4-d74c4c9a4e75
Key               Value
---               -----
accessor          436db91b-6bfb-9eec-7bfb-913260488ce8
creation_time     1493360487
creation_ttl      3600
display_name      token
explicit_max_ttl  0
id                72ec7ca0-de41-b2dc-8fe4-d74c4c9a4e75
last_renewal_time 1493360718
meta
num_uses          0
orphan            false
path              auth/token/create
policies          [root]
renewable         true
ttl               997
Disable HTTP Basic Authentication
The Spring Cloud Config Client starter has a dependency on Spring Security. Unless your application has other security configuration, this will cause all application endpoints to be protected by HTTP Basic authentication.
If you do not yet want to address application security, you can turn off Basic authentication by setting the
security.basic.enabled property to
false. In
application.yml or
bootstrap.yml:
security:
  basic:
    enabled: false
You might make this setting specific to a profile (such as the
dev profile if you want Basic authentication disabled only for development):
---
spring:
  profiles: dev

security:
  basic:
    enabled: false
For more information, see “Security” in the Spring Boot Reference Guide.
Note: Because of the Spring Security dependency, HTTPS Basic authentication will also be enabled for Spring Boot Actuator endpoints. If you wish to disable that as well, you must also set the
management.security.enabled property to
false. See “Customizing the management server port” in the Spring Boot Reference Guide. | http://docs.pivotal.io/spring-cloud-services/1-4/common/config-server/writing-client-applications.html | 2017-07-20T14:26:46 | CC-MAIN-2017-30 | 1500549423222.65 | [] | docs.pivotal.io |
Releases:
Changes:
Lots of great new features:
FOSS4G:
Documentation:.
A few specific call outs:
GeoTools is shaping up for an excellent year in 2012, you can get a sneak peek by viewing the change proposals already underway.
Javadoc Plugin - Generate Javadoc for the project.
JSPC Plugin - Pre-compile JavaServer Pages (JSP)
One Plugin - Build Maven 1.x plugins with Maven 2.x.

Tools - Merge repositories, etc.
Remote Resources Plugin - Filter and include packaged resources.
Resources Plugin - Copy the resources to the output directory for including in the JAR.
Shade Plugin - Bundle project classes and dependencies into an uber JAR.
Site Plugin - Generate a site for the current project.
Source Plugin - Build a JAR of sources for use in IDEs and distribution to the repository.
Stage Plugin - Copy artifacts from one repository to another.
Surefire Plugin - Run the Junit tests in an isolated classloader.
Verifier Plugin - Useful for integration tests - verifies the existence of certain conditions.
War Plugin - Build a WAR from the current project.
XSLT Plugin - Run XSL Transformations. | http://docs.codehaus.org/pages/viewpage.action?pageId=101089367 | 2014-11-21T02:54:02 | CC-MAIN-2014-49 | 1416400372542.20 | [] | docs.codehaus.org |
Office 365 SharePoint Online performance troubleshooter
Introduction
The Office 365 SharePoint Online client performance diagnostic package collects information that can be used to troubleshoot SharePoint Online client performance issues. This diagnostic package also lets you capture a Fiddler trace of HTTP(S) traffic while you reproduce these performance issues.
This diagnostic package uploads trace files of up to 2 gigabytes (GB) after the files are compressed.
More Information
Required permissions
The rules in the diagnostic package require that you are the SharePoint Online site collection administrator for the SharePoint Online URL that you enter.
This article describes the information that may be collected from a computer that's trying to connect to SharePoint Online in Office 365.
Fiddler or network trace output
The following data may be collected by the Network Capture diagnostic that's run by the Microsoft Support Diagnostic Tool.
The files are typically large, and therefore the diagnostic may take several minutes to finish. After this diagnostic runs, the collected traces will be automatically compressed and then uploaded to Microsoft Support. A total size of up to 2 GB can be uploaded.
If the results files are larger than 2 GB after compression, some files won't be uploaded and will be left on your system. In this case, you must contact a support professional to ask for an alternative way to upload the remaining collected information.
Fiddler output
The fiddler tracing output is described in the following Microsoft Knowledge Base article, Fiddler tracing of HTTP(S).
Site performance rules
Prerequisites
To install this package, you must have Windows PowerShell 2.0 installed on the computer. For more information, go to the following Microsoft Knowledge Base article, Windows Management Framework (Windows PowerShell 2.0, WinRM 2.0, and BITS 4.0).
The following checks are performed by the Office 365 SharePoint Online diagnostic package:
References
For more information about which operating systems can run Microsoft Support's diagnostic packages, go to Information about Microsoft Automated Troubleshooting Services and Support Diagnostic Platform.
Still need help? Go to Microsoft Community.
'Type your credentials'], dtype=object) ] | docs.microsoft.com |
To communicate with the New Relic collector over HTTPS, you need to have the proper certificates for trusted signers in the trust store on your app server. There are two ways to do this:
- Use YAML-based configuration.
- Add the bundled list of New Relic trusted signers to the local store.
Using YAML-based configuration
The New Relic Java agent bundles the list of trusted signers in the agent
newrelic.jar file. If you do not want to change the local trust store, you can activate them by setting
use_private_ssl to
true in the
newrelic.yml agent configuration file:
common: &default_settings
  use_private_ssl: true

  #
  # ============================== LICENSE KEY ===============================
  # You must specify the license key associated with your New Relic
  ...
Adding New Relic trusted signers to the local store
You can also add the bundled list of trusted signers to your local trust store. The default location for the local trust store is
$JAVA_HOME/jre/lib/security/cacerts. To override this location, set the
javax.net.ssl.truststore property in your launch command to the target location.
To add the bundled list of trusted signers to your local trust store:
Make a backup copy of your trust store:
cp /path/to/truststore /path/to/truststore.orig
Extract the New Relic trust store from the agent jar:
jar xvf /path/to/newrelic.jar nrcerts
If you are prompted for a password during this step, leave the password space blank and confirm.
Merge the New Relic trust store into your trust store:
keytool -importkeystore -srckeystore nrcerts -destkeystore /path/to/truststore
- Restart your app server to take advantage of the updated trust store and communicate with New Relic securely.
Step 3 refers to
srckeystore and
destkeystore even though we are manipulating trust stores. This is correct. A trust store is a key store used for client side certificates.
For more help
Additional documentation resources include New Relic for Java (compatibility and requirements, installation, and configuration.) | https://docs.newrelic.com/docs/agents/java-agent/configuration/configuring-your-ssl-certificates | 2019-09-15T12:05:05 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.newrelic.com |
Released on:
Tuesday, September 19, 2017 - 14:44
Notes
New Relic Infrastructure Agent builds are available for multiple platforms. See Update the Infrastructure agent for how to download and update the appropriate version of the agent for your platform.
Note also that this is a Windows and Linux release.
Improvements
- Container names will now be sent up as a part of the process sample | https://docs.newrelic.com/docs/release-notes/infrastructure-release-notes/infrastructure-agent-release-notes/new-relic-infrastructure-agent-10783 | 2019-09-15T12:20:45 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.newrelic.com |
Released on:
Wednesday, May 23, 2018 - 07:53
Notes
A new version of the Infrastructure agent has been released.
Improvements
- Unix processes are now sampled in a non-blocking manner.
Bug fixes
- In Linux, solved a bug that prevented the agent submitting the metrics from all the storage devices when reading the data from a single device failed.
- Fixed negative CPU steal in some old versions of Linux paravirtualized kernels. | https://docs.newrelic.com/docs/release-notes/infrastructure-release-notes/infrastructure-agent-release-notes/new-relic-infrastructure-agent-10909 | 2019-09-15T12:25:09 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.newrelic.com |
See the following topics for instructions on how to work with WSO2 Git repositories:
Identifying WSO2 Git Repositories
In GitHub, the WSO2 source code is organized into separate repositories and each WSO2 product is built using several of these repositories. Therefore, if you are interested in editing the source code and building a customized product, you need to first identify the Git repositories that you require.
The Git repositories used by WSO2 products are of two categories:
Component-level repositories: A component.
Cloning a Git repository
Given below are the steps that you need to follow in order to clone a Git repository to your computer.
Clone the repository, so that the files that are in the WSO2 Git repository are available on your computer:
git clone <DEPENDENT_REPOSITORY_URL> <LOCAL_FOLDER_PATH>
For example, clone the
carbon-commonsrepository, which is in the WSO2 Git repository, to a folder named
CC_SOURCE_HOMEon your computer:
git clone /Users/testuser/Documents/CC_SOURCE_HOME
Navigate to the folder in your computer to which the code base is cloned:
cd <DEPENDENT_REPOSITORY_FOLDER_PATH>
Example:
cd /Users/testuser/Documents/CC_SOURCE_HOME
Clone the dependent repository tag that corresponds to the version of the code base:
git checkout -b <LOCAL_BRANCH> <REMOTE_BRANCH/TAG>

To build a product from source, first build each dependent repository that you have cloned to your computer, and then build the product repository.
When you build the product repository, Maven will first check in the local Maven repository on your computer and fetch the repositories that you built in Step 1. Maven will then fetch the remaining dependent repositories from Nexus. This process will give you a new product pack with your changes.
You can find the new binary pack (ZIP file) in the target directory of the product repository's distribution module. Set MAVEN_OPTS="-Xms1024m -Xmx2048m -XX:MaxPermSize=1024m" to avoid the Maven OutOfMemoryError.
Use one of the following Maven commands to build your repositories: | https://docs.wso2.com/pages/viewpage.action?pageId=47532118&navigatingVersions=true | 2019-09-15T12:08:42 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.wso2.com |
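The original table of commands is not reproduced here; the following are typical Maven commands for building WSO2 repositories (a sketch, and the exact flags to use depend on the repository):

mvn clean install                             # builds the repository and runs all tests
mvn clean install -Dmaven.test.skip=true      # builds the repository without running tests
mvn clean install -Dmaven.test.skip=true -o   # builds offline against your local Maven repository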
We have had reports about user groups losing access to modules and some AD users having had problems uploading into the Media Archive. This has been corrected and improvements including the option to use single quotation marks in the Page name and Title as well as better scaling of gif, bmp and png images has been included in this hotfix.
The hotfix is classified as recommended to customers using groups for permissions and customers needing to use single quotation marks in Page name and Title.
For more details please read the release notes.
Download Litium Studio 4.5.2 Hotfix 3 (you need to be signed in to access the release page)
Cert2SPC
The Cert2SPC tool creates a test Software Publisher Certificate (SPC) by using existing X.509 certificates. Cert2SPC can wrap multiple X.509 certificates into a PKCS #7 signed-data object. The tool is installed in the \Bin folder of the Microsoft Windows Software Development Kit (SDK) installation path.
Cert2SPC is available as part of the Windows SDK, which you can download from.
Note: This tool is for test purposes only. A valid SPC is obtained from a certification authority.
Cert2SPC Cert1.cer Cert2.cer … Output.spc | https://docs.microsoft.com/en-us/windows/win32/seccrypto/cert2spc | 2019-09-15T13:19:58 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.microsoft.com |
The red axis of the Transform in world space.
Manipulate a GameObject’s position on the X axis (red axis) of the transform in world space. Unlike Vector3.right, Transform.right moves the GameObject while also considering its rotation.
When a GameObject is rotated, the red arrow representing the X axis of the GameObject also changes direction. Transform.right moves the GameObject in the red arrow’s axis (X).
For moving the GameObject on the X axis while ignoring rotation, see Vector3.right.
//Attach this script to a GameObject with a Rigidbody2D component. Use the left and right arrow keys to see the transform in action.
//Use the up and down keys to change the rotation, and see how using Transform.right differs from using Vector3.right.

using UnityEngine;

public class Example : MonoBehaviour
{
    public float m_Speed = 2.0f;
    Rigidbody2D m_Rigidbody;

    void Start()
    {
        //Fetch the Rigidbody2D attached to this GameObject
        m_Rigidbody = GetComponent<Rigidbody2D>();
    }

    void Update()
    {
        if (Input.GetKey(KeyCode.RightArrow))
        {
            //Move the Rigidbody to the right constantly at the speed you define (the red arrow axis in Scene view)
            m_Rigidbody.velocity = transform.right * m_Speed;
        }

        if (Input.GetKey(KeyCode.LeftArrow))
        {
            //Move the Rigidbody to the left constantly at the speed you define (the red arrow axis in Scene view)
            m_Rigidbody.velocity = -transform.right * m_Speed;
        }

        if (Input.GetKey(KeyCode.UpArrow))
        {
            //rotate the sprite about the Z axis in the positive direction
            transform.Rotate(new Vector3(0, 0, 1) * Time.deltaTime * m_Speed, Space.World);
        }

        if (Input.GetKey(KeyCode.DownArrow))
        {
            //rotate the sprite about the Z axis in the negative direction
            transform.Rotate(new Vector3(0, 0, -1) * Time.deltaTime * m_Speed, Space.World);
        }
    }
}
remote access.
In addition to the AWS instances, WSO2 requires access to the following resources:
Implement monitoring and alerting

Application health is monitored via NRPE agents. All statistics collected via the NRPE agents are presented using Icinga, the monitoring and dashboard tool. We also configure all Linux hosts with Simple Network Management Protocol (SNMP) and host the statistics that are collected via SNMP using Cacti, the network graphing solution. All statistical dashboards are exposed only to the WSO2 network over HTTP/S. To communicate with the third-party services required to extend alerts, all monitoring hosts need to have Internet connectivity. However, this doesn't mean that the monitoring hosts are placed in the public subnet.
W, phones, etc.
Implement backup and disaster recovery
<coming up soon>
Commit the artifacts
<coming up soon>
Next, go to Support and Maintenance. | https://docs.wso2.com/pages/viewpage.action?pageId=48290427 | 2019-09-15T12:37:32 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.wso2.com |
MAAS communication
Machine/rack
In multi-region/rack clusters (i.e. HA clusters), all machine communication with MAAS is proxied through rack controllers, including HTTP metadata, DNS, syslog and APT (proxying via Squid). Note that in single-region/rack clusters, the region controller manages communication.
Proxying through rack controllers is useful in environments where communication between machines and region controllers is restricted.
MAAS creates an internal DNS domain, not manageable by the user, and a special DNS record that resolves to multiple rack controllers. This allows machines to reach any available rack controller.
Note: Zone management and maintenance still happen within the region controller.
Rack/region
Each rack controller must be able to initiate TCP connections on the following ports:
HTTP
The rack controller installs
nginx, which serves as a proxy and as an HTTP
server, binding to port 5248. Machines contact the metadata server via the rack
controller.
Syslog
See Syslog for more information about MAAS syslog communication as well as how to set up a remote syslog server. | https://old-docs.maas.io/2.5/en/intro-communication | 2019-09-15T13:48:04 | CC-MAIN-2019-39 | 1568514571360.41 | [] | old-docs.maas.io |
Creates an object of the type designated by the specified generic type parameter.
Namespace: DevExpress.ExpressApp
Assembly: DevExpress.ExpressApp.v19.1.dll
public ObjectType CreateObject<ObjectType>()
Public Function CreateObject(Of ObjectType) As ObjectType
This method calls a protected virtual method, CreateObjectCore, which must be overridden in the BaseObjectSpace class descendants. After an object of the specified type is created, the BaseObjectSpace.SetModified method is called so that the object is saved to the database during the next changes commit (see BaseObjectSpace.CommitChanges).
Use this method to create objects in Controllers. In a regular business class, create objects directly via their constructor. | https://docs.devexpress.com/eXpressAppFramework/DevExpress.ExpressApp.BaseObjectSpace.CreateObject--1 | 2019-09-15T12:15:44 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.devexpress.com |
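A minimal sketch of calling this method from a Controller follows; the Contact class and its FirstName property are assumptions used only for illustration:

public class CreateContactController : ViewController {
    private void CreateContact() {
        // Create a new persistent object in the View's Object Space...
        Contact contact = View.ObjectSpace.CreateObject<Contact>();
        contact.FirstName = "Mary";
        // ...and commit the change so the object is saved to the database.
        View.ObjectSpace.CommitChanges();
    }
}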
Call Actions for Outbound Calls
Call actions are standard controls for outbound interactions. You can perform the following call actions:
- Instant Call Transfer—Click Instant Call Transfer to transfer the current outbound interaction to a contact or internal target that you select by using the Team Communicator.
Note: When you transfer an ASM call, the outbound record is also transferred. The ownership of the record might also be transferred to the transfer target if this agent is also part of the campaign. If the agent is not part of the campaign, the ownership of the record stays with you.
- Instant Call Conference—Click Instant Call Conference (
) to start a voice conference instantly with the current outbound interaction and a contact or internal target that you select by using the Team Communicator.
- Send DTMF—You can attach numerical data to a call by entering dual-tone multifrequency (DTMF) digits into the call case history. Click the keypad button (
) to open the DTMF keypad. Type numbers into the number field, or click the keypad numbers to enter numbers.
- Schedule a Callback—Click Schedule a Callback (
) to reschedule a call (for example, if the contact is too busy to respond now) for a different date and/or time.
- Start Consultation—Start a (
) voice consultation with an internal target or a contact. The target can choose not to accept the request. The target can end the consultation. You can end the consultation, or you can transfer or conference your current interaction to or with the consultation target.
- Mark Done—Complete a call, close the Voice Interaction window, and preview the next contact on the campaign call list by clicking Mark Done (
). You might be configured to specify a disposition code before you can click Mark Done.
(Outbound Preview calls only) Click Done and Stop (
) to stop opening the preview for the next call automatically.
- Party Action Menu—In the call-status area, click the down-arrow that is beside the name of the contact to start a different interaction type with the contact, such as an e-mail interaction, if the contact has additional channel information available in the contact database.
This page was last modified on February 21, 2014, at 05:57.
To open a configuration (settings) dialog, use one of the two menubar items provided for this purpose; only the latter dialog has the options to change the cube dimensions and shuffling difficulty. Below is a list of the options available.
- Watch shuffling in progress?
Provides an animated view of the cube when it is being shuffled by the Kubrick program. You can select the speed of animation.
- Watch your moves in progress?
Provides an animated view of your own moves at a speed you can select.
- Speed of moves:
Sets the speed at which animations go. The range is 1 to 15 degrees of turn per animation frame.
- % of bevel on edges of cubies:
Sets the percentage of bevelled edge on each cubie, relative to the size of the colored stickers. It affects the overall shape of each cubie. The range is from 4% to 30%.
- Cube dimensions:
Sets the three dimensions of the cube, brick or mat in cubies per side. Dimensions can range from 2x2x1 up to 6x6x6: the larger the dimensions, the harder the puzzle. Only one of the dimensions can be 1, otherwise the puzzle becomes too easy.
- Moves per shuffle (difficulty):
Sets the number of moves the Kubrick program will use to shuffle the cube. The number can range from 0 to 50: the more moves, the harder the puzzle. 2, 3 or 4 shuffling moves make for relatively easy puzzles, especially if the shuffling moves can be watched.
Selecting zero moves can be useful if you wish to experiment with different sequences of moves and what they do to the cube, such as when you are searching for pretty patterns or new solving moves. | https://docs.kde.org/stable5/en/kdegames/kubrick/configuration.html | 2019-09-15T13:04:53 | CC-MAIN-2019-39 | 1568514571360.41 | [array(['/stable5/en/kdoctools5-common/top-kde.jpg', None], dtype=object)] | docs.kde.org |
The Identity and Access Management (IAM) API allows you to manage users, user groups, permissions, and LDAP configuration settings through a RESTful interface. It offers more functionality than the DC/OS UI. The host name to use in requests depends on where your program is running:
Use cluster URL, if your program runs outside of the DC/OS cluster. This can be obtained by launching the DC/OS UI.
Use
master.mesos, if your program runs inside of the cluster.
Using the IAM API
To get an authentication token, pass the credentials of a local user or service account in the body of a
POST request to
/auth/login.
To log in local user accounts supply
uid and
password in the request.
curl -i -X POST https://<host-ip>/acs/api/v1/auth/login -d '{"uid": "<uid>", "password": "<password>"}' -H 'Content-Type: application/json'
To log in service accounts supply user ID and a service login token in the request. The service login token is a RFC 7519 JWT of type RS256. It must be constructed by combining the service account (
uid) and an expiration time (
exp) claim in the JWT format. The JWT requirements for a service login token are:
- Header
{ "alg": "RS256", "typ": "JWT" }
- Payload
{ "uid": "<uid>", "exp": "<expiration_time>" }
The provided information must then be encrypted using the service account’s private key. This can be done manually using jwt.io or programmatically with your favorite JWT library. The final encoding step should result in a
base64 encoded JWT which can be passed to the IAM.
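As one option, the token could be generated with Python and the PyJWT library; the key file name, service account ID, and five-minute expiry below are assumptions for illustration:

import time
import jwt  # PyJWT

# Load the service account's private key (PEM format).
with open("service-account-private-key.pem", "r") as key_file:
    private_key = key_file.read()

# The service login token only needs the uid and exp claims, signed with RS256.
service_login_token = jwt.encode(
    {"uid": "my-service-account", "exp": int(time.time()) + 300},
    private_key,
    algorithm="RS256")

print(service_login_token)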
curl -X POST https://<host-ip>/acs/api/v1/auth/login -d '{"uid": "<service-account-id>", "token": "<service-login-token>"}' -H 'Content-Type: application/json'
Both requests return a DC/OS" }
The DC/OS authentication token is also a RFC 7519 JWT of type RS256.
Using the DC/OS CLI
When you log in to the DC/OS CLI using dcos auth login, it obtains a DC/OS authentication token and stores it locally, so subsequent CLI calls are authenticated automatically.
Using the HTTP header

When calling the API directly, pass the authentication token in the Authorization HTTP header of each request, in the form Authorization: token=<authentication-token>. See provisioning custom services for more information.
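For example, a request against the IAM users collection might look like the following; the /users path is an assumption based on the /acs/api/v1 prefix used elsewhere on this page:

curl -H "Authorization: token=<authentication-token>" https://<host-ip>/acs/api/v1/users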
API reference
Logging
While the API returns informative error messages, you may also find it useful to check the logs of the service. Refer to Service and Task Logging for instructions. | http://docs-staging.mesosphere.com/mesosphere/dcos/1.13/security/ent/iam-api/ | 2019-09-15T13:33:59 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs-staging.mesosphere.com |
Create.
You can now proceed to enhance your project by connecting it with MongoDB or by adding AngularJS.
Connecting to MongoDB
You can connect your application with MongoDB using MongooseJS, an object modelling driver for Node.js. It is already installed in the MEAN stack so you only have to add the following lines to your app.js file:
var Mongoose = require('mongoose');
var db = Mongoose.createConnection('mongodb://USER:PASSWORD@localhost/DATABASE');
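Building on that connection, you could define a model and save a document. The Item model and its fields below are assumptions for illustration:

// Define a schema and bind it to the connection as a model.
var itemSchema = new Mongoose.Schema({
    name: String,
    quantity: Number
});
var Item = db.model('Item', itemSchema);

// Create and save a document, logging the outcome.
new Item({ name: 'widget', quantity: 3 }).save(function (err) {
    if (err) {
        console.log('Save failed: ' + err);
    } else {
        console.log('Item saved.');
    }
});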
Shape¶
Reference
Cloth Shape.
- Pin Group
Vertex Group to use for pinning.
The shape of the cloth can be controlled by pinning cloth to a Vertex Group. There are several ways of doing this including Weight Painting areas you want to pin. The weight of each vertex in the group controls how strongly it is pinned.
- Stiffness
- Target position stiffness.
- Sewing
Another method of restraining cloth similar to pinning is sewing springs. Sewing springs are virtual springs that pull vertices in one part of a cloth mesh toward vertices in another part of the cloth mesh. This is different from pinning which binds vertices of the cloth mesh in place or to another object. A clasp on a cloak could be created with a sewing spring. The spring could pull two corners of a cloak about a character’s neck. This could result in a more realistic simulation than pinning the cloak to the character’s neck since the cloak would be free to slide about the character’s neck and shoulders.
Sewing springs are created by adding extra edges to a cloth mesh that are not included in any faces. They should connect vertices in the mesh that should be pulled together. For example the corners of a cloak.
- Max Sewing Force
- Maximum force that can be applied by sewing springs. Zero means unbounded, but it is not recommended to leave the field at zero in most cases, as it can cause instability due to extreme forces in the initial frames where the ends of the sewing springs are far apart.
- Shrinking Factor
- Factor by which to shrink the cloth.
- Dynamic Mesh
Allows animating the rest shape of cloth using shape keys or modifiers (e.g. an Armature modifier or any deformation modifier) placed above the Cloth modifier. When it is enabled, the rest shape is recalculated every frame, allowing unpinned cloth to squash and stretch following the character with the help of shape keys or modifiers, but otherwise move freely under control of the physics simulation.
Normally cloth uses the state of the object in the first frame to compute the natural rest shape of the cloth, and keeps that constant throughout the simulation. This is reasonable for fully realistic scenes, but does not quite work for clothing on cartoon style characters that use a lot of squash and stretch. | https://docs.blender.org/manual/fr/dev/physics/cloth/settings/shape.html | 2019-09-15T11:57:11 | CC-MAIN-2019-39 | 1568514571360.41 | [array(['../../../_images/physics_cloth_settings_cloth-settings_pinning.png',
'../../../_images/physics_cloth_settings_cloth-settings_pinning.png'],
dtype=object) ] | docs.blender.org |
Contains layout and appearance options for a scale line.
Namespace: DevExpress.UI.Xaml.Gauges
Assembly: DevExpress.UI.Xaml.Gauges.v19.1.dll
public class ScaleLineOptions : GaugeOptionsBase
Public Class ScaleLineOptions Inherits GaugeOptionsBase
The options provided by a ScaleLineOptions instance can be accessed via the Scale.LineOptions property of a Scale object.
To define the layout of the line, use the GaugeOptionsBase.ZIndex and ScaleLineOptions.Offset property.
The appearance of the lines is set by the ScaleLineOptions.Thickness property.
For more information on lines, refer to the Line (Circular Scale) and Line (Linear Scale) documents. | https://docs.devexpress.com/Win10Apps/DevExpress.UI.Xaml.Gauges.ScaleLineOptions | 2019-09-15T12:19:07 | CC-MAIN-2019-39 | 1568514571360.41 | [] | docs.devexpress.com |
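As a rough sketch, the options could be set in code on an existing Scale object; the scale variable and the specific values are assumptions:

// Adjust the layout and appearance of the scale's line.
scale.LineOptions.Thickness = 2;
scale.LineOptions.Offset = 10;
scale.LineOptions.ZIndex = 1;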
Installing the Pulse Plugin
The Genesys Knowledge Center Plugin for Pulse provides access to Knowledge Center Server statistics such as KPI, user activity, trending topics, like and dislike trends, and activity types.
Install Genesys Knowledge Center Plugin for Pulse
Components required for Pulse plugin come pre-integrated into every deployment of Genesys Knowledge Center Server. So you do not need any additional steps to install them, please proceed directly to the configuration.
Configure Genesys Knowledge Center Plugin for Pulse
Start
- Log into Genesys Administrator.
- Go to Dashboard > Pulse.
- Click Add a Widget.
- Select the IFrame widget type.
- Set the name of the widget.
- Set the widget URL to: http://<host>:<es_port>/_plugin/gkc-kpi/?kbId=<knowledge_base_id>&lang=<chosen language>&tenantId=<tenantId>&timeframe=<timeframe> (see Knowledge Center Pulse Plugin Configuration Options for more information about parameters).
- Set the Maximized widget URL. You can set it to the Default Dashboard (http://<host>:<es_port>/_plugin/gkc-dashboard/#/dashboard/file/default.json) or the Performance Dashboard (http://<host>:<es_port>/_plugin/gkc-dashboard/#/dashboard/file/performance.json).
- Click Finish.
You have successfully added a widget for accessing Knowledge Center statistics.
End
Knowledge Center Pulse Plugin Configuration Options
You can customize the KPI widget by defining parameters in the URL:
http://<host>:<es_port>/_plugin/gkc-kpi/?kbId=<knowledge_base_id>&lang=<chosen language>&tenantId=<tenantId>&timeframe=<timeframe>
- kbId=<knowledge_base_id>— Set which knowledge base id to generate metrics for. If not defined, the metrics will be calculated for all accessible knowledge bases (within defined tenant, if provided).
- lang=<chosen language>— Set the language metrics will be generated for. If not defined, the metrics will be generated in all available languages within the knowledge base and/or tenant.
- tenantId=<tenantId> — Set which tenant to generate metrics for. If not defined, the metrics will be generated for all available tenants (not recommended for multi-tenant environments). Note: this option was added in the 8.5.303 release of the product.
- timeframe=<timeframe>— Timeframe to generate metrics (for example now-1M). If not defined, the metrics will be generated for the last hour (now-1h).
ImportantTimeframe expression must start with an “anchor” date - now and follow by a math expression starting from - and / (rounding). The units supported are y (year), M (month), w (week), d (day), h (hour), m (minute), and s (second). For example, now-1h, now-1h-1m, now-1h/d.
This page was last modified on June 21, 2017, at 06:59.
Access the Profiler window in the Unity Editor via the toolbar: Window > Profiler.
See Profiler overview for a summary of how the Profiler works.
The Profiler controls are in the toolbar.
When running at a fixed framerate or running in sync with the vertical blank, Unity records the waiting time in "Wait For Target FPS". The pane below the charts shows detailed information for whichever profiler area is currently selected.
The vertical scale of the timeline is managed automatically and will attempt to fill the vertical space of the window. Note that to get more detail in, say, the CPU Usage area, you can remove the Memory and Rendering areas. To profile a standalone player, build it from the Build Settings dialog (menu: File > Build Settings…).
Check the Development Build option in the dialog box. From here you can also check Autoconnect Profiler to make the Editor and Player Autoconnect at startup.
Enable remote profiling on iOS.
For WiFi profiling, follow these steps:
Note: The Android device and host computer (running the Unity Editor) must both be on the same subnet for the device detection to work.
For ADB profiling, follow these steps:
adb forward tcp:54
Chaincode Tutorials.
Two Personas
We offer two different perspectives on chaincode. One, from the perspective of an application developer developing a blockchain application/solution entitled Chaincode for Developers, and the other, Chaincode for Operators oriented to the blockchain network operator who is responsible for managing a blockchain network, and who would leverage the Hyperledger Fabric API to install, instantiate, and upgrade chaincode, but would likely not be involved in the development of a chaincode application. | https://hyperledger-fabric-docs-zh-cn.readthedocs.io/zh_CN/latest/chaincode.html | 2019-09-15T13:34:52 | CC-MAIN-2019-39 | 1568514571360.41 | [] | hyperledger-fabric-docs-zh-cn.readthedocs.io |
Created: 14/01/2016
Latest update: 03/01/2019
By: Villatheme
Thank you for purchasing our plugin. If you have any questions that are beyond the scope of this documentation, please feel free to request support at our Support Forum. Thanks so much!
Woo Coupon Box plugin is a powerful, professional solution to show WooCommerce coupon with Subscribe for WordPress websites. You can custom design unlimited with WordPress Customizer.
It is recommended using
1. Plugin WooCommerce is installed and activated already.
2. Make sure that those limits are set to at least the following minimum values in order to avoid trouble while installing.
PHP Time Limit: 30
PHP Max Input Vars: 1000
Memory Limit: 256M
Get the plugin installation package from your account download page and save it to your desktop.
Go to Plugin/ Add New/ Upload Plugin/ Choose file/ select the plugin zip file woocommerce-coupon-box.zip/ click “Install Now“/ click “Active plugin“.
Video Install and Set up WooCommerce Coupon Box:
Done! Let’s start using the plugin.
Go to Dashboard/ Woo Coupon Box/ Settings/ General to configure common settings of Woo Coupon Box.
In Coupon tab, you can set up coupons you will send to visitors who subscribe emails in Woo Coupon Box.
In the Email tab, you can configure the email will send to subscribers after they subscribe emails.
In Email API tab, you can sync subscribing emails to your MailChimp, Active Campaign, SendGrid account.
In the Assign tab, you can select the pages on which the Coupon Box pop-up will appear.
In the Design tab, click on Go to design now to go the Design page of WooCommerce Coupon Box.
In the Design page
In Update tab, fill in your Envato purchase code to active the auto-update feature.
In Woo Coupon Box/ Email Subscribe, you can check subscribed emails with information for the email address, subscribing time, email campaign, given coupon, MailChimp list and Active Campaign list.
In Woo Coupon Box/ Email Campaign, you can create, delete, and edit email campaigns. By default, you will have an Uncategorized campaign.
In Woo Coupon Box/ Export Email, you can export emails by date or by campaign. Email addresses will be exported as an Excel file.
Render Modes
RadSlider has different render modes that can change the actual HTML markup that is rendered. They are exposed via the RenderMode property that can have four possible values - Classic, Lightweight, Mobile and Auto. This functionality was introduced in the Q2 2014 version.
Classic—the traditional rendering mode that older versions of the control use.

Lightweight—the elastic capabilities of RadSlider are enabled in this mode.
Mobile—this mode is currently not supported. If you set it, the mode will fall back automatically to Lightweight.
Auto—this mode makes each control choose the appropriate rendering mode according to the used browser—Classic or Lightweight.
<telerik:RadSlider RenderMode="Lightweight" runat="server" ID="RadSlider1"></telerik:RadSlider>
RadSlider1.RenderMode = Telerik.Web.UI.RenderMode.Lightweight;
RadSlider1.RenderMode = Telerik.Web.UI.RenderMode.Lightweight
- A global setting in the web.config file that will affect the entire application, unless a concrete value is specified for a given control instance:
<appSettings> <add key="Telerik.Web.UI.Slider.RenderMode" value="lightweight" /> </appSettings> | https://docs.telerik.com/devtools/aspnet-ajax/controls/slider/mobile-support/render-modes | 2018-06-18T05:24:30 | CC-MAIN-2018-26 | 1529267860089.11 | [] | docs.telerik.com |
aggregation
An InfluxQL function that returns an aggregated value across a set of points. See InfluxQL Functions for a complete list of the available and upcoming aggregations.
Related entries: function, selector, transformation
batch
A collection of points in line protocol format, separated by newlines (
0x0A).
A batch of points may be submitted to the database using a single HTTP request to the write endpoint.
This makes writes via the HTTP API much more performant by drastically reducing the HTTP overhead.
InfluxData recommends batch sizes of 5,000-10,000 points, although different use cases may be better served by significantly smaller or larger batches.
Related entries: line protocol, point
continuous query (CQ)
An InfluxQL query that runs automatically and periodically within a database.
Continuous queries require a function in the
SELECT clause and must include a
GROUP BY time() clause.
See Continuous Queries.
Related entries: function
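For example, a continuous query that downsamples data might look like the following; the query, database, measurement, and field names are placeholders:

CREATE CONTINUOUS QUERY "cq_30m" ON "mydb"
BEGIN
  SELECT mean("value") INTO "average_value" FROM "cpu" GROUP BY time(30m)
END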
line protocol
The text based format for writing points to InfluxDB. See Line Protocol.
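For illustration, a single point in line protocol consists of a measurement, optional tags, at least one field, and an optional timestamp; the names and values below are placeholders:

weather,location=us-midwest temperature=82 1465839830100400200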
measurement
The part of InfluxDB’s structure that describes the data stored in the associated fields. Measurements are strings.
Related entries: field, series
point
The part of InfluxDB’s data structure that consists of a single collection of fields in a series. Each point is uniquely identified by its series and timestamp.
You cannot store more than one point with the same timestamp in the same series. Instead, when you write a new point to the same series with the same timestamp as an existing point in that series, the field set becomes the union of the old field set and the new field set, where any ties go to the new field set. For an example, see Frequently Asked Questions.
Related entries: field set, series, timestamp
points per second
A deprecated measurement of the rate at which data are persisted to InfluxDB. The schema allows and even encourages the recording of multiple metric values per point, rendering points per second ambiguous.
Write speeds are generally quoted in values per second, a more precise metric.
Related entries: point, schema, values per second
query
An operation that retrieves data from InfluxDB. See Data Exploration, Schema Exploration, Database Management.
replication factor
The attribute of the retention policy that determines how many copies of the data are stored in the cluster.
InfluxDB replicates data across
N data nodes, where
N is the replication factor.
Related entries: duration, node, retention policy
retention policy (RP)
The part of InfluxDB’s data structure that describes how long InfluxDB keeps data (duration) and how many copies of the data are stored in the cluster (replication factor).
See Database Management for retention policy management.
series
The collection of data in InfluxDB’s data structure that share a measurement, tag set, and retention policy.
Note: The field set is not part of the series identification!
Related entries: field set, measurement, retention policy, tag set
series cardinality
The number of unique database, measurement, and tag set combinations in an InfluxDB instance. See Frequently Asked Questions for how to query InfluxDB for series cardinality.
Related entries: tag set, measurement, tag key

user
There are two kinds of users in InfluxDB: admin users, who have READ and WRITE access to all databases plus full access to administrative queries and user management commands, and non-admin users, who have READ, WRITE, or ALL (both READ and WRITE) access per database.
Dashboard Widget :
This is the Google Analytics dashboard widget. In this widget you can see all the reports with time periods.
Reports like Sessions, Users, Organic, Page Views, Bounce Rate, Location, Pages, Referrers, Searches, Traffic, Technology, 404 Errors.
Time Periods like Today, Yesterday, Last 7 Days, Last 14 Days, Last 30 Days, Last 90 Days, One Year, Three Years. | http://docs.megaedzee.com/docs/google-analytics/special-features/dashboard/ | 2018-06-18T05:59:57 | CC-MAIN-2018-26 | 1529267860089.11 | [] | docs.megaedzee.com |
Legacy Documentation
You’ve requested docs for a legacy version of Sensu.
Documentation covering Sensu versions up to 0.28 is no longer being hosted on the Sensu website.
If you are intentionally seeking documentation for a legacy version, please visit the sensu-docs project on GitHub.
Otherwise, please click here for the latest Sensu Core documentation. | https://docs.sensu.io/sensu-core/legacy/ | 2018-06-18T05:39:41 | CC-MAIN-2018-26 | 1529267860089.11 | [] | docs.sensu.io |
In the Ready to complete page, review your selections and click Finish.
Generates a set of credentials consisting of a user name and password that can be used to access the service specified in the request. These credentials are generated by IAM, and can be used only for the specified service.
You can have a maximum of two sets of service-specific credentials for each supported service per user.
The only supported service at this time is AWS CodeCommit.
You can reset the password to a new service-generated value by calling reset-service-specific-credential .
For more information about service-specific credentials, see Using IAM with AWS CodeCommit: Git Credentials, SSH Keys, and AWS Access Keys in the IAM User Guide.

--user-name (string)

The name of the IAM user that is to be associated with the credentials. This parameter allows (per its regex pattern ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-
--service-name (string)
The name of the AWS service that is to be associated with the credentials. The service you specify here is the only service that can be accessed using these credentials.
ServiceSpecificCredential -> (structure)
A structure that contains information about the newly created service-specific credential.
Warning
This is the only time that the password for this credential set is available. It cannot be recovered later. Instead, you will have to reset the password with reset-service-specific-credential . AWS means that the key is valid for API calls, while Inactive means it is not. | https://docs.aws.amazon.com/cli/latest/reference/iam/create-service-specific-credential.html | 2018-06-18T05:46:06 | CC-MAIN-2018-26 | 1529267860089.11 | [] | docs.aws.amazon.com |
Strategies for testing your code in Azure Functions
This topic demonstrates the various ways to test functions, including using the following general approaches:
- HTTP-based tools, such as cURL, Postman, and even a web browser for web-based triggers
- Azure Storage Explorer, to test Azure Storage-based triggers
- Test tab in the Azure Functions portal
- Timer-triggered function
- Testing application or framework
All these testing methods use an HTTP trigger function that accepts input through either a query string parameter or the request body. You create this function in the first section.
Create a function for testing
For most of this tutorial, we use a slightly modified version of the HttpTrigger JavaScript function template that is available when you create a function. If you need help creating a function, review this tutorial. Choose the HttpTrigger- JavaScript template when creating the test function in the Azure portal.
The default function template is basically a "hello world" function that echoes back the name from the request body or query string parameter,
name=<your name>. We'll update the code to also allow you to provide the name and an address as JSON content in the request body. Then the function echoes these back to the client when available.
Update the function with the following code, which we will use for testing:
module.exports = function (context, req) { context.log("HTTP trigger function processed a request. RequestUri=%s", req.originalUrl); context.log("Request Headers = " + JSON.stringify(req.headers)); var res; if (req.query.name || (req.body && req.body.name)) { if (typeof req.query.name != "undefined") { context.log("Name was provided as a query string param..."); res = ProcessNewUserInformation(context, req.query.name); } else { context.log("Processing user info from request body..."); res = ProcessNewUserInformation(context, req.body.name, req.body.address); } } else { res = { status: 400, body: "Please pass a name on the query string or in the request body" }; } context.done(null, res); }; function ProcessNewUserInformation(context, name, address) { context.log("Processing user information..."); context.log("name = " + name); var echoString = "Hello " + name; var res; if (typeof address != "undefined") { echoString += "\n" + "The address you provided is " + address; context.log("address = " + address); } res = { // status: 200, /* Defaults to 200 */ body: echoString }; return res; }
Test a function with tools
Outside the Azure portal, there are various tools that you can use to trigger your functions for testing. These include HTTP testing tools (both UI-based and command line), Azure Storage access tools, and even a simple web browser.
Test with a browser
The web browser is a simple way to trigger functions via HTTP. You can use a browser for GET requests that do not require a body payload, and that use only query string parameters.
To test the function we defined earlier, copy the Function Url from the portal. It has the following form:
https://<Your Function App>.azurewebsites.net/api/<Your Function Name>?code=<your access code>
Append the
name parameter to the query string. Use an actual name for the
<Enter a name here> placeholder.
https://<Your Function App>.azurewebsites.net/api/<Your Function Name>?code=<your access code>&name=<Enter a name here>
Paste the URL into your browser, and you should get a response similar to the following.
This example is the Chrome browser, which wraps the returned string in XML. Other browsers display just the string value.
In the portal Logs window, output similar to the following is logged in executing the function:
2016-03-23T07:34:59 Welcome, you are now connected to log-streaming service. 2016-03-23T07:35:09.195 Function started (Id=61a8c5a9-5e44-4da0-909d-91d293f20445) 2016-03-23T07:35:10.338 Node.js HTTP trigger function processed a request. RequestUri= from a browser 2016-03-23T07:35:10.338 Request Headers = {"cache-control":"max-age=0","connection":"Keep-Alive","accept":"text/html","accept-encoding":"gzip","accept-language":"en-US"} 2016-03-23T07:35:10.338 Name was provided as a query string param. 2016-03-23T07:35:10.338 Processing User Information... 2016-03-23T07:35:10.369 Function completed (Success, Id=61a8c5a9-5e44-4da0-909d-91d293f20445)
Test with Postman
The recommended tool to test most of your functions is Postman, which integrates with the Chrome browser. To install Postman, see Get Postman. Postman provides control over many more attributes of an HTTP request.
Tip
Use the HTTP testing tool that you are most comfortable with. Here are some alternatives to Postman:
To test the function with a request body in Postman:
- Start Postman from the Apps button in the upper-left corner of a Chrome browser window.
- Copy your Function Url, and paste it into Postman. It includes the access code query string parameter.
- Change the HTTP method to POST.
Click Body > raw, and add a JSON request body similar to the following:
{ "name" : "Wes testing with Postman", "address" : "Seattle, WA 98101" }
- Click Send.
The following image shows testing the simple echo function example in this tutorial.
In the portal Logs window, output similar to the following is logged in executing the function:
2016-03-23T08:04:51 Welcome, you are now connected to log-streaming service. 2016-03-23T08:04:57.107 Function started (Id=dc5db8b1-6f1c-4117-b5c4-f6b602d538f7) 2016-03-23T08:04:57.763 HTTP trigger function processed a request. RequestUri= 2016-03-23T08:04:57.763 Request Headers = {"cache-control":"no-cache","connection":"Keep-Alive","accept":"*/*","accept-encoding":"gzip","accept-language":"en-US"} 2016-03-23T08:04:57.763 Processing user info from request body... 2016-03-23T08:04:57.763 Processing User Information... 2016-03-23T08:04:57.763 name = Wes testing with Postman 2016-03-23T08:04:57.763 address = Seattle, W.A. 98101 2016-03-23T08:04:57.795 Function completed (Success, Id=dc5db8b1-6f1c-4117-b5c4-f6b602d538f7)
Test with cURL from the command line
Often when you're testing software, it's not necessary to look any further than the command line to help debug your application. This is no different with testing functions. Note that the cURL is available by default on Linux-based systems. On Windows, you must first download and install the cURL tool.
To test the function that we defined earlier, copy the Function URL from the portal. It has the following form:
https://<Your Function App>.azurewebsites.net/api/<Your Function Name>?code=<your access code>
This is the URL for triggering your function. Test this by using the cURL command on the command line to make a GET (
-G or
--get) request against the function:
curl -G https://<Your Function App>.azurewebsites.net/api/<Your Function Name>?code=<your access code>
This particular example requires a query string parameter, which can be passed as Data (
-d) in the cURL command:
curl -G https://<Your Function App>.azurewebsites.net/api/<Your Function Name>?code=<your access code> -d name=<Enter a name here>
Run the command, and you see the following output of the function on the command line:
In the portal Logs window, output similar to the following is logged in executing the function:
2016-04-05T21:55:09 Welcome, you are now connected to log-streaming service. 2016-04-05T21:55:30.738 Function started (Id=ae6955da-29db-401a-b706-482fcd1b8f7a) 2016-04-05T21:55:30.738 Node.js HTTP trigger function processed a request. RequestUri= Functions 2016-04-05T21:55:30.738 Function completed (Success, Id=ae6955da-29db-401a-b706-482fcd1b8f7a)
Test a blob trigger by using Storage Explorer
You can test a blob trigger function by using Azure Storage Explorer.
In the Azure portal for your function app, create a C#, F# or JavaScript blob trigger function. Set the path to monitor to the name of your blob container. For example:
files
- Click the + button to select or create the storage account you want to use. Then click Create.
Create a text file with the following text, and save it:
A text file for blob trigger function testing.
- Run Azure Storage Explorer, and connect to the blob container in the storage account being monitored.
Click Upload to upload the text file.
The default blob trigger function code reports the processing of the blob in the logs:
2016-03-24T11:30:10 Welcome, you are now connected to log-streaming service. 2016-03-24T11:30:34.472 Function started (Id=739ebc07-ff9e-4ec4-a444-e479cec2e460) 2016-03-24T11:30:34.472 C# Blob trigger function processed: A text file for blob trigger function testing. 2016-03-24T11:30:34.472 Function completed (Success, Id=739ebc07-ff9e-4ec4-a444-e479cec2e460)
Test a function within functions
The Azure Functions portal is designed to let you test HTTP and timer triggered functions. You can also create functions to trigger other functions that you are testing.
Test with the Functions portal Run button
The portal provides a Run button that you can use to do some limited testing. You can provide a request body by using the button, but you can't provide query string parameters or update request headers.
Test the HTTP trigger function we created earlier by adding a JSON string similar to the following in the Request body field. Then click the Run button.
{ "name" : "Wes testing Run button", "address" : "USA" }
In the portal Logs window, output similar to the following is logged in executing the function:
2016-03-23T08:03:12 Welcome, you are now connected to log-streaming service. 2016-03-23T08:03:17.357 Function started (Id=753a01b0-45a8-4125-a030-3ad543a89409) 2016-03-23T08:03:18.697 HTTP trigger function processed a request. RequestUri= 2016-03-23T08:03:18.697 Request Headers = {"connection":"Keep-Alive","accept":"*/*","accept-encoding":"gzip","accept-language":"en-US"} 2016-03-23T08:03:18.697 Processing user info from request body... 2016-03-23T08:03:18.697 Processing User Information... 2016-03-23T08:03:18.697 name = Wes testing Run button 2016-03-23T08:03:18.697 address = USA 2016-03-23T08:03:18.744 Function completed (Success, Id=753a01b0-45a8-4125-a030-3ad543a89409)
Test with a timer trigger
Some functions can't be adequately tested with the tools mentioned previously. For example, consider a queue trigger function that runs when a message is dropped into Azure Queue storage. You can always write code to drop a message into your queue, and an example of this in a console project is provided later in this article. However, there is another approach you can use that tests functions directly.
You can use a timer trigger configured with a queue output binding. That timer trigger code can then write the test messages to the queue. This section walks through an example.
For more in-depth information on using bindings with Azure Functions, see the Azure Functions developer reference.
Create a queue trigger for testing
To demonstrate this approach, we first create a queue trigger function that we want to test for a queue named
queue-newusers. This function processes name and address information dropped into Queue storage for a new user.
Note
If you use a different queue name, make sure the name you use conforms to the Naming Queues and MetaData rules. Otherwise, you get an error.
- In the Azure portal for your function app, click New Function > QueueTrigger - C#.
Enter the queue name to be monitored by the queue function:
queue-newusers
- Click the + button to select or create the storage account you want to use. Then click Create.
- Leave this portal browser window open, so you can monitor the log entries for the default queue function template code.
Create a timer trigger to drop a message in the queue
- Open the Azure portal in a new browser window, and navigate to your function app.
Click New Function > TimerTrigger - C#. Enter a cron expression to set how often the timer code tests your queue function. Then click Create. If you want the test to run every 30 seconds, you can use the following CRON expression:
*/30 * * * * *
- Click the Integrate tab for your new timer trigger.
- Under Output, click + New Output. Then click queue and Select.
Note the name you use for the queue message object. You use this in the timer function code.
myQueue
Enter the queue name where the message is sent:
queue-newusers
- Click the + button to select the storage account you used previously with the queue trigger. Then click Save.
- Click the Develop tab for your timer trigger.
You can use the following code for the C# timer function, as long as you used the same queue message object name shown earlier. Then click Save.
using System; public static void Run(TimerInfo myTimer, out String myQueue, TraceWriter log) { String newUser = "{\"name\":\"User testing from C# timer function\",\"address\":\"XYZ\"}"; log.Verbose($"C# Timer trigger function executed at: {DateTime.Now}"); log.Verbose($"{newUser}"); myQueue = newUser; }
At this point, the C# timer function executes every 30 seconds if you used the example cron expression. The logs for the timer function report each execution:
2016-03-24T10:27:02 Welcome, you are now connected to log-streaming service. 2016-03-24T10:27:30.004 Function started (Id=04061790-974f-4043-b851-48bd4ac424d1) 2016-03-24T10:27:30.004 C# Timer trigger function executed at: 3/24/2016 10:27:30 AM 2016-03-24T10:27:30.004 {"name":"User testing from C# timer function","address":"XYZ"} 2016-03-24T10:27:30.004 Function completed (Success, Id=04061790-974f-4043-b851-48bd4ac424d1)":"User testing from C# timer function","address":"XYZ"} 2016-03-24T10:27:30.607 Function completed (Success, Id=e304450c-ff48-44dc-ba2e-1df7209a9d22)
Test a function with code
You may need to create an external application or framework to test your functions.
Test an HTTP trigger function with code: Node.js
You can use a Node.js app to execute an HTTP request to test your function. Make sure to set:
- The
hostin the request options to your function app host.
- Your function name in the
path.
- Your access code (
<your code>) in the
path.
Code example:
var http = require("http"); var nameQueryString = "name=Wes%20Query%20String%20Test%20From%20Node.js"; var nameBodyJSON = { name : "Wes testing with Node.JS code", address : "Dallas, T.X. 75201" }; var bodyString = JSON.stringify(nameBodyJSON); var options = { host: "functions841def78.azurewebsites.net", /&" + nameQueryString,", method: "POST", headers : { "Content-Type":"application/json", "Content-Length": Buffer.byteLength(bodyString) } }; callback = function(response) { var str = "" response.on("data", function (chunk) { str += chunk; }); response.on("end", function () { console.log(str); }); } var req = http.request(options, callback); console.log("*** Sending name and address in body ***"); console.log(bodyString); req.end(bodyString);
Output:
C:\Users\Wesley\testing\Node.js>node testHttpTriggerExample.js *** Sending name and address in body *** {"name" : "Wes testing with Node.JS code","address" : "Dallas, T.X. 75201"} Hello Wes testing with Node.JS code The address you provided is Dallas, T.X. 75201
In the portal Logs window, output similar to the following is logged in executing the function:
2016-03-23T08:08:55 Welcome, you are now connected to log-streaming service. 2016-03-23T08:08:59.736 Function started (Id=607b891c-08a1-427f-910c-af64ae4f7f9c) 2016-03-23T08:09:01.153 HTTP trigger function processed a request. RequestUri= 2016-03-23T08:09:01.153 Request Headers = {"connection":"Keep-Alive","host":"functionsExample.azurewebsites.net"} 2016-03-23T08:09:01.153 Name not provided as query string param. Checking body... 2016-03-23T08:09:01.153 Request Body Type = object 2016-03-23T08:09:01.153 Request Body = [object Object] 2016-03-23T08:09:01.153 Processing User Information... 2016-03-23T08:09:01.215 Function completed (Success, Id=607b891c-08a1-427f-910c-af64ae4f7f9c)
Test a queue trigger function with code: C#
We mentioned earlier that you can test a queue trigger by using code to drop a message in your queue. The following example code is based on the C# code presented in the Getting started with Azure Queue storage tutorial. Code for other languages is also available from that link.
To test this code in a console app, you must:
- Configure your storage connection string in the app.config file.
- Pass a
nameand
addressas parameters to the app. For example,
C:\myQueueConsoleApp\test.exe "Wes testing queues" "in a console app". (This code accepts the name and address for a new user as command-line arguments during runtime.)
Example C# code:
static void Main(string[] args) { string name = null; string address = null; string queueName = "queue-newusers"; string JSON = null; if (args.Length > 0) { name = args[0]; } if (args.Length > 1) { address = args[1]; } // Retrieve storage account from connection string CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConfigurationManager.AppSettings["StorageConnectionString"]); // Create the queue client CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient(); // Retrieve a reference to a queue CloudQueue queue = queueClient.GetQueueReference(queueName); // Create the queue if it doesn't already exist queue.CreateIfNotExists(); // Create a message and add it to the queue. if (name != null) { if (address != null) JSON = String.Format("{{\"name\":\"{0}\",\"address\":\"{1}\"}}", name, address); else JSON = String.Format("{{\"name\":\"{0}\"}}", name); } Console.WriteLine("Adding message to " + queueName + "..."); Console.WriteLine(JSON); CloudQueueMessage message = new CloudQueueMessage(JSON); queue.AddMessage(message); }":"Wes testing queues","address":"in a console app"} 2016-03-24T10:27:30.607 Function completed (Success, Id=e304450c-ff48-44dc-ba2e-1df7209a9d22) | https://docs.microsoft.com/en-us/azure/azure-functions/functions-test-a-function | 2018-06-18T06:09:02 | CC-MAIN-2018-26 | 1529267860089.11 | [array(['media/functions-test-a-function/browser-test.png',
'Screenshot of Chrome browser tab with test response'],
dtype=object)
array(['media/functions-test-a-function/postman-test.png',
'Screenshot of Postman user interface'], dtype=object)
array(['media/functions-test-a-function/curl-test.png',
'Screenshot of Command Prompt output'], dtype=object)] | docs.microsoft.com |
To access the Template Manager
From Joomla! Documentation
Joomla!
≥ 3.0
- Log in to the Administrator (Backend). If you are not sure how to do this see: To log in to the Administrator (Backend)
- Click on: Extensions → Templates
You will now see the Template Manager screen.
Note: If you do not see Templates listed as an option on the Extensions menu, then it is most likely because you are not logged in as a Super Administrator. Only Super Administrators will see this menu item. | https://docs.joomla.org/J3.x:To_access_the_Template_Manager | 2016-08-29T18:01:48 | CC-MAIN-2016-36 | 1471982290497.47 | [] | docs.joomla.org |
PUBLIC UTILITIES COMMISSION OF THE STATE OF CALIFORNIA
ID #8956
ENERGY DIVISION RESOLUTION E-4286
December 3, cost of. | http://docs.cpuc.ca.gov/PUBLISHED/AGENDA_RESOLUTION/110216.htm | 2016-08-29T17:58:09 | CC-MAIN-2016-36 | 1471982290497.47 | [] | docs.cpuc.ca.gov |
:
Optional property that, if used, defines the cache policy to use for this repository source. When not used, this source will not define a specific duration for caching information.
The JNDI name of the JDBC DataSource instance that should be used. If not specified, the other driver properties must be set.
Optional property that defines the name to use for the catalog name if the database does not support catalogs or the database has a catalog with the empty string as a name. The default value is "default".
Optional property that defines the name to use for the schema name if the database does not support schemas or the database has a schema with the empty string as a name. The default value is "default"..
Optional property that defines the number of seconds after a connection remains in the pool that the connection should be tested to ensure it is still valid. The default is 180 seconds (or 3 minutes).
Optional property that defines the maximum number of connections that may be in the connection pool. The default is "5".
Optional property that defines the maximum number of seconds that a connection should remain in the pool before being closed. The default is "600" seconds (or 10 minutes).
Optional property that defines the maximum number of statements that should be cached. The default value is "100", but statement caching can be disabled by setting to "0".
Advanced optional property that defines the name of a custom class to use for metadata collection, which is typically needed
for JDBC drivers that don't properly support the standard
DatabaseMetaData methods.
The specified class must implement the
MetadataCollector interface and must have a public no-argument constructor.
If an empty string (or null) value is specified for this property, a default
MetadataCollector implementation will be used
that relies on the driver's
DatabaseMetaData.
Optional property that defines the minimum number of connections that will be kept in the connection pool. The default is "0"., if used, defines the cache policy to use for caching nodes within the connector.
The number of connections that should be added to the pool when there are not enough to be used. The default is "1".
The password that should be used when creating JDBC connections using the JDBC driver class. This is not required if the DataSource is found in JNDI. new UUID is generated.
The URL that should be used when creating JDBC connections using the JDBC driver class. This is not required if the DataSource is found in JNDI.
The username that should be used when creating JDBC connections using the JDBC driver class. This is not required if the DataSource is found in JNDI..connector.meta.jdbc.JdbcMetadataSource"
mode:description="The database source for our content"
mode:dataSourceJndiName="java:/MyDataSource"
mode:
<!--"); | http://docs.jboss.org/modeshape/2.8.1.Final/manuals/reference/html/jdbc-metadata-connector.html | 2016-08-29T19:18:12 | CC-MAIN-2016-36 | 1471982290497.47 | [] | docs.jboss.org |
Details
PPPOE library operation is persistent -- once you start it with this call, the library will "persist" to connect to the ADSL modem. No matter how many times the connection fails, the library will keep trying. If the successfully established PPPOE link fails, the library will attempt to reestablish the link. | http://docs.tibbo.com/taiko/pppoe_start.htm | 2012-05-24T07:25:22 | crawl-003 | crawl-003-007 | [] | docs.tibbo.com |
0release: Customisation¶
0release can be used to create releases of your software from a version control system. It uses sensible defaults, allowing it to create releases for simple projects with very little configuration. For more complex projects, you can specify extra commands that should be run during the release process using the syntax described here.
Example¶
For example, imagine that our hello-world example program now prints out a banner with its version number when run.
hello.py now looks like this:
#!/usr/bin/env python version='0.1' print "Welcome to Hello World version %s" % version print "Hello World!"
We want to make sure that the number in the hello.py file is updated automatically when we make a new release. To do this, add a
<interface xmlns=" <name>HelloWorld</name> <summary>minimal demonstration package for 0release</summary> <description> This program outputs the message "Hello World". You can create new releases of it using 0release. </description> <release:management xmlns:sed -i "s/^version='.*'$/version='$RELEASE_VERSION'/" hello.py</release:action> </release:management> ... </interface>
This tells 0release that during the
commit-release phase (in which it updates the version number to the number chosen for the release) it should execute the given command, which updates the version line in the Python code. Of course, you can perform any action you want.
Phase: commit-release¶
- Current directory
- The working copy (under version control), as specified by the
idattribute in the feed.
$RELEASE_VERSION
- The version chosen for the new release.
These actions are run after the user has entered the version number for the new release. After the actions are run, 0release will update the local feed file with the new version number and commit all changes to the version control system.
Any changes made to the working copy will therefore appear in both the history and also in the release archive.
If your script fails (returns a non-zero exit status), 0release will abort but will not revert any changes made by the actions. You will have to manually revert any changes before 0release will allow you to restart the release process.
Phase: generate-archive¶
- Current directory
- A temporary directory created by unpacking the archive exported from the SCM.
$RELEASE_VERSION
- The version chosen for the new release.
Once the release version is committed to version control, 0release exports that revision to a temporary directory. After running all the actions in this phase, the release tarball is created from the final state of the directory. Use this phase to generate files that should be in the release archive but not in the tagged revision under version control. Typical actions here are:
- Running
autoconfto create a
configurescript.
- Building translations (
.mofiles) from source
.pofiles.
- Building documentation (e.g. HTML from DocBook sources).
Notice that all the above generate platform independent files. Do not compile to platform-specific binaries here (e.g. do not compile C source files to executables). For such programs, you need one source package and multiple binary packages (one for each architecture). See Releases with source and binary packages for that.
<add-toplevel-directory>¶
Adding this element causes 0release to put everything in a sub-directory, named after the feed. This is probably only useful for ROX applications, where the version control system contains e.g. just
AppRunbut the release should contain
archive-2.2/Archive/AppRun. This is done using:
<release:management xmlns:release=" <release:add-toplevel-directory/> </release:management> | https://docs.0install.net/tools/0release/customisation/ | 2022-05-16T21:25:19 | CC-MAIN-2022-21 | 1652662512249.16 | [] | docs.0install.net |
You're viewing Apigee Edge documentation.
View Apigee X documentation.
To register a new app, the developer selects the Add a new app button on the My Apps page.
The portal then displays the default app registration form:
By default, the developer only has to specify the app name, callback URL, and the list of API products to add to the app.
As an API provider, you have a complete control over the app registration process. For example, you can configure:
- The list of API products available on the portal
- Whether there is a default API product
- Whether the callback URL is required
- Whether the API key is manually or automatically approved for an API product
- Whether any other information is required on the Add App page to register the app
This topic describes how to configure the app registration process for your portal. However, this topic does not describe how to create API products. For more, see:
You can configure the portal to prohibit developers from being able to create, delete, or edit apps based on the role assigned to the developer. For example, you might configure the portal to create a single, default app for all developers when the developer registers. Then, you only allow some developers to add new apps, possibly based on a fee structure or other characteristics of the developer. Use roles and permissions to control which developers can create, delete, and edit apps. See Add and manage user accounts for more.
Specifying the API products available on the portal
There are two ways in which you can specify the API products that are available when a developer accesses the portal:
- Specifying the access level when creating an API product
- Restricting access to an API product based on roles
Specifying the access level when creating an API product
When you create an API product, you specify the access level option of the product, as shown below:
For more information about how the access level impacts the availability of the API product in the Drupal 7 developer portal, see Access level..
Configuring how a developer associates API products with an app
To register a new app, a developer selects the Add a new app button on the My Apps page to open the Add App form:
Based on how you configure the portal, the developer can select one or more API products to associate with the app at the time of app registration. Or, you can specify a default product that is assigned to all apps.
The following configuration options are available on the portal to control API product selection when registering an app:
- Do not associate apps with any API Product.
- Associate all apps with one or more Default API Products (configured below). Developers cannot add any other API products to the app.
- Allow selection of a single API product, but do not require it.
- Require selection of a single API product.
- Allow selection of multiple API Products, but do not require any.
- Allow selection of multiple API Products, and require at least one.
You can also control the HTML element that appear on the form that the developer uses to select the API product. Options include:
- Dropdown lists.
- Checkboxes or radio buttons. Checkboxes appear when the developer can select multiple API products and radio buttons appear when the developer can select only a single API product.
To set the option for API product selection:
- Log in to your portal as a user with admin or content creation privileges.
- Select Configuration > Dev Portal Settings > Application Settings in the Drupal administration menu.
- On the Application Settings page, expand the API Product settings area.
- Under API Product Handling, select the option that controls API product selection.
- If you specify the option "Associate all apps with one or more Default API Products (configured below)", set a default product under Default API Product.
- Under API Product Widget, select the HTML element used by developers to select the API products.
- Save the configuration.
Configuring callback URL handling
If an API proxy in your API product uses "three-legged OAuth" (authorization code grant type), developers need to specify a callback URL when they register their apps. The callback URL typically specifies the URL of an app that is designated to receive an authorization code on behalf of the client app. In addition, this URL string is used for validation. The client is required to send this URL to Apigee Edge when requesting authorization codes and access tokens, and the redirect_uri parameter must match the one that is registered. For more information, see Implementing the authorization code grant type.
To control the callback URL for API product selection:
- Log in to your portal as a user with admin or content creation privileges.
- Select Configuration > Dev Portal Settings > Application Attributes in the Drupal administration menu.
- On the Application Settings page, expand the Callback URL settings area.
- Under Callback URL Handling, select one of the following options.
- Callback URL is required for all developer apps.
- Callback URL is optional for all developer apps.
- Callback URL is neither required nor displayed.
- Save the configuration.
Displaying analytics for app usage
The portal can display analytical information about app usage. If the display of analytics is enabled, app developers can see the analytics on the My Apps page for each app. For example, a developer can display the following analytics for an app:
- Throughput
- Max response time
- Min response time
- Message count
- Error count:
Manually approving or revoking an API key for an API product
When a developer adds an API product to an app and then registers the app, the portal returns back to the developer the API key for that app. The developer then uses that API key to access the API proxies bundled by the API product associated with the app.
You control the key approval process for each API product when you create the API product:
The approval process can be:
- Automatic - An approved API key is returned by the portal for the API product when the developer registers the app. You can later revoke an automatically approved key.
- Manual - An API key is returned by the portal when the developer registers the app, but the key is not activated for any API products that use Manual key approval. An administrator has to manually approve the API key, either in the Edge management UI or API, before it can be used by the developer to access the API product. You can later revoke a manually approved key.
See Create API products for more information.
If your portal lets a developer add multiple API products to an app, the developer might add some products with Automatic key approval and some with Manual. The developer can use the returned API key for all automatically approved API products immediately while waiting for final approval for those products that require Manual approval.
To see the list of API products for an app, and the status of the key approval for the API product, a developer selects the name of the app on the My Apps page and then selects the Products link:
In this example, the Premium Weather API product uses Manual approval, and is waiting for an administrator to approve the key. The Free API Product uses Automatic approval and the use of the key to access it has been approved.
To manually approve or revoke a key:
- Log in to the Edge management UI as a user with administration privileges for your organization.
- Select API Platform in the dropdown box in the upper-right corner.
- Select Publish > Developer apps to open the list of developer apps.
- Select the Pending button to see the list of apps with pending key requests:
- Select the app name that you want to approve.
- On the app details page, select the Edit button in the upper-right corner.
- In the list of API products for the app, under Actions:
- To approve the key, select the Approve button for each API product that requires manual approval.
- To revoke an approved the key, select the Revoke button under Actions for an API product to revoke access.
- Save the app. The API key is now approved. > Application Settings.
-.
Customizing the form fields used to register an app
When the developer registers an app, the portal display the default form:
As an API provider, you might want to modify this form to prompt the developer to provide additional information such as a customer ID, the target platform of the app, or other information. The portal provides you with a the ability to add new fields to this form. These fields can be:
- Required or optional
- Displayed by different HTML elements, such as text boxes, radio buttons, check boxes, and more
- Can be set to appear anywhere on the form between the Callback URL field and the Product field
To learn how to customize the app registration form that is available from the developer portal, watch this video.
For example, the following form shows a required field for Customer ID and an optional field for target platform:
When you add new fields to the form, the field values are automatically uploaded to Edge, along with all the other fields, when the developer submits the form. That means you can view or modify those fields on Edge, or use the Edge management API to access those fields from a script.
For example, view the new form fields In the Edge management UI by going to Publish > Developer Apps, and then selecting the app name. The new field values appear under the Custom Attributes area of the page with a name that corresponds to the field's internal name:
The field values are also displayed in the Details area of the app on the developer's My Apps page:
The developer can also edit the values by selecting the Edit link for the app on the My Apps page..
To add a field to the app registration form:
- Log in to your portal as a user with admin or content creation privileges.
- Ensure that the DevConnect App Attribute Management module is enabled.
- Select Configuration > Dev Portal Settings > Dev Portal App Attributes in the Drupal administration menu.
- Select the Add Dev Portal App Attribute button at the top of the page.
- Configure the field. For example, for the Customer ID field shown above, use the following settings:
- Internal Name = cust_id. This is the name of the variable used to store the field value.
- Public Name = Customer ID
- Description = Enter your customer ID.
- Select the check box for Require this attribute
- Select the check box for Display this attribute.
- Widget = Text Box
- Select Save to return to the Dev Portal App Attributes page.
- Select Save Changes.
- Select the Home icon > Flush all caches from the Drupal menu.
You might have to clear your browser cache before the new field appears on the form.
To add an optional field for the developer to specify the platform for the app, set the field attributes as:
- Internal Name = intended_platforms
- Public Name = Platforms
- Description = Specify one or more platforms for your app.
- Clear the check boxes for Require this attribute
- Select the checkbox for Display this attribute.
- Widget = List of Checkboxes
- Select Save to return to the Dev Portal App Attributes page.
To reorder the attributes on the form:
- Log in to your portal as a user with admin or content creation privileges.
- Select Configuration > Dev Portal Settings > Dev Portal App Attributes in the Drupal administration menu.
- Select the plus, +, symbol under the Name column and drag the property to the location where you want to display it in the form.
- Save your changes. | https://docs.apigee.com/api-platform/publish/drupal/configuring-api-products?hl=es | 2022-05-16T22:20:55 | CC-MAIN-2022-21 | 1652662512249.16 | [] | docs.apigee.com |
One of the key features of Sqoop is to manage and create the table metadata when importing into Hadoop.
HCatalog import jobs also provide this feature with the option
--create-hcatalog-table.
Furthermore, one of the important benefits of the HCatalog integration is to provide storage agnosticism to
Sqoop data movement jobs. To provide for that feature, HCatalog import jobs provide an option that lets a
user specify the storage format for the created table.
The option
--create-hcatalog-table is used as an indicator that a table has to be
created as part of the HCatalog import job.
The option
--hcatalog-storage-stanza can be used to specify the
storage format of the newly created table. The default value for this option is "stored as
rcfile". The value specified for this option is assumed to be a valid Hive storage format
expression. It will be appended to the CREATE TABLE command generated by the HCatalog import
job as part of automatic table creation. Any error in the storage stanza will cause the table
creation to fail and the import job will be aborted.
Any additional resources needed to support the storage format referenced in the option
--hcatalog-storage-stanza should be provided to the job either by placing
them in
$HIVE_HOME/lib or by providing them in
HADOOP_CLASSPATH
and
LIBJAR files.
If the option
--hive-partition-key is specified, then the value of this option is
used as the partitioning key for the newly created table. Only one partitioning key can be specified
with this option.
Object names are mapped to the lowercase equivalents as specified below when mapped to an HCatalog table. This includes the table name (which is the same as the external store table name converted to lower case) and field names. | https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.6.2/bk_data-movement-and-integration/content/sect_auto_table_creation.html | 2022-05-16T21:49:46 | CC-MAIN-2022-21 | 1652662512249.16 | [] | docs.cloudera.com |
tone mapping (deprecated)
Please note that this module is deprecated from darktable 3.4 and should no longer be used for new edits. Please use the tone equalizer module instead.
Compress the tonal range of HDR images so that they fit into the limits of an LDR image, using Durand’s 2002 algorithm.
The underlying algorithm uses a bilateral filter to decompose an image into a coarse base layer and a detail layer. The contrast of the base layer is compressed, while the detail layer is preserved, then both layers are re-combined.
🔗module controls
- contrast compression
- The contrast compression level of the base layer. A higher compression will make the image fit a lower dynamic range.
- spatial extent
- The spatial extent of the bilateral filter. Lower values cause the contrast compression to have stronger effects on image details. | https://docs.darktable.org/usermanual/3.6/en/module-reference/processing-modules/tone-mapping/ | 2022-05-16T21:36:32 | CC-MAIN-2022-21 | 1652662512249.16 | [] | docs.darktable.org |
Data Partitioning and Replication
Partitioning and replication are two common techniques used together in distributed systems to achieve scalable, available, and transparent data distribution.
By default, Hazelcast creates a single replica of each partition. You can configure Hazelcast so that each partition can have multiple replicas. One of these replicas is called primary and the others are called backups.
The cluster member that.
The amount of data entries that each partition can hold is limited by the physical capacity of your system.
When you start a second member in the same cluster, the partition replicas are distributed between both members.
The partition replicas with black text are primaries and the partition replicas with blue text are backups. The first member has primary replicas of 135 partitions and each of these partitions are backed up by the second member. are moved to scale out Hazelcast.
Hazelcast distributes the partitions' primary and backup replicas equally among cluster members. Backup replicas of the partitions are maintained for redundancy.
Lite Members Data is Partitioned
Hazelcast distributes data entries into the partitions using a hashing algorithm. Given an object key such as for a map or an object name such as for a topic: is always the same.
Partition Table
The partition table stores the partition IDs and the addresses of cluster members to which they belong. The purpose of this table is to make all members, including lite members, in the cluster aware of this information so that each member knows where the data is.
When you start your first member, a partition table is created within it. As you start additional members, that first member becomes the oldest member, also known as the master member and updates the partition table accordingly. This member periodically sends the partition table to all other members. This way, each member in the cluster is informed about any changes to partition ownership. The ownerships may be changed when, for example, a new member joins the cluster, or when a member leaves the cluster.
You can configure how often the member sends the partition table master member is updated with the new partition ownerships. If a lite member joins or leaves a cluster, repartitioning is not triggered since lite members do not own any partitions.
Replication Algorithm for AP Data Structures
For AP data structures, Hazelcast employs a combination of primary-copy and configurable lazy replication techniques. the primary replica of the corresponding partition is assigned. This way, each request hits the most up-to-date version of a particular data entry in a stable cluster. Backup replicas stay in standby mode until the primary replica fails. Upon failure of the primary replica, one of the backup replicas with no strong consistency but monotonic reads guarantee. See Making Your Map Data Safe., the response of the execution, including
the number of sync backup updates, is sent to the caller and after receiving
the response, the about.
the primary replica, then invocation fails with an
OperationTimeoutException.
This timeout is 2 minutes by default and defined by
the system property
hazelcast.operation.call.timeout.millis.
When the timeout is passed, the result of the invocation will be indeterminate.
Execution Guarantees
Hazelcast, as an AP product, does not provide the exactly-once guarantee. In general, Hazelcast tends to be an at-least-once solution.
In the following failure case, the exactly-once guarantee can be broken::
When an invocation does not receive a response in time,
invocation fails with an
OperationTimeoutException. This exception does not
say anything about the outcome of the operation, meaning the operation may not be
executed at all, or it may be executed once or twice.
Throwing an IndeterminateOperationStateException
If the
hazelcast.operation.fail.on.indeterminate.state system property is
enabled, a mutating operation throws an member crashes before
replying to a read-only operation, the operation is retried on the new owner of the primary replica.
Best-Effort Consistency
The replication algorithm for AP data structures enables Hazelcast clusters to offer high throughput. However, due to temporary situations in the system, such as network interruption, backup replicas can miss some updates and diverge from the primary. Backup replicas can also hit VM or long GC the. | https://docs.hazelcast.com/hazelcast/latest/architecture/data-partitioning | 2022-05-16T21:44:20 | CC-MAIN-2022-21 | 1652662512249.16 | [] | docs.hazelcast.com |
Acknowledgment Types
Starting with Hazelcast 3.6, WAN replication supports different acknowledgment (ACK) types for each target cluster group. You can choose from 2 different ACK type depending on your consistency requirements. The following ACK types are supported:
ACK_ON_RECEIPT: A batch of replication events is considered successful as soon as it is received by the target cluster. This option does not guarantee that the received event is actually applied but it is faster.
ACK_ON_OPERATION_COMPLETE: This option guarantees that the event is received by the target cluster and it is applied. It is more time consuming. But it is the best way if you have strong consistency requirements.
The following is an example configuration:
<hazelcast> ... <wan-replication <wan-publisher <properties> <property name="ack.type">ACK_ON_OPERATION_COMPLETE</property> </properties> </wan-publisher> </wan-replication> ... </hazelcast> | https://docs.hazelcast.com/imdg/3.12/wan/ack-types | 2022-05-16T21:17:37 | CC-MAIN-2022-21 | 1652662512249.16 | [] | docs.hazelcast.com |
File
The following is an example
cog.js file.
/* IAP Profile which can be configured within the Admin Essentials application.
- The
log_levelproperty determines the level for the log file.
- The
console_levelproperty determines the level for the console.
- In the IAP Profile example below, the
log_levelis set to
info, which means any logging that has an equal or higher severity value than info will be written to
/var/log/pronghorn/pronghorn.log(info, warn, error, debug, console, trace).
- If the
log_levelwere
debug, IAP would monitor all
log.debugmessages through cogs and adapters and log that information.
Properties File
The following is an example of the
loggerProps object of an IAP Profile:
... "loggerProps": { "description": "Logging", "log_max_files": 100, "log_max_file_size": 1048576, "log_level": "info", "log_directory": "/var/log/itential/", "log_filename": "itential.log", "log_timezone_offset": 0, "console_level": "info" }, ... | https://docs.itential.com/2021.2/developer/Itential%20Automation%20Platform/Log%20Class/ | 2022-05-16T21:28:55 | CC-MAIN-2022-21 | 1652662512249.16 | [] | docs.itential.com |
AutoGL Model¶
In AutoGL, we use
model and
automodel to define the logic of graph nerual networks and make it compatible with hyper parameter optimization. Currently we support the following models for given tasks.
Lazy Initialization¶
In current AutoGL pipeline, some important hyper-parameters related with model cannot be set outside before the pipeline (e.g. input dimensions, which can only be caluclated during running after feature engineered). Therefore, in
automodel, we use lazy initialization to initialize the core
model. When the
automodel initialization method
__init__() is called with argument
init be
False, only (part of) the hyper-parameters will be set. The
automodel will have its core
model only after
initialize() is explicitly called, which will be done automatically in
solver and
from_hyper_parameter(), after all the hyper-parameters are set properly.
Define your own model and automodel¶
We highly recommend you to define both
model and
automodel, although you only need your
automodel to communicate with
solver and
trainer. The
model will be responsible for the parameters initialization and forward logic declaration, while the
automodel will be responsible for the hyper-parameter definiton and organization.
General customization¶
Let’s say you want to implement a simple MLP for node classification and want to let AutoGL find the best hyper-parameters for you. You can first define the logics assuming all the hyper-parameters are given.
import torch # define mlp model, need to inherit from torch.nn.Module class MyMLP(torch.nn.Module): # assume you already get all the hyper-parameters def __init__(self, in_channels, num_classes, layer_num, dim): super().__init__() if layer_num == 1: ops = [torch.nn.Linear(in_channels, num_classes)] else: ops = [torch.nn.Linear(in_channels, dim)] for i in range(layer_num - 2): ops.append(torch.nn.Linear(dim, dim)) ops.append(torch.nn.Linear(dim, num_classes)) self.core = torch.nn.Sequential(*ops) # this method is required def forward(self, data): # data: torch_geometric.data.Data assert hasattr(data, 'x'), 'MLP only support graph data with features' x = data.x return torch.nn.functional.log_softmax(self.core(x))
After you define the logic of
model, you can now define your
automodel to manage the hyper-parameters.
from autogl.module.model import BaseModel # define your automodel, need to inherit from BaseModel class MyAutoMLP(BaseModel): def __init__(self): # (required) make sure you call __init__ of super with init argument properly set. # if you do not want to initialize inside __init__, please pass False. super().__init__(init=False) # (required) define the search space self.space = [ {'parameterName': 'layer_num', 'type': 'INTEGER', 'minValue': 1, 'maxValue': 5, 'scalingType': 'LINEAR'}, {'parameterName': 'dim', 'type': 'INTEGER', 'minValue': 64, 'maxValue': 128, 'scalingType': 'LINEAR'} ] # set default hyper-parameters self.layer_num = 2 self.dim = 72 # for the hyper-parameters that are related with dataset, you can just set them to None self.num_classes = None self.num_features = None # (required) since we don't know the num_classes and num_features until we see the dataset, # we cannot initialize the models when instantiated. the initialized will be set to False. self.initialized = False # (required) set the device of current auto model self.device = torch.device('cuda') # (required) get current hyper-parameters of this automodel # need to return a dictionary whose keys are the same with self.space def get_hyper_parameter(self): return { 'layer_num': self.layer_num, 'dim': self.dim } # (required) override to interact with num_classes def get_num_classes(self): return self.num_classes # (required) override to interact with num_classes def set_num_classes(self, n_classes): self.num_classes = n_classes # (required) override to interact with num_features def get_num_features(self): return self.num_features # (required) override to interact with num_features def set_num_features(self, n_features): self.num_features = n_features # (required) instantiate the core MLP model using corresponding hyper-parameters def initialize(self): # (required) you need to make sure the core model is named as `self.model` self.model = MyMLP( in_channels = self.num_features, num_classes = self.num_classes, layer_num = self.layer_num, dim = self.dim ).to(self.device) self.initialized = True # (required) override to create a copy of model using provided hyper-parameters def from_hyper_parameter(self, hp): # hp is a dictionary that contains keys and values corrsponding to your self.space # in this case, it will be in form {'layer_num': XX, 'dim': XX} # create a new instance ret = self.__class__() # set the hyper-parameters related to dataset and device ret.num_classes = self.num_classes ret.num_features = self.num_features ret.device = self.device # set the hyper-parameters according to hp ret.layer_num = hp['layer_num'] ret.dim = hp['dim'] # initialize it before returning ret.initialize() return ret
Then, you can use this node classification model as part of AutoNodeClassifier
solver.
from autogl.solver import AutoNodeClassifier solver = AutoNodeClassifier(graph_models=(MyAutoMLP(),))
The model for graph classification is generally the same, except that you can now also receive the
num_graph_features (the dimension of the graph-level feature) through overriding
set_num_graph_features(self, n_graph_features) of
BaseModel. Also, please remember to return graph-level logits instead of node-level one in
forward() of
model.
Model for link prediction¶
For link prediction, the definition of model is a bit different with the common forward definition. You need to implement the
lp_encode(self, data) and
lp_decode(self, x, pos_edge_index, neg_edge_index) to interact with
LinkPredictionTrainer and
AutoLinkPredictor. Taking the class
MyMLP defined above for example, if you want to perform link prediction:
class MyMLPForLP(torch.nn.Module): # num_classes is removed since it is invalid for link prediction def __init__(self, in_channels, layer_num, dim): super().__init__() ops = [torch.nn.Linear(in_channels, dim)] for i in range(layer_num - 1): ops.append(torch.nn.Linear(dim, dim)) self.core = torch.nn.Sequential(*ops) # (required) for interaction with link prediction trainer and solver def lp_encode(self, data): return self.core(data.x) # (required) for interaction with link prediction trainer and solver def lp_decode(self, x, pos_edge_index, neg_edge_index): # first, get all the edge_index need calculated edge_index = torch.cat([pos_edge_index, neg_edge_index], dim=-1) # then, use dot-products to calculate logits, you can use whatever decode method you want logits = (x[edge_index[0]] * x[edge_index[1]]).sum(dim=-1) return logits class MyAutoMLPForLP(MyAutoMLP): def initialize(self): # init MyMLPForLP instead of MyMLP self.model = MyMLPForLP( in_channels = self.num_features, layer_num = self.layer_num, dim = self.dim ).to(self.device) self.initialized = True
Model with sampling support¶
Towards efficient representation learning on large-scale graph, AutoGL currently support node classification using sampling techniques including node-wise sampling, layer-wise sampling, and graph-wise sampling. See more about sampling in AutoGL Trainer.
In order to conduct node classification using sampling technique with your custom model, further adaptation and modification are generally required.
According to the Message Passing mechanism of Graph Neural Network (GNN), numerous nodes in the multi-hop neighborhood of evaluation set or test set are potentially involved to evaluate the GNN model on large-scale graph dataset.
As the representations for those numerous nodes are likely to occupy large amount of computational resource, the common forwarding process is generally infeasible for model evaluation on large-scale graph.
An iterative representation learning mechanism is a practical and feasible way to evaluate Sequential Model,
which only consists of multiple sequential layers, with each layer taking a
Data aggregate as input. The input
Data has the same functionality with
torch_geometric.data.Data, which conventionally provides properties
x,
edge_index, and optional
edge_weight.
If your custom model is composed of concatenated layers, you would better make your model inherit
ClassificationSupportedSequentialModel to utilize the layer-wise representation learning mechanism to efficiently conduct representation learning for your custom sequential model.
import autogl from autogl.module.model.base import ClassificationSupportedSequentialModel # override Linear so that it can take graph data as input class Linear(torch.nn.Linear): def forward(self, data): return super().forward(data.x) class MyMLPSampling(ClassificationSupportedSequentialModel): def __init__(self, in_channels, num_classes, layer_num, dim): super().__init__() if layer_num == 1: ops = [Linear(in_channels, num_classes)] else: ops = [Linear(in_channels, dim)] for i in range(layer_num - 2): ops.append(Linear(dim, dim)) ops.append(Linear(dim, num_classes)) self.core = torch.nn.ModuleList(ops) # (required) override sequential_encoding_layers property to interact with sampling @property def sequential_encoding_layers(self) -> torch.nn.ModuleList: return self.core # (required) define the encode logic of classification for sampling def cls_encode(self, data): # if you use sampling, the data will be passed in two possible ways, # you can judge it use following rules if hasattr(data, 'edge_indexes'): # the edge_indexes are a list of edge_index, one for each layer edge_indexes = data.edge_indexes edge_weights = [None] * len(self.core) if getattr(data, 'edge_weights', None) is None else data.edge_weights else: # the edge_index and edge_weight will stay the same as default edge_indexes = [data.edge_index] * len(self.core) edge_weights = [getattr(data, 'edge_weight', None)] * len(self.core) x = data.x for i in range(len(self.core)): data = autogl.data.Data(x=x, edge_index=edge_indexes[i]) data.edge_weight = edge_weights[i] x = self.sequential_encoding_layers[i](data) return x # (required) define the decode logic of classification for sampling def cls_decode(self, x): return torch.nn.functional.log_softmax(x) | https://autogl.readthedocs.io/en/latest/docfile/tutorial/t_model.html | 2022-05-16T22:17:48 | CC-MAIN-2022-21 | 1652662512249.16 | [] | autogl.readthedocs.io |
- List projects and attachments
- Migrate to hashed storage
- Rollback from hashed storage to legacy storage
- Troubleshooting
Repository storage Rake tasks
This is a collection of Rake tasks to help you list and migrate existing projects and their attachments to the new hashed storage that GitLab uses to organize the Git data.
List projects and attachments
The following Rake tasks lists the projects and attachments that are available on legacy and hashed storage.
On legacy storage
To have a summary and then a list of projects and their attachments using legacy storage:
Omnibus installation
# Projects
sudo gitlab-rake gitlab:storage:legacy_projects
sudo gitlab-rake gitlab:storage:list_legacy_projects

# Attachments
sudo gitlab-rake gitlab:storage:legacy_attachments
sudo gitlab-rake gitlab:storage:list_legacy_attachments
Source installation
# Projects
sudo -u git -H bundle exec rake gitlab:storage:legacy_projects RAILS_ENV=production
sudo -u git -H bundle exec rake gitlab:storage:list_legacy_projects RAILS_ENV=production

# Attachments
sudo -u git -H bundle exec rake gitlab:storage:legacy_attachments RAILS_ENV=production
sudo -u git -H bundle exec rake gitlab:storage:list_legacy_attachments RAILS_ENV=production
On hashed storage
To have a summary and then a list of projects and their attachments using hashed storage:
Omnibus installation
# Projects
sudo gitlab-rake gitlab:storage:hashed_projects
sudo gitlab-rake gitlab:storage:list_hashed_projects

# Attachments
sudo gitlab-rake gitlab:storage:hashed_attachments
sudo gitlab-rake gitlab:storage:list_hashed_attachments
Source installation
# Projects
sudo -u git -H bundle exec rake gitlab:storage:hashed_projects RAILS_ENV=production
sudo -u git -H bundle exec rake gitlab:storage:list_hashed_projects RAILS_ENV=production

# Attachments
sudo -u git -H bundle exec rake gitlab:storage:hashed_attachments RAILS_ENV=production
sudo -u git -H bundle exec rake gitlab:storage:list_hashed_attachments RAILS_ENV=production
Migrate to hashed storage
This task must be run on any machine that has Rails/Sidekiq configured, and the task schedules all your existing projects and attachments associated with it to be migrated to the Hashed storage type:
Omnibus installation
sudo gitlab-rake gitlab:storage:migrate_to_hashed
Source installation
sudo -u git -H bundle exec rake gitlab:storage:migrate_to_hashed RAILS_ENV=production
If you have any existing integration, you may want to do a small rollout first to validate. You can limit the task to a range of project IDs, for example:

sudo gitlab-rake gitlab:storage:migrate_to_hashed ID_FROM=50 ID_TO=100
To monitor the progress in GitLab:
- On the top bar, select Menu > Admin.
- On the left sidebar, select Monitoring > Background Jobs.
- Watch how long the hashed_storage:hashed_storage_project_migrate queue takes to finish. After it reaches zero, you can confirm every project has been migrated by running the commands above.
If you find it necessary, you can run the previous migration script again to schedule missing projects.
Any error or warning is logged in Sidekiq’s log file.
If Geo is enabled, each project that is successfully migrated generates an event to replicate the changes on any secondary nodes.
You only need the
gitlab:storage:migrate_to_hashed Rake task to migrate your repositories, but there are
additional commands to help you inspect projects and attachments in both legacy and hashed storage.
Rollback from hashed storage to legacy storage
This task schedules all your existing projects and associated attachments to be rolled back to the legacy storage type.
Omnibus installation
sudo gitlab-rake gitlab:storage:rollback_to_legacy
Source installation
sudo -u git -H bundle exec rake gitlab:storage:rollback_to_legacy RAILS_ENV=production
If you have any existing integration, you may want to do a small rollback first to validate. You can limit the task to a range of project IDs, for example:

sudo gitlab-rake gitlab:storage:rollback_to_legacy ID_FROM=50 ID_TO=100
You can monitor the progress in the Admin Area > Monitoring > Background Jobs page.
On the Queues tab, you can watch the
hashed_storage:hashed_storage_project_rollback queue to see how long the process takes to finish.
After it reaches zero, you can confirm every project has been rolled back by running the commands above. If some projects weren’t rolled back, you can run this rollback script again to schedule further rollbacks. Any error or warning is logged in Sidekiq’s log file.
If you have a Geo setup, the rollback is not reflected automatically
on the secondary node. You may need to wait for a backfill operation to kick-in and remove
the remaining repositories from the special
@hashed/ folder manually.
Troubleshooting
The Rake task might not be able to complete the migration to hashed storage. Checks on the instance will continue to report that there is legacy data:
* Found 1 projects using Legacy Storage - janedoe/testproject (id: 1234)
If you have a subscription, raise a ticket with GitLab support as most of the fixes are relatively high risk, involving running code on the Rails console.
Read only projects
If you have set projects read only they might fail to migrate.
-
Check if the project is read only:
project = Project.find_by_full_path('janedoe/testproject')
project.repository_read_only
If it returns true (not nil or false), set it writable:
project.update!(repository_read_only: false)
Re-run the migration Rake task.
Set the project read-only again:
project.update!(repository_read_only: true)
Projects pending deletion
Check the project details in the Admin Area. If deleting the project failed
it will show as
Marked For Deletion At ..,
Scheduled Deletion At .. and
pending removal, but the dates will not be recent.
Delete the project using the Rails console:
-
With the following code, select the project to be deleted and account to action it:
project = Project.find_by_full_path('janedoe/testproject')
user = User.find_by_username('admin_handle')
puts "\nproject selected for deletion is:\nID: #{project.id}\nPATH: #{project.full_path}\nNAME: #{project.name}\n\n"
- Replace janedoe/testproject with your project path from the Rake task output or from the Admin Area.
- Replace admin_handle with the handle of an instance administrator or with root.
- Verify the output before proceeding. There are no other checks performed.
Destroy the project immediately:
Projects::DestroyService.new(project, user).execute
If destroying the project generates a stack trace relating to encryption or the error
OpenSSL::Cipher::CipherError:
Verify your GitLab secrets.
If the affected projects have secrets that cannot be decrypted it will be necessary to remove those specific secrets. Our documentation for dealing with lost secrets is for loss of all secrets, but it’s possible for specific projects to be affected. For example, to reset specific runner registration tokens for a specific project ID:
UPDATE projects SET runners_token = null, runners_token_encrypted = null where id = 1234;
Repository cannot be moved from errors in Sidekiq log
Projects might fail to migrate with errors in the Sidekiq log:
# grep 'Repository cannot be moved' /var/log/gitlab/sidekiq/current
{"severity":"ERROR","time":"2021-02-29T02:29:02.021Z","message":"Repository cannot be moved from 'janedoe/testproject' to '@hashed<value>' (PROJECT_ID=1234)"}
This might be caused by a bug in the original code for hashed storage migration.
There is a workaround for projects still affected by this issue. | https://docs.gitlab.com/ee/administration/raketasks/storage.html | 2022-05-16T21:51:30 | CC-MAIN-2022-21 | 1652662512249.16 | [] | docs.gitlab.com |
Creating enums
When creating a new enum, it should use the database type
SMALLINT.
The
SMALLINT type size is 2 bytes, which is sufficient for an enum.
This would help to save space in the database.
To use this type, add
limit: 2 to the migration that creates the column.
Example:
def change
  add_column :ci_job_artifacts, :file_format, :integer, limit: 2
end
All of the key/value pairs should be defined in FOSS
Summary: All enums needs to be defined in FOSS, if a model is also part of the FOSS.
class Model < ApplicationRecord
  enum platform: {
    aws: 0,
    gcp: 1 # EE-only
  }
end
When you add a new key/value pair to an enum and it's EE-specific, you might be tempted to organize the enum as follows:
# Define `failure_reason` enum in `Pipeline` model:
class Pipeline < ApplicationRecord
  enum failure_reason: Enums::Pipeline.failure_reasons
end
# Define key/value pairs that are used in FOSS and EE:
module Enums
  module Pipeline
    def self.failure_reasons
      {
        unknown_failure: 0,
        config_error: 1
      }
    end
  end
end

Enums::Pipeline.prepend_mod_with('Enums::Pipeline')
# Define key/value pairs that are used in EE only:
module EE
  module Enums
    module Pipeline
      override :failure_reasons
      def failure_reasons
        super.merge(activity_limit_exceeded: 2)
      end
    end
  end
end
This works as-is; however, it has a couple of downsides:
- Someone could define a key/value pair in EE that conflicts with a value defined in FOSS. For example, defining activity_limit_exceeded: 1 in EE::Enums::Pipeline.
- When that happens, the feature behaves completely differently. For example, we cannot figure out whether failure_reason is config_error or activity_limit_exceeded.
- When it happens, we have to ship a database migration to fix the data integrity, which might be impossible if you cannot recover the original value.
Also, you might consider working around this concern by setting an offset for EE's values. For example, the following sets 1_000 as the offset:
module EE
  module Enums
    module Pipeline
      override :failure_reasons
      def failure_reasons
        super.merge(activity_limit_exceeded: 1_000, size_limit_exceeded: 1_001)
      end
    end
  end
end
This may look like a working workaround; however, this approach has some downsides:
- Features could move from EE to FOSS or vice versa. Therefore, the offset might be mixed between FOSS and EE in the future. For example, when you move activity_limit_exceeded to FOSS, you'll see { unknown_failure: 0, config_error: 1, activity_limit_exceeded: 1_000 }.
- The integer column for the enum is likely created as SMALLINT. Therefore, you need to be careful that the offset doesn't exceed the maximum value of a 2-byte integer.
As a conclusion, you should define all of the key/value pairs in FOSS. For example, you can simply write the following code in the above case:
class Pipeline < ApplicationRecord
  enum failure_reason: {
    unknown_failure: 0,
    config_error: 1,
    activity_limit_exceeded: 2
  }
end
Add new values in the gap
After merging some EE and FOSS enums, there might be a gap between the two groups of values:
module Enums
  module Ci
    module CommitStatus
      def self.failure_reasons
        {
          # ...
          data_integrity_failure: 12,
          forward_deployment_failure: 13,
          insufficient_bridge_permissions: 1_001,
          downstream_bridge_project_not_found: 1_002,
          # ...
        }
      end
    end
  end
end
To add new values, you should fill the gap first. In the example above, add 14 instead of 1_003:
{
  # ...
  data_integrity_failure: 12,
  forward_deployment_failure: 13,
  a_new_value: 14,
  insufficient_bridge_permissions: 1_001,
  downstream_bridge_project_not_found: 1_002,
  # ...
}
Telegram#
Telegram is a cloud-based instant messaging and voice-over-IP service.
Basic Operations#
- Chat
- Get up to date information about a chat.
- Leave a group, supergroup or channel.
- Get the member of a chat.
- Set the description of a chat.
- Set the title of a chat.
- Callback
- Send answer to callback query sent from inline keyboard.
- Send answer to callback query sent from inline bot.
- File
- Get a file.
- Delete a chat message
- Edit a text message
- Pin a chat message
- Send an animated file
- Send a audio file
- Send a chat action
- Send a document
- Send a location
- Send group of photos or videos to album
- Send a text message
- Send a photo
- Send a sticker
- Send a video
- Unpin a chat message
Example Usage#
This workflow allows you to send a cocktail recipe to a specified chat ID every day via a Telegram bot. You can also find the workflow on n8n.io. This example usage workflow uses the following nodes:
- Cron
- HTTP Request
- Telegram
The final workflow should look like the following image.
1. Cron node#
The Cron node will trigger the workflow daily at 8 PM.
- Click on Add Cron Time.
- Set hours to 20 in the Hour field.
- Click on Execute Node to run the node.
In the screenshot below, you will notice that the Cron node is configured to trigger the workflow every day at 8 PM.

3. Telegram node (sendPhoto: message)#
This node will send a message on Telegram with an image and the recipe of the cocktail that we got from the previous node.
First of all, you'll have to enter credentials for the Telegram node. You can find out how to do that here.
Select 'Send Photo' from the Operation dropdown list.
- Enter the target chat ID in the Chat ID field. Refer to the FAQs to learn how to get the chat ID.
- Click on the gears icon next to the Photo field and click on Add Expression.
- Select the following in the Variable Selector section: Nodes > HTTP Request > Output Data > JSON > drinks > [item: 0] > strDrinkThumb. You can also add the following expression:
{{$node["HTTP Request"].json["drinks"][0]["strDrinkThumb"]}}.
- Click on Add Field and select 'Caption' from the dropdown list.
- Click on the gears icon next to the Caption field and click on Add Expression.
- Select the following in the Variable Selector section: Nodes > HTTP Request > Output Data > JSON > drinks > [item: 0] > strInstructions. You can also add the following expression:
{{$node["HTTP Request"].json["drinks"][0]["strInstructions"]}}.
- Click on Execute Node to run the node.
In the screenshot below, you will notice that the node sends a message on Telegram with an image and the recipe of the cocktail.
FAQs#
How can I send more than 30 messages per second?#
The Telegram API has a limitation of sending only 30 messages per second. Follow the steps mentioned below to send more than 30 messages:
1. Split In Batches node: Use the Split In Batches node to get at most 30 chat IDs from your database.
2. Telegram node: Connect the Telegram node with the Split In Batches node. Use the Expression Editor to select the Chat IDs from the Split In Batches node.
3. Function node: Connect the Function node with the Telegram node. Use the Function node to wait for a few seconds before fetching the next batch of chat IDs. Connect this node with the Split In Batches node.
You can also use this workflow.
How do I add a bot to a Telegram channel?#
- In the Telegram app, access the target channel and tap on the channel name.
- Make sure that the channel name is labeled as "public channel".
- Tap on Administrators and then on Add Admin.
- Search for the username of the bot and select it.
- Tap on the checkmark on the top-right corner to add the bot to the channel.
How do I get the Chat ID?#
There are two ways to get the Chat ID in Telegram.
- Using the Telegram Trigger node: On successful execution, the Telegram Trigger node returns a Chat ID. You can use the Telegram Trigger node in your workflow to get a Chat ID.
- Using the @RawDataBot: The @RawDataBot returns the raw data of the chat with a Chat ID. Invite the @RawDataBot to your channel/group, and upon joining, it will output a Chat ID along with other information. Be sure to remove the @RawDataBot from your group/channel afterwards.
gvm-tools: Remote Control of Your Greenbone Vulnerability Manager (GVM)¶
The Greenbone Vulnerability Management Tools, or gvm-tools in short, are a collection of tools that help with controlling a Greenbone Security Manager (GSM) appliance and its underlying Greenbone Vulnerability Manager (GVM) remotely.
Essentially, the tools aid accessing the communication protocols Greenbone Management Protocol (GMP) and Open Scanner Protocol (OSP).
Note
gvm-tools requires at least Python 3.7. Python 2 is not supported. | https://gvm-tools.readthedocs.io/en/latest/index.html | 2022-05-16T20:47:25 | CC-MAIN-2022-21 | 1652662512249.16 | [] | gvm-tools.readthedocs.io |
fMRI Short Course: Appendices¶
Note
This section is still under construction. Check back soon!
Introduction¶
Once you have finished analyzing the Flanker dataset in any of the software packages (FSL, SPM, or AFNI), you may still have a few questions about why the analyses were done the way that you did them. These appendices review some of these concepts. | https://andysbrainbook.readthedocs.io/en/latest/fMRI_Short_Course/fMRI_Appendices.html | 2022-05-16T22:11:49 | CC-MAIN-2022-21 | 1652662512249.16 | [] | andysbrainbook.readthedocs.io |
Production Checklist
The production checklist provides a set of best practices and recommendations to ensure a smooth transition to a production environment which runs a Hazelcast cluster. You should plan for and consider the following areas.
Network Recommendations
All Hazelcast members forming a cluster should be on a minimum 1Gbps Local Area Network (LAN).
Hardware Recommendations
We suggest at least 8 CPU cores or equivalent per member, as well as running a single Hazelcast member per host.
Operating System Recommendations
Hazelcast works in many operating environments and some environments have unique considerations. These are highlighted below.
As a general suggestion, we recommend turning off the swapping at operating system level.
Solaris
Hazelcast is certified for Solaris SPARC.
However, the following modules are not supported for the Solaris operating system:
hazelcast-jet-grpc
hazelcast-jet-protobuf
hazelcast-jet-python
VMWare ESX
Hazelcast is certified on VMWare VSphere 5.5/ESXi 6.0. Generally speaking, Hazelcast can use all of the resources on a full machine. Splitting a single physical machine into multiple virtual machines and thereby dividing resources are not required.
Consider the following for VMWare ESX:
Avoid taking VM snapshots of a running member; if you need a snapshot, stop the Hazelcast member first and restart it after the snapshot.
Network performance issues, including timeouts, might occur with LRO (Large Receive Offload) enabled on Linux virtual machines and ESXi/ESX hosts. We have specifically had this reported in VMware environments, but it could potentially impact other environments as well. We strongly recommend disabling LRO when running in virtualized environments.
Windows
According to a reported rare case, I/O threads can consume a lot of CPU cycles
unexpectedly, even in an idle state. This can lead to CPU usage going up to 100%.
This is reported not only for Hazelcast but for other GitHub projects as well.
General recommendations:
GC logs should be enabled
Minimum and maximum heap size should be equal
For Java 9+:
G1GC is the default recommended GC policy
No tuning is recommended unless needed
For Java 8 (see the example start command after this list):
Recommended GC policies are CMS and ParNewGC:
-XX:CMSInitiatingOccupancyFraction=65
-XX:+UseParNewGC
-XX:+UseConcMarkSweepGC
For large heaps G1GC is recommended as above
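As a reference, the Java 8 recommendations above can be combined into a start command along the lines of the following sketch. The heap size, GC log path, classpath, and member starter class are illustrative assumptions and may differ for your installation and Hazelcast version:

java -Xms8g -Xmx8g \
     -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=65 \
     -Xloggc:/var/log/hazelcast/gc.log -XX:+PrintGCDetails \
     -cp hazelcast.jar com.hazelcast.core.server.HazelcastMemberStarter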
Data Size Calculation Recommendations. | https://docs.hazelcast.com/hazelcast/5.2-snapshot/production-checklist | 2022-05-16T22:54:13 | CC-MAIN-2022-21 | 1652662512249.16 | [] | docs.hazelcast.com |
Views
Writing Views
If your plugin will provide its own page or pages within the NetBox web UI, you'll need to define views. A view is a piece of business logic which performs an action and/or renders a page when a request is made to a particular URL. HTML content is rendered using a template. Views are typically defined in
views.py, and URL patterns in
urls.py.
As an example, let's write a view which displays a random animal and the sound it makes. We'll use Django's generic
View class to minimize the amount of boilerplate code needed. instance from the database and passes it as a context variable when rendering a template named
animal.html. HTTP
GET requests are handled by the view's
get() method, and
POST requests are handled by its
post() method.
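A minimal sketch of such a view is shown below. The Animal model, plugin name, and template path are assumptions based on the description above, not a definitive implementation:

# views.py
from django.shortcuts import render
from django.views.generic import View

from .models import Animal


class RandomAnimalView(View):
    """Display a randomly-selected Animal and the sound it makes."""
    template_name = 'myplugin/animal.html'

    def get(self, request):
        # Pick a random Animal instance (None if no animals exist yet)
        animal = Animal.objects.order_by('?').first()
        return render(request, self.template_name, {
            'animal': animal,
        })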
Our example above is extremely simple, but views can do just about anything. They are generally where the core of your plugin's functionality will reside. Views also are not limited to returning HTML content: A view could return a CSV file or image, for instance. For more information on views, see the Django documentation.
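For instance, a hypothetical view that returns a CSV export instead of an HTML page might look like the following sketch (the Animal model and its fields are illustrative):

import csv

from django.http import HttpResponse
from django.views.generic import View

from .models import Animal


class AnimalCSVView(View):
    """Export all animals as a CSV attachment."""

    def get(self, request):
        response = HttpResponse(content_type='text/csv')
        response['Content-Disposition'] = 'attachment; filename="animals.csv"'
        writer = csv.writer(response)
        writer.writerow(['name', 'sound'])
        for animal in Animal.objects.all():
            writer.writerow([animal.name, animal.sound])
        return response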
URL Registration
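To make the view accessible to users, the plugin needs to map a URL to it in urls.py. A minimal sketch, reusing the hypothetical RandomAnimalView from above (the URL path and name are assumptions):

# urls.py
from django.urls import path

from . import views

urlpatterns = [
    path('random/', views.RandomAnimalView.as_view(), name='random_animal'),
]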
View Classes
NetBox provides several generic view classes (documented below) to facilitate common operations, such as creating, viewing, modifying, and deleting objects. Plugins can subclass these views for their own use.
Warning
Please note that only the classes which appear in this documentation are currently supported. Although other classes may be present within the
views.generic module, they are not yet supported for use by plugins.
Example Usage
# views.py
from netbox.views.generic import ObjectEditView
from .models import Thing

class ThingEditView(ObjectEditView):
    queryset = Thing.objects.all()
    template_name = 'myplugin/thing.html'
    ...
Object Views
Below are the class definitions for NetBox's object views. These views handle CRUD actions for individual objects. The view, add/edit, and delete views each inherit from
BaseObjectView, which is not intended to be used directly.
BaseObjectView (ObjectPermissionRequiredMixin, View)
Base view class for reusable generic views.
Attributes:
get_object(self, **kwargs)
Return the object being viewed or modified. The object is identified by an arbitrary set of keyword arguments
gleaned from the URL, which are passed to
get_object_or_404(). (Typically, only a primary key is needed.)
If no matching object is found, return a 404 response.
get_extra_context(self, request, instance)
Return any additional context data to include when rendering the template.
Parameters:
ObjectView (BaseObjectView)
Retrieve a single object for display.
Note: If
template_name is not specified, it will be determined automatically based on the queryset model.
get_template_name(self)
Return self.template_name if defined. Otherwise, dynamically resolve the template name using the queryset
model's
app_label and
model_name.
ObjectEditView (GetReturnURLMixin, BaseObjectView)
Create or edit a single object.
Attributes:
get_object(self, **kwargs)
Return an object for editing. If no keyword arguments have been specified, this will be a new instance.
alter_object(self, obj, request, url_args, url_kwargs)
Provides a hook for views to modify an object before it is processed. For example, a parent object can be defined given some parameter from the request URL.
Parameters:
ObjectDeleteView (GetReturnURLMixin, BaseObjectView)
Delete a single object.
Multi-Object Views
Below are the class definitions for NetBox's multi-object views. These views handle simultaneous actions for sets of objects. The list, import, edit, and delete views each inherit from
BaseMultiObjectView, which is not intended to be used directly.
BaseMultiObjectView (ObjectPermissionRequiredMixin, View)
Base view class for reusable generic views.
Attributes:
get_extra_context(self, request)
Return any additional context data to include when rendering the template.
Parameters:
ObjectListView (BaseMultiObjectView)
Display multiple objects, all of the same type, as a table.
Attributes:
get_table(self, request, bulk_actions=True)
Return the django-tables2 Table instance to be used for rendering the objects list.
Parameters:
export_table(self, table, columns=None, filename=None)
Export all table data in CSV format.
Parameters:
export_template(self, template, request)
Render an ExportTemplate using the current queryset.
Parameters:
BulkImportView (GetReturnURLMixin, BaseMultiObjectView)
Import objects in bulk (CSV format).
Attributes:
BulkEditView (GetReturnURLMixin, BaseMultiObjectView)
Edit objects in bulk.
Attributes:
BulkDeleteView (GetReturnURLMixin, BaseMultiObjectView)
Delete objects in bulk.
Attributes:
get_form(self)
Provide a standard bulk delete form if none has been specified for the view
Feature Views
These views are provided to enable or enhance certain NetBox model features, such as change logging or journaling. These typically do not need to be subclassed: They can be used directly e.g. in a URL path.
ObjectChangeLogView (View)
Present a history of changes made to a particular object. The model class must be passed as a keyword argument when referencing this view in a URL path. For example:
path('sites/<int:pk>/changelog/', ObjectChangeLogView.as_view(), name='site_changelog', kwargs={'model': Site}),
Attributes:
ObjectJournalView (View)
Show all journal entries for an object. The model class must be passed as a keyword argument when referencing this view in a URL path. For example:
path('sites/<int:pk>/journal/', ObjectJournalView.as_view(), name='site_journal', kwargs={'model': Site}),
Attributes:
Extending Core Views
Overview
Making Itential Automation Platform (IAP) accessible means making it usable by everyone and making it possible for everyone to seamlessly connect their IT systems with network technologies for end-to-end network configuration, compliance, and automation.
This guide provides accessibility information about IAP user interface elements along with information about views and controls that are intuitive and scalable. For simplicity, these guidelines are presented herein as UI/UX checklists to verify key accessibility requirements are met in the design and implementation of IAP.
Process and UI Framework
Itential has built a set of guidelines to follow when creating user interfaces and designs for IAP. These guidelines aim at targeting WCAG 2.1 conformance (as recommended by W3C).
- Compliance with WCAG 2 is managed primarily through the RodeoUI library, which controls color, font properties, and other visual details.
- Rodeo is built on Prime React, which claims to be fully accessible and in compliance with Section 508 standards.
- The Itential Style Guide, Flavor, provides an overview of all UI elements and guidelines for building interfaces with the RodeoUI library.
- Specific app designs are based on the Flavor style guide and Prime React component library.
- Informal accessibility audits of live applications are performed pre-release by the Itential Testing & Verification team.
Visual Accessibility
Accessibility begins with design. The following standards are met by all UI components and patterns in Figma, a vector graphics editor and prototyping tool used at Itential to design and build IAP.
- The AA (ideal) level of conformance is the standard for body text compared to background color. For contrast testing, Color Review is used to test compliance.
- Error, warning, and success states must use icons along with text and color. For colorblind and grayscale testing, the Color Oracle simulator is used.
- Text style properties (minimum requirements) are:
- Font size to at least 14px.
- Text enlargement not to exceed 200% (font magnification, not browser zoom).
- Line height (line spacing) to at least 1.5 times the font size.
- Paragraph spacing to at least 2 times the font size.
- Letter spacing to at least 0.12 times the font size.
- Word spacing to at least 0.16 times the font size.
Functional Accessibility
Every functional test requirement references a WCAG 2.1 Success Criterion target.
- A project checklist is used to cover most of the WCAG requirements for accessibility and compliance.
- Accessibility is tested across four major browsers: IE11, Edge, Chrome, and Firefox.
- Where possible, built-in accessibility checkers are used to inspect a page. For example, in Chrome, select Audit → Accessibility → Generate Report.
- If a page is rendered without CSS, it should still be in a logical order and navigable.
Keyboard Control (No Mouse)
Itential users can use their keyboard like a mouse to navigate and interact with items onscreen.
- If it can be clicked, selected, or modified (on input) it must be available from the keyboard (tabbing).
- For drag and drop functionality, a keyboard-based cut and paste alternative can be offered, or a separate UI for accessibility can be enabled.
- No keyboard traps. User must always be able to leave a component with the keyboard.
- Tabbing must be in a logical top-down, left-right order. A tab-index is used to enforce a certain tabbing order, where needed.
- If a button or link triggers a dialog or modal window, when the user closes the dialog, they should not be forced back to the top of the page. The element that had focus when the dialog was launched should regain focus when the dialog is closed.
The table below presents a list of keyboard shortcuts and best practices for assistive technologies and accessibility.
Design Examples
The intent of this section is to present examples of the IAP user interface.
Figure 1: Advanced Search and Collections View
Figure 2: IAP Gen 2 Workflow
Accessibility at Itential
The most important aspects of any user interface are navigation and consistent use of components to predict where things are on each page. Itential dedicates extra attention to these areas to inclusively improve the product experience for all types of users.
We welcome your feedback on the accessibility of Itential Automation Platform. Please let us know if you encounter accessibility limitations.
Existing customers, please use your support portal.
Non-customers and all other inquiries, please contact us via email or phone.
- Phone: 1-800-404-5617
We try to respond to feedback and reported issues within 5-7 business days. | https://docs.itential.com/2021.2/product/Accessibility/ | 2022-05-16T21:30:14 | CC-MAIN-2022-21 | 1652662512249.16 | [] | docs.itential.com |
Extent Reports
Contents:
1. Overview
2. Plugin installation
3. Usage
3.1 Report Script
3.2 Set Script
3.3 Flush Script
4. Klov Integration
5. ExtentX Integration
5.1 ExtentX Setup
5.2 ExtentX Customization
6. Change Log
1. Overview
The Extent Reports plugin provides integration with the Extent library to deliver comprehensive HTML reports from T-Plan Robot automation.
The plugin also supports ExtentX/Klov report servers as follows:
- Plugin version 0.4 is packaged with Extent v3.1.5 and supports Klov report server v0.1.1.
- Plugin version 0.3 is based on Extent v2.41.0 and supports the now obsoleted ExtentX 0.2 server.
- The obsoleted ExtentX v1 is not supported at all.
The Extent libraries are published under the BSD license which is packaged inside the plugin release. The plugin is published with the source code and serves as a reference example of integration with 3rd party reporting frameworks. Customers of T-Plan Ltd. are free to reuse it for their own solutions. The source code is packaged inside the plugin archive.
Should you have any questions or suggestions contact the T-Plan support. For the list of other plugins provided by T-Plan go here.
2. Plugin installation
The plugin requires T-Plan Robot 4.0 or higher. To install the plugin download it from the following location:
- Version 0.4:
- Support of Extent 3.1.5 and the Klov 0.1.1 report server
- Version 0.3:
- Support of the ExtentX 0.2 report server (obsoleted)
- Version 0.2:
- Minor fixes
- Version 0.1:
- Initial release
For details on the version differences see the Change Log. Should you need to upgrade the Extent libraries replace all JAR files except for the
extent-plugin.jar one in the plugin folder and the
lib/ subfolder with the new ones. Names of the JAR files don't matter because the plugin loads all
.jar files located in these two folders.
OPTION 1:
IMPORTANT: As Java 9 broke support of dynamic JAR loading you must use this option if you are running Java 9 or higher.
- Unzip the file to a location on your hard drive.
- Add all the JAR files in the folder and the lib/ subfolder to the class path of the Robot start command. For details see the Robot release notes.
- Start or restart Robot. The test scripts will be exposed to Robot. When you create a Run command in your TPR script the property window will list the plugin scripts.
OPTION 2 (JAVA 8 AND LOWER):
- Unzip the file to the plugins/ directory under the Robot installation directory. This will make Robot load the classes on the start up. Make sure to remove any older versions of the plugin.
- Start or restart Robot. The test scripts will be exposed to Robot. When you create a Run command in your TPR script the property window will list the plugin scripts.
OPTION 3 (JAVA 8 AND LOWER):
- Unzip the file to a location on your hard drive.
- If you plan on using the plugin in the TPR scripts put the following command to the beginning of each test script:
Include "<location>/extent-plugin.jar"
- If you plan on using the plugin in Java test scripts put all the JAR files in the plugin archive onto the Java class path.
- To create a portable Robot project extract the archive to the project folder and reference it from scripts using the _PROJECT_DIR variable:
Include "{_PROJECT_DIR}/extent-plugin.jar"
Alternatively use a relative path to the calling script:
Include "../extent-plugin.jar"
To uninstall the plugin simply delete the files and remove the Include references.
3. Usage
The plugin contains three Java test scripts:
The plugin scripts are to be called from TPR test scripts using the Run command. The commands may be easily created using the Command Wizard tool. To edit an existing Run command right click it and select Properties in the context menu.
The following picture shows how the parameters and artefacts created by a Robot test script get propagated into the Extent report:
Example TPR script:
// Load the plugin saved to the project dir
Include "../extent-plugin.jar"
// Start the Extent report
Run "com.tplan.extent.Report" desc="Extent reports demo" file="extentreport.html" name="Demo Test"
// Set the author, category, target system and Robot version
Run "com.tplan.extent.Set" author="John Doe" category="Regression" param="System" value="Windows 7 64-bit"
Run "com.tplan.extent.Set" param="Robot" value="{_PRODUCT_VERSION_LONG}"
// Start also the Robot report to show how it gets linked
Report "results.xml"
// Produce some reportable objects
// Screenshot
Screenshot "myscreen.png" desc="Test screenshot"
// Record one PASS and one FAIL test step
Step "Successful test step" pass
Step "Unsuccessful test step" fail
// Exit with the code of 0 which means the overall PASS result
Exit 0
Resulting Extent report:
The system info parameters are located in the test details:
3.1 Report Script
DESCRIPTION
The Report script provides functionality similar to the Robot's Report command. Key features:
- To get the accurate data the report must be started at the beginning of your test script. If you start it later or even at the end of your script it will contain all the artefacts but the duration and result time stamps will be incorrect. This is a limitation of the Extent API.
- Unless an absolute file path is specified the report gets saved to the script's report directory, i.e. to the path specified by the _REPORT_DIR variable. See the docs on the report paths for details. You may create the Robot report simultaneously in the same folder through another Report command call.
- The plugin is not ready for parallel testing, i.e. when two scripts running in parallel write to the same report. The plugin can be eventually updated to support parallel reports following this example.
- The report will be written to the target file on the script termination. To save it at any time during the script execution call Flush.
The report will record the following Robot artefacts:
- Screenshots created by the Screenshot command,
- Test steps created by Step,
- Warnings created by Warning,
- Logs written to the Robot's execution log by the T-Plan framework and calls of the Log command (optional, off by default).
Extent by default makes a test fail if there's at least one failed step. This plugin however follows the Robot model and derives the PASS/FAIL result from the test script exit code.
SYNOPSIS (TPR SCRIPTS)
Run com.tplan.extent.Report [file=<HTML_file_path>] [name=<name>] [desc=<description>] [logs=<true|false>] [project=<project_name>] [extentx=<address[:port]>] [kolv=<address[:port]>]
SYNOPSIS (JAVA SCRIPTS)
run ("com.tplan.extent.Report", "file", "<HTML_file_path>", [, "name", "<name>"] [, "desc", "<description>"] [, "logs", "true|false"] [, "project", "<project_name>"] [, "extentx", "<address[:port]>"] [, "klov", "<address[:port]>"] );
* Red color indicates obligatory parameters
OPTIONS
file=<HTML_file_path>
- Path to the HTML file to save the Extent report to. If the file is relative or just a file name it will be resolved against the current report directory of the calling Robot script.
name=<name>
- The test script name.
desc=<description>
- The test script description.
logs=<true|false>
- The value of true will record also logs written to the Robot's execution log. These are created by Robot and by calls of the Log command. The default value is false (do not record logs).
extentx=<address[:port]>
- Optional address (known network host name or IP address) of the machine hosting the Mongo database which serves as a back up of the ExtentX or Klov report server, for example "192.168.100.3" or "mymachine.mynetwork.com". The port doesn't have to be specified as long as the Mongo DB runs on the default port of 27017. When a valid address is provided the script will creates a local HTML report and uploads a copy to the DB to make it visible in the ExtentX or Klov report server. The latter one requires the klov parameter to be populated as well. For details read the Klov Integration or ExtentX Integration chapter. Supported since v0.3.
klov=<address[:port]>
- Optional address (known network host name or IP address) of the machine hosting the Klov report server, for example "192.168.100.3" or "mymachine.mynetwork.com". The port doesn't have to be specified as long as Klov runs on the default HTTP port of 80. The extentx parameter must be populated with the MongoDB address as well. For details read the Klov Integration chapter. Supported since v0.4.
project=<project_name>
- Optional project name. It is not displayed by the report but allows for categorization in the ExtentX dashboard. For details read the Klov or ExtentX Integration chapter. Supported since v0.3.
RETURNS
The command always returns 0 (zero).
EXAMPLES
- Start an Extent report in the script's report folder:
Run "com.tplan.extent.Report" file="extent.html" name="MyApplication test"
desc="This test opens MyApplication and tests its functionality."
- Start an Extent report and upload it to the Klov report server. We presume that both Klov and MongoDB are installed at default ports of the local machine:
Run "com.tplan.extent.Report" file="extent.html" name="MyApplication test"
project="MyApplication"
extentx="localhost"
klov="localhost"
- Start an Extent report and upload it to the ExtentX dashboard whose Mongo DB back end runs on machine mymongo.mynetwork.com :
Run "com.tplan.extent.Report" file="extent.html" name="MyApplication test"
project="MyApplication"
extentx="mymongo.mynetwork.com"
- Create both the Extent and Robot XML reports in the script's report folder:
Run "com.tplan.extent.Report" file="extent.html" name="MyApplication test"
desc="This test opens MyApplication and tests its functionality."
Report "results.xml"
desc="This test opens MyApplication and tests its functionality."
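The same report can also be started from a Java test script through the run() method shown in the Java synopsis above. A minimal sketch of the call that would go inside your Java test script's test() method (the file name and values are illustrative):

// start the Extent report (equivalent of the first TPR example above)
run("com.tplan.extent.Report",
        "file", "extent.html",
        "name", "MyApplication test",
        "desc", "This test opens MyApplication and tests its functionality.");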
3.2 Set Script
DESCRIPTION
The Set script sets attributes of the previously started Extent report such as:
- The test autor name, such as "John Doe".
- The test category, such as "Regression", "Black box" etc.
- The system info in form of a param-value pair, such as "System", "Windows 7 x64" etc.
- The report configuration file (XML) which allows you to customize the report appearance. See the example.
To populate multiple values (multiple authors, system info entries or categories) call the script repeatedly. If no parameters are specified or the script is called while there are no running Extent reports it will do nothing.
SYNOPSIS (TPR SCRIPTS)
Run com.tplan.extent.Set [author=<author_name>] [category=<category>] [param=<sysinfo_param>] [value=<sysinfo_value>]
SYNOPSIS (JAVA SCRIPTS)
run ("com.tplan.extent.Set" [, "author", "<author_name>"] [, "category", "<category>"] [, "param", "<sysinfo_param>"] [, "value", "<sysinfo_value>"] );
* Red color indicates obligatory parameters
OPTIONS
author=<author_name>
- The test author name, for example "John Doe".
category=<category>
- The category name, for example "Regression" or "Black box".
param=<sysinfo_param>
- System info parameter name, for example "System".
value=<sysinfo_value>
- System info parameter value, for example "Windows 7 x64".
config=<configuration_file>
- Extent configuration file (.xml). This is obsoleted and it is supported only by the old Extent 2.41.0.
RETURNS
The command always returns 0 (zero).
EXAMPLES
- Start an Extent report in the script's report folder and set its author, category and system type:
Run "com.tplan.extent.Report" file="extent.html"
Run "com.tplan.extent.Set" author="John Doe" category="Regression"
param="System"
value="Windows 7 x64"
- Set multiple authors:
Run "com.tplan.extent.Set" author="John Doe"
Run "com.tplan.extent.Set" author="Jane Doe"
3.3 Flush Script
DESCRIPTION
The Extent reports started through Report are for performance reasons flushed (saved to file) after the test script finishes. To flush the running report(s) at any time during the script execution use Flush.
If the script is called while there are no running Extent reports it will do nothing.
SYNOPSIS (TPR SCRIPTS)
Run com.tplan.extent.Flush
SYNOPSIS (JAVA SCRIPTS)
run("com.tplan.extent.Flush");
* Red color indicates obligatory parameters
OPTIONS

No options.
RETURNS
The command always returns 0 (zero).
EXAMPLES
- Start an Extent report and flush it immediately to make the report file available:
Run "com.tplan.extent.Report" file="extent.html" name="MyApplication test"
desc="This test opens MyApplication and tests its functionality."
Run "com.tplan.extent.Flush"
4. Klov Integration
Klov is a report server delivering a test dashboard on top of a set of Extent reports. It features basic test report statistics and analysis. For a demo see the Klov site.
- Follow the instructions at the Klov site to install and start the server.
- Once up and running call the Report script with the properly specified extentx (the Mongo DB host and port) and klov (Klov server host and port) parameters. The Extent report will be both saved to a local HTML file and uploaded to the Klov server.
NOTE: The Klov report server fails to upload and display report screen shots. See bug #45: Problems attaching screenshots to the log. This issue does not affect Extent reports saved to local HTML files.
5. ExtentX Integration
IMPORTANT: ExtentX has been obsoleted by Klov. We provide the information below only for a backward reference.
ExtentX is an older product which has been obsoleted by Klov.
The plugin v0.3 and newer integrates with ExtentX v0.2 featuring:
- ExtentX v0.2 is supported. Version 0.2.1 is NOT supported because it requires Extent v3.0.0 which was unstable at the time of writing of this document (August 2016). Should you be interested in ExtentX 0.2.1+ integration please contact the T-Plan support for information on the latest staus.
- The plugin integrates with ExtentX through the "extentx" and "project" parameters of the Report script:
- The extentx parameter specifies the address (host name or IP) and optional port of the machine hosting the Mongo DB.
- The project specifies the project name. The dashboard supports filtering by the project through the Select Project button in the top left menu:
- When the Mongo DB address is specified the plugin creates both the local report and a copy in the ExtentX dashboard.
- Due to a limitation of ExtentX 0.2 no external resources are uploaded to the server. This includes screen shots, logs and any external files linked to the Extent HTML report. This is allegedly being addressed in ExtentX v0.2.1 which as of Aug 2016 has not been officially released yet.
5.1 ExtentX Setup
This document contains steps to install the ExtentX server (dashboard) on Windows 10 Home 64-bit. For an installation on 32-bit Windows please download the 32-bit installers. The steps basically follows instructions for the MongoDB & NodeJS on same host scenario published at the ExtentX site but it is more detailed and deals with a few issues experienced during the installation process. For other systems or the scenario with MongoDB and ExtentX running on separate machines see the official installation instructions.
1. Install NodeJS using the 64-bit MSI installer.
2. Install MongoDB using the Windows Server 2008 R2 64-bit or later with SSL support installer.
3. Create the default DB data folder and start MongoDB from a command prompt (instructions taken from the MongoDB docs)
md \data\db
"C:\Program Files\MongoDB\Server\3.2\bin\mongod.exe"
Leave the command prompt open. The output must say something like:
2016-08-26T10:36:37.496+0200 I NETWORK [initandlisten] waiting for connections on port 27017
TIP: For info on how to set up MongoDB as a system service see the product documentation.
4. Download ExtentX 0.2-alpha. Do NOT download 0.3 pre-Alpha because it requires Extent 0.3 while our Extent Reports plugin v0.3 for Robot is based on Extent 0.2.41.
5. Unzip the file, open a command prompt and change to the "extentx" folder (the one that contains the
package.json file):
npm install
node_modules\.bin\sails lift
Leave the command prompt open. The output must say something like:
info: To see your app, visit
info: To shut down Sails, press <CTRL> + C at any time.
6. Open in the web browser. You should see the empty ExtentX dashboard. To view the dashboard from another machine replace "localhost" with the host name or IP address of the machine hosting the ExtentX installation.
7. To kill the dashboard press Ctrl+C in the Extent app command prompt and eventually kill the database the same way too. To start both components again switch to the ExtentX home folder and execute:
Command prompt #1:
"C:\Program Files\MongoDB\Server\3.2\bin\mongod.exe"
Command prompt #2:
node_modules\.bin\sails lift
5.2 ExtentX Customization
This chapter provides steps on how to modify the ExtentX source code to display your company links and/or graphics. As the product is licensed under the BSD license there are little legal considerations.
The instructions are based on a review of the ExtentX 0.2-alpha source code. The files are a mixture of JavaScript code and HTML elements and are easy to modify in any plain text editor (Notepad, JEdit,..). To apply your changes simply restart the ExtentX application ("sails lift").
6. Change Log
Version 0.4 released on 5 June 2018
- Support of Extent 3.1.5
- Integration with the Klov 0.1.1 report server through the "extentx", "klov" and "project" parameters of the Report script.
Version 0.3 released on 26 August 2016
- Integration with the ExtentX dashboard through the "extentx" and "project" parameters of the Report script.
- ExtentX description, setup and customization instructions introduced to this document.
Version 0.2 released on 1 August 2016
- Image (screenshot) links changed from absolute to relative ones to make the images display correctly under a web root (Jenkins).
- When running in the CLI mode (with the -n/--nodisplay option) the plugin prints out a console log with the report file location (same as the default XML/HTML report generator).
Version 0.1 released on 16 June 2016
- Initial version. | https://docs.t-plan.com/robot/robot-plugins/extent-reports | 2022-05-16T21:57:13 | CC-MAIN-2022-21 | 1652662512249.16 | [] | docs.t-plan.com |
Returns true if both arguments evaluate to true. Equivalent to the && operator.
Since the AND function returns a Boolean value, it can be used as a function or a conditional.
and(finalScoreEnglish >= 60, finalScoreMath >= 60)

Output: Returns true if the values in the finalScoreEnglish and finalScoreMath columns are greater than or equal to 60. Otherwise, the value is false.
and(value1, value2)
For more information on syntax standards, see Language Documentation Syntax Notes.

value1, value2: Expressions, column references, or literals to compare as Boolean values.
Usage Notes:
Tip: For additional examples, see Common Tasks.
This example demonstrates the AND, OR, and NOT logical functions.

In this example, the dataset contains results from survey data on two questions about customers. The yes/no answers to each question determine if the customer is 1) still active, and 2) interested in a new offering.
Functions:
- Allows the merchant to update the customers' points with a remark.
- Customers can apply points on the cart as well as on the checkout page.
2.Installation
- Automatic Installation:
Automatic installation is the easiest option handled by WordPress. Follow these steps for automatic installation:
1) Go to the Admin panel navigate to the sidebar click on the ‘Plugins’ menu > ‘Add New’.
2) On the ‘Add Plugins’ page go to the search bar type ‘Points and Rewards for WooCommerce’.
Once you find 'Points and Rewards for WooCommerce' by WP Swings, you can view details about it such as the release version, rating, and description. You can install it simply by clicking "Install Now".
- Manual Installation:
Manual installation is another way to install the plugin in your WordPress environment. It involves downloading the 'Points and Rewards for WooCommerce' extension and uploading it to the web server via your favorite FTP application. The steps for manual installation are as follows:
1) Upload the ‘Points and Rewards for WooCommerce’ folder to the /WP-content/plugins/ directory.
2) Activate the plugin through the ‘Plugins’ menu in WordPress.
3.General Setting
After the successful installation of the plugin, the admin can perform all the plugins’ settings one by one.
- After the successful installation of the plugin, first of all, enable the plugin.
- Go to admin panel click on the WooCommerce > Points and Rewards > General Setting.
- Enable the points and Rewards settings by clicking on the checkbox.
- Click on Save Changes.
3.1.Signup Setting
Enable Signup Points for the user. Through this setting, when a user signs up on your site, they will get signup points as a reward.
To view the earned points, follow these steps:
- Go to the My Account page.
- Click on the "Points" tab in the sidebar.
- Finally, click on the View Points Log link.
3.2.Referral Setting
Enable the Referral Setting for customers. Through this setting, customers will get points on the referee's (the user invited by the customer) purchase.
- Go to admin panel click on the WooCommerce > Points and Rewards > General Setting.
- Enable the Referral Points settings and enter the Referral Points.
- Click on the “Save Changes” button.
After successfully enabling settings, customers can share the referral link with other users.
From the My Account Page, they can copy the referral link and share it to other users.
3.5.Redemption Settings
Enable this setting if you want to allow your customers to redeem their earned points over the Cart page and Checkout page to get the discount.
- Redemption Over Cart Sub-Total: Enable this setting if you want to allow your customers to redeem their earned points over the cart Sub-total.
- Conversion Rate: Enter the conversion rate of points redemption.
The customer can redeem their points based on the conversion rate set by the admin. For example, the worth of 10 points is equal to $1.
- Enable Apply Points during Checkout: Enable this setting if you want to allow your customers to redeem their earned points over the Checkout page.
Customers can apply their points over the cart subtotal.
Customers can apply their points over the checkout page.
Your customers can see their total points on “Points Log Table”.
4.Per Currency Points Settings
Through this setting, customers will earn points based on the per currency points conversion: whenever the customer spends an amount on the site, they will get points as a reward.
- Enable Per Currency Points Conversion: Enable per currency conversion. This setting allows your customers to earn points on their purchases based on the per currency points conversion rate.
- Per $ Points Conversion: Enter the points for currency conversion. According to the conversion rate, whenever the customer spends the defined amount on the site, they will get the defined points as a reward.
Customers can see this notification on site.
Customers can see their points from My Account > Points > Points Log Table page.
5.Points Table
6.Enable Points Notification Settings
Through this setting, you can notify your users about their points through the Sub-total, Points On Order Total Range.
7.Enable Membership
This feature allows your customers to reach a membership level with the required points and enjoy the benefits on the selected categories or products offered by that level.
You can create the level for the membership by the following steps:
- Enable Membership: Enable Membership setting.
- Exclude Sales Products: Exclude sale products from the membership benefits.
- Create Membership
- Enter Level: Enter the name of the membership level.
8.Assign Product Points by Global Setting
Through this setting, you can assign equal points for all products at once by the global setting. After that, your customers will get the same points on purchasing any product.
- Global Assign Product Points: Enable the setting.
- Enter Assign Global Product Points: Enter the points that you want to assign on all products.
9.Shortcodes
Use shortcodes to display the notifications anywhere on the site. We have provided a few shortcodes.
- Points: The entered text will be displayed along with the [MYCURRENTPOINT] shortcode.
- [MYCURRENTUSERLEVEL]: This shortcode is used for displaying the current Membership Level of users.
For example, if you want to show the current Membership Level to customers on the Shop page, simply go to the Shop page, paste the Membership Level shortcode, and click on the Update button. The current Membership Level of the customer will be displayed along with the text. Enter text for Current User Level: the entered text will be displayed along with the [MYCURRENTUSERLEVEL] shortcode.
- [SIGNUPNOTIFICATION]: This shortcode is used for displaying the signup notification.
Other Setting: Select a color for the notification bar.
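For example, to show a customer's balance and level inside any page or post, the shortcodes above can be placed directly in the content editor (the surrounding wording is illustrative):

You currently have [MYCURRENTPOINT] and your membership level is [MYCURRENTUSERLEVEL].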
10.Enable the Settings for Orders Total Points
This setting allows your customers to get the points by fulfilling the order amount range.
The customer will get points whenever their order amount falls between the minimum and maximum amounts of the Order Range.
To enable the 'Order Total Points' setting, first click on the checkbox to enable the setting, then set the points within the order amount range, and finally click on the 'Save Changes' button to save the settings.
11.WPML Compatibility
The Points and Rewards for WooCommerce plugin is compatible with the WordPress Multilingual (WPML) plugin for localization of the Points and Rewards for WooCommerce plugin.
14.Feedback and Suggestions
Don’t see a feature in Point and Rewards for WooCommerce plugin that you think would be useful?
We’d love to hear it: Reach out to our Support query and we’ll consider adding it in a future release.
15.FAQs
"Points" tab is not displaying under My Account. Is there any reason?
The Points tab is only displayed for the customer user role, not the admin. So please make sure you are logged in as a customer user role.
How can customers use the earned points?
For redeeming the points, we have provided the option to apply points on the cart/checkout page. Customers can apply the earned points and get a discount.
Can I set a different conversion rate for earning the points when customers spent money and a different conversion rate when customers redeem the points and get the discount?
Yes, both are different features and we have provided a separate setting for each. For earning points, you can set the conversion rate under "Points and Rewards > Earn Points Per Currency Settings". For redeeming points, you can set the conversion rate under "Points and Rewards > General > Redemption Settings".
How will customers know for which events they will earn points?
We have provided a "Ways to gain points" setting where you can enter the message that you want to display to your customers. The entered message is displayed on the My Account > Points tab, so customers can see how they can earn points.
Can I award points to customers for spending money on the site?
Yes, we have this feature; you can allow your customers to earn points on the money they spend.
Can I update any customer's points?
Yes, we have provided a feature to update customer points manually under WooCommerce > Points and Rewards > Points Table. From there you can add or subtract any customer's points.
Can the admin see customers' total points and their point logs?
Yes, the admin can see all customers' points as well as the point log, which shows the events on which they have earned or redeemed points.
Can I create multi-level memberships and provide a different discount on each level?
In the org (free) version, you can create only one membership level. In the pro version, you can create multiple membership levels and assign a different discount to each level.
Will the membership level get upgraded automatically if the customer has the required points in their account?
No, the membership level is not upgraded automatically. After earning the points, customers need to upgrade their membership level manually by redeeming some points, after which they get the advantages of that level.
How can I display a customer's total points on other pages and menus?
We have provided the shortcode [MYCURRENTPOINT]. You can use this shortcode anywhere on your site to display the customer's total points.
3.3.Social Sharing Setting
Enable this setting to allow your customers to share their referral link with other users through social media channels.
After successfully enabling the setting, your customers can share referral links with other users via social media platforms.
Setting up Cyberduck is very simple!
Start by downloading our preconfigured Filebase Profile for Cyberduck/Mountain Duck (thanks to Filebase for hosting this!).
The server name is preconfigured. All you will need is your Filebase S3 API Access Key ID and S3 API Secret Key, found on your user dashboard under "Settings".
Enter the credentials in Cyberduck, hit Connect, and your Filebase profile is ready to go.
Your entire bucket list should now appear after connecting, ready for use with Cyberduck. | https://docs.filebase.com/client-configurations/cyberduck | 2020-11-23T19:38:28 | CC-MAIN-2020-50 | 1606141164142.1 | [] | docs.filebase.com |
TLS certificates for registry or HTTPS endpoints must be added to a ConfigMap in order to import data from these sources. This ConfigMap must be present in the namespace of the destination DataVolume.
Create the ConfigMap by referencing the relative file path for the TLS certificate.
Ensure you are in the correct namespace. The ConfigMap can only be referenced by DataVolumes if it is in the same namespace.
$ oc get ns
Create the ConfigMap:
$ oc create configmap <configmap-name> --from-file=</path/to/file/ca.pem> | https://docs.openshift.com/container-platform/4.3/cnv/cnv_virtual_machines/cnv_importing_vms/cnv-tls-certificates-for-dv-imports.html | 2020-11-23T20:05:34 | CC-MAIN-2020-50 | 1606141164142.1 | [] | docs.openshift.com |
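A DataVolume can then reference the ConfigMap by name through the certConfigMap field of its import source. The manifest below is a minimal sketch only: the resource name, registry URL, and storage size are placeholder assumptions, and the apiVersion shown follows older CDI releases and may differ in your cluster.

    apiVersion: cdi.kubevirt.io/v1alpha1
    kind: DataVolume
    metadata:
      name: example-datavolume             # placeholder name
    spec:
      source:
        registry:
          url: "docker://<registry-url>/<image>"   # placeholder image
          certConfigMap: <configmap-name>          # the ConfigMap created above
      pvc:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi                   # placeholder size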
Configuration
Amazon EMR releases 4.x or later.
An optional configuration specification to be used when provisioning cluster instances, which can include configurations for applications and software bundled with Amazon EMR. A configuration consists of a classification, properties, and optional nested configurations. A classification refers to an application-specific configuration file. Properties are the settings you want to change in that file. For more information, see Configuring Applications.
Contents
- Classification
The classification within a configuration.
Type: String
Required: No
- Configurations
A list of additional configurations to apply within a configuration object.
Type: Array of Configuration objects
Required: No
- Properties
A set of properties specified within a configuration classification.
Type: String to string map
Required: No
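As an illustrative sketch (the classification names and property values below are examples, not defaults), a configuration list that sets a Spark property and exports an environment variable through a nested configuration might look like this:

    [
      {
        "Classification": "spark-defaults",
        "Properties": {
          "spark.executor.memory": "2g"
        }
      },
      {
        "Classification": "hadoop-env",
        "Properties": {},
        "Configurations": [
          {
            "Classification": "export",
            "Properties": {
              "HADOOP_DATANODE_HEAPSIZE": "2048"
            }
          }
        ]
      }
    ]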
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/emr/latest/APIReference/API_Configuration.html | 2020-11-23T19:49:34 | CC-MAIN-2020-50 | 1606141164142.1 | [] | docs.aws.amazon.com |
High Level Support for Multigrid with KSPSetDM() and SNESSetDM()
This chapter needs to be written. For now, see the manual pages (and linked examples) for KSPSetDM() and SNESSetDM().
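As a rough sketch of the intended workflow (not a complete program — ComputeMatrix() and ComputeRHS() stand in for user-provided callbacks that assemble the operator and right-hand side on whatever level the solver requests):

    #include <petscksp.h>

    extern PetscErrorCode ComputeMatrix(KSP, Mat, Mat, void *); /* user callback (assumed) */
    extern PetscErrorCode ComputeRHS(KSP, Vec, void *);         /* user callback (assumed) */

    int main(int argc, char **argv)
    {
      DM  da;
      KSP ksp;

      PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
      /* A DMDA describing the fine grid; the DM supplies the grid hierarchy */
      PetscCall(DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                             DMDA_STENCIL_STAR, 33, 33, PETSC_DECIDE, PETSC_DECIDE,
                             1, 1, NULL, NULL, &da));
      PetscCall(DMSetFromOptions(da));
      PetscCall(DMSetUp(da));

      PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
      PetscCall(KSPSetDM(ksp, da));                 /* hand the DM to the solver */
      PetscCall(KSPSetComputeOperators(ksp, ComputeMatrix, NULL));
      PetscCall(KSPSetComputeRHS(ksp, ComputeRHS, NULL));
      PetscCall(KSPSetFromOptions(ksp));
      PetscCall(KSPSolve(ksp, NULL, NULL));         /* vectors are created from the DM */

      PetscCall(KSPDestroy(&ksp));
      PetscCall(DMDestroy(&da));
      PetscCall(PetscFinalize());
      return 0;
    }

With this structure, multigrid is selected entirely from the command line, for example -pc_type mg -pc_mg_levels 3 -mg_levels_ksp_type chebyshev; the nonlinear case is analogous, using SNESSetDM() and SNESSetFunction() together with -snes_type fas.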
Smoothing on each level of the hierarchy is handled by a KSP held by the PCMG, or in the nonlinear case, a SNES held by SNESFAS. The DM for each level is associated with the smoother using KSPSetDM() and SNESSetDM().

The linear operators which carry out interpolation and restriction (usually of type MATMAIJ) are held by the PCMG/SNESFAS, and generated automatically by the DM using information about the discretization. Below we briefly discuss the different operations:
Interpolation transfers a function from the coarse space to the fine space. We would like this process to be accurate for the functions resolved by the coarse grid, in particular the approximate solution computed there. By default, we create these matrices using local interpolation of the fine grid dual basis functions in the coarse basis. However, an adaptive procedure can optimize the coefficients of the interpolator to reproduce pairs of coarse/fine functions which should approximate the lowest modes of the generalized eigenproblem

\[ A x = \lambda M x, \]

where \(A\) is the system matrix and \(M\) is the smoother. Note that for defect-correction MG, the interpolated solution from the coarse space need not be as accurate as the fine solution, for the same reason that updates in iterative refinement can be less accurate. However, in FAS or in the final interpolation step for each level of Full Multigrid, we must have interpolation as accurate as the fine solution since we are moving the entire solution itself.
Injection should accurately transfer the fine solution to the coarse grid. Accuracy here means that the action of a coarse dual function on either should produce approximately the same result. In the structured grid case, this means that we just use the same values on coarse points. This can result in aliasing.
Restriction is intended to transfer the fine residual to the coarse space. Here we use averaging (often the transpose of the interpolation operation) to damp out the fine space contributions. Thus, it is less accurate than injection, but avoids aliasing of the high modes.
Adaptive Interpolation
For a multigrid cycle, the interpolator \(P\) is intended to accurately reproduce “smooth” functions from the coarse space in the fine space, keeping the energy of the interpolant about the same. For the Laplacian on a structured mesh, it is easy to determine what these low-frequency functions are. They are the Fourier modes. However an arbitrary operator \(A\) will have different coarse modes that we want to resolve accurately on the fine grid, so that our coarse solve produces a good guess for the fine problem. How do we make sure that our interpolator \(P\) can do this?
We first must decide what we mean by accurate interpolation of some functions. Suppose we know the continuum function \(f\) that we care about, and we are only interested in a finite element description of discrete functions. Then the coarse function representing \(f\) is given by

\[ f^C = \sum_i f^C_i \phi^C_i, \]

and similarly the fine grid form is

\[ f^F = \sum_i f^F_i \phi^F_i. \]
Now we would like the interpolant of the coarse representer to the fine grid to be as close as possible to the fine representer in a least squares sense, meaning we want to solve the minimization problem

\[ \min_P \| f^F - P f^C \|_2 . \]
Now we can express \(P\) as a matrix by looking at the matrix elements \(P_{ij} = \phi^F_i P \phi^C_j\). Then we have

\[ \phi^F_i f^F - \phi^F_i P f^C = f^F_i - \sum_j P_{ij} f^C_j, \]

so that our discrete optimization problem is

\[ \min_{P_{ij}} \Big\| f^F_i - \sum_j P_{ij} f^C_j \Big\|_2 , \]

and we will treat each row of the interpolator as a separate optimization problem. We could allow an arbitrary sparsity pattern, or try to determine it adaptively, as is done in sparse approximate inverse preconditioning. However, we know the supports of the basis functions in finite elements, and thus the naive sparsity pattern from local interpolation can be used.
We note here that the BAMG framework of Brannick, et al. [BBKL11] does not use fine and coarse function spaces, but rather a fine point/coarse point division which we will not employ here. Our general PETSc routine should work for both since the input would be the checking set (fine basis coefficients or fine space points) and the approximation set (coarse basis coefficients in the support or coarse points in the sparsity pattern).
We can easily solve the above problem using QR factorization. However, there are many smooth functions from the coarse space that we want interpolated accurately, and a single \(f\) would not constrain the values \(P_{ij}\) well. Therefore, we will use several functions \(\{f_k\}\) in our minimization,
where
or alternatively
We thus have a standard least-squares problem
where
which can be solved using LAPACK.
We will typically perform this optimization on a multigrid level \(l\) when the change in eigenvalue from level \(l+1\) is relatively large, meaning
This indicates that the generalized eigenvector associated with that eigenvalue was not adequately represented by \(P^l_{l+1}\), and the interpolator should be recomputed. | https://docs.petsc.org/en/latest/manual/high_level_mg/ | 2020-11-23T18:32:32 | CC-MAIN-2020-50 | 1606141164142.1 | [] | docs.petsc.org |