If you want to assign custom properties using the code, you can use component.dispatch as follows:
function changeMargin(): void {
    this.marginTop = 5; // Wrong. The property will be set but the UI will not be updated.
    this.dispatch({
        type: "updateUserStyle",
        userStyle: {
            marginTop: 5
        }
    });
    // Perform this when position-related properties are to be changed
    this.layout.applyLayout();
}
To learn more about dispatch and general usage, refer here:
For more technical information about dispatch and applyLayout, refer to the contx and styler repositories made by Smartface.
Manage Access Levels
Change a team member’s access level.
Only the company account owner or a team member with administrator permission can edit a user’s access level.
Edit Access Level
- Go to Settings » Project Configuration and click Manage for Team Access.
- Click edit next to the user’s current access level.
- Select a new access level.
- Click Save.
Permissions per Access Level
compile¶
Purpose¶
Compiles a source file to a compiled code file. See also the Compiler chapter.
Examples¶
compile qxy.e;
In this example, the
source path would be searched for qxy.e, which
would be compiled to a file called
qxy.gcg on the same subdirectory qxy.e was found.
compile qxy.e xy;
In this example, the
source path would be searched for qxy.e which
would be compiled to a file called
xy.gcg on the current subdirectory.
Remarks¶
- The source file will be searched for in the
source path if the full path is not specified and it is not present in the current directory.
- The source file is a regular text file containing a GAUSS program. There can be references to global symbols, Run-Time Library references, etc.
- If there are library statements in source, they will be used during the compilation to locate various procedures and symbols used in the program. Since all of these library references are resolved at compile time, the library statements are not transferred to the compiled file. The compiled file can be run without activating any libraries.
- If you do not want extraneous stuff saved in the compiled image, put a new at the top of the source file or execute a new in interactive mode before compiling.
- The program saved in the compiled file can be run with the run command. If no extension is given, the run command will look for a file with the correct extension for the version of GAUSS. The
source path will be used to locate the file if the full path name is not given and it is not located on the current directory.
- When the compiled file is run, all previous symbols and procedures are deleted before the program is loaded. It is therefore unnecessary to execute a new before running a compiled file.
- If you want line number records in the compiled file you can put a
#lineson statement in the source file or turn line tracking on from the main GAUSS menu.
- Don’t try to include compiled files with
#include.
- GAUSS compiled files are platform and bit-size specific. For example, a file compiled with GAUSS for Windows 64-bit will not run under GAUSS for Windows 32-bit or on Linux 64-bit.
Assignar Forms are one of the most popular tools for capturing any information in a digital format from the field. With Assignar Webhooks module, you can push data captured by Assignar Forms into Assignar GPS Tracking via Zapier webhooks.
In this example, we'll show you how you can capture your vehicle fuel receipts, and push it into Assignar GPS Tracking for analysis.
Step 1.
Create a form in Assignar that has the following fields:
Vehicle name (required)
Date and Time of the refuel event (required)
Number of litres/gallons of fuel that was put in (required)
Optionally, any other field that you want to capture. For example, a photo of the fuel receipt?
Those 3 fields are mandatory to create a Refuel event in Assignar GPS Tracking:
Since Assignar GPS Tracking knows the current odometer reading of the vehicle, it can calculate the fuel consumption for that vehicle by looking at the distance travelled and fuel consumed since the last refuel event.
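For illustration, the consumption figure described above can be reproduced with a simple calculation. The sketch below uses hypothetical field names and metric units; it is not Assignar code:

# Hypothetical sketch of the fuel-consumption calculation described above.
# Field names and units are assumptions, not Assignar's actual schema.
def fuel_consumption_l_per_100km(prev_odometer_km, current_odometer_km, litres_added):
    """Litres per 100 km between two consecutive refuel events."""
    distance_km = current_odometer_km - prev_odometer_km
    if distance_km <= 0:
        raise ValueError("Odometer must increase between refuel events")
    return litres_added / distance_km * 100.0

# Example: 45 L added after driving 520 km since the last refuel
print(round(fuel_consumption_l_per_100km(120000, 120520, 45.0), 2))  # 8.65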
Once you have created your form in Assignar, you need to create a Webhook notification when that form is submitted.
Step 2
Use Assignar GPS Tracking Zapier app, to capture Assignar Refuel webhook notifications and push them into Assignar GPS Tracking platform. Your Zapier steps should look like this:
Of course, you can also customize this workflow to suit your business processes.
Step 3
After you have added a Webhook Step in Zapier, you will be given a Webhook URL, to which you will need to post your Assignar fuel data to. You can do this by adding a new Webhook in Assignar. Go to the Main Menu => Settings => Webhooks. Then select "Forms" in the tab and create a new Webhook:
Step 4
Test your workflow by submitting the form and capturing the webhook event in Zapier. Map the fields with Assignar GPS Tracking platform and activate the Zap.
This example assumes that you have mapped your vehicle IDs from Assignar GPS Tracking to Assignar Asset external IDs.
Now your workflow is live and you can manage your vehicle's Fuel consumption in Assignar Tracking:
Introduction
Apache NiFi Registry-a subproject of Apache NiFi-is a complementary application that provides a central location for storage and management of shared resources across one or more instances of NiFi and/or MiNiFi.
The first implementation of the Registry supports versioned flows. Process group level dataflows created in NiFi can be placed under version control and stored in a registry. The registry organizes where flows are stored and manages the permissions to access, create, modify or delete them.
See the System Administrator’s Guide for information about Registry system requirements, installation, and configuration. Once NiFi Registry is installed, use a supported web browser to view the UI.
Reserving the Master Host for CDS Deployments
- Log into the Cloudera Manager Admin Console.
- Go to the CDSW service.
- Click the Configuration tab.
- Search for the following property: Reserve Master Host, then select the checkbox to enable it.
- Click Save Changes.
- Restart the CDSW service to have this change go into effect.
Please refer to the numbered sections below for information about each numbered component in the following screen:
The “Project Panel” contains all of the files and folders in your project in a tree-based file navigation structure.
"Run on Device" allows you to deploy and run your application instantly on Android or iOS devices wirelessly.
"Share" button on the top right corner of your workspace allows you to share your project and manage the read/write permissions of each collaborator.
"Debugger" allows you to use debugging features for Android devices.
"Properties" allows you to change properties of selected object in the visual design editor.
Project coding is done in the Script Editor. For more information, please refer to the guide about the Script Editor.
With the drag & drop visual editor, you design your applications without coding and view the design on different operating systems and devices. The UI editor creates JavaScript source code.
Each application project is actually a standalone Linux system and almost any Linux command is available in the Terminal. You can also use the Terminal for source control with Git, Mercurial and SVN repositories.
ROLLINGMODE Function
- If an input value is missing or null, it is not factored in the computation. For example, for the first row in the dataset, the rolling mode is computed from whichever values in its window are present.
- See also: MODE Function.
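The following generic sketch (not Trifacta's implementation) shows how a rolling mode that skips missing values behaves; window size and tie-breaking are assumptions:

# Generic sketch of a rolling mode that ignores missing (None) values.
from collections import Counter

def rolling_mode(values, window):
    out = []
    for i in range(len(values)):
        seen = [v for v in values[max(0, i - window + 1): i + 1] if v is not None]
        out.append(Counter(seen).most_common(1)[0][0] if seen else None)
    return out

print(rolling_mode([2, 2, None, 3, 3, 3], window=3))
# [2, 2, 2, 2, 3, 3] -- the None value is simply not factored in;
# the tie at index 3 resolves to the value seen first in the window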
Actions
Actions determine what happens when a user interacts with your App and Web messages.
About actions
With actions, you gain much greater control of your users' experience with your app, and can not only provide them with more relevant, specific content, you can learn from their usage and continue to do so over time.
You set an action for all Push Notifications (messages that can appear on any screen on a mobile device and display as banners), In-App Messages (messages that appear inside of your app and that you can send to your entire app audience, not just users who have opted in to push notifications), and Web Push Notifications (messages that slide into the top right or bottom left corner of your audience’s web browser, depending on the browser; on a mobile device they appear similar to a push notification). You also associate actions with your buttons and images in Rich Page content (a landing page or Message Center message in your app that can include HTML, video, etc.). After selecting an action, you can set up adding or removing Tags (metadata that you can associate with channels or named users for audience segmentation; tags are generally descriptive terms indicating user preferences or other categorizations, e.g., wine_enthusiast or weather_alerts_los_angeles, and are case-sensitive) when the notification, button, or image is pressed or clicked.
Deep Link
Deep Link opens a deep link in your app. A deep link can be defined as a template URL containing named parameters that you fill in when composing the message. An example template URL: yourapp://products/{Product Id}
When you enter this URL in the Airship interface, the form parses it and previews the form your users see in the composer. It automatically identifies “Product Id” as the parameter name, and provides a field to substitute in the actual identifier. So if you had previously entered a product ID of 1872983490 for the above Product ID, the generated URL would be: yourapp://products/1872983490
The interface treats all values for each field as a string.
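For illustration only, the substitution the composer performs is equivalent to something like the sketch below (this is not Airship code; the URL-encoding behaviour is an assumption):

# Illustrative sketch of filling a deep-link template URL.
import re
from urllib.parse import quote

def fill_template(template, params):
    """Replace {Parameter Name} placeholders with string values."""
    return re.sub(r"\{([^{}]+)\}",
                  lambda m: quote(str(params[m.group(1)]), safe=""),
                  template)

print(fill_template("yourapp://products/{Product Id}", {"Product Id": "1872983490"}))
# yourapp://products/1872983490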
Dismiss Message
Dismiss Message closes the notification.
Home
Home opens your app’s home screen. For web push notifications it opens your Default Action URL. You can override the default URL by selecting the Web Page action and entering a different URL.
Landing Page
Landing Page opens a landing page.
See UAActionArguments for more detail on the methods involved with this display behavior.
Message Center
Message Center opens a Message Center message. See: Message Center content.
Availability
For app and web messages, you set an action in the Content step of a composer.
Actions for push notifications and in-app messages:
- Dismiss Message
- Message Center
- Landing Page
- Deep Link
- Adaptive Link
- Web Page
Actions for web push notifications:
- Adaptive Link
- Web Page
When creating landing page and Message Center content, you can assign an action that occurs when a user taps a button or image in the message. Options vary between the Interactive and Visual editors.
Actions in the Interactive editor — See: WYSIWYG editor: Rich Page Actions
- Adaptive Link
- Deep Link
- Web Page
- App Rating
Actions in the Visual editor:
- Deep Link
- URL (Web Page)
Disable actions
You can disable actions (except for Home) from appearing in the composers: Go to Settings » Project Configuration and click Manage for Dashboard Settings, then disable Landing Page, Deep Link, URL, and Add Tags (UA Actions Framework).
Running your CorDapp
Now that you’ve written a CorDapp, it’s time to test it by running it on some real Corda nodes.
Deploying your CorDapp
Let’s take a look at the nodes we are going to deploy, defined in the project’s deployNodes Gradle task:

task deployNodes(type: net.corda.plugins.Cordform, dependsOn: ['jar']) {
    nodeDefaults {
        projectCordapp {
            deploy = false
        }
        cordapp project(':contracts')
        cordapp project(':workflows')
        runSchemaMigration = true
    }
    node {
        name "O=Notary,L=London,C=GB"
        notary = [validating : false]
        p2pPort 10002
        rpcSettings {
            address("localhost:10003")
            adminAddress("localhost:10043")
        }
    }
    node {
        name "O=PartyA,L=London,C=GB"
        p2pPort 10005
        rpcSettings {
            address("localhost:10006")
            adminAddress("localhost:10046")
        }
        rpcUsers = [[ user: "user1", "password": "test", "permissions": ["ALL"]]]
    }
    node {
        name "O=PartyB,L=New York,C=US"
        p2pPort 10008
        rpcSettings {
            address("localhost:10009")
            adminAddress("localhost:10049")
        }
        rpcUsers = [[ user: "user1", "password": "test", "permissions": ["ALL"]]]
    }
}
You can run this
deployNodes task using Gradle. For each node definition, Gradle will:
- Package the project’s source files into a CorDapp jar.
- Create a new node in
build/nodes with your CorDapp already installed.
To do this, run the command that corresponds to your operating system from the root of your project:
- Mac OSX:
./gradlew clean deployNodes
- Windows:
gradlew clean deployNodes
Running the nodes
Running
deployNodes will build the nodes under
build/nodes. If you navigate to one of these folders, you’ll see
the three node folders. Each node folder has the following structure:
.
|____additional-node-infos
|____certificates
|____corda.jar // The runnable node.
|____cordapps
|____djvm
|____drivers
|____logs
|____network-parameters
|____node.conf // The node's configuration file.
|____nodeInfo
|____persistence.mv.db
|____persistence.trace.db
Start the nodes by running the following commands from the root of the project:
- Mac OSX:
build/nodes/runnodes
- Windows:
build/nodes/runnodes.bat
This will start a terminal window for each node. In practice, you’d generally provide a web API sitting on top of our node. Here, for simplicity, you’ll be interacting with the
node via its built-in CRaSH shell.
Go to the terminal window displaying the CRaSH shell of
PartyA. Typing
help will display a list of the available
commands.
You want to create an IOU of 99 with
PartyB. To start the
IOUFlow, type the following syntax:
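For example, the following starts the flow against PartyB (the flow name, value, and party identity match those used earlier in this tutorial; adjust them if your node names differ):

flow start IOUFlow iouValue: 99, otherParty: "O=PartyB,L=New York,C=US"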
You can check the contents of each node’s vault by running:
run vaultQuery contractStateType: com.template.states.IOUState
The vaults of
PartyA and
PartyB should both display the following output:
states:
- state:
    data: !<com.template.states.IOUState>
      value: "99"
      lender: "O=PartyA, L=London, C=GB"
      borrower: "O=PartyB, L=New York, C=US"
    contract: "com.template.contracts.TemplateContract"
    notary: "O=Notary, L=London, C=GB"
    encumbrance: null
    constraint: !<net.corda.core.contracts.SignatureAttachmentConstraint>
  ref:
    txhash: "D189448F05D39C32AAAAE7A40A35F4C96529680A41542576D136AEE0D6A80926"
    index: 0
statesMetadata:
- ref:
    txhash: "D189448F05D39C32AAAAE7A40A35F4C96529680A41542576D136AEE0D6A80926"
    index: 0
  contractStateClassName: "com.template.states.IOUState"
  recordedTime: "2020-10-19T11:09:58.183Z"
  consumedTime: null
  status: "UNCONSUMED"
  notary: "O=Notary, L=London, C=GB"
  lockId: null
  lockUpdateTime: null
  relevancyStatus: "RELEVANT"
  constraintInfo:
    constraint:
      constraint:
totalStatesAvailable: -1
stateTypes: "UNCONSUMED"
otherResults: []
This is the transaction issuing our
IOUState onto a ledger.
However, if you run the same command on the other node (the notary), you will see no IOU states listed, because the notary was not a party to the transaction and does not store the state in its vault.
You have written a simple CorDapp that allows IOUs to be issued onto the ledger. This CorDapp is made up of two key parts:
- The
IOUState, which represents IOUs on the blockchain.
- The
IOUFlow, which orchestrates the process of agreeing the creation of an IOU on-ledger.
After completing this tutorial, your CorDapp should look like this:
- Java:
- Kotlin:
Next steps
There are a number of improvements you could make to this CorDapp:
- You could add unit tests, using the contract-test and flow-test frameworks.
- You could change
IOUState.value from an integer to a proper amount of a given currency.
- You could add an API, to make it easier to interact with the CorDapp.
But for now, the biggest priority is to add an
IOUContract imposing constraints on the evolution of each
IOUState over time - see
Applying contract constraints.
By default, a time series creates samples only if the associated data source has changed. Optionally you can add behavior where a new sample is created if the data source has not changed within a RepeatLastSampleFactor ✕ DataSource.UpdateRate time interval (in milliseconds). The new sample's time will differ by RepeatLastSampleFactor ✕ DataSource.UpdateRate from the current one, and its value will be equal to the current sample value. Thus a combined mode of writing data to the store is implemented:
- generation of a time series sample when the data source changes.
- cycling of the current sample when the data source doesn't change.
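As an illustration of this combined write mode, the sketch below shows the decision logic; the names and structure are assumptions for illustration, not Monokot's implementation:

# Illustrative sketch of the combined write mode described above.
import time

class TimeSeriesWriter:
    def __init__(self, update_rate_ms, repeat_last_sample_factor):
        self.repeat_interval_s = update_rate_ms * repeat_last_sample_factor / 1000.0
        self.last_value = None
        self.last_write_ts = None

    def on_tick(self, value, now=None):
        """Call once per data-source update cycle."""
        now = time.time() if now is None else now
        changed = value != self.last_value
        stale = (self.last_write_ts is not None
                 and now - self.last_write_ts >= self.repeat_interval_s)
        if self.last_write_ts is None or changed or stale:
            self._write_sample(now, value)  # sample on first write, on change, or on repeat timeout
            self.last_value = value
            self.last_write_ts = now

    def _write_sample(self, ts, value):
        print("sample @ %.0f: %s" % (ts, value))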
Web 3.0 has opened an entirely new technological field for developers. A question we often get from developers looking to begin building for the decentralized web is, “Where do I start?”. There are many ways to answer this question. We’ll be exploring one: decentralizing your existing static site.
This guide will highlight two core pieces of the decentralized stack: naming, and storage. In this guide, we’ll be working with the Handshake, and Skynet blockchain protocols to handle these two parts respectively.
The first part of creating a decentralized application involves creating the content you hope for others to access in the first place. Most of the time, this process maps quite closely with traditional development approaches.
If you're just starting off, we recommend choosing a static site generator, and building a single page application that you can then upload to Skynet. We suggest this Netlify guide if you're looking for steps on creating a simple static blog.
If you're just looking to go through the motions of this guide and confirm content is accessible, download the 'public' folder from this Github repo.
Skynet is able to host static websites natively. As long as the directory you’re uploading has an
index.html file, you should be able to upload your site directly to Skynet, and access it via any Skynet portal.
To upload our content, we can use Sia’s user-friendly siasky.net webportal (alternatively, we can use the command-line interface). Once on siasky.net:
Scroll down the page to the upload box, and select “Do you want to upload the entire directory?”
Click “Browse”
Select the folder that contains the contents of your already built app from your computer (this may be a 'dist', ‘public’, or 'out' folder - any will work as long as the folder contains an
index.html file)
The link that's output is now a link to your application, stored on Skynet, which can be accessed through Sia's gateway.
If you used the UI, grab the Skylink to your content. The Skylink is the 46 character string at the end of the link that was returned. The
siasky.net/ in front of the string is one of many portals you can use to access the content living at the Skylink Note this down because you’ll need it later to connect your content to Handshake.
It's important to note that the Skylink is what lets other computers find your content. However, you can use any portal to access that content. For example, the Skynet docs are stored on Skynet, but can be accessed through any Skynet portal.
Your app is now living on the decentralized Internet! However, given the complexities of the links, they're not yet quite human-readable.
Continue on to understand how your application can be fully decentralized beyond just this storage layer.
When using Skynet, each time you redeploy your application, you’ll be returned a new Skylink. A constantly updating link may present challenges as you attempt to share your content online. Additionally, while the application itself is stored on a decentralized storage network, when users go to access your app, the IP address it lives at is still resolved using the traditional Domain Name System.
This can present issues for those looking for a site that's truly censorship-resistant. The first way users interact with any application is via its domain. Handshake allows you to obtain your own Top Level Domain that no one can block access to.
Before continuing, you’ll need a Handshake domain. Namebase makes it easy to purchase your own Handshake domain.
To access the content you have on Skynet via a simple, human-readable URL, other computers must be able to perform two distinct actions:
Find out where the content is (performed via a DNS Lookup)
Access the content once it’s told where the content lives (in our case, the content lives on Skynet which traditional computers can’t access without prior configuration)
To accomplish this, we will:
Set a TXT record with the Skylink for your app
Set an ALIAS or CNAME record (depending on whether you're accessing the app via a bare TLD, or a secondary domain respectively) pointing to a Skynet gateway
When working with Handshake domains, domain records operate similarly to traditional DNS domains. However, rather than using the A record to point a domain to an IP address, we’ll use the nameserver .TXT record to point the domain to our Skylink.
Navigate to the Namebase domain manager for the domain you’re putting your project on at:
Scroll down to the “Namebase nameserver DNS records” section, and click, “Add new record”
Select "TXT" from the dropdown, and enter the record name and your Skylink as the value (for a subdomain, the record name must follow the _contenthash.YOUR_SUBDOMAIN format mentioned later in this guide)
You're done with Part 1! The good news is: any computer that makes a request to your Handshake domain will be directed to the provided Skylink! Continue on to Part 2 to learn how to deal with the next step.
Because most computers aren’t currently setup to natively access files on Skynet, they don't know how to find your app using just a Skylink. To fix this, we’ll need to set either an
ALIAS or
CNAME record pointing to the gateway that computers can use to access your application.
Namebase currently hosts a public gateway for anyone to use at
sia.namebase.io.. However, keep in mind this is experimental and won’t be thoroughly maintained. If you want to create a more reliable, secure site, hosting your own gateway is highly recommended.
If you want to create a site that is truly decentralized, hosting your own gateway is the next step to complete decentralization.
If you're setting content on the bare TLD, create an ALIAS type record under the "Namebase nameserver DNS records" section and set:
Name: @
Value/Data:
sia.namebase.io. (note the trailing period)
TTL: 60 mins
If you're setting content on a subdomain, start by double-checking that the TXT record name you set above is of the format,
_contenthash.YOUR_SUBDOMAIN. Then create a CNAME type record with the following values:
Name:
YOUR_SUBDOMAIN
Value/Data:
sia.namebase.io. (again, note the trailing period)
TTL: 60 mins
Now, as long as your device can resolve Handshake names, you can go to a web browser and view your blog at your Handshake domain (or the subdomain you configured)!
Custom templates help you save time and standardize meetings. Now you can create your own templates and reuse it in your team. Here is how you can start using custom templates.
Saving the Template
To save a template create the draft notes in MeetNotes Editor and click the vertical ellipses in the meeting header.
Select "Save as Template".
Once saved, this template will be accessible to everyone in your Team.
Inserting the Template
Saved templates can be accessed from the left menu. The insert templates icon displays the list of all available templates. Select the saved template that you would like to insert.
The selected template will be inserted at the cursor position in the meeting.
New in version 2.0.
SAX parsers implement the
XMLReader interface. They are implemented in a Python module, which must provide a function
create_parser(). This function is invoked by
xml.sax.make_parser() with no arguments to create a new parser object.
class xml.sax.xmlreader.XMLReader
Base class which can be inherited by SAX parsers.
class xml.sax.xmlreader.IncrementalParser
In some cases, it is desirable not to parse an input source at once, but to feed chunks of the document as they become available. The IncrementalParser interface supports this: data is passed in piecewise with feed(), parsing is finished with close(), and the parser is made ready for a new document with reset() (see the method descriptions below).
class xml.sax.xmlreader.Locator
Interface for associating a SAX event with a document location. A locator object will return valid results only during calls to DocumentHandler methods; at any other time, the results are unpredictable. If information is not available, methods may return
None.
class xml.sax.xmlreader.InputSource([systemId])
Encapsulation of the information needed by the XMLReader to read entities. This may include a public identifier, a system identifier, a byte stream (possibly with character encoding information) and/or a character stream of an entity.
class xml.sax.xmlreader.AttributesImpl(attrs)
An implementation of the Attributes interface (see section The Attributes Interface): a dictionary-like object that represents an element's attributes in a startElement() call. attrs must be a dictionary-like object mapping attribute names to attribute values.
class xml.sax.xmlreader.AttributesNSImpl(attrs, qnames)
Namespace-aware variant of AttributesImpl, passed to startElementNS(). It understands attribute names as two-tuples of namespaceURI and localname, and qnames supplies the qualified names as they appeared in the original document. This class implements the AttributesNS interface (see section The AttributesNS Interface).
XMLReader.parse(source)
Process an input source, producing SAX events. The source object can be a system identifier (a string identifying the input source – typically a file name or a URL), a file-like object, or an
InputSource object. When
parse() returns, the input is completely processed, and the parser object can be discarded or reset. As a limitation, the current implementation only accepts byte streams; processing of character streams is for further study.
XMLReader.getContentHandler()
Return the current
ContentHandler.
XMLReader.setContentHandler(handler)
Set the current
ContentHandler. If no
ContentHandler is set, content events will be discarded.
XMLReader.getDTDHandler()
Return the current
DTDHandler.
XMLReader.setDTDHandler(handler)
Set the current
DTDHandler. If no
DTDHandler is set, DTD events will be discarded.
XMLReader.getEntityResolver()
Return the current
EntityResolver.
XMLReader.setEntityResolver(handler)
Set the current
EntityResolver. If no
EntityResolver is set, attempts to resolve an external entity will result in opening the system identifier for the entity, and fail if it is not available.
XMLReader.getErrorHandler()
Return the current
ErrorHandler.
XMLReader.setErrorHandler(handler)
Set the current error handler. If no
ErrorHandler is set, errors will be raised as exceptions, and warnings will be printed.
XMLReader.setLocale(locale)
Allow an application to set the locale for errors and warnings. SAX parsers are not required to provide localization for errors and warnings; if they cannot support the requested locale, they must raise a SAX exception.
XMLReader.getFeature(featurename)
Return the current setting for feature featurename. If the feature is not recognized,
SAXNotRecognizedException is raised. The well-known featurenames are listed in the module
xml.sax.handler.
XMLReader.setFeature(featurename, value)
Set the featurename to value. If the feature is not recognized,
SAXNotRecognizedException is raised. If the feature or its setting is not supported by the parser, SAXNotSupportedException is raised.
XMLReader.getProperty(propertyname)
Return the current setting for property propertyname. If the property is not recognized, a
SAXNotRecognizedException is raised. The well-known propertynames are listed in the module
xml.sax.handler.
XMLReader.setProperty(propertyname, value)
Set the propertyname to value. If the property is not recognized,
SAXNotRecognizedException is raised. If the property or its setting is not supported by the parser, SAXNotSupportedException is raised.
Instances of
IncrementalParser offer the following additional methods:
IncrementalParser.feed(data)
Process a chunk of data.
IncrementalParser.close()
Assume the end of the document. That will check well-formedness conditions that can be checked only at the end, invoke handlers, and may clean up resources allocated during parsing.
IncrementalParser.reset()
This method is called after close has been called to reset the parser so that it is ready to parse new documents. The results of calling parse or feed after close without calling reset are undefined.
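For example, a parser returned by xml.sax.make_parser() (the default expat-based reader) implements this incremental interface. The sketch below counts title elements while feeding the document in chunks; the input file name and chunk size are arbitrary:

import xml.sax
from xml.sax.handler import ContentHandler

class TitleCounter(ContentHandler):
    def __init__(self):
        ContentHandler.__init__(self)
        self.count = 0
    def startElement(self, name, attrs):
        if name == "title":
            self.count += 1

parser = xml.sax.make_parser()          # the default expat reader is an IncrementalParser
handler = TitleCounter()
parser.setContentHandler(handler)

with open("books.xml", "rb") as f:      # hypothetical input document
    for chunk in iter(lambda: f.read(4096), b""):
        parser.feed(chunk)              # process the document piece by piece
    parser.close()                      # check end-of-document conditions and clean up
print(handler.count)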
Instances of
Locator provide these methods:
Locator.getColumnNumber()
Return the column number where the current event begins.
Locator.getLineNumber()
Return the line number where the current event begins.
Locator.getPublicId()
Return the public identifier for the current event.
Locator.getSystemId()
Return the system identifier for the current event.
InputSource.setPublicId(id)
Sets the public identifier of this
InputSource.
InputSource.getPublicId()
Returns the public identifier of this
InputSource.
InputSource.setSystemId(id)
Sets the system identifier of this
InputSource.
InputSource.getSystemId()
Returns the system identifier of this
InputSource.
InputSource.setEncoding(encoding)
Sets the character encoding of this
InputSource.
The encoding must be a string acceptable for an XML encoding declaration (see section 4.3.3 of the XML recommendation).
The encoding attribute of the
InputSource is ignored if the
InputSource also contains a character stream.
InputSource.getEncoding()
Get the character encoding of this InputSource.
InputSource.setByteStream(bytefile)
Set the byte stream (a Python file-like object that does not perform byte-to-character conversion) for this input source. The parser will ignore this if a character stream is also specified, but it will use a byte stream in preference to opening a URI connection itself.
InputSource.getByteStream()
Get the byte stream for this input source.
The getEncoding method will return the character encoding for this byte stream, or
None if unknown.
InputSource.setCharacterStream(charfile)
Set the character stream for this input source. (The stream must be a Python 1.6 Unicode-wrapped file-like that performs conversion to Unicode strings.)
If there is a character stream specified, the SAX parser will ignore any byte stream and will not attempt to open a URI connection to the system identifier.
InputSource.getCharacterStream()
Get the character stream for this input source.
The Attributes Interface
Attributes objects implement a portion of the mapping protocol, including the methods
copy(),
get(),
has_key(),
items(),
keys(), and
values(). The following methods are also provided:
Attributes.getLength()
Return the number of attributes.
Attributes.getNames()
Return the names of the attributes.
Attributes.getType(name)
Returns the type of the attribute name, which is normally
'CDATA'.
Attributes.getValue(name)
Return the value of attribute name.
The AttributesNS Interface
This interface is a subtype of the
Attributes interface (see section The Attributes Interface). All methods supported by that interface are also available on
AttributesNS objects.
The following methods are also available:
AttributesNS.getValueByQName(name)
Return the value for a qualified name.
AttributesNS.getNameByQName(name)
Return the
(namespace, localname) pair for a qualified name.
AttributesNS.getQNameByName(name)
Return the qualified name for a
(namespace, localname) pair.
AttributesNS.getQNames()
Return the qualified names of all attributes.
© 2001–2017 Python Software Foundation
Licensed under the PSF License.
Set.
Next Steps
- Create an HTML version of the standard response.
- Add an attachment to the standard response.
- Maintain multiple versions of the standard response.
- Create Field Codes to use in your standard responses.
Expressions are used within constraints to generate a value that is true.
There are three types of expressions usable for constraints:
- Comparisons with operators
- Functions
- Exist-expressions
Comparisons
A comparison expression consists of two attributes or values, separated by a comparison operator, like ‘=’, ‘<=’ and ‘>’.
//Sales.Customer[Name = 'Jansen']
This query retrieves all customers whose name is ‘Jansen’.
//Sales.Order[TotalPrice < 50.00]
This query retrieves all orders for which the total price is less than 50.00 euros.
//Sales.Customer[Sales.Customer_Order/Sales.Order/HasPayed = false()]
This query retrieves all customers who have at least one unpaid order.
//Sales.Customer[Name = City]
This query retrieves all customers who have the same name as the city they live in.
//Sales.Customer[Sales.Customer_Order = 'ID_124123512341']
This query retrieves the customer who placed the order with the given unique identification number.
The same result can be retrieved by doing the following query:
//Sales.Customer[Sales.Customer_Order/Sales.Order/ID = 'ID_124123512341']
However, it is strongly recommended not to use this notation. Its execution is inefficient and results in lower performance due to the manner in which it is processed by the database.
Functions
See this page for information on the available functions.
Exist-expressions
The last type of expression, the exist-expression, can be used to check whether a specific association is filled or not.
//Sales.Customer[Sales.Customer_Order/Sales.Order]
This query retrieves all customers who have placed at least one order.
//Sales.Customer[not(Sales.Customer_Order/Sales.Order)]
This query retrieves all customers who have not placed any orders.
This feature is not supported for macOS. Users get a prompt about every 90 to 120 minutes by default when they leave the set of approved IP address ranges. The exact timing depends on the access token expiry duration (60 minutes by default), when their computer last obtained a new access token, and any specific conditional access timeouts put in place.
Changelog for Falcon 0.2.0¶
Breaking Changes¶
- The deprecated util.misc.percent_escape and util.misc.percent_unescape functions were removed. Please use the functions in the util.uri module instead.
- The deprecated function, API.set_default_route, was removed. Please use sinks instead.
- HTTPRangeNotSatisfiable no longer accepts a media_type parameter.
- When using the comma-delimited list convention, req.get_param_as_list(…) will no longer insert placeholders, using the None type, for empty elements. For example, where previously the query string “foo=1,,3” would result in [‘1’, None, ‘3’], it will now result in [‘1’, ‘3’].
New & Improved¶
- Since 0.1 we’ve added proper RTD docs to make it easier for everyone to get started with the framework. Over time we will continue adding content, and we would love your help!
- Falcon now supports “wsgi.filewrapper”. You can assign any file-like object to resp.stream and Falcon will use “wsgi.filewrapper” to more efficiently pipe the data to the WSGI server.
- Support was added for automatically parsing requests containing “application/x-www-form-urlencoded” content. Form fields are now folded into req.params.
- Custom Request and Response classes are now supported. You can specify custom types when instantiating falcon.API.
- A new middleware feature was added to the framework. Middleware deprecates global hooks, and we encourage everyone to migrate as soon as possible.
- A general-purpose dict attribute was added to Request. Middleware, hooks, and responders can now use req.context to share contextual information about the current request.
- A new method, append_header, was added to falcon.API to allow setting multiple values for the same header using comma separation. Note that this will not work for setting cookies, but we plan to address this in the next release (0.3).
- A new “resource” attribute was added to hooks. Old hooks that do not accept this new attribute are shimmed so that they will continue to function. While we have worked hard to minimize the performance impact, we recommend migrating to the new function signature to avoid any overhead.
- Error response bodies now support XML in addition to JSON. In addition, the HTTPError serialization code was refactored to make it easier to implement a custom error serializer.
- A new method, “set_error_serializer” was added to falcon.API. You can use this method to override Falcon’s default HTTPError serializer if you need to support custom media types.
- Falcon’s testing base class, testing.TestBase was improved to facilitate Py3k testing. Notably, TestBase.simulate_request now takes an additional “decode” kwarg that can be used to automatically decode byte-string PEP-3333 response bodies.
- An “add_link” method was added to the Response class. Apps can use this method to add one or more Link header values to a response.
- Added two new properties, req.host and req.subdomain, to make it easier to get at the hostname info in the request.
- Allow a wider variety of characters to be used in query string params.
- Internal APIs have been refactored to allow overriding the default routing mechanism. Further modularization is planned for the next release (0.3).
- Changed req.get_param so that it behaves the same whether a list was specified in the query string using the HTML form style (in which each element is listed in a separate ‘key=val’ field) or in the more compact API style (in which each element is comma-separated and assigned to a single param instance, as in ‘key=val1,val2,val3’)
- Added a convenience method, set_stream(…), to the Response class for setting the stream and its length at the same time, which should help people not forget to set both (and save a few keystrokes along the way).
- Added several new error classes, including HTTPRequestEntityTooLarge, HTTPInvalidParam, HTTPMissingParam, HTTPInvalidHeader and HTTPMissingHeader.
- Python 3.4 is now fully supported.
- Various minor performance improvements
Fixed¶
- Ensure 100% test coverage and fix any bugs identified in the process.
- Fix not recognizing the “bytes=” prefix in Range headers.
- Make HTTPNotFound and HTTPMethodNotAllowed fully compliant, according to RFC 7231.
- Fixed the default on_options responder causing a Cython type error.
- URI template strings can now be of type unicode under Python 2.
- When SCRIPT_NAME is not present in the WSGI environ, return an empty string for the req.app property.
- Global “after” hooks will now be executed even when a responder raises an error.
- Fixed several minor issues regarding testing.create_environ(…)
- Work around a wsgiref quirk, where if no content-length header is submitted by the client, wsgiref will set the value of that header to an empty string in the WSGI environ.
- Resolved an issue causing several source files to not be Cythonized.
- Docstrings have been edited for clarity and correctness.
Starting with System Manager 9.4, you can update single-node clusters. Updating single-node clusters is disruptive, and client data will not be available while the update is in progress.
Obtaining Data ONTAP software images
If you try to perform other tasks from System Manager while updating the node that hosts the cluster management LIF, an error message might be displayed. You must wait for the update to finish before performing any operations.
When the validation is complete and the update is in progress, the update might be paused because of errors. You can click the error message to view the details, and then perform the remedial actions before resuming the update.
Infinispan ships several server modules, some of which can be started by calling the startServer.sh or startServer.bat scripts from the command line. These currently include Hot Rod, Memcached and Web Socket servers. Please find below the set of common command line parameters that can be passed to these servers:
Note that starting with Infinispan 4.2.0.CR1, the default Hot Rod port has changed from 11311 to 11222.
Identifies the selected item type within the component view
Member of Explorer (PRIM_DCBX)
Data Type - Enumeration
The PathType property returns the type of the current focus folder in the Path property.
PathType has many available values. See the Feature Viewer (F2) or autocomplete in the IDE for additional information.
All Component Classes
Technical Reference
February 18 V14SP2
Create a custom template for dotnet new
This tutorial shows you how to:
- Create a basic template from an existing project or a new console app project.
- Pack the template for distribution at nuget.org or from a local nupkg file.
- Install the template from nuget.org, a local nupkg file, or the local file system.
- Uninstall the template.
If you prefer to proceed through the tutorial with a complete sample, download the sample project template. The sample template is configured for NuGet distribution.
If you wish to use the downloaded sample with file system distribution, do the following:
- Move the contents of the content folder of the sample up one level into the GarciaSoftware.ConsoleTemplate.CSharp folder.
- Delete the empty content folder.
- Delete the nuspec file.
Prerequisites
- Install the .NET Core 2.0 SDK or later versions.
- Read the reference topic Custom templates for dotnet new.
Create a template from a project
Use an existing project that you've confirmed compiles and runs, or create a new console app project in a folder on your hard drive. This tutorial assumes that the name of the project folder is GarciaSoftware.ConsoleTemplate.CSharp stored at Documents\Templates in the user's profile. The tutorial project template name is in the format <Company Name>.<Template Type>.<Programming Language>, but you're free to name your project and template anything you wish.
- Add a folder to the root of the project named .template.config.
- Inside the .template.config folder, create a template.json file to configure your template. For more information and member definitions for the template.json file, see the Custom templates for dotnet new topic and the template.json schema at the JSON Schema Store.
{ "$schema": "", "author": "Catalina Garcia", "classifications": [ "Common", "Console" ], "identity": "GarciaSoftware.ConsoleTemplate.CSharp", "name": "Garcia Software Console Application", "shortName": "garciaconsole" }
The template is finished. At this point, you have two options for template distribution. To continue this tutorial, choose one path or the other:
- NuGet distribution: install the template from NuGet or from the local nupkg file, and use the installed template.
- File system distribution.
Use NuGet Distribution
Pack the template into a NuGet package
- Create a folder for the NuGet package. For the tutorial, the folder name GarciaSoftware.ConsoleTemplate.CSharp is used, and the folder is created inside a Documents\NuGetTemplates folder in the user's profile. Create a folder named content inside of the new template folder to hold the project files.
- Copy the contents of your project folder, together with its .template.config/template.json file, into the content folder you created.
Next to the content folder, add a nuspec file. The nuspec file is an XML manifest file that describes a package's contents and drives the process of creating the NuGet package.
Inside of a <packageTypes> element in the nuspec file, include a <packageType> element with a
nameattribute value of
Template. Both the content folder and the nuspec file should reside in the same directory. The table shows the minimum nuspec file elements required to produce a template as a NuGet package.
See the .nuspec reference for the complete nuspec file schema.
The nuspec file for the tutorial is named GarciaSoftware.ConsoleTemplate.CSharp.nuspec and contains the following content:
<?xml version="1.0" encoding="utf-8"?>
<package xmlns="">
  <metadata>
    <id>GarciaSoftware.ConsoleTemplate.CSharp</id>
    <version>1.0.0</version>
    <description>
      Creates the Garcia Software console app.
    </description>
    <authors>Catalina Garcia</authors>
    <packageTypes>
      <packageType name="Template" />
    </packageTypes>
  </metadata>
</package>
Create the package using the
nuget pack <PATH_TO_NUSPEC_FILE>command. The following command assumes that the folder that holds the NuGet assets is at C:\Users\<USER>\Documents\Templates\GarciaSoftware.ConsoleTemplate.CSharp. But wherever you place the folder on your system, the
nuget packcommand accepts the path to the nuspec file:
nuget pack C:\Users\<USER>\Documents\NuGetTemplates\GarciaSoftware.ConsoleTemplate.CSharp\GarciaSoftware.ConsoleTemplate.CSharp.nuspec
Publishing the package to nuget.org
To publish a NuGet package, follow the instructions in the Create and publish a package topic. However, we recommend that you don't publish the tutorial template to NuGet as it can never be deleted once published, only delisted. Now that you have the NuGet package in the form of a nupkg file, we suggest that you follow the instructions below to install the template directly from the local nupkg file.
Install the template from a NuGet package
Install the template from the local nupkg file
To install the template from the nupkg file that you produced, use the
dotnet new command with the
-i|--install option and provide the path to the nupkg file:
dotnet new -i C:\Users\<USER>\GarciaSoftware.ConsoleTemplate.CSharp.1.0.0.nupkg
Install the template from a NuGet package stored at nuget.org
If you wish to install a template from a NuGet package stored at nuget.org, use the
dotnet new command with the
-i|--install option and supply the name of the NuGet package:
dotnet new -i GarciaSoftware.ConsoleTemplate.CSharp
Note
The example is for demonstration purposes only. There isn't a
GarciaSoftware.ConsoleTemplate.CSharp NuGet package at nuget.org, and we don't recommend that you publish and consume test templates from NuGet. If you run the command, no template is installed. However, you can install a template that hasn't been published to nuget.org by referencing the nupkg file directly on your local file system as shown in the previous section Install the template from the local nupkg file.
If you'd like a live example of how to install a template from a package at nuget.org, you can use the NUnit 3 template for dotnet-new. This template sets up a project to use NUnit unit testing. Use the following command to install it:
dotnet new -i NUnit3.DotNetNew.Template
When you list the templates with
dotnet new -l, you see the NUnit 3 Test Project with a short name of nunit in the template list. You're ready to use the template in the next section.
Create a project from the template
After the template is installed from NuGet, you can use it to create new projects. To create a project from the NUnit template, run the following command:
dotnet new nunit
The console shows that the project is created and that the project's packages are restored. After the command is run, the project is ready for use.
To uninstall a template from a NuGet package stored at nuget.org
dotnet new -u GarciaSoftware.ConsoleTemplate.CSharp
Note
The example is for demonstration purposes only. There isn't a
GarciaSoftware.ConsoleTemplate.CSharp NuGet package at nuget.org or installed with the .NET Core SDK. If you run the command, no package/template is uninstalled and you receive the following exception:
Could not find something to uninstall called 'GarciaSoftware.ConsoleTemplate.CSharp'.
If you installed the NUnit 3 template for dotnet-new and wish to uninstall it, use the following command:
dotnet new -u NUnit3.DotNetNew.Template
Uninstall the template from a local nupkg file
When you wish to uninstall the template, don't attempt to use the path to the nupkg file. Attempting to uninstall a template using
dotnet new -u <PATH_TO_NUPKG_FILE> fails. Reference the package by its
id:
dotnet new -u GarciaSoftware.ConsoleTemplate.CSharp.1.0.0
Use file system distribution
To distribute the template, place the project template folder in a location accessible to users on your network. Use the
dotnet new command with the
-i|--install option and specify the path to the template folder (the project folder containing the project and the .template.config folder).
The tutorial assumes the project template is stored in the Documents/Templates folder of the user's profile. From that location, install the template with the following command replacing <USER> with the user's profile name:
dotnet new -i C:\Users\<USER>\Documents\Templates\GarciaSoftware.ConsoleTemplate.CSharp
Create a project from the template
After the template is installed from the file system, you can use it to create new projects.
From a new project folder created at C:\Users\<USER>\Documents\Projects\MyConsoleApp, create a project from the
garciaconsole template:
dotnet new garciaconsole
Uninstall the template
If you created the template on your local file system at C:\Users\<USER>\Documents\Templates\GarciaSoftware.ConsoleTemplate.CSharp, uninstall it with the
-u|--uninstall switch and the path to the template folder:
dotnet new -u C:\Users\<USER>\Documents\Templates\GarciaSoftware.ConsoleTemplate.CSharp
Note
To uninstall the template from your local file system, you need to fully qualify the path. For example, C:\Users\<USER>\Documents\Templates\GarciaSoftware.ConsoleTemplate.CSharp will work, but ./GarciaSoftware.ConsoleTemplate.CSharp from the containing folder will not. Additionally, do not include a final terminating directory slash on your template path.
Using secrets¶
This page shows how to use secrets within your functions for API tokens, passwords and similar.
Using secrets is a two step process. First we need to define the secret in your cluster and then you need to 'use' the secret to your function. You can find a simple example function ApiKeyProtected in the OpenFaaS repo. When we deploy this function we provide a secret key that it uses to authenticate requests.
Creating the secret¶
It is generally easiest to read your secret values from files. For our examples we have created a simple text file
~/secrets/secret_api_key.txt that looks like
R^YqzKzSJw51K9zPpQ3R3N
Now we need to define the secret in the cluster.
Define a secret in Kubernetes¶
In Kubernetes we can leverage the secrets api to safely store our secret values
From the commandline use
kubectl create secret generic secret-api-key --from-file=secret-api-key=~/secrets/secret_api_key.txt --namespace openfaas-fn
Here we have explicitly named the key of the secret value so that when it is mounted into the function container, it will be named exactly
secret-api-key instead of
secret_api_key.txt.
Define a secret in Docker Swarm¶
For sensitive value we can leverage the Docker Swarm Secrets feature to safely store our secret values.
From the command line use
docker secret create secret-api-key ~/secrets/secret_api_key.txt
Use the secret in your function¶
Secrets are mounted as files to
/var/openfaas/secrets inside your function. Using secrets is as simple as adding code to read the value from
/var/openfaas/secrets/secret-api-key.
Note: prior to version
0.8.2 secrets were mounted to
/run/secrets. The example functions demonstrate a smooth upgrade implementation.
A simple
go implementation could look like this
func getAPISecret(secretName string) (secretBytes []byte, err error) {
    // read from the openfaas secrets folder
    secretBytes, err = ioutil.ReadFile("/var/openfaas/secrets/" + secretName)
    if err != nil {
        // read from the original location for backwards compatibility with openfaas <= 0.8.2
        secretBytes, err = ioutil.ReadFile("/run/secrets/" + secretName)
    }
    return secretBytes, err
}
This example comes from the
ApiKeyProtected sample function.
Deploy a function with secrets¶
Now, update your stack file to include the secret:
provider:
  name: faas
  gateway:
functions:
  protectedapi:
    lang: Dockerfile
    skip_build: true
    image: functions/api-key-protected:latest
    secrets:
      - secret-api-key
and then deploy
faas-cli deploy -f ./stack.yaml
Once the deploy is done you can test the function using the cli. The function is very simple, it reads the secret value that is mounted into the container for you and then returns a success or failure message based on if your header matches that secret value. For example,
faas-cli invoke protectedapi -H "X-Api-Key=R^YqzKzSJw51K9zPpQ3R3N"
Resulting in
Unlocked the function!
When you use the wrong api key,
faas-cli invoke protectedapi -H "X-Api-Key=thisiswrong"
You get
Access denied! | https://docs.openfaas.com/reference/secrets/ | 2018-10-15T11:05:25 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.openfaas.com |
You can use vSphere Fault Tolerance (FT) for most mission critical virtual machines. FT provides continuous availability for such a virtual machine by creating and maintaining another VM that is identical and continuously available to replace it in the event of a failover situation.
The protected virtual machine is called the Primary VM. The duplicate virtual machine, the Secondary VM, is created and runs on another host. The Secondary VM's execution is identical to that of the Primary VM and it can take over at any point without interruption, thereby providing fault tolerant protection.
The Primary and Secondary VMs continuously monitor the status of one another to ensure that Fault Tolerance is maintained. VMs...
vSphere Fault Tolerance). If compatibility with these earlier requirements is necessary, you can instead use legacy FT. However, this involves the setting of an advanced option for each VM. See Legacy Fault Tolerance for more information. | https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.avail.doc/GUID-623812E6-D253-4FBC-B3E1-6FBFDF82ED21.html | 2018-10-15T11:19:40 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.vmware.com |
_FACT
Description
This fact table contains a record of the attribute values that applications attach to SDR for reporting purposes. A new row is added for each attribute that is attached (for example, DNIS of the destination phone number). A row is updated when a new value is reported for an existing attribute.
Note that the word "attribute" is misspelled in the database table name.
Hint: For easiest viewing, open the downloaded CSV file in Excel and adjust settings for column widths, text wrapping, and so on as desired. Depending on your browser and other system settings, you might need to save the file to your desktop first.
Column List.
SESSION_ID
The ID of the session assigned by Orchestration Server. This is the primary key of this table. You can use the SESSION_ID to link the SDR_CUST_ATRIBUTES_FACT record with an SDR_SESSION_FACT.
START_DATE_TIME_KEY
Identifies the start of a 15-minute interval in which the activity started. Use this value as a key to join the fact tables to any configured DATE_TIME dimension, in order to group the facts that are related to the same interval and/or convert the START_TS timestamp to an appropriate time zone.
ATRIBUTE_VALUE
The value(s) of the attribute, as provided by the application.
SDR_CUST_ATRIBUTES_KEY
The surrogate key that is used to join the SDR_CUST_ATRIBUTES dimension to the fact tables._CUST_ATRIBUTES_FACT_SDT
No subject area information available.
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/PSAAS/latest/RPRT/Table-SDR_CUST_ATRIBUTES_FACT | 2018-10-15T10:23:23 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.genesys.com |
imply 'hscroll and 'vscroll, respectively, but they cause the corresponding scrollbar to disappear when no scrolling is needed in the corresponding direction; the 'auto-vscroll and 'auto-hscroll modes assume that children subareas are placed using the default algorithm for a panel%, vertical-panel%, or horizontal-panel%. The 'hide-hscroll and 'hide-vscroll styles imply 'auto-hscroll and 'auto-vscroll, respectively, but the corresponding scroll bar is never made visible (while still allowing the panel content to exceed its own<%>.
Changed in version 1.25 of package gui-lib: Added 'hide-vscroll and 'hide-hscroll. | https://docs.racket-lang.org/gui/panel_.html | 2018-10-15T10:17:17 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.racket-lang.org |
Running Tests¶
Test make targets, invoked as
$ make <target>, subject to which
environment variables are set (see Test Environment Variables).
Run all tests (including slow tests):
$ make test
Run only quick tests (as of Sep 18, 2017, this was < 30 minutes):
$ export TOIL_TEST_QUICK=True; make test
Run an individual test with:
$ make test tests=src/toil/test/sort/sortTest.py::SortTest::testSort
The default value for
tests is
"src" which includes all tests in the
src/ subdirectory of the project root. Tests that require a particular
feature will be skipped implicitly. If you want to explicitly skip tests that
depend on a currently installed feature, use:
$ make test tests="-m 'not azure' src"
This will run only the tests that don’t depend on the
azure extra, even if
that extra is currently installed. Note the distinction between the terms
feature and extra. Every extra is a feature but there are features that are
not extras, such as the
gridengine and
parasol features. To skip tests
involving both the Parasol feature and the Azure extra, use the following
$ make test tests="-m 'not azure and not parasol' src"
Running Tests (pytest)¶
Often it is simpler to use pytest directly, instead of calling the
make wrapper.
This usually works as expected, but some tests need some manual preparation.
-
Running tests that make use of Docker (e.g. autoscaling tests and Docker tests) require an appliance image to be hosted. This process first requires Using Docker with Quay. Then to build and host the appliance image run the
maketargets
dockerand
push_dockerrespectively.
-
Running integration tests require setting the environment variableexport TOIL_TEST_INTEGRATIVE=True
To run a specific test with pytest
python -m pytest src/toil/test/sort/sortTest.py::SortTest::testSort
For more information, see the pytest documentation. installation instructions for your system on their website to get started.
When running
make test you might still get the following error:
$ make test Please set TOIL_DOCKER_REGISTRY, e.g. to quay.io/USER.
To solve, make an account with Quay and specify it like so:
$ TOIL_DOCKER_REGISTRY=quay.io/USER make test
where
USER is your Quay username.
For convenience you may want to add this variable to your bashrc by running
$ echo 'export TOIL_DOCKER_REGISTRY=quay.io/USER' >> $HOME/.bashrc
Running Mesos Tests¶
If you’re running Toil’s Mesos tests, be sure to create the virtualenv with
--system-site-packages to include the Mesos Python bindings. Verify this by
activating the virtualenv and running
pip list | grep mesos. On macOS,
this may come up empty. To fix it, run the following:
for i in /usr/local/lib/python2.7/site-packages/*mesos*; do ln -snf $i venv/lib/python2.7/site-packages/; done
Developing with Docker¶
To develop on features reliant on the Toil Appliance (the docker image toil uses for AWS autoscaling), you should consider setting up a personal registry on Quay or Docker Hub. Because the Toil Appliance images are tagged with the Git commit they are based on and because only commits on our master branch trigger an appliance build on Quay, as soon as a developer makes a commit or dirties the working copy they will no longer be able to rely on Toil to automatically detect the proper Toil Appliance image. Instead, developers wishing to test any appliance changes in autoscaling should build and push their own appliance image to a personal Docker registry. This is described in the next section.
Making Your Own Toil Docker Image¶
Here is a general workflow (similar instructions apply when using Docker Hub):
Make some changes to the provisioner of your local version of Toil.
Go to the location where you installed the Toil source code and run:
$ make docker
to automatically build a docker image that can now be uploaded to your personal Quay account. If you have not installed Toil source code yet see Building from Source.
If it’s not already you will need Docker installed and need to log into Quay. Also you will want to make sure that your Quay account is public.
Set the environment variable
TOIL_DOCKER_REGISTRYto your Quay account. If you find yourself doing this often you may want to add:
export TOIL_DOCKER_REGISTRY=quay.io/<MY_QUAY_USERNAME>
to your
.bashrcor equivalent.
Now you can run:
$ make push_docker
which will upload the docker image to your Quay account. Take note of the image’s tag for the next step.
Finally you will need to tell Toil from where to pull the Appliance image you’ve created (it uses the Toil release you have installed by default). To do this set the environment variable
TOIL_APPLIANCE_SELFto the url of your image. For more info see Environment Variables.
Now you can launch your cluster! For more information see Running a Workflow with Autoscaling.
Running a Cluster Locally¶
The Toil Appliance container can also be useful as a test environment since it can simulate a Toil cluster locally. An important caveat for this is autoscaling, since autoscaling will only work on an EC2 instance and cannot (at this time) be run on a local machine.
To spin up a local cluster, start by using the following Docker run command to launch a Toil leader container:
docker run --entrypoint=mesos-master --net=host -d --name=leader --volume=/home/jobStoreParentDir:/jobStoreParentDir quay.io/ucsc_cgl/toil:3.6.0 --registry=in_memory --ip=127.0.0.1 --port=5050 --allocation_interval=500ms
A couple notes on this command: the
-d flag tells Docker to run in daemon mode so
the container will run in the background. To verify that the container is running you
can run
docker ps to see all containers. If you want to run your own container
rather than the official UCSC container you can simply replace the
quay.io/ucsc_cgl/toil:3.6.0 parameter with your own container name.
Also note that we are not mounting the job store directory itself, but rather the location where the job store will be written. Due to complications with running Docker on MacOS, I recommend only mounting directories within your home directory. The next command will launch the Toil worker container with similar parameters:
docker run --entrypoint=mesos-slave --net=host -d --name=worker --volume=/home/jobStoreParentDir:/jobStoreParentDir quay.io/ucsc_cgl/toil:3.6.0 --work_dir=/var/lib/mesos --master=127.0.0.1:5050 --ip=127.0.0.1 —-attributes=preemptable:False --resources=cpus:2
Note here that we are specifying 2 CPUs and a non-preemptable worker. We can
easily change either or both of these in a logical way. To change the number
of cores we can change the 2 to whatever number you like, and to
change the worker to be preemptable we change
preemptable:False to
preemptable:True. Also note that the same volume is mounted into the
worker. This is needed since both the leader and worker write and read
from the job store. Now that your cluster is running, you can run:
docker exec -it leader bash
to get a shell in your leader ‘node’. You can also replace the
leader parameter
with
worker to get shell access in your worker.
Docker-in-Docker issues
If you want to run Docker inside this Docker cluster (Dockerized tools, perhaps),
you should also mount in the Docker socket via
-v /var/run/docker.sock:/var/run/docker.sock.
This will give the Docker client inside the Toil Appliance access to the Docker engine
on the host. Client/engine version mismatches have been known to cause issues, so we
recommend using Docker version 1.12.3 on the host to be compatible with the Docker
client installed in the Appliance. Finally, be careful where you write files inside
the Toil Appliance - ‘child’ Docker containers launched in the Appliance will actually
be siblings to the Appliance since the Docker engine is located on the host. This
means that the ‘child’ container can only mount in files from the Appliance if
the files are located in a directory that was originally mounted into the Appliance
from the host - that way the files are accessible to the sibling container. Note:
if Docker can’t find the file/directory on the host it will silently fail and mount
in an empty directory.
Maintainer’s Guidelines¶
In general, as developers and maintainers of the code, we adhere to the following guidelines:
- We strive to never break the build on master.
-). | https://toil.readthedocs.io/en/3.15.0/contributing/contributing.html | 2018-10-15T10:16:22 | CC-MAIN-2018-43 | 1539583509170.2 | [] | toil.readthedocs.io |
In the Gantry framework, we use the term Feature to mean a specific bit of functionality. Features are flexible enough that they can be used to perform almost any type of logic you would need. The base GantryFeature class contains methods that can be implemented to control how your feature functions. Those methods are:
For WordPress, the equivalent tab is called Gizmos. Many of the features used in Joomla are represented by Widgets on WordPress. You can find out more about WordPress gizmos by visiting the Creating a New Gizmo guide.
isEnabled()
boolean(true / false)
getPosition()
string[current position name]
isInPosition([string $position])
string(position name to get compared with the current position of the feature).
boolean(true / false) if the current position is the same as the argument.
isOrderable()
boolean(true / false)
setPrefix(string $prefix)
string(prefix name - usually the name of the main chain param)
get($param [, $prefixed = true])
string(field name)
boolean(true / false)
mixed(the current value of the field)
init()
render()
finalize()
All core features, and any custom feature you create, should extend this GantryFeature class. To create a new feature of your own, you would just have to create a new file in your
features/ folder that extended the
libraries/gantry/core/gantryfeature.class.php class. It will automatically get picked up by the Gantry framework and be processed. The best way to see what a feature can do for you is to examine a few of the core features located in the
libraries/gantry/features/ folder.
First, let's take a look at
totop.php, one of the core features. As you can imagine, the TopTop feature is intended to display a link at the bottom of your page which provides a smooth-scroll back to the top of the page. The most important part of a feature is the actual feature PHP file. The core features are located in the
libraries/gantry/features/ folder. These should never be touched or changed.
If you want to override the behavior of a core feature, simply copy the core feature in your
/templates/[YOUR_TEMPLATE]/features folder. Gantry will automatically pick up your version of the file and use it rather than the default version if you have created one with the same name. The other part of a feature, and one that is totally optional, is the configuration section. As with other parts of Gantry, the configuration is handled in
template-options.xml.
For the totop feature the section in the
template-options.xml looks like:
<fields name="totop" type="chain" label="TOTOP" description="TOTOP_DESC"> <field name="enabled" type="toggle" default="0" label="SHOW"/> <field name="position" type="position" default="copyright-b" label="POSITION"/> <field name="text" type="text" default="Back to Top" label="TEXT" class="text-long" /> </fields>
This means that there are going to be three fields rendered in the administrator interface. One is a toggle element that will control the 'enabled' state, and the second is position element which controls the position the feature is rendered in. The third field is a text field which allows you to enter custom text. By exposing these elements in the XML, we allow interaction with the user. If you wanted to add new elements in this XML section, you could. They would be available for you to use in your feature's PHP definition.
Next, let's look at the PHP for this feature:
<?php /** * @version $Id: totop.php 2487 2012-08-17 22:04:06Z btowles $ * @author RocketTheme * @copyright Copyright (C) 2007 - ${copyright_year} RocketTheme, LLC * @license GNU/GPLv2 only * * Gantry uses the Joomla Framework (), a GNU/GPLv2 content management system * */ defined('JPATH_BASE') or die(); gantry_import('core.gantryfeature'); /** * @package gantry * @subpackage features */ class GantryFeatureToTop extends GantryFeature { var $_feature_name = 'totop'; function init() { /** @var $gantry Gantry */ global $gantry; if ($this->get('enabled')) { $gantry->addScript('gantry-totop.js'); } } function render($position) { ob_start(); ?> <div class="clear"></div> <div class="rt-block"> <a href="#" id="gantry-totop" rel="nofollow"><?php echo $this->get('text'); ?></a> </div> <?php return ob_get_clean(); } }
As you can see, the there are two methods implemented in the PHP definition of the feature. The first overrides the default
init() method. This is used to setup the feature. In this case, we are simply using it to added some JavaScript that will provide the smooth scrolling. The second method that is used is
render().
This method actually renders out the link and the custom text field that was defined in the administrator interface. The other methods from the base GantryFeature class are not overridden. That means the standard methods to get the enabled state, position, etc. are being used, and are pulling that data from the XML and admin settings. You can see how custom XML fields like text are easily available and are prefixed by the feature name, so you can just use
get->("text") to retrieve the value of of the chained field.
Have a look through all the default features that come with Gantry to see how we achieved a wide variety of functionality with these features. | http://docs.gantry.org/gantry4/advanced/creating-a-new-feature | 2018-10-15T11:40:47 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.gantry.org |
Tracing Calls to Downstream HTTP Web Services Using the X-Ray SDK for Python
When your application makes calls to microservices or public HTTP APIs, you can use the X-Ray SDK for Python to instrument those calls and add the API to the service graph as a downstream service.
To instrument HTTP clients, patch the library that you use to
make outgoing calls. If you use
requests or Python's built in HTTP client, that's all you need to do.
For
aiohttp, also configure the recorder with an async
context.
If you use
aiohttp 3's client API, you also need to configure the
ClientSession's with
an instance of the tracing configuration provided by the SDK.
Example
aiohttp 3
Client API
from aws_xray_sdk.ext.aiohttp.client import aws_xray_trace_config async def foo(): trace_config = aws_xray_trace_config() async with ClientSession(loop=loop, trace_configs=[trace_config]) as session: async with session.get(url) as resp await resp.read()
When you instrument a call to a downstream web API, the X-Ray SDK for Python records a subsegment that contains information about the HTTP request and response. X-Ray uses the subsegment to generate an inferred segment for the remote API.
Example Subsegment for a Downstream HTTP Call
{ "id": "004f72be19cddc2a", "start_time": 1484786387.131, "end_time": 1484786387.501, "name": "names.example.com", "namespace": "remote", "http": { "request": { "method": "GET", "url": "" }, "response": { "content_length": -1, "status": 200 } } }
Example Inferred Segment for a Downstream HTTP Call
{ "id": "168416dc2ea97781", "name": "names.example.com", "trace_id": "1-5880168b-fd5153bb58284b67678aa78c", "start_time": 1484786387.131, "end_time": 1484786387.501, "parent_id": "004f72be19cddc2a", "http": { "request": { "method": "GET", "url": "" }, "response": { "content_length": -1, "status": 200 } }, "inferred": true } | https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-python-httpclients.html | 2018-10-15T10:46:06 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.aws.amazon.com |
. information, see Assess your environment and requirements for deploying Office 365 ProPlus., see Plan your enterprise deployment of,. | https://docs.microsoft.com/en-us/deployoffice/office-2010-end-support-roadmap?redirectSourcePath=%252ffi-fi%252farticle%252foffice-2010-n-tuen-p%2525C3%2525A4%2525C3%2525A4ttymisen-ohje-2a58999c-4d83-4e67-9fde-bc96d487105e | 2018-10-15T10:58:09 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.microsoft.com |
User Guide
Local Navigation
Search This Document
Copy, move, rename, or delete a file
- Do one of the following:
- Find and highlight a file.
- Press the
key.
Next topic: Open a password-protected .pdf file
Previous topic: View properties for a file
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/23782/Move_rename_or_delete_a_file_60_1048889_11.jsp | 2013-12-05T03:28:34 | CC-MAIN-2013-48 | 1386163039002 | [] | docs.blackberry.com |
Inheritance Mapping
===================
Doctrine currently offers two supported methods of inheritance
which are Single Collection Inheritance and Collection Per Class
Inheritance.
Mapped Superclasses
-------------------
An mapped superclass is an abstract or concrete class that provides
persistent document state and mapping information for its
subclasses, but which is not itself a document. Typically, the
purpose of such a mapped superclass is to define state and mapping
information that is common to multiple document classes.
Mapped superclasses, just as regular, non-mapped classes, can
appear in the middle of an otherwise mapped inheritance hierarchy
(through Single Collection Inheritance or Collection Per Class
Inheritance).
.. note::
A mapped superclass cannot be a document and is not query able.
Example:
.. configuration-block::
.. code-block:: php | http://docs.doctrine-project.org/projects/doctrine-mongodb-odm/en/latest/_sources/reference/inheritance-mapping.txt | 2013-12-05T03:28:14 | CC-MAIN-2013-48 | 1386163039002 | [] | docs.doctrine-project.org |
Welcome to the Actors Migration Kit project! This project aims to help users migrate their code from Scala Actors to Akka. This project consists from the code in the Scala 2.10.0 release, Akka, and code in this project. The Actor Migration Guide will explain how to migrate from Scala Actors to Akka. Scaladoc API can also be helpful.
You can find the library on the maven central repository.
groupId: org.scala-lang artifactId: scala-actors-migration_${scala-version} version: 1.0.0 | http://docs.scala-lang.org/actors-migration/ | 2013-12-05T03:39:16 | CC-MAIN-2013-48 | 1386163039002 | [] | docs.scala-lang.org |
User Guide
Local Navigation
Search This Document
Change the size of a column
In a spreadsheet, do one of the following:
-.
- To change the column size for all spreadsheets, press the
key > Options. Change the Column Width field. Press the
key > Save.
Next topic: Set display options for a spreadsheet
Previous topic: View the content of a cell
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/36023/Change_the_size_of_a_column_60_1049127_11.jsp | 2013-12-05T03:40:02 | CC-MAIN-2013-48 | 1386163039002 | [] | docs.blackberry.com |
Select Menus → [name of the menu] from the drop-down menu on the back-end of your Joomla! installation (for example, Menus → Main Menu). Then click New to create a new menu item or click on an existing item to edit.: | http://docs.joomla.org/index.php?title=Help25:Menus_Menu_Item_Article_Archived&diff=66367&oldid=66360 | 2013-12-05T03:46:45 | CC-MAIN-2013-48 | 1386163039002 | [] | docs.joomla.org |
scipy.special.betainc¶
- scipy.special.betainc(a, b, x) = <ufunc 'betainc'>¶
Compute the incomplete beta integral of the arguments, evaluated from zero to x:
gamma(a+b) / (gamma(a)*gamma(b)) * integral(t**(a-1) (1-t)**(b-1), t=0..x).
Notes
The incomplete beta is also sometimes defined without the terms in gamma, in which case the above definition is the so-called regularized incomplete beta. Under this definition, you can get the incomplete beta by multiplying the result of the scipy function by beta(a, b). | http://docs.scipy.org/doc/scipy-dev/reference/generated/scipy.special.betainc.html | 2013-12-05T03:26:51 | CC-MAIN-2013-48 | 1386163039002 | [] | docs.scipy.org |
An Act to renumber and amend 74.11 (4), 74.11 (7), 74.11 (8), 74.11 (10) (a), 74.12 (6), 74.12 (7) and 74.12 (8); to amend 74.12 (9) (a) and 74.69 (1); and to create 74.11 (4) (b), 74.11 (7) (b), 74.11 (8) (b), 74.11 (10) (a) 2., 74.12 (6) (b), 74.12 (7) (b), 74.12 (8) (b) and 74.12 (9) (am) of the statutes; Relating to: due dates for paying property taxes. (FE) | http://docs.legis.wisconsin.gov/2019/proposals/ab141 | 2019-05-19T14:24:06 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.legis.wisconsin.gov |
GemFire XD provides the capability to store partitioned table data in a Hadoop Distributed File System (HDFS). Using HDFS as a persistent store enables you to capture high rates of table updates in HDFS for later processing in Hadoop. Or, you can choose to manage very large tables in GemFire XD--tables much larger than can be managed in memory, even when overflowing to GemFire XD disk stores.. | http://gemfirexd.docs.pivotal.io/docs/1.4.0/userguide/disk_storage/persist-hdfs.html | 2019-05-19T15:43:49 | CC-MAIN-2019-22 | 1558232254889.43 | [] | gemfirexd.docs.pivotal.io |
System configuration / Sending the data / Sending from Windows operating systems / ProxyServerContainerDownload as PDF
ProxyServerContainer
ProxyServerContainer is an application that receives logs from local applications and sends them to Devo, by opening a TCP and UDP port in the local direction (localhost).
The applications send the events to one of the ports and then the ProxiServerContainer handles the sending to Devo servers.
If the connection with Devo is not possible, the received events would be stored and re-sent when the connection is available.
ProxyServerContainer sends every 30 seconds a trail with the following information:
- eventdate - date when the event occurs
- environment - the environment that the event is referring to (production, pre-production,etc...)
- eventProcessing
- applications - name of the application's executable that it is executing for the tracking
- seq - identifier for each execution of ProxyServerContainer (an unique identifier)
Requirements
There are two versions of ProxyServerContainer:
- ProxyServerContainer20 needs .NET Framework 2.0.
- ProxyServerContainer needs .NET Frameworl 4.0.
ProxyServerContainer executes in Windows versions where .NET Framework 4 executes.
At the moment, ProxyServerContainer has been tested in the following Windows versions:
- Windows XP SP3
- Windows 7 SP1
- Windows 8
- Windows Server 2003
- Windows Server 2008
Downloads
ProxyServer is part of the package Devo agents for Windows.
Configuration
- Unzip the file DevoAgentsAutumn14.7z. The password is 1234.
Modify the following lines in the ProxyServerContainer.Settings file.
<add key="SendingIpAddress" value="eu.public.relay.logtrust.net" /> <add key="SendingPort" value="443" /> <add key="SendingSecure" value="true"/> <add key="CertiticateSubjectDistinguishedName" value="CN=XXXX, O=LogTrust, L=Madrid, S=Madrid, C=SP"/>
- Replace the XXXX in the CN value with the name of the Devo account.
- Install the ProxyServerContainer.
- Execute the command prompt as an Administrator - ProxyServerContainer.exe -i
- Open services.msc and check if the service ProxyServerService@10010 has started.
In case you want to use the proxy, apart from the above configuration procedure, you should also add the following lines in the ProxyServerContainer.Settings file:
<!-- Proxy --> <add key="ProxyHost" value="" /> <add key="ProxyPort" value="" /> <add key="ProxyUserName" value="" /> <add key="ProxyPassword" value="" /> <add key="ProxyType" value="" />
- The ProxyType can have one of the following values:
- "none"
- "http"
- "socks4"
- "socks4a"
- "socks5"
Please note that the supported authentication is the basic authentication.
Related articles | https://docs.devo.com/confluence/docs/system-configuration/sending-the-data/sending-from-windows-operating-systems/proxyservercontainer | 2019-05-19T15:25:32 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.devo.com |
Grove LCD¶
Overview¶
This sample displays an incrementing counter through the Grove LCD, with changing backlight.
Requirements¶
To use this sample, the following hardware is required:
- Arduino 101 or Quark D2000 Devboard
- Grove LCD module
- Grove Base Shield [Optional]
Wiring¶
You will need to connect the Grove LCD via the Grove shield onto a board that supports Arduino shields.
On some boards you will need to use 2 pull-up resistors (10k Ohm) between the SCL/SDA lines and 3.3V.
Note
The I2C lines on Quark SE Sensor Subsystem does not have internal pull-up, so external one is
This sample should work on any board that has I2C enabled and has an Arduino shield interface. For example, it can be run on the Quark D2000 DevBoard as described below:
# On Linux/macOS cd $ZEPHYR_BASE/samples/grove/lcd mkdir build && cd build # On Windows cd %ZEPHYR_BASE%\samples\grove\lcd mkdir build & cd build cmake -GNinja -DBOARD=quark_d2000_crb .. ninja flash | https://docs.zephyrproject.org/latest/samples/display/grove_display/README.html | 2019-05-19T15:08:51 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.zephyrproject.org |
How to create a Payment backend¶
Payment backends must be listed in settings.SHOP_PAYMENT_BACKENDS
Shop interface¶
While we could solve this by defining a superclass for all payment backends, the better approach to plugins is to implement inversion-of-control, and let the backends hold a reference to the shop instead.
The reference interface for payment backends is located at
Currently, the shop interface defines the following methods:
Common with shipping¶
PaymentAPI.
get_order_for_id(id)¶
Returns an
Orderobject given a unique identifier (this is the reverse of
get_order_unique_id())
Specific to payment¶
PaymentAPI.
confirm_payment(order, amount, transaction_id, save=True)¶
This should be called when the confirmation from the payment processor was called and that the payment was confirmed for a given amount. The processor’s transaction identifier should be passed too, along with an instruction to save the object or not. For instance, if you expect many small confirmations you might want to save all of them at the end in one go (?). Finally the payment method keeps track of what backend was used for this specific payment.
Backend interface¶
The payment backend should define the following interface for the shop to be able do to anything sensible with it:
Attributes¶
Methods¶
PaymentBackend.
__init__(shop)¶
must accept a “shop” argument (to let the shop system inject a reference to it) | https://django-shop.readthedocs.io/en/latest/howto/how-to-payment.html | 2019-05-19T14:38:33 | CC-MAIN-2019-22 | 1558232254889.43 | [] | django-shop.readthedocs.io |
What are the limitations of PPTP in pfSense¶
Warning
PPTP in general has many limitations, especially from a security standpoint. It should not be used no matter how strongly a client pushes to have it enabled.
There are limitations of PPTP in pfSense, due to limitations in the NAT
capabilities of
pf.
Only one client can connect to a given PPTP server on the Internet simultaneously. 10 clients can connect to 10 different servers, but only a single simultaneous connection can exist to a single remote server. | https://docs.netgate.com/pfsense/en/latest/vpn/what-are-the-limitations-of-pptp-in-pfsense.html | 2019-05-19T14:27:32 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.netgate.com |
Available only in PRO Edition
Webix DataTable features Excel-like area selection. You can click on any cell in the DataTable and move the mouse pointer over the grid, a block of cells will be selected and colored in gray.
When you release the mouse pointer, the selection will remain and marked with bold border with a handle, like this:
To enable area selection in DataTable, you should specify the arease, data:small_film_set }
Area selection will work only with other selection types disabled. So, the select property shouldn't be set.
In order to refresh selected area, you can use the refreshSelectArea method.
$$("dtable").refreshSelectArea();
Related sample: Area Selection
You can apply custom area selection in the DataTable.
For this purpose, you need to use the addSelectArea method. This method allows creating a custom select area.
$$("dtable").addSelectArea(start,end,preserve);
The parameters are:
The first three parameters are mandatory, all others are optional.
You can easily remove an unnecessary select area by using the removeSelectArea method.
$$("dtable").removeSelectArea();
To remove some particular select area, you need to pass its name as a parameter of the removeSelectArea(). If the name isn't passed to the method, it will remove the last unnamed select area.
To get a select area, you should make use of the getSelectArea method. The method returns the object of the select area.
var area = $$("dtable").getSelectArea();
The object of a certain select area can be received by passing the name of the area as a parameter. Without parameters, the method returns the object of the last select area.
The returned object will contain the mandatory parameters: start, end and preserve. It can also include the optional parameters: area_name, css and handle. The details on the parameters are given here.
Several areas can be selected in the DataTable at once. The image below illustrates this feature:
To enable multiple selection, you need to define the multise, multiselect:true, data:small_film_set }
While having several select areas in the datatable, you can get all of them at once. For this purpose, apply the getAllSelectAreas method:
var areas = $$("dtable").getAllSelectAreas();
The method returns an object that contains configuration objects of all select areas in the datatable. The parameters of area objects are described above.
Related sample: Area Selection
There are several useful keyboard shortcuts that you can use for area selection. | https://docs.webix.com/datatable__area_selection.html | 2019-05-19T15:05:25 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.webix.com |
smp_svr enables support for the following command groups:
-
fs_mgmt
-
img_mgmt
-
os_mgmt
-
stat_mgmt
Caveats¶
- The Zephyr port of
smp_svris configured to run on a Nordic nRF52x MCU. The application should build and run for other platforms without modification, but the file system management commands will not work. To enable file system management for a different platform, adjust the
CONFIG_FS_NFFS_FLASH_DEV_NAMEsetting in
prj.confaccordingly.
-.
Building a BLE Controller (optional)¶ Running¶
The below steps describe how to build and run the
smp_svr sample in
Zephyr. Where examples are given, they assume the sample is being built for
the Nordic nRF52 Development Kit (
BOARD=nrf52_pca10040).
If you would like to use a more constrained platform, such as the nRF51 DK, you
should use the
prj_tiny.conf configuration file rather than the default
prj.conf.
Step 1: Build MCUboot¶
Build MCUboot by following the instructions in the MCUboot documentation page.
Step 2: Flash MCUboot¶
Flash the resulting image file to address 0x0 of flash memory. This can be done in multiple ways.
Using make or ninja:
make flash # or ninja flash
Using GDB:
restore <path-to-mcuboot-zephyr.bin> binary 0
Step 3: Build smp_svr¶
smp_svr can be built for the nRF52 as follows:
# On Linux/macOS cd $ZEPHYR_BASE/samples/subsys/mgmt/mcumgr/smp_svr mkdir -p build/nrf52_pca10040 && cd build/nrf52_pca10040 # On Windows cd %ZEPHYR_BASE%\samples\subsys\mgmt\mcumgr\smp_svr mkdir build\nrf52_pca10040 & cd build\nrf52_pca10040 # Use cmake to configure a Ninja-based build system: cmake -GNinja -DBOARD=nrf52_pca10040 ../.. # Now run ninja on the generated build system: ninja
Step 4: Sign the image¶
Note
From this section onwards you can use either a binary (
.bin) or an
Intel Hex (
.hex) image format. This is written as
(bin|hex) in this
document.
Using MCUboot’s
imgtool.py script, sign the
zephyr.(bin|hex)
file you built in Step 3. In the below example, the MCUboot repo is located at
~/src/mcuboot.
~/src/mcuboot/scripts/imgtool.py sign \ --key ~/src/mcuboot/root-rsa-2048.pem \ --header-size 0x200 \ --align 8 \ --version 1.0 \ --slot-size <image-slot-size> \ <path-to-zephyr.(bin|hex)> signed.(bin|hex)
The above command creates an image file called
signed.(bin|hex) in the
current directory.
Step 5: Flash the smp_svr image¶
Upload the
signed.(bin|hex) file from Step 4 to image slot-0 of your
board. The location of image slot-0 varies by board, as described in
MCUboot Partitions. For the nRF52 DK, slot-0 is located at address
0xc000.
Using
nrfjprog you don’t need to specify the slot-0 starting address,
since
.hex files already contain that information:
nrfjprog --program <path-to-signed.hex>
Using GDB:
restore <path-to-signed.bin> binary 0xc000
Step 6: Run it!¶
Note
If you haven’t installed
mcumgr yet, then do so by following the
instructions in the Command-line Tool section of the Management subsystem
documentation.
Note
The
mcumgr command-line tool requires a connection string in order
to identify the remote target device. In this sample we use a BLE-based
connection string, and you might need to modify it depending on the
BLE controller you are using.
Step 7: Device Firmware Upgrade¶
Now that the SMP server is running on your board and you are able to communicate
with it using
mcumgr, you might want to test what is commonly called
“OTA DFU”, or Over-The-Air Device Firmware Upgrade.
To do this, build a second sample (following the steps below) to verify it is sent over the air and properly flashed into slot-1, and then swapped into slot-0 by MCUboot.
Build a second sample¶
Perhaps the easiest sample to test with is the samples/hello_world sample provided by Zephyr, documented in the Hello World section.
Edit samples/hello_world/prj.conf and enable the required MCUboot Kconfig option as described in MCUboot by adding the following line to it:
CONFIG_BOOTLOADER_MCUBOOT=y
Then build the sample as usual (see Hello World).
Sign the second sample¶
Next you will need to sign the sample just like you did for
smp_svr,
since it needs to be loaded by MCUboot.
Follow the same instructions described in Step 4: Sign the image,
but this time you must use a
.bin image, since
mcumgr does not
yet support
.hex files.
Upload the image over BLE¶
Now we are ready to send or upload the image over BLE to the target remote device.
sudo mcumgr --conntype ble --connstring ctlr_name=hci0,peer_name='Zephyr' image upload signed.bin
If all goes well the image will now be stored in slot-1, ready to be swapped into slot-0 and executed. --conntype ble --connstring ctlr_name=hci0,peer_name='Zephyr' image list
This should print the status and hash values of each of the images present.
Test the image¶
In order to instruct MCUboot to swap the images we need to test the image first, making sure it boots:
sudo mcumgr --conntype ble --connstring ctlr_name=hci0,peer_name='Zephyr' image test <hash of slot-1 image>
Now MCUBoot will swap the image on the next reset.
Reset remotely¶
We can reset the device remotely to observe (use the console output) how MCUboot swaps the images:
sudo mcumgr --conntype ble --connstring ctlr_name=hci0,peer_name='Zephyr' reset
Upon reset MCUboot will swap slot-0 and slot-1.
The new image is the basic
hello_world sample that does not contain
SMP or BLE functionality, so we cannot communicate with it using
mcumgr. Instead simply reset the board manually to force MCUboot
to revert (i.e. swap back the images) due to the fact that the new image has
not been confirmed.
If you had instead built and uploaded a new image based on
smp_svr
(or another BLE and SMP enabled sample), you could confirm the
new image and make the swap permanent by using this command:
sudo mcumgr --conntype ble --connstring ctlr_name=hci0,peer_name='Zephyr' image confirm
Note that if you try to send the very same image that is already flashed in slot-0 then the procedure will not complete successfully since the hash values for both slots will be identical. | https://docs.zephyrproject.org/latest/samples/subsys/mgmt/mcumgr/smp_svr/README.html | 2019-05-19T14:36:00 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.zephyrproject.org |
Zoom All
Clicking the ZoomAll button (fifth button from the top on the right hand side vertical toolbar) will zoom the contents such that all the objects fit fully onto the screen.
The objects that are taken into consideration are also objects that are currently hidden.
Press F1 inside the application to read context-sensitive help directly in the application itself
← ∈
Last modified: le 2019/04/13 07:40 | http://docs.teamtad.com/doku.php/actzoomall | 2019-05-19T15:28:11 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.teamtad.com |
Oxygen XML Web Author
Welcome to Acrolinx for Oxygen XML Web Author!
You can use Acrolinx for Oxygen XML Web Author to include content quality checks into your CMS workflow. Here are the main steps to get you working comfortably with Acrolinx for Oxygen XML Web Author.
The Sidebar Card Guide is also worth having close to hand.
You can Check
Acrolinx for Oxygen XML Web Author checks your XML and DITA documents. Currently, DITA map checking isn't supported. XML Web Author.
Release Notes
Take a look at our release notes to learn more about the development of Acrolinx for Oxygen XML Web Author. | https://docs.acrolinx.com/oxwa/latest/en | 2019-05-19T14:40:11 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.acrolinx.com |
Beta 6.0
The latest version of AURIN’s flagship facility, the ‘Portal’, went live on the 3rd of April. This version has seen the implementation of many improvements ‘under-the-hood’ to harden and scale the existing infrastructure and optimize code and attend to bugs. The user-interface is unchanged from Beta-5A but new data and improved tools and visualisations have been released.
Data
Indicator datasets
Tools and visualisations
A number of tools have been added or improved:
- Spatial Lag Residual Plot
- Spatial Lag Response Plot
- 3D Scatter Plot
- SequenceNumbers
- Dataset Attribute Filter
- Economic Prosperity Index
- Employment Vulnerability Index
- Age Aggregation
- Spatial Aggregation
- Network Analysis (Estimate)
- Network Analysis (Goodness of Fit)
- Network Analysis (Simulate)
- Generate JSONGraph (geocoded addresses)
Interoperability of tools has also been improved with enhanced data sharing and spatialisation capability. | https://docs.aurin.org.au/beta-release-notes/beta-6/ | 2019-05-19T15:18:52 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.aurin.org.au |
Define a primary service
As part of the Splunk App for PCI Compliance, primary functions are defined as one or more of the following items:
- Running Process (process name)
- Installed Service (service name)
- Listening Port (transport/port combination)
Primary functions are defined in a Splunk lookup table (
SA-EndpointProtection/lookups/primary_functions.csv). This lookup table contains three separate primary keys (one for service, process, and transport/port respectively). The remainder of the header determines whether or not the function is primary and what that function is. This results in the following CSV header:
process,service,transport,port,is_primary,function
Function names are arbitrary, but we recommend the following:
Application (name, for instance "Tomcat") Authentication Database Domain Name Service (DNS) Mail Proxy Network Time Protocol (NTP) Web
The
SA-EndpointProtection/lookups/primary_functions.csv file contains examples that come with the Splunk App for PCI Compliance.
Lookups
Primary functions running on a system are determined by comparing the defined primary functions with the running processes, installed services, and listening ports found on a system.
- Running processes are found in the "
localprocesses_tracker"
- Services are found in the "
services_tracker"
- Listening ports are found in the "
listeningports_tracker"
For example, the following search examines the "
localprocesses_tracker for primary functions":
| inputlookup append=T localprocesses_tracker | `get_primary_function(process)` | rename app as process
Compliance Managers may want to use multiple services and/or processes to determine the primary function of a system. This is easily done as long as the function name is consistent among applications in the stack.
To do this, you will need to define a primary service. You can have several service names that represent an application stack but a single function. In the
SA-EndpointProtection/lookups/primary_functions.csv file identify all of the services and/or processes associated with the primary function with the same function name.
For example:
The following search simulates a system running these services to show how they result in a single function:
| head 1 | stats count | eval service="apple|banana|carrot" | `makemv(service)` | rename service as app | mvexpand app | `get_primary_function(service)` | stats dc(function)
This search will result in a
dc(function) ==! | https://docs.splunk.com/Documentation/PCI/3.7.2/User/Defineaprimaryservice | 2019-05-19T15:11:05 | CC-MAIN-2019-22 | 1558232254889.43 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
Spring Session provides an API and implementations for managing a user’s session information.
Introduction
Spring Session provides an API and implementations for managing a user’s session information. It also provides transparent integration with:essionalive when receiving WebSocket messages
Samples and Guides (Start Here)
If you are looking to get started with Spring Session, the best place to start is our Sample Applications.
HttpSession Integration
Spring Session provides transparent integration with
HttpSession.
This means that developers can switch the
HttpSession implementation out with an implementation that is backed by Spring Session.
Why Spring Session & HttpSession?
We have already mentioned that Spring Session provides transparent integration with
HttpSession, but what benefits do we get out of this? Offloading session state to an external store such as Redis lets sessions survive application restarts and be shared across a cluster of application instances, and it is also what enables the multiple browser session support and the RESTful API support described later in this document.
HttpSession with Redis
Using Spring Session with
HttpSession is enabled by adding a Servlet Filter before anything that uses the
HttpSession.
You can choose from enabling this using either:
Redis Java Based Configuration
This section describes how to use Redis to back
HttpSession using Java based configuration.
Spring Java Configuration

After adding the required dependencies, we can create our Spring configuration. The Spring configuration is responsible for creating a Servlet Filter that replaces the HttpSession implementation with an implementation backed by Spring Session. Add the following Spring Configuration:

@Configuration
@EnableRedisHttpSession (1)
public class Config {

    @Bean
    public JedisConnectionFactory connectionFactory() {
        return new JedisConnectionFactory(); (2)
    }
}

(1) The @EnableRedisHttpSession annotation creates a Spring Bean named springSessionRepositoryFilter that implements Filter. This filter is in charge of replacing the HttpSession implementation with one backed by Spring Session (in this case, Redis).
(2) We create a RedisConnectionFactory that connects Spring Session to the Redis server; by default it connects to localhost on the default port (6379).
Java Servlet Container Initialization

In order for our Filter to do its magic, Spring needs to load our Config class. Last, we need to ensure that our Servlet Container (i.e. Tomcat) uses our springSessionRepositoryFilter for every request. Fortunately, Spring Session provides a utility class named AbstractHttpSessionApplicationInitializer that makes both of these steps extremely easy.
You can find an example below:
public class Initializer extends AbstractHttpSessionApplicationInitializer { (1)

    public Initializer() {
        super(Config.class); (2)
    }
}
Redis XML Based Configuration
This section describes how to use Redis to back
HttpSession using XML based configuration.
Spring XML Configuration
After adding the required dependencies, we can create our Spring configuration.
The Spring configuration is responsible for creating a Servlet Filter that replaces the
HttpSession implementation with an implementation backed by Spring Session.
Add the following Spring Configuration:
(1)
<context:annotation-config/>
<bean class="org.springframework.session.data.redis.config.annotation.web.http.RedisHttpSessionConfiguration"/>

(2)
<bean class="org.springframework.data.redis.connection.jedis.JedisConnectionFactory"/>
XML Servlet Container Initialization

In order for our Filter to do its magic, we need to instruct Spring to load our session.xml configuration. The ContextLoaderListener reads the contextConfigLocation context parameter and picks up our session.xml configuration.
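A typical way to load it is with Spring's standard ContextLoaderListener in web.xml. The snippet below is a sketch; the file name and location (/WEB-INF/spring/session.xml) are assumptions and should be adjusted to wherever your session.xml actually lives:

<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>
        /WEB-INF/spring/session.xml
    </param-value>
</context-param>

<listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>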
Last we need to ensure that our Servlet Container (i.e. Tomcat) uses our
springSessionRepositoryFilter for every request.
The following snippet performs this last step for us:
<filter>
    <filter-name>springSessionRepositoryFilter</filter-name>
    <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
</filter>
<filter-mapping>
    <filter-name>springSessionRepositoryFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
The DelegatingFilterProxy will look up a Bean by the name of
springSessionRepositoryFilter and cast it to a
Filter.
For every request that
DelegatingFilterProxy is invoked, the
springSessionRepositoryFilter will be invoked.
How HttpSession Integration Works
Fortunately, both HttpSession and HttpServletRequest (the API for obtaining an HttpSession) are interfaces.
This means that we can provide our own implementations for each of these APIs.
First we create a custom
HttpServletRequest that returns a custom implementation of
HttpSession is overridden.
All other methods are implemented by
HttpServletRequestWrapper and simply delegate to the original
HttpServletRequest implementation.
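A simplified sketch of such a wrapper is shown below. The class name and method bodies are illustrative only, not the exact Spring Session source (the real wrapper lives inside SessionRepositoryFilter):

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletRequestWrapper;
import javax.servlet.http.HttpSession;

public class SessionRepositoryRequestWrapper extends HttpServletRequestWrapper {

    public SessionRepositoryRequestWrapper(HttpServletRequest original) {
        super(original);
    }

    public HttpSession getSession() {
        return getSession(true);
    }

    public HttpSession getSession(boolean create) {
        // the real implementation returns an HttpSession backed by Spring Session's
        // SessionRepository; this sketch only shows which methods are overridden
        throw new UnsupportedOperationException("sketch only");
    }

    // all other methods are inherited from HttpServletRequestWrapper and
    // delegate to the original HttpServletRequest
}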
We then pass the custom HttpServletRequest implementation into the FilterChain. By passing our custom HttpServletRequest implementation into the FilterChain, we ensure that anything invoked after our Filter uses the custom HttpSession implementation.
This highlights why it is important that Spring Session’s
SessionRepositoryFilter must be placed before anything that interacts with the
HttpSession.
Multiple HttpSessions in Single Browser
Spring Session has the ability to support multiple sessions in a single browser instance. This provides the ability to support authenticating with multiple users in the same browser instance (i.e. Google Accounts).
Let’s take a look at how Spring Session keeps track of multiple sessions.
Managing a Single Session
Spring Session keeps track of the
HttpSession by adding a value to a cookie named SESSION.
For example, the SESSION cookie might have a value of:
7e8383a4-082c-4ffe-a4bc-c40fd3363c5e
Adding a Session
We can add another session by requesting a URL that contains a special parameter in it. By default the parameter name is _s. For example, requesting a URL whose _s parameter is set to an alias that is not already in use creates a new session for that alias.
Rather than creating the URL ourselves, we can utilize the
HttpSessionManager to do this for us.
We can obtain the
HttpSessionManager from the
HttpServletRequest using the following:
HttpSessionManager sessionManager = (HttpSessionManager) httpRequest.getAttribute(HttpSessionManager.class.getName());
We can now use it to create a URL to add another session.
String addAlias = unauthenticatedAlias == null ? (1)
        sessionManager.getNewSessionAlias(httpRequest) : (2)
        unauthenticatedAlias; (3)
String addAccountUrl = sessionManager.encodeURL(contextPath, addAlias); (4)
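The resulting addAccountUrl can then be exposed to the view layer and rendered as a link that the user clicks to start an additional session. For example, a JSP might (hypothetically) render it as:

<a href="${addAccountUrl}" class="addAccount">Add Account</a>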
Now our SESSION cookie looks something like this:
0 7e8383a4-082c-4ffe-a4bc-c40fd3363c5e 1 1d526d4a-c462-45a4-93d9-84a39b6d44ad
Such that:
There is a session with the id 7e8383a4-082c-4ffe-a4bc-c40fd3363c5e
The alias for this session is 0. For example, a URL whose _s parameter is set to 0 would use this session.
This is the default session. This means that if no session alias is specified, then this session is used. For example, a URL without the _s parameter would use this session.
There is a session with the id 1d526d4a-c462-45a4-93d9-84a39b6d44ad
The alias for this session is 1. If the session alias is 1, then this session is used. For example, a URL whose _s parameter is set to 1 would use this session.
Automatic Session Alias Inclusion with encodeURL
The nice thing about specifying the session alias in the URL is that we can have multiple tabs open with different active sessions. The bad thing is that we need to include the session alias in every URL of our application. Fortunately, Spring Session will automatically include the session alias in any URL that passes through HttpServletResponse#encodeURL(java.lang.String)
This means that if you are using standard tag libraries the session alias is automatically included in the URL. For example, if we are currently using the session with the alias of 1, then the following:
<c:url value="/link.jsp" var="linkUrl"/>
<a id="navLink" href="${linkUrl}">Link</a>
will output a link of:
<a id="navLink" href="/link.jsp?_s=1">Link</a>
HttpSession & RESTful APIs
Spring Session can work with RESTful APIs by allowing the session to be provided in a header.
Spring Configuration
After adding the required dependencies, we can create our Spring configuration.
The Spring configuration is responsible for creating a Servlet Filter that replaces the
HttpSession implementation with an implementation backed by Spring Session.
Add the following Spring Configuration:
@Configuration
@EnableRedisHttpSession (1)
public class HttpSessionConfig {

    @Bean
    public JedisConnectionFactory connectionFactory() {
        return new JedisConnectionFactory(); (2)
    }
}
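To have clients exchange the session id in an HTTP header instead of a cookie, you typically also expose an HttpSessionStrategy bean. The bean below is a sketch of the usual approach in Spring Session 1.x; HeaderHttpSessionStrategy uses the x-auth-token header by default:

@Bean
public HttpSessionStrategy httpSessionStrategy() {
    // clients send and receive the session id in the "x-auth-token" header
    return new HeaderHttpSessionStrategy();
}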
WebSocket Integration
Spring Session provides transparent integration with Spring’s WebSocket support.
Why Spring Session & WebSockets?

When using WebSockets, the HttpSession can expire even while the user is actively exchanging WebSocket messages, because those messages do not pass through the HttpSession. Spring Session's WebSocket integration keeps the HttpSession alive when WebSocket messages are received and closes any open WebSocket connections when the session is destroyed.
WebSocket Usage
The WebSocket Sample provides a working sample on how to integrate Spring Session with WebSockets. You can follow the basic steps for integration below, but you are encouraged to follow along with the detailed WebSocket Guide when integrating with your own application:
HttpSession Integration
Before using WebSocket integration, you should be sure that you have HttpSession Integration working first.

Spring Configuration

To hook Spring Session into Spring's WebSocket messaging support, make sure your WebSocket configuration extends AbstractSessionWebSocketMessageBrokerConfigurer, as in the following example:

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig
        extends AbstractSessionWebSocketMessageBrokerConfigurer<ExpiringSession> { (1)

    protected void configureStompEndpoints(StompEndpointRegistry registry) { (2)
        registry.addEndpoint("/messages")
                .withSockJS();
    }
}
API Documentation
Session
A
Session is a simplified
Map of name value pairs.
Typical usage might look like the following:
public class RepositoryDemo<S extends Session> {

    private SessionRepository<S> repository; (1)

    public void demo() {
        S toSave = repository.createSession(); (2)

        User rwinch = new User("rwinch"); (3)
        toSave.setAttribute(ATTR_USER, rwinch);

        repository.save(toSave); (4)

        S session = repository.getSession(toSave.getId()); (5)

        User user = session.getAttribute(ATTR_USER); (6)
        assertThat(user).isEqualTo(rwinch);
    }

    // ... setter methods ...
}
ExpiringSession
An
ExpiringSession extends a
Session by providing attributes related to the
Session instance’s expiration.
If there is no need to interact with the expiration information, prefer using the more simple
Session API.
Typical usage might look like the following:
public class ExpiringRepositoryDemo<S extends ExpiringSession> {

    private SessionRepository<S> repository; (1)

    public void demo() {
        S toSave = repository.createSession(); (2)
        // ...
        toSave.setMaxInactiveIntervalInSeconds(30); (3)

        repository.save(toSave); (4)

        S session = repository.getSession(toSave.getId()); (5)
        // ...
    }

    // ... setter methods ...
}
SessionRepository
A
SessionRepository is in charge of creating, retrieving, and persisting
Session instances.
If possible, developers should not interact directly with a
SessionRepository or a
Session.
Instead, developers should prefer interacting with
SessionRepository and
Session indirectly through the HttpSession and WebSocket integration.
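In practice this means ordinary Servlet code keeps using the familiar HttpSession API while Spring Session transparently persists the data through the configured SessionRepository. A minimal illustration (the servlet class and attribute names are made up for this example):

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class ExampleServlet extends HttpServlet {

    protected void doGet(HttpServletRequest request, HttpServletResponse response) {
        // backed by Spring Session because SessionRepositoryFilter wrapped the request
        HttpSession session = request.getSession();
        session.setAttribute("username", "rwinch");
    }
}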
RedisOperationsSessionRepository
RedisOperationsSessionRepository is a
SessionRepository that is implemented using Spring Data’s
RedisOperations.
In a web environment, this is typically used in combination with
SessionRepositoryFilter.
The implementation supports
SessionDestroyedEvent through
SessionMessageListener.
Instantiating a RedisOperationsSessionRepository
A typical example of how to create a new instance can be seen below:
JedisConnectionFactory factory = new JedisConnectionFactory();
SessionRepository<? extends ExpiringSession> repository = new RedisOperationsSessionRepository(factory);
For additional information on how to create a
RedisConnectionFactory, refer to the Spring Data Redis Reference.
Storage Details
Each session is stored in Redis as a Hash. Each session is set and updated using the HMSET command. An example of how each session is stored can be seen below.
HMSET spring:session:sessions:<session-id> creationTime 1404360000000 \
    maxInactiveInterval 1800 lastAccessedTime 1404360000000 \
    sessionAttr:<attrName> someAttrValue sessionAttr:<attrName2> someAttrValue2
Session Expiration
An expiration is associated to each session using the EXPIRE command based upon the RedisOperationsSessionRepository.RedisSession.getMaxInactiveInterval(). For example:
EXPIRE spring:session:sessions:<session-id> 1800
Spring Session relies on the expired and delete keyspace notifications from Redis to fire a SessionDestroyedEvent.
It is the
SessionDestroyedEvent that ensures resources associated with the Session are cleaned up.
For example, when using Spring Session’s WebSocket support the Redis expired or delete event is what triggers any WebSocket connections associated with the session to be closed.
One problem with this approach is that Redis makes no guarantee of when the expired event will be fired if the key has not been accessed. Specifically, the background task that Redis uses to clean up expired keys is a low priority task and may not trigger the key expiration. For additional details, see the Timing of expired events section in the Redis documentation.
To circumvent the fact that expired events are not guaranteed to happen, we can ensure that each key is accessed when it is expected to expire. This means that if the TTL is expired on the key, Redis will remove the key and fire the expired event when we try to access the key.
For this reason, each session expiration is also tracked to the nearest minute. This allows a background task to access the potentially expired sessions to ensure that Redis expired events are fired in a more deterministic fashion. For example:
SADD spring:session:expirations:<expire-rounded-up-to-nearest-minute> <session-id> EXPIRE spring:session:expirations:<expire-rounded-up-to-nearest-minute> 1860
The background task will then use these mappings to explicitly request each key. By accessing they key, rather than deleting it, we ensure that Redis deletes the key for us only if the TTL is expired.
Optimized Writes
The
Session instances managed by
RedisOperationsSessionRepository keeps track of the properties that have changed and only updates those.
This means if an attribute is written once and read many times we only need to write that attribute once.
For example, assume the session attribute "sessionAttr2" from earlier was updated.
The following would be executed upon saving:
HMSET spring:session:sessions:<session-id> sessionAttr:<attrName2> newValue EXPIRE spring:session:sessions:<session-id> 1800
SessionDestroyedEvent
RedisOperationsSessionRepository supports firing a
SessionDestroyedEvent whenever a
Session is deleted or when it expires.
This is necessary to ensure resources associated with the
Session are properly cleaned up.
For example, when integrating with WebSockets the
SessionDestroyedEvent is in charge of closing any active WebSocket connections.
Firing a
SessionDestroyedEvent is made available through the
SessionMessageListener which listens to Redis Keyspace events.
In order for this to work, Redis Keyspace events for Generic commands and Expired events needs to be enabled.
For example:
redis-cli config set notify-keyspace-events Egx
If you are using
@EnableRedisHttpSession the
SessionMessageListener and enabling the necessary Redis Keyspace events is done automatically.
However, in a secured Redis enviornment the config command is disabled.
This means that Spring Session cannot configure Redis Keyspace events for you.
To disable the automatic configuration add
ConfigureRedisAction.NO_OP as a bean.
For example, Java Configuration can use the following:
@Bean public static ConfigureRedisAction configureRedisAction() { return ConfigureRedisAction.NO_OP; }
XML Configuraiton can use the following:
<util:constant
Viewing the Session in Redis
After installing redis-cli, you can inspect the values in Redis using the redis-cli. For example, enter the following into a terminal:
$ redis-cli redis 127.0.0.1:6379> keys * 1) "spring:session:sessions:4fc39ce3-63b3-4e17-b1c4-5e1ed96fb021" (1) 2) "spring:session:expirations:1418772300000" (2)
You can also view the attributes of each session.
redis 127.0.0.1:6379> hkeys spring:session:sessions:4fc39ce3-63b3-4e17-b1c4-5e1ed96fb021 1) "lastAccessedTime" 2) "creationTime" 3) "maxInactiveInterval" 4) "sessionAttr:username" redis 127.0.0.1:6379> hget spring:session:sessions:4fc39ce3-63b3-4e17-b1c4-5e1ed96fb021 sessionAttr:username "\xac\xed\x00\x05t\x00\x03rob"
MapSessionRepository
The
MapSessionRepository allows for persisting
ExpiringSession in a
Map with the key being the
ExpiringSession id and the value being the
ExpiringSession.
The implementation can be used with a
ConcurrentHashMap as a testing or convenience mechanism.
Alternatively, it can be used with distributed
Map implementations. For example, it can be used with Hazelcast.
Instantiating MapSessionRepository
Creating a new instance is as simple as:
SessionRepository<? extends ExpiringSession> repository = new MapSessionRepository();
Using Spring Session and Hazlecast
The Hazelcast Sample is a complete application demonstrating using Spring Session with Hazelcast. To run it use the following:
./gradlew :samples:hazelcast:tomcatRun are:
Java 5+
If you are running in a Servlet Container (not required), Servlet 2.5+
If you are using other Spring libraries (not required), the minimum required version is Spring 3.2.14. While we re-run all unit tests against Spring 3.2.x, we recommend using the latest Spring 4.x version when possible.
@EnableRedisHttpSessionrequires Redis 2.8+. This is necessary to support Session Expiration | https://docs.spring.io/spring-session/docs/1.0.2.BUILD-SNAPSHOT/reference/html5/ | 2019-05-19T14:42:39 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.spring.io |
Python Packaging User Guide¶
Welcome to the Python Packaging User Guide, a collection of tutorials and references to help you distribute and install Python packages with modern tools.
This guide is maintained on GitHub by the Python Packaging Authority. We happily accept any contributions and feedback. 😊
Essential tools and concepts for working within the Python development ecosystem are covered in our Tutorials section:
- to learn how to install packages, see the tutorial on installing packages.
- to learn how to manage dependencies in a version controlled project, see the tutorial on managing application dependencies.
- to learn how to package and distribute your projects, see the tutorial on packaging and distributing
- to get an overview of packaging options for Python libraries and applications, see the Overview of Python Packaging.
Learn more¶
Beyond our Tutorials, this guide has several other resources:
- the Guides section for walk throughs, such as Installing pip/setuptools/wheel with Linux Package Managers or Packaging binary extensions
- the Discussions section for in-depth references on topics such as Deploying Python applications or pip vs easy_install
- the PyPA specifications section for packaging interoperability specifications
Additionally, there is a list of other projects maintained by members of the Python Packaging Authority. | https://python-packaging-user-guide.readthedocs.io/ | 2019-05-19T15:10:24 | CC-MAIN-2019-22 | 1558232254889.43 | [] | python-packaging-user-guide.readthedocs.io |
Gets or sets the command executed when an end-user taps a row within the grid. This is a bindable property.
Namespace: DevExpress.Mobile.DataGrid
Assembly: DevExpress.Mobile.Grid.v18.2.dll
This documentation topic describes legacy technology. We no longer develop new functionality for the GridControl and suggest that you use the new DataGridView control instead.
To define an action to be performed when the grid's row is tapped, you can implement a command and bind it to the grid using the RowTapCommand property, or handle the GridControl.RowTap event.
A data source object index is passed to the RowTapCommand command as a parameter. | https://docs.devexpress.com/Xamarin/DevExpress.Mobile.DataGrid.GridControl.RowTapCommand | 2019-05-19T14:25:44 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.devexpress.com |
1GB of memory or more is a prerequisite of installing Ghost to production. If this space isn’t available, it is possible to configure a larger amount of swap memory.
Use the following commands one by one:
dd if=/dev/zero of=/var/swap bs=1k count=1024k mkswap /var/swap swapon /var/swap echo '/var/swap swap swap defaults 0 0' >> /etc/fstab
If the last command fails with "Permission denied" (this can happen on a fresh Amazon EC2 instance), try this instead:
echo '/var/swap swap swap defaults 0 0' | sudo tee -a /etc/fstab | https://docs.ghost.org/faq/adding-swap-memory/ | 2019-05-19T15:12:13 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.ghost.org |
Is it possible to schedule a YO via the API
Solved!Posted in General by david pichsenmeister Sat Mar 28 2015 20:04:52 GMT+0000 (UTC)·1·Viewed 1,871 times
I would like to send a schedule YO to all me subscribers like it's possible in my account dashboard. Is there any (inofficial, work in progress,...) API function to achieve that? If not, will something like that come soon? Thanks.
Sorry but scheduled Yos are no longer supported in the dashboard nor the api.
Or Arbel marked this as solved | http://docs.justyo.co/discuss/5517096416a294230084a9aa | 2019-05-19T15:09:56 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.justyo.co |
An Act to create 66.10015 of the statutes; Relating to: the effect of changes in requirements for development-related permits or authorizations on persons who apply for the permits or authorizations.
Amendment Histories
2013 Wisconsin Act 74 (PDF: )
2013 Wisconsin Act 74: LC Act Memo
Bill Text (PDF: )
LC Amendment Memo
SB314 ROCP for Committee on Government Operations, Public Works, and Telecommunications On 11/6/2013 (PDF: )
LC Bill Hearing Materials
Wisconsin Ethics Commission information
2013 Assembly Bill 386 - Tabled | https://docs.legis.wisconsin.gov/2013/proposals/sb314 | 2019-05-19T15:24:44 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.legis.wisconsin.gov |
Media for Windows Phone
[ This article is for Windows Phone 8 developers. If you’re developing for Windows 10, see the latest documentation. ]
You can incorporate audio and video in various ways into your Windows Phone apps.
Note
Any application that incorporates media must adhere to the App certification requirements for Windows Phone.
This topic contains the following sections.
- Consuming media in a Windows Phone app
- Integrating with the Music + Videos Hub
- Capturing input from the microphone
- Connecting to FM radio
- Building a DRM-enabled Application
- Related Topics
Consuming media in a Windows Phone app
You can use the following features to consume media in a Windows Phone application:
Use the MediaPlayerLauncher class for Windows Phone to embed audio or video using the device media player. As a best practice, use the MediaPlayerLauncher class with XNA applications.
Use the MediaElement class to embed audio or video using a more customizable interface. For an example of how to use this API in your application, see How to play or stream a video file for Windows Phone 8. As a best practice, use the MediaElement API with Windows Phone apps.
Use the MediaStreamSource class for adaptive streaming solutions.
Use Microsoft.Phone.BackgroundAudio to create a media application that will continue playing audio when another application is in the foreground. For more information, see How to play background audio for Windows Phone 8.
Add sound effects using XNA Game Studio in apps that target Windows Phone OS 7.1. For an example of how to add a sound effect, see Making Sounds with XNA Game Studio.
Windows Phone supports a wide range of audio, video, and image codecs. For a full list, including maximum capabilities for each codec, see Supported media codecs for Windows Phone 8.
Integrating with the Music + Videos Hub
Windows Phone apps can integrate closely with the Music + Videos Hub. When you implement this integration, your app name is displayed in the Music + Videos Hub. Additionally, you can choose to display a history of what has been played in your application, including the most recent item that was played. For more information, see How to integrate with the Music and Videos Hub for Windows Phone 8.
You can also use several Launcher APIs to access Store. For more information, see Launchers for Windows Phone 8.
Capturing input from the microphone
Use the Microphone class from the XNA Framework to get audio input from the Windows Phone microphone.
Connecting to FM radio
You can access FM radio stations in apps that target Windows Phone OS 7.1. For more information, see How to set up and tune the FM radio for Windows Phone 8.
Note
The 7x27a processor does not allow accessing the FM radio and microphone simultaneously.
Building a DRM-enabled Application
Windows Phone supports PlayReady Digital Rights Management (DRM). For more information, see Digital Rights Management (DRM) for Windows Phone 8.
See Also
Other Resources
Supported media codecs for Windows Phone 8 | https://docs.microsoft.com/en-us/previous-versions/windows/apps/ff402550%28v%3Dvs.105%29 | 2019-05-19T14:56:05 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.microsoft.com |
/
function returns num as an array in binary
It does this with the 0th digit being on the right
>>> from sympy.physics.quantum.shor import arr >>> arr(5, 4) [0, 1, 0, 1]
This applies the continued fraction expansion to two numbers x/y
x is the numerator and y is the denominator
>>> from sympy.physics.quantum.shor import continued_fraction >>> continued_fraction(3, 8) [0, 2, 1, 2]
Finds the period of a in modulo N arithmetic
This is quantum part of Shor’s algorithm.It takes two registers, puts first in superposition of states with Hadamards so: |k>|0> with k being all possible choices. It then does a controlled mod and a QFT to determine the order of a.
This function implements Shor’s factoring algorithm on the Integer N
The algorithm starts by picking a random number (a) and seeing if it is coprime with N. If it isn’t, then the gcd of the two numbers is a factor and we are done. Otherwise, it begins the period_finding subroutine which finds the period of a in modulo N arithmetic. This period, if even, can be used to calculate factors by taking a**(r/2)-1 and a**(r/2)+1. These values are returned. | https://docs.sympy.org/0.7.2-py3k/modules/physics/quantum/shor.html | 2019-05-19T14:42:12 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.sympy.org |
The VMware Identity Manager machine accesses the cloud application catalog and other Web services on the Internet. If your network configuration provides Internet access through an HTTP proxy, you must adjust your proxy settings on the VMware Identity Manager machine.
Enable your proxy to handle only Internet traffic. To ensure that the proxy is set up correctly, set the parameter for internal traffic to no-proxy within the domain.
Procedure
- Log in to the VMware Identity Manager console and navigate to the Appliance Settings > VA Configuration page.
- Click Manage Configuration and then click Proxy Configuration.
- Enable Proxy.
- In Proxy host with port text box, enter the proxy name and port. For example, proxyhost.example.com:3128
- In the Non-Proxied hosts text box, enter the non-proxy hosts that are accessed without going through the proxy server.
Use a comma to separate a list of host names.
- Click Save. | https://docs.vmware.com/en/VMware-Identity-Manager/3.3/vidm_windows_install/GUID-E51F7A26-315D-4D92-B778-F0B9E96C4E78.html | 2019-05-19T14:20:08 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.vmware.com |
3.4.16. Logs¶
- class
buildbot.process.log.
Log¶
This class handles write-only access to log files from running build steps. It does not provide an interface for reading logs - such access should occur directly through the Data API.
Instances of this class can only be created by the
addLogmethod of a build step.
decoder¶
A callable used to decode bytestrings. See
logEncoding.
subscribe(receiver)¶
receiverto be called with line-delimited chunks of log data. The callable is invoked as
receiver(stream, chunk), where the stream is indicated by a single character, or None for logs without streams. The chunk is a single string containing an arbitrary number of log lines, and terminated with a newline. When the logfile is finished,
receiverwill be invoked with
Nonefor both arguments.
The callable cannot return a Deferred. If it must perform some asynchronous operation, it will need to handle its own Deferreds, and be aware that multiple overlapping calls may occur.
Note that no “rewinding” takes place: only log content added after the call to
subscribewill be supplied to
receiver.
In use, callers will receive a subclass with methods appropriate for the log type:
- class
buildbot.process.log.
TextLog¶
addContent(text):
Add the given data to the log. The data need not end on a newline boundary.
- class
buildbot.process.log.
StreamLog¶
This class handles logs containing three interleaved streams: stdout, stderr, and header. The resulting log maintains data distinguishing these streams, so they can be filtered or displayed in different colors. This class is used to represent the stdio log in most steps. | http://docs.buildbot.net/html/developer/cls-log.html | 2017-10-17T07:56:20 | CC-MAIN-2017-43 | 1508187820930.11 | [] | docs.buildbot.net |
The Urban 8+
- Android: Android SDK installed and updated (requires Android MinSdkVersion = 16)
Setup
Download the latest plugin
and import the
unitypackage into the unity project:
Open Assets -> Import Package -> Custom Package.
Configure Urban Airship Settings:
Open Window -> Urban Airship -> Settings and set the Urban Airship settings.
After generating a project for iOS, enable Push Notifications in the project editor’s Capabilities pane:
Notification Service Extension
In order to take advantage of iOS 10 notification attachments, such as images, animated gifs, and video, you will need to create a notification service extension by following the iOS Notification Service Extension Guide.
Send Your First Push Notification
UAirship.Shared.UserNotificationsEnabled = true;
string channelId = UAirship.Shared.
Enabling User Notifications
UAirship.Shared.UserNotificationsEnabled = true;.
Listening for Events
OnChannelUpdated:
UAirship.Shared.OnChannelUpdated += (string channelId) => { Debug.Log ("Channel updated: " + channelId); };
OnDeepLinkReceived:
UAirship.Shared.OnDeepLinkReceived += (string deeplink) => { Debug.Log ("Received deep link: " + deeplink); };
OnPushReceived:
UAirship.Shared.OnPushReceived += (PushMessage message) => { Debug.Log ("Received push! " + message.Alert); };
Available events:
- OnChannelUpdated
- Event when channel registration updates.
- OnDeepLinkReceived
- Event when a new deep link is available. The app should navigate to the proper page when the event is received.
- OnPushReceived
- Event when a push is received.
Addressing Devices
To help target specific devices or users for a notification, we have Tags, Named Users and Tag Groups.
// Add tag UAirship.Shared.AddTag ("some-tag"); // Remove tag UAirship.Shared.RemoveTag ("other-tag"); // Get tags IEnumerable<string> tags = UAirship.Shared.Tags;
Tags allow you to attribute arbitrary metadata to a specific device. Common examples include favorites such as sports teams or news story categories.
Named Users
UAirship.Shared.NamedUserId = "coolNamedUserId";
Associating the channel with a Named User ID, will implicitly disassociate the channel from the previously associated Named User ID, if it plugin will log an error. In order to change this setting, see the Settings documentation.
Tag Groups
Channel Tag Group Example:
UAirship.Shared.EditChannelTagGroups () .AddTag ("loyalty", "silver-member") .RemoveTag ("loyalty", "bronze-member") .Apply ();
Named User Tag Group Example:
UAirship.Shared.EditNamedUserTagGroups () .AddTag ("loyalty", "silver-member") .RemoveTag ("loyalty", "bronze-member")
Associating an identifier:
UAirship.Shared.AssociateIdentifier ("some key", "some identifier");.
Message Center
The default message center can be displayed at any time.
UAirship.Shared.DisplayMessageCenter ();
Urban Airship Message Center is a place in your app where you can display persistent rich messages, including HTML, video, etc. The messages are hosted by Urban Airship, and are typically displayed in standard inbox-style within your app. | https://docs.urbanairship.com/platform/unity/ | 2017-10-17T07:53:44 | CC-MAIN-2017-43 | 1508187820930.11 | [array(['https://docs.urbanairship.com/images/unity-assets-import-package.png',
None], dtype=object)
array(['https://docs.urbanairship.com/images/unity-ua-settings.png', None],
dtype=object)
array(['https://docs.urbanairship.com/images/unity-ua-config.png', None],
dtype=object)
array(['https://docs.urbanairship.com/images/ios-enable-push-notifications.png',
None], dtype=object) ] | docs.urbanairship.com |
Alloy Discovery Enterprise Tools and Resources
The Alloy Discovery Enterprise 8 suite includes main components and auxiliary tools (utilities, configuration files, and resources). The main components can be viewed and launched from the Alloy Control Panel.
See the table below for list of basic auxiliary tools with descriptions and file locations.
NOTE: Depending on the installation options, the set of available components, tools, and resources may vary. | https://docs.alloysoftware.com/alloydiscovery/8/docs/installguide/installguide/appendix/tools-and-resources.htm | 2021-02-24T22:38:29 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.alloysoftware.com |
Welcome to the new Amazon S3 User Guide! The Amazon S3 User Guide combines information and instructions from the three retired guides: Amazon S3 Developer Guide, Amazon S3 Console User Guide, and Amazon S3 Getting Started Guide.
Uploading objects using presigned URLs.
All objects and buckets by default are private. The presigned URLs are useful if you want your user/customer to be able to upload a specific object to your bucket, but you don't require them to have AWS security credentials or permissions.
When you create a presigned URL, you must provide your security credentials and then specify a bucket name, an object key, an HTTP method (PUT for uploading objects), and an expiration date and time. The presigned URLs are valid only for the specified duration. That is, you must start the action before the expiration date and time. If the action consists of multiple steps, such as a multipart upload, all steps must be started before the expiration, otherwise you will receive an error when Amazon S3 attempts to start a step with an expired URL.
You can use the presigned URL multiple times, up to the expiration date and time.
Presigned URL access
Since presigned URLs grant access to your Amazon S3 buckets to whoever has the URL, we recommend that you protect them appropriately. For more details about protecting presigned URLs, see Limiting presigned URL capabilities.
Anyone with valid security credentials can create a presigned URL. However, for you to successfully upload an object, the presigned URL must be created by someone who has permission to perform the operation that the presigned URL is based upon.
Generate a presigned URL for object upload
You can generate a presigned URL programmatically using the AWS SDK for Java, .NET,
Ruby, PHP,
Node.js, and Python
If you are using Microsoft Visual Studio, you can also use AWS Explorer to generate a presigned object URL without writing any code. Anyone who receives a valid presigned URL can then programmatically upload an object. For more information, see Using Amazon S3 from AWS Explorer. For instructions on how to install AWS Explorer, see Developing with Amazon S3 using the AWS SDKs, and explorers.
You can use the AWS SDK to generate a presigned URL that you, or anyone you give the URL, can use to upload an object to Amazon S3. When you use the URL to upload an object, Amazon S3 creates the object in the specified bucket. If an object with the same key that is specified in the presigned URL already exists in the bucket, Amazon S3 replaces the existing object with the uploaded object.
Examples
The following examples show how to upload objects using presigned URLs.
- Java
To successfully complete an upload, you must do the following:
Specify the HTTP PUT verb when creating the
GeneratePresignedUrlRequestand
HttpURLConnectionobjects.
Interact with the
HttpURLConnectionobject in some way after finishing the upload. The following example accomplishes this by using the
HttpURLConnectionobject to check the HTTP response code.
This example generates a presigned URL and uses it to upload sample data as an object. For instructions on creating and testing a working sample, see Testing the Amazon S3 Java Code Examples.
import com.amazonaws.AmazonServiceException; import com.amazonaws.HttpMethod;.GeneratePresignedUrlRequest; import com.amazonaws.services.s3.model.S3Object; import java.io.IOException; import java.io.OutputStreamWriter; import java.net.HttpURLConnection; import java.net.URL; public class GeneratePresignedUrlAndUploadObject { public static void main(String[] args) throws IOException { Regions clientRegion = Regions.DEFAULT_REGION; String bucketName = "*** Bucket name ***"; String objectKey = "*** Object key ***"; try { AmazonS3 s3Client = AmazonS3ClientBuilder.standard() .withCredentials(new ProfileCredentialsProvider()) .withRegion(clientRegion) .build(); // Set the pre-signed URL to expire after one hour. java.util.Date expiration = new java.util.Date(); long expTimeMillis = expiration.getTime(); expTimeMillis += 1000 * 60 * 60; expiration.setTime(expTimeMillis); // Generate the pre-signed URL. System.out.println("Generating pre-signed URL."); GeneratePresignedUrlRequest generatePresignedUrlRequest = new GeneratePresignedUrlRequest(bucketName, objectKey) .withMethod(HttpMethod.PUT) .withExpiration(expiration); URL url = s3Client.generatePresignedUrl(generatePresignedUrlRequest); // Create the connection and use it to upload the new object using the pre-signed URL. HttpURLConnection connection = (HttpURLConnection) url.openConnection(); connection.setDoOutput(true); connection.setRequestMethod("PUT"); OutputStreamWriter out = new OutputStreamWriter(connection.getOutputStream()); out.write("This text uploaded as an object via presigned URL."); out.close(); // Check the HTTP response code. To complete the upload and make the object available, // you must interact with the connection object in some way. connection.getResponseCode(); System.out.println("HTTP response code: " + connection.getResponseCode()); // Check to make sure that the object was uploaded successfully. S3Object object = s3Client.getObject(bucketName, objectKey); System.out.println("Object " + object.getKey() + " created in bucket " + object.getBucketName()); } shows how to use the AWS SDK for .NET to upload an object to an S3 bucket using a presigned URL.
This example generates a presigned URL for a specific object and uses it to upload a file. For information about the example's compatibility with a specific version of the AWS SDK for .NET and instructions about how to create and test a working sample, see Running the Amazon S3 .NET Code Examples.
using Amazon; using Amazon.S3; using Amazon.S3.Model; using System; using System.IO; using System.Net; namespace Amazon.DocSamples.S3 { class UploadObjectUsingPresignedURLTest { private const string bucketName = "*** provide bucket name ***"; private const string objectKey = "*** provide the name for the uploaded object ***"; private const string filePath = "*** provide the full path name of the file to upload ***"; // Specify how long the presigned URL lasts, in hours private const double timeoutDuration = 12; // Specify your bucket region (an example region is shown). private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2; private static IAmazonS3 s3Client; public static void Main() { s3Client = new AmazonS3Client(bucketRegion); var url = GeneratePreSignedURL(timeoutDuration); UploadObject(url); } private static void UploadObject(string url) { HttpWebRequest httpRequest = WebRequest.Create(url) as HttpWebRequest; httpRequest.Method = "PUT"; using (Stream dataStream = httpRequest.GetRequestStream()) { var buffer = new byte[8000]; using (FileStream fileStream = new FileStream(filePath, FileMode.Open, FileAccess.Read)) { int bytesRead = 0; while ((bytesRead = fileStream.Read(buffer, 0, buffer.Length)) > 0) { dataStream.Write(buffer, 0, bytesRead); } } } HttpWebResponse response = httpRequest.GetResponse() as HttpWebResponse; } private static string GeneratePreSignedURL(double duration) { var request = new GetPreSignedUrlRequest { BucketName = bucketName, Key = objectKey, Verb = HttpVerb.PUT, Expires = DateTime.UtcNow.AddHours(duration) }; string url = s3Client.GetPreSignedURL(request); return url; } } }
- Ruby
The following tasks guide you through using a Ruby script to upload an object using a presigned URL for SDK for Ruby - Version 3.
The following Ruby code example demonstrates the preceding tasks for SDK for Ruby - Version 3.
require 'aws-sdk-s3' require 'net/http' # Uploads an object to a bucket in Amazon Simple Storage Service (Amazon S3) # by using a presigned URL. # # Prerequisites: # # - An S3 bucket. # - An object in the bucket to upload content to. # # @param s3_client [Aws::S3::Resource] An initialized S3 resource. # @param bucket_name [String] The name of the bucket. # @param object_key [String] The name of the object. # @param object_content [String] The content to upload to the object. # @param http_client [Net::HTTP] An initialized HTTP client. # This is especially useful for testing with mock HTTP clients. # If not specified, a default HTTP client is created. # @return [Boolean] true if the object was uploaded; otherwise, false. # @example # exit 1 unless object_uploaded_to_presigned_url?( # Aws::S3::Resource.new(region: 'us-east-1'), # 'doc-example-bucket', # 'my-file.txt', # 'This is the content of my-file.txt' # ) def object_uploaded_to_presigned_url?( s3_resource, bucket_name, object_key, object_content, http_client = nil ) object = s3_resource.bucket(bucket_name).object(object_key) url = URI.parse(object.presigned_url(:put)) if http_client.nil? Net::HTTP.start(url.host) do |http| http.send_request( 'PUT', url.request_uri, object_content, 'content-type' => '' ) end else http_client.start(url.host) do |http| http.send_request( 'PUT', url.request_uri, object_content, 'content-type' => '' ) end end content = object.get.body puts "The presigned URL for the object '#{object_key}' in the bucket " \ "'#{bucket_name}' is:\n\n" puts url puts "\nUsing this presigned URL to get the content that " \ "was just uploaded to this object, the object\'s content is:\n\n" puts content.read return true rescue StandardError => e puts "Error uploading to presigned URL: #{e.message}" return false end | https://docs.aws.amazon.com/AmazonS3/latest/userguide/PresignedUrlUploadObject.html | 2021-02-25T00:15:49 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.aws.amazon.com |
Install plugins
NOTE: Before installing a plugin, make sure that the plugin is compatible with your OpenProject version.
OpenProject plug-ins are separated in Ruby gems. You can install them by including the gems in the /opt/bitnami/apps/openproject/htdocs/Gemfile.plugins file. An example of a Gemfile.plugins file looks like this:
# Required by backlogs gem "openproject-pdf_export", git: "", :branch => "stable" gem "openproject-backlogs", git: "", :branch => "stable"
Then, to install the plugin, run the following commands:
$ cd /opt/bitnami $ cd apps/openproject/htdocs $ bundle install --no-deployment --without development test postgres sqlite $ bower install --allow-root $ RAILS_ENV="production" bundle exec rake db:migrate $ RAILS_ENV="production" bundle exec rake db:seed $ RAILS_ENV="production" bundle exec rake assets:precompile $ touch tmp/restart.txt
The next Web request to the server will take longer (as the application is restarted). All subsequent requests should be as fast as always.
Troubleshooting
If you run the previous commands as the root user, change permissions of the tmp/ folder:
$ chown -R daemon:daemon /opt/bitnami/apps/openproject/htdocs/tmp | https://docs.bitnami.com/virtual-machine/apps/openproject/configuration/install-plugins/ | 2021-02-24T23:22:01 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.bitnami.com |
M Publishers
From GCD
- Madison Comics - Independent publisher in the mid-1980's.
- Mad Love (1994) - P.O. Box 61, Northampton, NN1 4DD, UK.
- Mad Monkey Press - Small color publisher of the 1990's.
- Magazine Enterprises, Inc. - Publisher of movie spinoff comics and others.
- Magian Line (1994) - P.O. Box 170712, San Francisco, CA 94117.
- MagiComics - Late 1990's b/w publisher.
- Magic Whistle (1994) - 14 Bayard Street #3, Brooklyn, NY 11211.
- Magnecom (1994) - (c/o Rip Off Press).
- Magnetic Ink (1994) - 5545 Montgomery Road, Apt #1, Cincinnati, OH 45212.
- Magnum Comics - 1990's sports comics publisher.
- Makeshift Media (1994) - 6515 19th Avenue NW, Seattle, WA 98117.
- Malibu Comics - Major Independent publisher of the 1980's and early 1990's.
- Mansion Comics (1994) - (c/o Suzerain Group).
- Manuscript Press - Comic strip reprint publisher. David Anthony Kraft's Comics Interview #9 pg 60 ad (March 1984).
- Marier and Crew Productions - Publisher of Thoughtful Man.
- Mark 1 Comics - Publisher of Shaloman and other Jewish focused comic books.
- Mark's Giant Economy Size Comics - Publisher of Radical Dreamer Vol. 2 in black and white.
- Marshall Comics (1994) - P.O. Box 283, Rancocas, NJ 08073-0283.
- Marvel Comics - Largest Publisher of comic books in the 1970's - 1990's.
- Matrix Graphics - B/w publisher of the mid-1980's.
- Mauretania (1994) - 221A Kilburn Lane, London W10 4BQ ENGLAND.
- Maximum Press - One of Rob Lefield's many imprints and companies.
- MBS Publishing Ltd. (1994) - 4 Greenfield Rd, Old Swan, Liberpool, L13 3BN ENGLAND.
- Medeia Press - Publisher of Demon Realm.
- Mediawarp - b/w publisher.
- Megaton Comics - Mid-1980's b/w publisher.
- Megaton Publications - Late-1970's b/w publisher.
- Methodical (1994) - 3863 S. Spring Apt 8, St. Louis, MA 63116.
- MF Enterprises - mid-1960's color publisher.
- Michael L. Teague - Self-publisher in the early 2000's.
- Milestone Media - Publisher in the 1990's who were distributed by DC Comics.
- Millar Publishing - Publisher of dragster comics and magazines in the 1960s.
- Millennium (1994) - 105 Edgewater Rd., Narragansett, RI 02882. 401-783-2843.
- Minneapolis College of Art and Design - Student produced comic book of the early 2000s.
- Mirage Studios - publisher of the Teenage Mutant Ninja Turtles.
- Miscellania Unlimted Press - Early 1990s publisher. (Comics Career #20 pg 8).
- Misery and Vomit (1994) - P.O. Box 42033, Montreal, Quebec, H23 2T3 CANADA.
- Missoula Comix - Underground comix publisher in the late 1970's.
- Mixx Entertainment - publisher of the translated Japanese manga in the late 1990's and beyond. [Tokyopop]
- MLJ Magazines - Golden Age publisher of Archie and other teenage comic books.
- Modern Comics - Mid 1970's publisher.
- Modern Day Periodicals - Black & white magazine publisher in early 1980's.
- Modern Historicality (1994) - P.O. Box 877, Tallahassee, FL 32302.
- Mojo Press - Late 1990's publisher of graphic novels and such.
- Monster Comics (c/o Fantagraphics).
- Moonstone - Late 1990's early 2000's publisher of b/w books.
- Moordam Comics - Late 1990's publisher of b/w humor books.
- More Fun Magazine Inc. - Another Wheeler-Nicholson company of the mid 1930s.
- Movieland Publishing - Publisher of Comics Feature.
- Mu Press - Furry comic book publisher.
- Mulehide Graphics (1994) - P.O. Box 5844, Bellingham WA 98227-5844. 206-671-5212.
- Music City Comics - b/w publisher of the late 1980's.
- Mythic Comics - b/w publisher of the 1990's. | https://docs.comics.org/wiki/M_Publishers | 2021-02-24T23:05:01 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.comics.org |
VM-Series Firewall on NSX-T (East-West) Integration
NSX-T Manager, vCenter, Panorama, and the VM-Series firewall work together to meet the security challenges of your NSX-T Data Center.
- Register the VM-Series firewall as a service—Use Panorama to connect to your VMware NSX-T manager. Panorama communicates with NSX-T Manager using the NSX-T API and establishes bi-directional communication. On Panorama, you configure the Service Manager by entering the IP address, username, and password of NSX-T Manager to initiate communication.After establishing communication with NSX-T Manager, configure the service definition. The service definition includes the location of the VM-Series firewall base image, the authorization code needed to license the VM-Series firewall, and the device groups and template stack to which the firewall will belong.Additionally, NSX-T Manager uses this connection to send updates on the changes in the NSX-T environment with Panorama.
- Deploy the VM-Series firewall per host or in a service cluster—NSX-T Manager uses the information pushed from Panorama in the service definition to deploy the VM-Series firewall. Choose a where the VM-Series firewall will be deployed (in a service cluster or on each ESXi host) and how NSX-T provides a management IP address to the VM-Series firewall (DHCP or static IP). When the firewall boots up, NSX-T manager’s API connects the VM-Series firewall to the hypervisor so it that can receive traffic from the vSwitch.
- The VM-Series connects to Panorama—The VM-Series firewall then connects to Panorama to obtain its license. Panorama gets the license from the Palo Alto Networks update server and sends it to the firewall. When the firewall gets its license, it reboots and comes back up with a serial number.If Panorama does not have internet access, it cannot retrieve licenses and push them to the firewall, so you have to manually license each firewall individually. If the VM-Series firewall does not have internet access, you must manually add the serial numbers to Panorama to register them as managed devices, so Panorama can push template stacks, device groups, and other configuration information. For more information, see Activate the License for the VM-Series Firewall for VMware NSX.
- Panorama sends security policy to the VM-Series firewall—When the firewall reconnects to Panorama, it is added to device group and template stack defined in the service definition and Panorama pushes the appropriate security policy to that firewall. The firewall is now ready to secure traffic in your NSX-T data center.
- Create network introspection rules to redirect traffic to the VM-Series firewall—On the NSX-T Manager, create a service chain and network introspection rules that redirect traffic in your NSX-T data center.
- Send real-time updates from NSX-T Manager—The NSX-T Manager sends real-time updates about changes in the virtual environment to Panorama. These updates include changes in group membership and IP addresses of virtual machines in groups that send traffic to the VM-Series firewall.
- Panorama sends dynamic updates—As Panorama receives updates from NSX-T Manager, it sends those updates from its managed VM-Series firewalls. Panorama places virtual machines into dynamic address groups based on criteria that you determine and pushes dynamic address group membership information to the firewalls. This allows firewalls to apply the correct security policy to traffic flowing to and from virtual machines in your NSX-T data center.
Recommended For You
Recommended Videos
Recommended videos not found. | https://docs.paloaltonetworks.com/vm-series/9-1/vm-series-deployment/set-up-the-vm-series-firewall-on-nsx/set-up-the-vm-series-firewall-on-nsx-t-east-west/vm-series-firewall-on-nsx-t-east-west-integration.html | 2021-02-25T00:05:19 | CC-MAIN-2021-10 | 1614178349708.2 | [array(['/content/dam/techdocs/en_US/dita/_graphics/9-1/virtualization/nsx-t/nsxt-ew-integration-workflow.png',
'nsxt-ew-integration-workflow.png'], dtype=object) ] | docs.paloaltonetworks.com |
Horizon Administrator cannot uninstall a ThinApp application.
Problem
The ThinApp application installation status shows Uninstall Error.
Cause
Common causes for this error include the following:
- The ThinApp application was busy when Horizon Administrator tried to uninstall it.
- Network connectivity was lost between the Connection Server host and the machine.
You can see the Horizon Agent and Connection Server log files for more information about the cause of the problem.
Horizon Agent log files are located on the machine in drive:\Documents and Settings\All Users\Application Data\VMware\VDM\logs for Windows XP systems and drive:\ProgramData\VMware\VDM\logs for Windows 7 systems.
Connection Server log files are located on the Connection Server host in the drive:\Documents and Settings\All Users\Application Data\VMware\VDM\logs directory.
Solution
- In Horizon Administrator, select .
- Click the name of the ThinApp application.
- Click the Machines tab, select the machine, and click Retry Uninstall to retry the uninstall operation.
- If the uninstall operation still fails, manually remove the ThinApp application from the machine and then click Remove App Status For Desktop.This command clears the ThinApp application assignment in Horizon Administrator. It does not remove any files or settings in the machine.Important: Use this command only after manually removing the ThinApp application from the machine. | https://docs.vmware.com/en/VMware-Horizon-7/7.9/horizon-administration/GUID-160642AF-82A3-4A3C-B113-62DD6372FF92.html | 2021-02-24T23:41:38 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.vmware.com |
Prerequisites OS SupportThese are Operating Systems supported for NCM Core servers (AS, DS, DB) and Smarts Integration Adapter. Bandwidth Combination, Application, Device, and Database server requirementsThe hardware requirements for Network Configuration Manager. Supported virtual hardware RSA Token server hardware requirements for Windows NCM Clients Requirements Network Configuration Manager Environment sizingThe type and number of servers needed to run Network Configuration Manager based on the number of devices. Basic disk partitioning (Linux) NCM CompatibilityUpgrading to Network Configuration Manager 10.1.0 will make previous versions of Advisors and Adapters inoperable until a compatible release is installed. Smart Assurance InteroperabilityNCM version 10.1.0 is part of Smart Assurance 10.1.0. The suite components interoperate as described here. Previous topic: Revision history Next topic: Software requirements for Linux | https://docs.vmware.com/en/VMware-Smart-Assurance/10.1.0/ncm-support-matrix-10.1.0/GUID-1FCA005D-9A2C-4438-B379-484B537AC877.html | 2021-02-25T00:15:22 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.vmware.com |
After you save the VMware Identity Manager on Windows configuration to the service-backup-hostname-timestamp.enc file and install Workspace ONE Access on Linux, you can run the configuration package on the Linux host.
Prerequisites
- Download the VMware Workspace ONE Access SVA file from the My VMware site at my.vmware.com and deploy it on a Linux host and finish the setup wizard. See Installing and Configuring VMware Workspace ONE Access.Deploying the SVA file displays the setup wizard at https:// WS1AccessHostnameFQDN:8443/cfg/landing.Note:
- As a best practice, use different IP addresses and fully qualified domain names (FQDN) for the new Linux nodes than you used for your Windows nodes.
- Because you use the configuration from your VMware Identity Manager 19.03 deployment on Windows, do not perform any configuration on the VMware Workspace ONE Access 20.01 deployment on Linux. Therefore, do not continue beyond the Get Started page of the setup wizard.
- If applicable, upgrade your VMware Workspace ONE Access connector instances to VMware Workspace ONE Access connector 20.01.
If your deployment is not integrated with Virtual Apps (ThinApp packaged applications, Horizon desktops and applications, or Citrix published resources), provide the newest connector functionality by upgrading your connector instances to VMware Workspace ONE Access connector 20.01. See Migrating to VMware Workspace ONE Access 20.01 Connectors.Important: Citrix, Horizon Connection Server, and ThinApp integrations are not available with the Workspace ONE Access 20.01 connector.
- To use ThinApp packaged applications, use the VMware Identity Manager connector (Linux) version 2018.8.1.0.
- To use other Virtual Apps, such Horizon desktops and applications or Citrix published resources, use the VMware Identity Manager connector (Windows) version 19.03.
Procedure
- Using your preferred SSH client, log in to the Linux virtual appliance as the
rootuser with the default password of
vmware.
- Copy the service-backup-hostname-timestamp.enc file to the /root directory of the Linux virtual appliance.
- To configure Workspace ONE Access with your saved Windows configuration, run the /usr/local/horizon/scripts/importServiceConfiguration.sh password command.Replace the placeholder, password, with the password you used to create the configuration package containing the service-backup-hostname-timestamp.enc file.The command restarts the service.
What to do next
Perform the necessary Workspace ONE Access post-migration procedures on the Linux system. See Workspace ONE Access 20.01 Post-Migration Configuration | https://docs.vmware.com/en/VMware-Workspace-ONE-Access/20.01/ws1_access_migration/GUID-3AED35F8-7478-4F80-A058-1C640C254C61.html | 2021-02-25T00:34:00 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.vmware.com |
The SEG v2 configurations are controlled at an individual node level. The custom gateway setting feature centralizes the configuration on the Workspace ONE UEM Console as part of the MEM configuration itself.
Prerequisites
The following table lists the requirements for the SEG custom settings feature:
Configure SEG Custom Gateway Settings
The SEG custom settings are available as key-value pairs on the Workspace ONE UEM console. The commonly used properties are seeded on the Workspace ONE UEM Console. To configure the custom settings, perform the following steps:
- Log in to the Workspace ONE UEM console.
- Navigate to the.
- Configure the Email Settings for SEG.
- Configure the additional settings for SEG using the Advanced option.
- Navigate to the Custom Gateway Settings, click ADD ROW, and enter the supported configuration as the key-value pair:
- Key: Enter the property or setting name.
- Type: Enter the type of value such as string, integer, and so on.
- Value: Enter the property or custom value.
- Click Save.
Apply the Custom Gateway Settings on the SEG Service
During an installation or upgrade, if the custom settings are provided on the Workspace ONE UEM console, then the SEG service starts with the applied custom settings
If the custom settings are added or updated on the Workspace ONE UEM console when the SEG service is running, then a refreshSettings notification is triggered for SEG. The SEG fetches the latest custom gateway settings. A few of the custom settings are applied immediately, whereas the other custom settings might require you to restart the SEG service.
Supported Configuration for the Custom Gateway Settings
The following section lists all the supported SEG properties or settings for the custom settings feature.
The properties or settings are grouped based on feature or functionality. The custom settings can be added on the Workspace ONE UEM console in any order.
JVM Arguments or System Settings
The JVM arguments or system settings property keys start with -D. If the property value is modified, SEG updates the custom system settings in the segServiceWrapper.conf (for Windows) or seg-jvm-args.conf (for UAG). If the system setting is updated when the SEG service is running, then the SEG triggers a service restart.
You can configure the seg.custom.settings.service.restart.code=0 property in the application-override.properties file to disable the automatic restart of the SEG service. | https://docs.vmware.com/en/VMware-Workspace-ONE-UEM/2011/WS1-Secure-Email-Gateway/GUID-2A63696B-9DC2-487B-9AD4-A3D1C4B41961.html | 2021-02-24T23:40:39 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.vmware.com |
Basic Usage¶
The
spack command has many subcommands. You’ll only need a
small subset of them for typical usage.
Note that Spack colorizes output.
less -R should be used with
Spack to maintain this colorization. E.g.:
$ spack find | less -R
It is recommended that the following be put in your
.bashrc file:
alias less='less -R'
If you do not see colorized output when using
less -R it is because color
is being disabled in the piped output. In this case, tell spack to force
colorized output.
$ spack --color always find | less -R
Listing available packages¶
To install software with Spack, you need to know what software is
available. You can see a list of available package names at the
Package List webpage, or using the
spack list command.
spack list¶
The
spack list command prints out a list of all of the packages Spack
can install:
$ spack list
3dtk                      py-dill
3proxy                    py-discover
abduco                    py-diskcache
abi-compliance-checker    py-distributed
abi-dumper                py-distro
abinit                    py-django
abseil-cpp                py-dlcpar
abyss                     py-dnaio
accfft                    py-docker
acct                      py-dockerpy-creds
...
There are thousands of them, so we’ve truncated the output above, but you
can find a full list here.
Packages are listed by name in alphabetical order.
A pattern to match with no wildcards,
* or
?,
will be treated as though it started and ended with
*, so
util is equivalent to
*util*. All patterns will be treated
as case-insensitive. You can also add the
-d (or --search-description) flag to search the description of
the package in addition to the name. Some examples:
All packages whose names contain “sql”:
$ spack list sql
mysql            py-agate-sql                     py-mysqlclient  py-pysqlite          r-rmysql       sqlcipher
perl-dbd-mysql   py-azure-mgmt-sql                py-mysqldb1     py-sqlalchemy        r-rpostgresql  sqlite
perl-dbd-sqlite  py-azure-mgmt-sqlvirtualmachine  py-pygresql     py-sqlalchemy-utils  r-rsqlite      sqlite-jdbc
postgresql       py-mysql-connector-python        py-pymysql      py-sqlparse          r-sqldf        sqlitebrowser
All packages whose names or descriptions contain documentation:
$ spack list --search-description documentation
asciidoc-py3       gtk-doc         py-alabaster          py-recommonmark              r-rcpp         r-uwot
byacc              libxfixes       py-astropy-helpers    py-sphinx                    r-rdpack       sowing
compositeproto     libxpresent     py-dask-sphinx-theme  py-sphinxautomodapi          r-rinside      texinfo
damageproto        man-db          py-docutils           py-sphinxcontrib-websupport  r-roxygen2     totalview
double-conversion  perl-bioperl    py-epydoc             r-lifecycle                  r-spam         xorg-docs
doxygen            perl-db-file    py-markdown           r-modeltools                 r-stanheaders  xorg-sgml-doctools
gflags             perl-io-prompt  py-python-docs-theme  r-quadprog                   r-units
spack info¶
To get more information on a particular package from spack list, use spack info. Just supply the name of a package:
$ spack info mpileaks
Most of the information is self-explanatory. The safe versions are versions that Spack knows the checksum for, and it will use the checksum to verify that these versions download without errors or viruses.
Dependencies and virtual dependencies are described in more detail later.
spack versions¶
To see more available versions of a package, run
spack versions.
For example:
$ spack versions libelf
0.8.13
There are two sections in the output. Safe versions are versions for which Spack has a checksum on file. It can verify that these versions are downloaded correctly.
In many cases, Spack can also show you what versions are available out on the web—these are remote versions. Spack gets this information by scraping it directly from package web pages. Depending on the package and how its releases are organized, Spack may or may not be able to find remote versions.
Installing and uninstalling¶
spack install¶
spack install will install any package shown by
spack list.
For example, to install the latest version of the
mpileaks
package, you might type this:
$ spack install mpileaks
If
mpileaks depends on other packages, Spack will install the
dependencies first. It then fetches the
mpileaks tarball, expands
it, verifies that it was downloaded without errors, builds it, and
installs it in its own directory under
$SPACK_ROOT/opt. You’ll see
a number of messages from Spack, a lot of build output, and a message
that the package is installed.
$ spack install mpileaks ... dependency build output ... ==> Installing mpileaks-1.0-ph7pbnhl334wuhogmugriohcwempqry2 ==> No binary for mpileaks-1.0-ph7pbnhl334wuhogmugriohcwempqry2 found: installing from source ==> mpileaks: Executing phase: 'autoreconf' ==> mpileaks: Executing phase: 'configure' ==> mpileaks: Executing phase: 'build' ==> mpileaks: Executing phase: 'install' [+] ~/spack/opt/linux-rhel7-broadwell/gcc-8.1.0/mpileaks-1.0-ph7pbnhl334wuhogmugriohcwempqry2
The last line, with the
[+], indicates where the package is
installed.
Add the Spack debug option (one or more times) –
spack -d install
mpileaks – to get additional (and even more verbose) output.
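If you later need to find where a particular package ended up on disk, a couple of commands will report it (a quick sketch; mpileaks is just the running example here):
# print the installation prefix of an installed package
$ spack location -i mpileaks
# or list matching installed packages together with their install paths
$ spack find --paths mpileaks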
Building a specific version¶
Spack can also build specific versions of a package. To do this,
just add
@ after the package name, followed by a version:
$ spack install mpileaks@1.0
Any number of versions of the same package can be installed at once without interfering with each other. This is good for multi-user sites, as installing a version that one user needs will not disrupt existing installations for other users.
In addition to different versions, Spack can customize the compiler, compile-time options (variants), compiler flags, and platform (for cross compiles) of an installation. Spack is unique in that it can also configure the dependencies a package is built with. For example, two configurations of the same version of a package, one built with boost 1.39.0, and the other version built with version 1.43.0, can coexist.
This can all be done on the command line using the spec syntax.
Spack calls the descriptor used to refer to a particular package
configuration a spec. In the commands above,
mpileaks and
mpileaks@1.0 are both valid specs. We’ll talk more about how
you can use them to customize an installation in Specs & dependencies.
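As a rough sketch of that syntax (the versions, compiler, and variant names below are placeholders picked for illustration, not a recipe from this guide):
# pick a version with @, a compiler with %, and toggle variants with + or -
$ spack install mpileaks@1.0 %gcc@10.2.0 +debug
# constrain how a dependency is built with ^
$ spack install mpileaks@1.0 ^mpich@3.3.2
# compiler flags and a target can be given as key=value pairs
$ spack install mpileaks@1.0 cflags="-O3" target=x86_64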
spack uninstall¶
To uninstall a package, type
spack uninstall <package>. This will ask
the user for confirmation before completely removing the directory
in which the package was installed.
$ spack uninstall mpich
If there are still installed packages that depend on the package to be uninstalled, spack will refuse to uninstall it.
To uninstall a package and every package that depends on it, you may give the
--dependents option.
$ spack uninstall --dependents mpich
will display a list of all the packages that depend on
mpich and, upon
confirmation, will uninstall them in the right order.
A command like
$ spack uninstall mpich
may be ambiguous if multiple
mpich configurations are installed.
For example, if both
mpich@3.0.4 and
mpich@3.0.3 are installed,
mpich could refer to either one. Because it cannot determine which
one to uninstall, Spack will ask you either to provide a version number
to remove the ambiguity or use the
--all option to uninstall all of
the matching packages.
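For example, any of the following would work (the version and hash prefix shown are placeholders):
# name a version explicitly
$ spack uninstall mpich@3.0.4
# or remove every installed mpich at once
$ spack uninstall --all mpich
# a unique hash prefix from `spack find -l` also identifies one installation
$ spack uninstall /65pnkxh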
You may force uninstall a package with the
--force option
$ spack uninstall --force mpich
but you risk breaking other installed packages. In general, it is safer to
remove dependent packages before removing their dependencies or use the
--dependents option.
Garbage collection¶
When Spack builds software from sources, it often installs tools that are needed
just to build or test other software. These are not necessary at runtime.
To support cases where removing these tools can be a benefit, Spack provides
the
spack gc (“garbage collector”) command, which will uninstall all unneeded packages:
$ spack find
==> 24 installed packages
-- linux-ubuntu18.04-broadwell / gcc@9.0.1 ----------------------
autoconf@2.69    findutils@4.6.0  libiconv@1.16        libszip@2.1.1  m4@1.4.18    openjpeg@2.3.1  pkgconf@1.6.3  util-macros@1.19.1
automake@1.16.1  gdbm@1.18.1      libpciaccess@0.13.5  libtool@2.4.6  mpich@3.3.2  openssl@1.1.1d  readline@8.0   xz@5.2.4
cmake@3.16.1     hdf5@1.10.5      libsigsegv@2.12      libxml2@2.9.9  ncurses@6.1  perl@5.30.0     texinfo@6.5    zlib@1.2.11

$ spack gc
==> The following packages will be uninstalled:

    -- linux-ubuntu18.04-broadwell / gcc@9.0.1 ----------------------
    vn47edz autoconf@2.69  6m3f2qn automake@1.16.1      ubl6bgk cmake@3.16.1     pksawhz findutils@4.6.0  urdw22a gdbm@1.18.1
    ki6nfw5 libiconv@1.16  fklde6b libpciaccess@0.13.5  b6pswuo libsigsegv@2.12  k3s2csy libszip@2.1.1    lp5ya3t libtool@2.4.6
    ylvgsov libxml2@2.9.9  5omotir m4@1.4.18            leuzbbh openjpeg@2.3.1   5vmfbrq perl@5.30.0      5bmv4tg pkgconf@1.6.3

==> Do you want to proceed? [y/N] y
[ ... ]

$ spack find
==> 9 installed packages
-- linux-ubuntu18.04-broadwell / gcc@9.0.1 ----------------------
hdf5@1.10.5  mpich@3.3.2  ncurses@6.1  openssl@1.1.1d  readline@8.0  texinfo@6.5  util-macros@1.19.1  xz@5.2.4  zlib@1.2.11
In the example above Spack went through all the packages in the package database and removed everything that is not either:
- A package installed upon explicit request of the user
- A
linkor
rundependency, even transitive, of one of the packages at point 1.
You can check Viewing more metadata to see how to query for explicitly installed packages or Dependency types for a more thorough treatment of dependency types.
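As a quick check, spack find can already filter on this distinction (both flags are part of the regular spack find command):
# packages that were installed on explicit request
$ spack find --explicit
# packages that are only present as dependencies
$ spack find --implicit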
Marking packages explicit or implicit¶
By default, Spack will mark packages a user installs as explicitly installed,
while all of their dependencies will be marked as implicitly installed. Packages
can be marked manually as explicitly or implicitly installed by using
spack mark. This can be used in combination with
spack gc to clean up
packages that are no longer required.
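In brief, the options used below are -e (explicit), -i (implicit), and -a (apply to all installed packages); a minimal sketch:
# mark an installed spec as explicitly installed
$ spack mark -e m4
# mark an installed spec as implicitly installed
$ spack mark -i m4
# combine with -a to change every installed package at once
$ spack mark -i -a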
$ spack install m4
==> 29005: Installing libsigsegv
[...]
==> 29005: Installing m4
[...]

$ spack install m4 ^libsigsegv@2.11
==> 39798: Installing libsigsegv
[...]
==> 39798: Installing m4
[...]

$ spack find -d
==> 4 installed packages
-- linux-fedora32-haswell / gcc@10.1.1 --------------------------
libsigsegv@2.11

libsigsegv@2.12

m4@1.4.18
    libsigsegv@2.11

m4@1.4.18
    libsigsegv@2.12

$ spack gc
==> There are no unused specs. Spack's store is clean.

$ spack mark -i m4 ^libsigsegv@2.11
==> m4@1.4.18 : marking the package implicit

$ spack gc
==> The following packages will be uninstalled:

    -- linux-fedora32-haswell / gcc@10.1.1 --------------------------
    5fj7p2o libsigsegv@2.11  c6ensc6 m4@1.4.18

==> Do you want to proceed? [y/N]
In the example above, we ended up with two versions of
m4 since they depend
on different versions of
libsigsegv.
spack gc will not remove any of
the packages since both versions of
m4 have been installed explicitly
and both versions of
libsigsegv are required by the
m4 packages.
spack mark can also be used to implement upgrade workflows. The following
example demonstrates how spack mark and spack gc can be used to
keep only the current version of a package installed.
When updating Spack via
git pull, new versions for either
libsigsegv
or
m4 might be introduced. This will cause Spack to install duplicates.
Since we only want to keep one version, we mark everything as implicitly
installed before updating Spack. If there is no new version for either of the
packages,
spack install will simply mark them as explicitly installed and
spack gc will not remove them.
$ spack install m4 ==> 62843: Installing libsigsegv [...] ==> 62843: Installing m4 [...] $ spack mark -i -a ==> [email protected] : marking the package implicit $ git pull [...] $ spack install m4 [...] ==> [email protected] : marking the package explicit [...] $ spack gc ==> There are no unused specs. Spack's store is clean.
When using this workflow for installations that contain more packages, care
has to be taken to either only mark selected packages or issue
spack install
for all packages that should be kept.
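For instance, a sketch of flipping a single package between the two states (the package name is illustrative):
$ spack mark -i m4
$ spack mark -e m4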
You can check Viewing more metadata to see how to query for explicitly or implicitly installed packages.
Non-Downloadable Tarballs¶
The tarballs for some packages cannot be automatically downloaded by Spack. This could be for a number of reasons:
- The author requires users to manually accept a license agreement before downloading (jdk and galahad).
- The software is proprietary and cannot be downloaded on the open Internet.
To install these packages, one must create a mirror and manually add the tarballs in question to it (see Mirrors):
Create a directory for the mirror. You can create this directory anywhere you like; it does not have to be inside
~/.spack:
$ mkdir ~/.spack/manual_mirror
Register the mirror with Spack by creating
~/.spack/mirrors.yaml:
mirrors:
  manual: file://<path to your manual_mirror directory>
Put your tarballs in it. Tarballs should be named
<package>/<package>-<version>.tar.gz. For example:
$ ls -l manual_mirror/galahad -rw-------. 1 me me 11657206 Jun 21 19:25 galahad-2.60003.tar.gz
Install as usual:
$ spack install galahad
Seeing installed packages¶
We know that
spack list shows you the names of available packages,
but how do you figure out which are already installed?
spack find¶
spack find shows the specs of installed packages. A spec is
like a name, but it has a version, compiler, architecture, and build
options associated with it. In spack, you can have many installations
of the same package with different specs.
Running
spack find with no arguments lists installed packages:
$ spack find ==> 74 installed packages. -- linux-debian7-x86_64 / [email protected] -------------------------------- [email protected] libdwarf@20130729 [email protected] [email protected] libdwarf@20130729 [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] tk@src jpeg@9a [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] -- linux-debian7-x86_64 / [email protected] -------------------------------- [email protected] [email protected]
Packages are divided into groups according to their architecture and compiler. Within each group, Spack tries to keep the view simple, and only shows the version of installed packages.
Viewing more metadata¶
spack find can filter the package list based on the package name,
spec, or a number of properties of their installation status. For
example, missing dependencies of a spec can be shown with
--missing, deprecated packages can be included with
--deprecated, packages which were explicitly installed with
spack install <package> can be singled out with
--explicit and
those which have been pulled in only as dependencies with
--implicit.
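For example, to list only the packages you installed explicitly, or only those pulled in as dependencies:
$ spack find --explicit
$ spack find --implicit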
In some cases, there may be different configurations of the same
version of a package installed. For example, there are two
installations of
libdwarf@20130729 above. We can look at them
in more detail using
spack find --deps, and by asking only to show
libdwarf packages:
$ spack find --deps libdwarf ==> 2 installed packages. -- linux-debian7-x86_64 / [email protected] -------------------------------- libdwarf@20130729-d9b90962 ^[email protected] libdwarf@20130729-b52fac98 ^[email protected]
Now we see that the two instances of
libdwarf depend on
different versions of
libelf: 0.8.12 and 0.8.13. This view can
become complicated for packages with many dependencies. If you just
want to know whether two packages’ dependencies differ, you can use
spack find --long:
$ spack find --long libdwarf ==> 2 installed packages. -- linux-debian7-x86_64 / [email protected] -------------------------------- libdwarf@20130729-d9b90962 libdwarf@20130729-b52fac98
Now the
libdwarf installs have hashes after their names. These are
hashes over all of the dependencies of each package. If the hashes
are the same, then the packages have the same dependency configuration.
If you want to know the path where each package is installed, you can
use
spack find --paths:
$ spack find --paths ==> 74 installed packages. -- linux-debian7-x86_64 / [email protected] -------------------------------- [email protected] ~/spack/opt/linux-debian7-x86_64/[email protected]/[email protected] [email protected] ~/spack/opt/linux-debian7-x86_64/[email protected]/[email protected] [email protected] ~/spack/opt/linux-debian7-x86_64/[email protected]/[email protected] [email protected] ~/spack/opt/linux-debian7-x86_64/[email protected]/[email protected] [email protected] ~/spack/opt/linux-debian7-x86_64/[email protected]/[email protected] [email protected] ~/spack/opt/linux-debian7-x86_64/[email protected]/[email protected] [email protected] ~/spack/opt/linux-debian7-x86_64/[email protected]/[email protected] ...
You can restrict your search to a particular package by supplying its name:
$ spack find --paths libelf -- linux-debian7-x86_64 / [email protected] -------------------------------- [email protected] ~/spack/opt/linux-debian7-x86_64/[email protected]/[email protected] [email protected] ~/spack/opt/linux-debian7-x86_64/[email protected]/[email protected] [email protected] ~/spack/opt/linux-debian7-x86_64/[email protected]/[email protected]
Spec queries¶
spack find actually does a lot more than this. You can use
specs to query for specific configurations and builds of each
package. If you want to find only libelf versions greater than version
0.8.12, you could say:
$ spack find [email protected]: -- linux-debian7-x86_64 / [email protected] -------------------------------- [email protected] [email protected]
Finding just the versions of libdwarf built with a particular version of libelf would look like this:
$ spack find --long libdwarf ^[email protected] ==> 1 installed packages. -- linux-debian7-x86_64 / [email protected] -------------------------------- libdwarf@20130729-d9b90962
We can also search for packages that have a certain attribute. For example,
spack find libdwarf +debug will show only installations of libdwarf
with the ‘debug’ compile-time option enabled.
The full spec syntax is discussed in detail in Specs & dependencies.
Machine-readable output¶
If you only want to see very specific things about installed packages,
Spack has some options for you.
spack find --format can be used to
output only specific fields:
$ spack find --format "{name}-{version}-{hash}" autoconf-2.69-icynozk7ti6h4ezzgonqe6jgw5f3ulx4 automake-1.16.1-o5v3tc77kesgonxjbmeqlwfmb5qzj7zy bzip2-1.0.6-syohzw57v2jfag5du2x4bowziw3m5p67 bzip2-1.0.8-zjny4jwfyvzbx6vii3uuekoxmtu6eyuj cmake-3.15.1-7cf6onn52gywnddbmgp7qkil4hdoxpcb ...
or:
$ spack find --format "{hash:7}" icynozk o5v3tc7 syohzw5 zjny4jw 7cf6onn ...
This uses the same syntax as described in documentation for
format() – you can use any of the options there.
This is useful for passing metadata about packages to other command-line
tools.
Alternately, if you want something even more machine readable, you can
output each spec as JSON records using
spack find --json. This will
output metadata on specs and all dependencies as json:
$ spack find --json [email protected] [ { "name": "sqlite", "hash": "3ws7bsihwbn44ghf6ep4s6h4y2o6eznv", "version": "3.28.0", "arch": { "platform": "darwin", "platform_os": "mojave", "target": "x86_64" }, "compiler": { "name": "apple-clang", "version": "10.0.0" }, "namespace": "builtin", "parameters": { "fts": true, "functions": false, "cflags": [], "cppflags": [], "cxxflags": [], "fflags": [], "ldflags": [], "ldlibs": [] }, "dependencies": { "readline": { "hash": "722dzmgymxyxd6ovjvh4742kcetkqtfs", "type": [ "build", "link" ] } } }, ... ]
You can use this with tools like jq to quickly create JSON records structured the way you want:
$ spack find --json [email protected] | jq -C '.[] | { name, version, hash }' { "name": "sqlite", "version": "3.28.0", "hash": "3ws7bsihwbn44ghf6ep4s6h4y2o6eznv" } { "name": "readline", "version": "7.0", "hash": "722dzmgymxyxd6ovjvh4742kcetkqtfs" } { "name": "ncurses", "version": "6.1", "hash": "zvaa4lhlhilypw5quj3akyd3apbq5gap" }
Using installed packages¶
There are several different ways to use Spack packages once you have installed them. As you’ve seen, spack packages are installed into long paths with hashes, and you need a way to get them into your path. The easiest way is to use spack load, which is described in the next section.
Some more advanced ways to use Spack packages include:
- environments, which you can use to bundle a number of related packages to “activate” all at once, and
- environment modules, which are commonly used on supercomputing clusters. Spack generates module files for every installation automatically, and you can customize how this is done.
spack load / unload¶
If you have shell support enabled, you can use the
spack load command to quickly get a package on your PATH.
For example, this will add the mpich package built with gcc to your path:
$ spack install mpich %[email protected] # ... wait for install ... $ spack load mpich %[email protected] $ which mpicc ~/spack/opt/linux-debian7-x86_64/[email protected]/[email protected]/bin/mpicc
These commands will add appropriate directories to your
PATH,
MANPATH,
CPATH, and
LD_LIBRARY_PATH according to the
prefix inspections defined in your
modules configuration. When you no longer want to use a package, you
can unload it in a similar way:
$ spack unload mpich %[email protected]
Ambiguous specs¶
If a spec used with load/unload is ambiguous (i.e., more than one installed package matches it), then Spack will warn you:
$ spack load libelf ==> Error: libelf matches multiple packages. Matching packages: qmm4kso [email protected]%[email protected] arch=linux-debian7-x86_64 cd2u6jt [email protected]%[email protected] arch=linux-debian7-x86_64 Use a more specific spec
You can either type the
spack load command again with a fully
qualified argument, or you can add just enough extra constraints to
identify one package. For example, above, the key differentiator is
that one
libelf is built with the Intel compiler, while the other
used
gcc. You could therefore just type:
$ spack load libelf %intel
To identify just the one built with the Intel compiler. If you want to be
very specific, you can load it by its hash. For example, to load the
first
libelf above, you would run:
$ spack load /qmm4kso
We’ll learn more about Spack’s spec syntax in the next section.
Specs & dependencies¶
We know that
spack install,
spack uninstall, and other
commands take a package name with an optional version specifier. In
Spack, that descriptor is called a spec. Spack uses specs to refer
to a particular build configuration (or configurations) of a package.
Specs are more than a package name and a version; you can use them to
specify the compiler, compiler version, architecture, compile options,
and dependency options for a build. In this section, we’ll go over
the full syntax of specs.
Here is an example of a much longer spec than we’ve seen thus far:
mpileaks @1.2:1.4 %[email protected] +debug -qt target=x86_64 ^callpath @1.1 %[email protected]
If provided to
spack install, this will install the
mpileaks
library at some version between
1.2 and
1.4 (inclusive),
built using
gcc at version 4.7.5 for a generic
x86_64 architecture,
with debug options enabled, and without Qt support. Additionally, it
says to link it with the
callpath library (which it depends on),
and to build callpath with
gcc 4.7.2. Most specs will not be as
complicated as this one, but this is a good example of what is
possible with specs.
More formally, a spec consists of the following pieces:
- Package name identifier (mpileaks above)
- @ Optional version specifier (@1.2:1.4)
- % Optional compiler specifier, with an optional compiler version (gcc or [email protected])
- + or - or ~ Optional variant specifiers (+debug, -qt, or ~qt) for boolean variants
- name=<value> Optional variant specifiers that are not restricted to boolean variants
- name=<value> Optional compiler flag specifiers. Valid flag names are cflags, cxxflags, fflags, cppflags, ldflags, and ldlibs.
- target=<value> os=<value> Optional architecture specifier (target=haswell os=CNL10)
- ^ Dependency specs (^[email protected])
There are two things to notice here. The first is that specs are
recursively defined. That is, each dependency after
^ is a spec
itself. The second is that everything is optional except for the
initial package name identifier. Users can be as vague or as specific
as they want about the details of building packages, and this makes
spack good for beginners and experts alike.
To really understand what’s going on above, we need to think about how
software is structured. An executable or a library (these are
generally the artifacts produced by building software) depends on
other libraries in order to run. We can represent the relationship
between a package and its dependencies as a graph. Here is the full
dependency graph for
mpileaks:
Each box above is a package and each arrow represents a dependency on
some other package. For example, we say that the package
mpileaks
depends on
callpath and
mpich.
mpileaks also depends
indirectly on
dyninst,
libdwarf, and
libelf, in that
these libraries are dependencies of
callpath. To install
mpileaks, Spack has to build all of these packages. Dependency
graphs in Spack have to be acyclic, and the depends on relationship
is directional, so this is a directed, acyclic graph or DAG.
The package name identifier in the spec is the root of some dependency
DAG, and the DAG itself is implicit. Spack knows the precise
dependencies among packages, but users do not need to know the full
DAG structure. Each
^ in the full spec refers to some dependency
of the root package. Spack will raise an error if you supply a name
after
^ that the root does not actually depend on (e.g.
mpileaks
^[email protected]).
Spack further simplifies things by only allowing one configuration of
each package within any single build. Above, both
mpileaks and
callpath depend on
mpich, but
mpich appears only once in
the DAG. You cannot build an
mpileaks version that depends on one
version of
mpich and on a
callpath version that depends on
some other version of
mpich. In general, such a configuration
would likely behave unexpectedly at runtime, and Spack enforces this
to ensure a consistent runtime environment.
The point of specs is to abstract this full DAG from Spack users. If
a user does not care about the DAG at all, she can refer to mpileaks
by simply writing
mpileaks. If she knows that
mpileaks
indirectly uses
dyninst and she wants a particular version of
dyninst, then she can refer to
mpileaks ^[email protected]. Spack
will fill in the rest when it parses the spec; the user only needs to
know package names and minimal details about their relationship.
When spack prints out specs, it sorts package names alphabetically to normalize the way they are displayed, but users do not need to worry about this when they write specs. The only restriction on the order of dependencies within a spec is that they appear after the root package. For example, these two specs represent exactly the same configuration:
mpileaks ^[email protected] ^[email protected] mpileaks ^[email protected] ^[email protected]
You can put all the same modifiers on dependency specs that you would
put on the root spec. That is, you can specify their versions,
compilers, variants, and architectures just like any other spec.
Specifiers are associated with the nearest package name to their left.
For example, above,
@1.1 and %[email protected] associate with the
callpath package, while
@1.2:1.4,
%[email protected],
+debug,
-qt, and
target=haswell os=CNL10 all associate with the
mpileaks package.
In the diagram above,
mpileaks depends on
mpich with an
unspecified version, but packages can depend on other packages with
constraints by adding more specifiers. For example,
mpileaks
could depend on
[email protected]: if it can only build with version
1.2 or higher of
mpich.
Below are more details about the specifiers that you can add to specs.
Version specifier¶
A version specifier comes somewhere after a package name and starts
with
@. It can be a single version, e.g.
@1.0,
@3, or
@1.2a7. Or, it can be a range of versions, such as
@1.0:1.5
(all versions between
1.0 and
1.5, inclusive). Version ranges
can be open, e.g.
:3 means any version up to and including
3.
This would include
3.4 and
3.4.2.
4.2: means any version
above and including
4.2. Finally, a version specifier can be a
set of arbitrary versions, such as
@1.0,1.5,1.7 (
1.0,
1.5,
or
1.7). When you supply such a specifier to
spack install,
it constrains the set of versions that Spack will install.
If the version spec is not provided, then Spack will choose one according to policies set for the particular spack installation. If the spec is ambiguous, i.e. it could match multiple versions, Spack will choose a version within the spec’s constraints according to policies set for the particular Spack installation.
Details about how versions are compared and how Spack determines if one version is less than another are discussed in the developer guide.
Compiler specifier¶
A compiler specifier comes somewhere after a package name and starts
with
%. It tells Spack what compiler(s) a particular package
should be built with. After the
% should come the name of some
registered Spack compiler. This might include
gcc, or
intel,
but the specific compilers available depend on the site. You can run
spack compilers to get a list; more on this below.
The compiler spec can be followed by an optional compiler version. A compiler version specifier looks exactly like a package version specifier. Version specifiers will associate with the nearest package name or compiler specifier to their left in the spec.
If the compiler spec is omitted, Spack will choose a default compiler based on site policies.
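For example, a sketch of requesting a specific compiler and version (the version is illustrative):
$ spack install zlib %[email protected]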
Variants¶
Variants are named options associated with a particular package. They are
optional, as each package must provide default values for each variant it
makes available. Variants can be specified using
a flexible parameter syntax
name=<value>. For example,
spack install libelf debug=True will install libelf built with debug
flags. The names of particular variants available for a package depend on
what was provided by the package author.
spack info <package> will
provide information on what build variants are available.
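For example (package name illustrative), the Variants section of the output lists them:
$ spack info libelf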
For compatibility with earlier versions, variants which happen to be
boolean in nature can be specified by a syntax that represents turning
options on and off. For example, in the previous spec we could have
supplied
libelf +debug with the same effect of enabling the debug
compile time option for the libelf package.
Depending on the package a variant may have any default value. For
libelf here,
debug is
False by default, and we turned it on
with
debug=True or
+debug. If a variant is
True by default
you can turn it off by either adding
-name or
~name to the spec.
There are two syntaxes here because, depending on context,
~ and
- may mean different things. In most shells, the following will
result in the shell performing home directory substitution:
mpileaks ~debug    # shell may try to substitute this!
mpileaks~debug     # use this instead
If there is a user called
debug, the
~ will be incorrectly
expanded. In this situation, you would want to write
libelf
-debug. However,
- can be ambiguous when included after a
package name without spaces:
mpileaks-debug     # wrong!
mpileaks -debug    # right
Spack allows the
- character to be part of package names, so the
above will be interpreted as a request for the
mpileaks-debug
package, not a request for
mpileaks built without
debug
options. In this scenario, you should write
mpileaks~debug to
avoid ambiguity.
When Spack normalizes specs, it prints boolean variants with no spaces,
using the backwards-compatibility syntax, and uses only ~
for disabled boolean variants. The - and spaces on the command
line are provided for convenience and legibility.
Compiler Flags¶
Compiler flags are specified using the same syntax as non-boolean variants,
but fulfill a different purpose. While the function of a variant is set by
the package, compiler flags are used by the compiler wrappers to inject
flags into the compile line of the build. Additionally, compiler flags are
inherited by dependencies.
spack install libdwarf cppflags="-g" will
install both libdwarf and libelf with the
-g flag injected into their
compile line.
Notice that the value of the compiler flags must be quoted if it
contains any spaces. Any of
cppflags=-O3,
cppflags="-O3",
cppflags='-O3', and
cppflags="-O3 -fPIC" are acceptable, but
cppflags=-O3 -fPIC is not. Additionally, if the value of the
compiler flags is not the last thing on the line, it must be followed
by a space. The command
spack install libelf cppflags="-O3"%intel
will be interpreted as an attempt to set
cppflags="-O3%intel".
The six compiler flags are injected in the order of implicit make commands
in GNU Autotools. If all flags are set, the order is
$cppflags $cflags|$cxxflags $ldflags <command> $ldlibs for C and C++ and
$fflags $cppflags $ldflags <command> $ldlibs for Fortran.
Compiler environment variables and additional RPATHs¶
Sometimes compilers require setting special environment variables to
operate correctly. Spack handles these cases by allowing custom environment
modifications in the
environment attribute of the compiler configuration
section. See also the Environment Modifications section
of the configuration files docs for more information.
It is also possible to specify additional
RPATHs that the
compiler will add to all executables generated by that compiler. This is
useful for forcing certain compilers to RPATH their own runtime libraries, so
that executables will run without the need to set
LD_LIBRARY_PATH.
compilers:
- compiler:
    spec: [email protected]
    paths:
      cc: /opt/gcc/bin/gcc
      c++: /opt/gcc/bin/g++
      f77: /opt/gcc/bin/gfortran
      fc: /opt/gcc/bin/gfortran
    environment:
      unset:
      - BAD_VARIABLE
      set:
        GOOD_VARIABLE_NUM: 1
        GOOD_VARIABLE_STR: good
      prepend_path:
        PATH: /path/to/binutils
      append_path:
        LD_LIBRARY_PATH: /opt/gcc/lib
    extra_rpaths:
    - /path/to/some/compiler/runtime/directory
    - /path/to/some/other/compiler/runtime/directory
Architecture specifiers¶
Each node in the dependency graph of a spec has an architecture attribute.
This attribute is a triplet of platform, operating system and processor.
You can specify the elements either separately, by using
the reserved keywords
platform,
os and
target:
$ spack install libelf platform=linux $ spack install libelf os=ubuntu18.04 $ spack install libelf target=broadwell
or together by using the reserved keyword
arch:
$ spack install libelf arch=cray-CNL10-haswell
Normally users don’t have to bother specifying the architecture if they are installing software for their current host, as in that case the values will be detected automatically. If you need fine-grained control over which packages use which targets (or over all packages’ default target), see Concretization Preferences.
Cray machines
The situation is a little bit different for Cray machines and a detailed explanation on how the architecture can be set on them can be found at Spack on Cray
Support for specific microarchitectures¶
Spack knows how to detect and optimize for many specific microarchitectures
(including recent Intel, AMD and IBM chips) and encodes this information in
the
target portion of the architecture specification. A complete list of
the microarchitectures known to Spack can be obtained in the following way:
$ spack arch --known-targets Generic architectures (families) aarch64 arm ppc ppc64 ppc64le ppcle sparc sparc64 x86 x86_64 GenuineIntel - x86 i686 pentium2 pentium3 pentium4 prescott GenuineIntel - x86_64 nocona nehalem sandybridge haswell skylake skylake_avx512 cascadelake core2 westmere ivybridge broadwell mic_knl cannonlake icelake AuthenticAMD - x86_64 k10 bulldozer zen piledriver zen2 steamroller excavator IBM - ppc64 power7 power8 power9 IBM - ppc64le power8le power9le Cavium - aarch64 thunderx2 Fujitsu - aarch64 a64fx ARM - aarch64 graviton graviton2
When a spec is installed Spack matches the compiler being used with the microarchitecture being targeted to inject appropriate optimization flags at compile time. Giving a command such as the following:
$ spack install zlib%[email protected] target=icelake
will produce compilation lines similar to:
$ /usr/bin/gcc-9 -march=icelake-client -mtune=icelake-client -c ztest10532.c $ /usr/bin/gcc-9 -march=icelake-client -mtune=icelake-client -c -fPIC -O2 ztest10532. ...
where the flags
-march=icelake-client -mtune=icelake-client are injected
by Spack based on the requested target and compiler.
If Spack knows that the requested compiler can’t optimize for the current target or can’t build binaries for that target at all, it will exit with a meaningful error message:
$ spack install zlib%[email protected] target=icelake ==> Error: cannot produce optimized binary for micro-architecture "icelake" with [email protected] [supported compiler versions are 8:]
When instead an old compiler is selected on a recent enough microarchitecture but there is
no explicit
target specification, Spack will optimize for the best match it can find instead
of failing:
$ spack arch linux-ubuntu18.04-broadwell $ spack spec zlib%[email protected] Input spec -------------------------------- zlib%[email protected] Concretized -------------------------------- [email protected]%[email protected]+optimize+pic+shared arch=linux-ubuntu18.04-haswell $ spack spec zlib%[email protected] Input spec -------------------------------- zlib%[email protected] Concretized -------------------------------- [email protected]%[email protected]+optimize+pic+shared arch=linux-ubuntu18.04-broadwell
In the snippet above, for instance, the microarchitecture was demoted to
haswell when
compiling with
[email protected] since support to optimize for
broadwell starts from
[email protected]:.
Finally, if Spack has no information to match compiler and target, it will proceed with the installation but avoid injecting any microarchitecture specific flags.
Warning
Currently, Spack doesn’t print any warning to the user if it has no information on which optimization flags should be used for a given compiler. This behavior might change in the future.
Virtual dependencies¶
The dependency graph for
mpileaks we saw above wasn’t quite
accurate.
mpileaks uses MPI, which is an interface that has many
different implementations. Above, we showed
mpileaks and
callpath depending on
mpich, which is one particular
implementation of MPI. However, we could build either with another
implementation, such as
openmpi or
mvapich.
Spack represents interfaces like this using virtual dependencies.
The real dependency DAG for
mpileaks looks like this:
Notice that
mpich has now been replaced with
mpi. There is no
real MPI package, but some packages provide the MPI interface, and
these packages can be substituted in for
mpi when
mpileaks is
built.
You can see what virtual packages a particular package provides by getting info on it:
$ spack info mpich
Spack is unique in that its virtual packages can be versioned, just
like regular packages. A particular version of a package may provide
a particular version of a virtual package, and we can see above that
mpich versions
1 and above provide all
mpi interface
versions up to
1, and
mpich versions
3 and above provide
mpi versions up to
3. A package can depend on a particular
version of a virtual package, e.g. if an application needs MPI-2
functions, it can depend on
mpi@2: to indicate that it needs some
implementation that provides MPI-2 functions.
Constraining virtual packages¶
When installing a package that depends on a virtual package, you can opt to specify the particular provider you want to use, or you can let Spack pick. For example, if you just type this:
$ spack install mpileaks
Then spack will pick a provider for you according to site policies.
If you really want a particular implementation, say
mpich, then you could run this instead:
$ spack install mpileaks ^mpich
This forces spack to use some version of
mpich for its
implementation. As always, you can be even more specific and require
a particular
mpich version:
$ spack install mpileaks ^mpich@3
The
mpileaks package in particular only needs MPI-1 commands, so
any MPI implementation will do. If another package depends on
mpi@2 and you try to give it an insufficient MPI implementation
(e.g., one that provides only
mpi@:1), then Spack will raise an
error. Likewise, if you try to plug in some package that doesn’t
provide MPI, Spack will raise an error.
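For instance, a sketch of selecting one of the other providers mentioned above:
$ spack install mpileaks ^openmpi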
Specifying Specs by Hash¶
Complicated specs can become cumbersome to enter on the command line,
especially when many of the qualifications are necessary to distinguish
between similar installs. To avoid this, when referencing an existing spec,
Spack allows you to reference specs by their hash. We previously
discussed the spec hash that Spack computes. In place of a spec in any
command, substitute
/<hash> where
<hash> is any amount from
the beginning of a spec hash.
For example, let's say that you accidentally installed two different
mvapich2 installations. If you want to uninstall one of them but don’t
know what the difference is, you can run:
$ spack find --long mvapich2 ==> 2 installed packages. -- linux-centos7-x86_64 / [email protected] ---------- qmt35td [email protected]%gcc er3die3 [email protected]%gcc
You can then uninstall the latter installation using:
$ spack uninstall /er3die3
Or, if you want to build with a specific installation as a dependency, you can use:
$ spack install trilinos ^/er3die3
If the given spec hash is sufficiently long as to be unique, Spack will replace the reference with the spec to which it refers. Otherwise, it will prompt for a more qualified hash.
Note that this will not work to reinstall a dependency uninstalled by
spack uninstall --force.
spack providers¶
You can see what packages provide a particular virtual package using
spack providers. If you wanted to see what packages provide
mpi, you would just run:
$ spack providers mpi cray-mpich intel-oneapi-mpi mpich@1: mpt mvapich2 mvapich2-gdr openmpi [email protected]: fujitsu-mpi intel-parallel-studio mpich@3: mpt@1: [email protected]: mvapich2x [email protected] spectrum-mpi intel-mpi mpich mpilander mpt@3: [email protected]: nvhpc [email protected]:
And if you only wanted to see packages that provide MPI-2, you would add a version specifier to the spec:
$ spack providers mpi@2 intel-mpi mpich mpt [email protected]: mvapich2x [email protected] spectrum-mpi intel-oneapi-mpi mpich@3: mpt@3: [email protected]: nvhpc [email protected]: intel-parallel-studio mpilander mvapich2 mvapich2-gdr openmpi [email protected]:
Notice that the package versions that provide insufficient MPI versions are now filtered out.
Deprecating insecure packages¶
spack deprecate allows for the removal of insecure packages with
minimal impact to their dependents.
Warning
The
spack deprecate command is designed for use only in
extraordinary circumstances. This is a VERY big hammer to be used
with care.
The
spack deprecate command will remove one package and replace it
with another by replacing the deprecated package’s prefix with a link
to the deprecator package’s prefix.
Warning
The
spack deprecate command makes no promises about binary
compatibility. It is up to the user to ensure the deprecator is
suitable for the deprecated package.
Spack tracks concrete deprecated specs and ensures that no future packages concretize to a deprecated spec.
The first spec given to the
spack deprecate command is the package
to deprecate. It is an abstract spec that must describe a single
installed package. The second spec argument is the deprecator
spec. By default it must be an abstract spec that describes a single
installed package, but with the
-i/--install-deprecator it can be
any abstract spec that Spack will install and then use as the
deprecator. The
-I/--no-install-deprecator option will ensure
the default behavior.
By default,
spack deprecate will deprecate all dependencies of the
deprecated spec, replacing each by the dependency of the same name in
the deprecator spec. The
-d/--dependencies option will ensure the
default, while the
-D/--no-dependencies option will deprecate only
the root of the deprecate spec in favor of the root of the deprecator
spec.
spack deprecate can use symbolic links or hard links. The default
behavior is symbolic links, but the
-l/--link-type flag can take
options
hard or
soft.
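As a sketch only (the OpenSSL versions are illustrative, not a recommendation), deprecating an installed release in favor of a patched one that Spack installs on demand might look like:
$ spack deprecate -i openssl@1.1.1k openssl@1.1.1l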
Verifying installations¶
The
spack verify command can be used to verify the validity of
Spack-installed packages any time after installation.
At installation time, Spack creates a manifest of every file in the installation prefix. For links, Spack tracks the mode, ownership, and destination. For directories, Spack tracks the mode and ownership. For files, Spack tracks the mode, ownership, modification time, hash, and size. The spack verify command will check, for every file in each package, whether any of those attributes have changed. It will also check for newly added files or deleted files from the installation prefix. Spack can either check all installed packages using the -a,--all option, or accept specs listed on the command line to verify.
The
spack verify command can also verify for individual files that
they haven’t been altered since installation time. If the given file
is not in a Spack installation prefix, Spack will report that it is
not owned by any package. To check individual files instead of specs,
use the
-f,--files option.
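For example, a quick sketch of both modes (the file path is illustrative):
$ spack verify -a
$ spack verify -f /path/to/an/installed/file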
Spack installation manifests are part of the tarball signed by Spack for binary package distribution. When installed from a binary package, Spack uses the packaged installation manifest instead of creating one at install time.
The
spack verify command also accepts the
-l,--local option to
check only local packages (as opposed to those used transparently from
upstream spack instances) and the
-j,--json option to output
machine-readable json data for any errors.
Extensions & Python support¶
Spack’s installation model assumes that each package will live in its
own install prefix. However, certain packages are typically installed
within the directory hierarchy of other packages. For example,
Python packages are typically installed in the
$prefix/lib/python-2.7/site-packages directory.
Spack has support for this type of installation as well. In Spack, a package that can live inside the prefix of another package is called an extension. Suppose you have Python installed like so:
$ spack find python ==> 1 installed packages. -- linux-debian7-x86_64 / [email protected] -------------------------------- [email protected]
spack extensions¶
You can find extensions for your Python installation like this:
$ spack extensions python
==> None activated.
The extensions are a subset of what’s returned by
spack list, and
they are packages like any other. They are installed into their own
prefixes, and you can see this with
spack find --paths:
$ spack find --paths py-numpy ==> 1 installed packages. -- linux-debian7-x86_64 / [email protected] -------------------------------- [email protected] ~/spack/opt/linux-debian7-x86_64/[email protected]/[email protected]
However, even though this package is installed, you cannot use it
directly when you run
python:
$ spack load python $ python Python 2.7.8 (default, Feb 17 2015, 01:35:25) [GCC 4.4.7 20120313 (Red Hat 4.4.7-11)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named numpy >>>
Using Extensions¶
There are four ways to get
numpy working in Python. The first is
to use Shell support. You can simply
load the extension,
and it will be added to the
PYTHONPATH in your current shell:
$ spack load python $ spack load py-numpy
Now
import numpy will succeed for as long as you keep your current
session open.
Instead of using Spack’s environment modification capabilities through
the
spack load command, you can load numpy through your
environment modules (using
environment-modules or
lmod). This
will also add the extension to the
PYTHONPATH in your current
shell.
$ module load <name of numpy module>
If you do not know the name of the specific numpy module you wish to
load, you can use the
spack module tcl|lmod loads command to get
the name of the module from the Spack spec.
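For example, a sketch for the tcl module system (the spec is illustrative):
$ spack module tcl loads py-numpy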
Activating Extensions in a View¶
Another way to use extensions is to create a view, which merges the
python installation along with the extensions into a single prefix.
See Filesystem Views for a more in-depth description of views and
spack view for usage of the
spack view command.
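A minimal sketch (the view directory and specs are illustrative):
$ spack view symlink ~/python-view python py-numpy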
Activating Extensions Globally¶
As an alternative to creating a merged prefix with Python and its extensions, and prior to support for views, Spack has provided a means to install the extension into the Spack installation prefix for the extendee. This has typically been useful since extendable packages typically search their own installation path for addons by default.
Global activations are performed with the
spack activate command:
spack activate¶
$ spack activate py-numpy ==> Activated extension [email protected]%[email protected] arch=linux-debian7-x86_64-3c74eb69 for [email protected]%[email protected]. ==> Activated extension [email protected]%[email protected] arch=linux-debian7-x86_64-5f70f816 for [email protected]%[email protected]. ==> Activated extension [email protected]%[email protected] arch=linux-debian7-x86_64-66733244 for [email protected]%[email protected].
Several things have happened here. The user requested that
py-numpy be activated in the
python installation it was built
with. Spack knows that
py-numpy depends on
py-nose and
py-setuptools, so it activated those packages first. Finally,
once all dependencies were activated in the
python installation,
py-numpy was activated as well.
If we run
spack extensions again, we now see the three new
packages listed as activated:
$ spack extensions python
==> 3 currently activated:
-- linux-debian7-x86_64 / [email protected] --------------------------------
[email protected] [email protected] [email protected]
Now, when a user runs python,
numpy will be available for import
without the user having to explicitly load it.
[email protected] now
acts like a system Python installation with
numpy installed inside
of it.
Spack accomplishes this by symbolically linking the entire prefix of
the
py-numpy package into the prefix of the
python package. To the
python interpreter, it looks like
numpy is installed in the
site-packages directory.
The only limitation of global activation is that you can only have a single version of an extension activated at a time. This is because multiple versions of the same extension would conflict if symbolically linked into the same prefix. Users who want a different version of a package can still get it by using environment modules or views, but they will have to explicitly load their preferred version.
spack activate --force¶
If, for some reason, you want to activate a package without its
dependencies, you can use
spack activate --force:
$ spack activate --force py-numpy ==> Activated extension [email protected]%[email protected] arch=linux-debian7-x86_64-66733244 for [email protected]%[email protected].
spack deactivate¶
We’ve seen how activating an extension can be used to set up a default
version of a Python module. Obviously, you may want to change that at
some point.
spack deactivate is the command for this. There are
several variants:
spack deactivate <extension> will deactivate a single extension. If another activated extension depends on this one, Spack will warn you and exit with an error.
spack deactivate --force <extension> deactivates an extension regardless of packages that depend on it.
spack deactivate --all <extension> deactivates an extension and all of its dependencies. Use --force to disregard dependents.
spack deactivate --all <extendee> deactivates all activated extensions of a package. For example, to deactivate all python extensions, use:
$ spack deactivate --all python
Filesystem requirements¶
By default, Spack needs to be run from a filesystem that supports
flock locking semantics. Nearly all local filesystems and recent
versions of NFS support this, but parallel filesystems or NFS volumes may
be configured without
flock support enabled. You can determine how
your filesystems are mounted with
mount. The output for a Lustre
filesystem might look like this:
$ mount | grep lscratch
mds1-lnet0@o2ib100:/lsd on /p/lscratchd type lustre (rw,nosuid,lazystatfs,flock)
mds2-lnet0@o2ib100:/lse on /p/lscratche type lustre (rw,nosuid,lazystatfs,flock)
Note the
flock option on both Lustre mounts.
If you do not see this or a similar option for your filesystem, you have
a few options. First, you can move your Spack installation to a
filesystem that supports locking. Second, you could ask your system
administrator to enable
flock for your filesystem.
If none of those work, you can disable locking in one of two ways:
- Run Spack with the -L or --disable-locks option to disable locks on a call-by-call basis.
- Edit config.yaml and set the locks option to false to always disable locking (see the sketch below).
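A minimal sketch of the corresponding config.yaml entry:
config:
  locks: false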
Warning
If you disable locking, concurrent instances of Spack will have no way
to avoid stepping on each other. You must ensure that there is only
one instance of Spack running at a time. Otherwise, Spack may end
up with a corrupted database file, or you may not be able to see all
installed packages in commands like
spack find.
If you are unfortunate enough to run into this situation, you may be
able to fix it by running
spack reindex.
This issue typically manifests with the error below:
$ ./spack find Traceback (most recent call last): File "./spack", line 176, in <module> main() File "./spack", line 154,' in main return_val = command(parser, args) File "./spack/lib/spack/spack/cmd/find.py", line 170, in find specs = set(spack.installed_db.query(\**q_args)) File "./spack/lib/spack/spack/database.py", line 551, in query with self.read_transaction(): File "./spack/lib/spack/spack/database.py", line 598, in __enter__ if self._enter() and self._acquire_fn: File "./spack/lib/spack/spack/database.py", line 608, in _enter return self._db.lock.acquire_read(self._timeout) File "./spack/lib/spack/llnl/util/lock.py", line 103, in acquire_read self._lock(fcntl.LOCK_SH, timeout) # can raise LockError. File "./spack/lib/spack/llnl/util/lock.py", line 64, in _lock fcntl.lockf(self._fd, op | fcntl.LOCK_NB) IOError: [Errno 38] Function not implemented
A nicer error message is TBD in future versions of Spack.
Getting Help¶
spack help¶
If you don’t find what you need here, the help subcommand will
print out a list of all of spack’s options and subcommands:
$ spack help usage: spack [-hkV] [--color {always,never,auto}] COMMAND ... A flexible package manager that supports multiple versions, configurations, platforms, and compilers. These are common spack commands: query packages: list list and search available packages info get detailed information on a particular package find list and search installed packages build packages: install build and install packages uninstall remove installed packages gc remove specs that are now no longer needed spec show what would be installed, given a spec configuration: external manage external packages in Spack configuration environments: env manage virtual environments view project packages to a compact naming scheme on the filesystem. create packages: create create a new package file edit open package files in $EDITOR system: arch print architecture information about this machine compilers list available compilers user environment: load add package to the user environment module manipulate module files unload remove package from the user environment optional arguments: -h, --help show this help message and exit -k, --insecure do not check ssl certificates when downloading -V, --version show version number and exit --color {always,never,auto} when to colorize output (default: auto) more help: spack help --all list all commands and options spack help <command> help on a specific command spack help --spec help on the package specification syntax spack docs open in a browser
Adding an argument, e.g.
spack help <subcommand>, will print out
usage information for a particular subcommand:
$ spack help install usage: spack install [-hnvy] [--only {package,dependencies}] [-u UNTIL] [-j JOBS] [--overwrite] [--fail-fast] [--keep-prefix] [--keep-stage] [--dont-restage] [--use-cache | --no-cache | --cache-only] [--include-build-deps] [--no-check-signature] [--require-full-hash-match] [--show-log-on-error] [--source] [--deprecated] [--fake] [--only-concrete] [-f SPEC_YAML_FILE] [--clean | --dirty] [--test {root,all} | --run-tests] [--log-format {None,junit,cdash}] [--log-file LOG_FILE] [--help-cdash] ... build and install packages positional arguments: spec package spec optional arguments: -h, --help show this help message and exit --only {package,dependencies} select the mode of installation. the default is to install the package along with all its dependencies. alternatively one can decide to install only the package or only the dependencies -u UNTIL, --until UNTIL phase to stop after when installing (default None) -j JOBS, --jobs JOBS explicitly set number of parallel jobs --overwrite reinstall an existing spec, even if it has dependents --fail-fast stop all builds if any build fails (default is best effort) --keep-prefix don't remove the install prefix if installation fails --keep-stage don't remove the build stage if installation succeeds --dont-restage if a partial install is detected, don't delete prior state --use-cache check for pre-built Spack packages in mirrors (default) --no-cache do not check for pre-built Spack packages in mirrors --cache-only only install package from binary mirrors --include-build-deps include build deps when installing from cache, which is useful for CI pipeline troubleshooting --no-check-signature do not check signatures of binary packages --require-full-hash-match when installing from binary mirrors, do not install binary package unless the full hash of the remote spec matches that of the local spec --show-log-on-error print full build log to stderr if build fails --source install source files in prefix -n, --no-checksum do not use checksums to verify downloaded files (unsafe) --deprecated fetch deprecated versions without warning -v, --verbose display verbose build output while installing --fake fake install for debug purposes. --only-concrete (with environment) only install already concretized specs -f SPEC_YAML_FILE, --file SPEC_YAML_FILE install from file. Read specs to install from .yaml files --clean unset harmful variables in the build environment (default) --dirty preserve user environment in spack's build environment (danger!) --test {root,all} If 'root' is chosen, run package tests during installation for top-level packages (but skip tests for dependencies). if 'all' is chosen, run package tests during installation for all packages. If neither are chosen, don't run tests for any packages. --run-tests run package tests during installation (same as --test=all) --log-format {None,junit,cdash} format to be used for log files --log-file LOG_FILE filename for the log file. if not passed a default will be used --help-cdash Show usage instructions for CDash reporting -y, --yes-to-all assume "yes" is the answer to every confirmation request
Alternately, you can use
spack --help in place of
spack help, or
spack <subcommand> --help to get help on a particular subcommand. | https://spack.readthedocs.io/en/latest/basic_usage.html | 2021-02-24T22:54:54 | CC-MAIN-2021-10 | 1614178349708.2 | [] | spack.readthedocs.io |
2.2.1.1 Message Framing
The Peer-to-Peer Graphing Protocol uses TCP, which is a stream-based communication mechanism. However, the protocol is message-oriented. Additionally, the size of a message can be quite large. Thus, the Peer-to-Peer Graphing Protocol defines a framing mechanism to break up messages and to find the boundaries between frames.
Each message is broken into one or more frames. Each frame is described by a frame size and followed by the frame body.
Each message defined by the Peer-to-Peer Graphing Protocol contains the message size within the message, allowing the Peer-to-Peer Graphing Protocol to detect when a complete message has been received.
The framing mechanism can be depicted by the following framing structure.
Frame Size (2 bytes): The total number of bytes in the current frame. This value MUST be at least 1 and the maximum MUST be less than or equal to the size specified in the Max Frame Size element described in section 3.1.1.
Frame Payload (variable): The frame payload. | https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-ppgrh/f6dfd995-63bb-4346-b9d8-38dd5ab12b1f | 2021-02-25T01:04:40 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.microsoft.com |
2.2.6.2.10 x-wms-event-subscription
This directive contains a comma-separated list of event type names that the server accepts for the current URL. The list is enclosed in quotation marks. The SendEvent (section 2.2.7.11) request is used to send the remote events to the server.<5>
The syntax of the directive is as follows.
log-event = ( "remote-open" / "remote-close" / "remote-log" )
Eventsub  = "x-wms-event-subscription=" %x22 log-event *2( "," log-event ) %x22
Add a User
The people in your team need an admin or a moderator account before they can sign into SearchUnify. The easiest way to add a new user account is to use the Invite New User feature.
Adding a User
- Go to Manage Users and select the Admin Users tab.
- Click Invite New User.
- A window with two fields will pop-up:
- User Email. Enter the email of the future user.
- User Role. Select Admin or Moderator from the dropdown.
- A new dropdown, Tab Access, will appear if you have selected Moderator. Use it to define the tabs a Moderator is allowed to access. You can assign a Moderator all tabs other than Manage Users.
- Click Send.
On clicking Send, an email with a registration link is sent to the user. The link is valid for 24 hours. In addition, the Added Users List will show a new row and the license count will increment by one.
Perform the environmental stabilization with the assistance of the Teradata team.
If temperature and humidity changes have been extreme during transit, cabinets and components may develop condensation. Before installing or powering on the system, make sure the cabinets and components are free of condensation.
The installation environment must meet the requirements specified in Environmental Requirements.
- After moving the cabinets, use the guidelines in the following table to determine the stabilization time.
- After the stabilization period, remove and discard the desiccant and any remaining plastic bags. Check the inside and top of each cabinet for desiccant.
- Before and during installation, inspect the surfaces of the cabinets and components for condensation. If necessary, continue allowing two-hour stabilization periods until there are no signs of condensation. | https://docs.teradata.com/r/ahO6MgGb70I5JiiWnr1zDA/F0cNv_Hlh7tS_oPCKVJRvw | 2021-02-25T00:26:32 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.teradata.com |
Sending mail from Betty Blocks apps is done through Mandrill (mandrillapp.com). Mandrill requires that the sender-domain is verified before it can be used to send mail. If your domain isn't verified yet, you'll receive
error: unsigned when executing your mail event. Here's how you can tackle this.
The following steps can be followed to get verified:
- In the DNS settings of your domain, add a SPF record (TXT) to the main domain with the value:
v=spf1 include:spf.mandrillapp.com ?all
If a SPF-record already exists, add:
include:spf.mandrillapp.com
2. In the DNS settings of your domain, add a TXT record to subdomain mandrill._domainkey with the DKIM value provided by Mandrill.
3. Contact Betty Blocks and give them an email address on which a verification email can be received. This has to be an email address from the domain that you want to send mail from.
4. Send the verification email to [email protected]
After the above steps and verification of the domain you can send mail from the domain with the betty app.
Using your own SMTP server to send mails
If the method above still doesn't suffice, and you want to send mails using your own SMTP server, that's possible too! Although not available through the interface, we can do this for you. Contact support to have it set up. | https://docs.bettyblocks.com/en/articles/1009246-howto-send-emails-from-my-own-domain | 2021-02-24T23:29:53 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.bettyblocks.com |
ListFormat
Synopsis
[Miscellaneous]
ListFormat=n
n is an integer in the range 0—3. The default value is 0.
Description
ListFormat determines which values should be compressed within a list. The possible options for ListFormat are:
0 — no compression in a list
1 — $DOUBLE (IEEE) values in a list are compressed
2 — Unicode strings in a list are compressed
3 — Both $DOUBLE values and Unicode strings in a list are compressed
If using lists with external clients (Java, C#, etc), ensure that the external client supports the compressed list format.
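For example, a sketch of a CPF entry that enables both compressions:
[Miscellaneous]
ListFormat=3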
Changing This Parameter
On the Compatibility page of the Management Portal (System Administration > Configuration > Additional Settings > Compatibility), in the ListFormat row, click Edit. Enter the desired value for this setting.
Instead of using the Management Portal, you can change ListFormat in the Config.Miscellanous class (as described in the class reference) or by editing the CPF in a text editor (as described in the Editing the Active CPF section of the “Introduction to the Configuration Parameter File” chapter in this book). | https://docs.intersystems.com/irisforhealthlatest/csp/docbook/DocBook.UI.Page.cls?KEY=RACS_LISTFORMAT | 2021-02-24T23:49:41 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.intersystems.com |
When recovering a Storage Node with failed storage volumes, you must identify and unmount the failed volumes. You must verify that only the failed storage volumes are reformatted as part of the recovery procedure.
You must be signed in to the Grid Manager using a supported browser.
You should recover failed storage volumes as soon as possible.
The first step of the recovery process is to detect volumes that have become detached, need to be unmounted, or have I/O errors. If failed volumes are still attached but have a randomly corrupted file system, the system might not detect any corruption in unused or unallocated parts of the disk. While you should run file system checks for consistency on a normal basis, only perform this procedure for detecting failed volumes on a large file system when necessary, such as in cases of power loss.
To correctly recover failed storage volumes, you need to know both the device names of the failed storage volumes and their volume IDs. Grid Manager.
In the following example, device /dev/sdc:
Object stores are numbered in hex notation, from 0000 to 000F. In the example, the object store with an ID of 0000 corresponds to /var/local/rangedb/0 with device name sdc and a size of 107 GB.
If you cannot determine the volume number and device name of failed storage volumes, log in to an equivalent Storage Node and determine the mapping of volumes to device names on that server.
Storage Nodes are usually added in pairs, with identical hardware and storage configurations. Examine the /etc/fstab file on the equivalent Storage Node to identify the device names that correspond to each storage volume. Identify and record the device name for each failed storage volume.
The object_store_ID is the ID of the failed storage volume. For example, specify 0 in the command for an object store with ID 0000. | https://docs.netapp.com/sgws-112/topic/com.netapp.doc.sg-maint/GUID-D2B0E6F0-91E0-484C-9601-B0C55C7208D1.html | 2021-02-25T00:45:18 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.netapp.com |
Compiler implementation of the D programming language.
The Unicode code space is the range of code points [0x000000,0x10FFFF] except the UTF-16 surrogate pairs in the range [0xD800,0xDFFF]
Return !=0 if unicode alpha. Use table from C99 Appendix D.
Returns the code length of c in code units.
Returns the code length of c in code units for the encoding. sz is the encoding: 1 = utf8, 2 = utf16, 4 = utf32.
Decode a UTF-8 sequence as a single UTF-32 code point.
Decode a UTF-16 sequence as a single UTF-32 code point.
© 1999–2019 The D Language Foundation
Licensed under the Boost License 1.0. | https://docs.w3cub.com/d/dmd_utf | 2021-02-24T23:39:33 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.w3cub.com |
Used to notify when a value is added to a property. More...
Constructs a new QPropertyValueRemovedChange with subjectId.
Returns the value removed from the property.
See also setRemovedValue().
Sets the value removed from the property to value.
See also removedValue().
A shared pointer for QPropertyValueRemovedChange.
© The Qt Company Ltd
Licensed under the GNU Free Documentation License, Version 1.3. | https://docs.w3cub.com/qt~5.15/qt3dcore-qpropertyvalueremovedchange | 2021-02-24T23:32:30 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.w3cub.com |
I Publishers
From GCD
- Ianus (1994) - 5000 Iberville St, Studio #332, Montreal, Quebec, H2H 2S6 CANADA.
- I Box Publishing (1994) - P.O. Box 6671, stn A, Toronto, Ontario, M5W 1X5 CANADA.
- IC Graphics - Publisher of Blonde Avenger Comics and related material.
- Illegal Batman (1994) - 43 Finsen Road, London SE5 9AW ENGLAND.
- Illusion Studios - Small publisher in the late 1990's.
- Illustration Studio - Steve Woron's company. CBG#822 pg 55 ad.
- Image Comics - A collective of studios that publish under one imprint.
- Imperial Comics - Late 1980's b/w publisher.
- Independent-Comics.Com - Publisher of Deposit Man.
- Independent Comics Group - Mid 1980's b/w publisher.
- Inferno Studios - Publisher of Zomboy.
- Infinity (1983) - Publisher of Escape To the Stars. (Ad in Glenwood Nov. 1983 catalog.)
- Ink Publishing - Late 1980s publisher.
- Innovation Publishing - Late 1980's/early 1990's b/w and color book publisher.
- Insight Studios
- Irjax Enterprises - An early Schuster Bros. company that published magazines about comic books.
- IW Comics - color reprint publisher of the late 1950's and early 1960's. | https://docs.comics.org/wiki/I_Publishers | 2021-02-24T23:01:36 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.comics.org |
Table of Contents
- Introduction
- Plan your system
- Install and configure SAFR
- Configure for a threat
- Learn/register a person
- Configure person as threat
- Adjust SAFR recognition preferences
- Edit the SAFR Actions config file
- Configure the outbound email server
- Test alerts
- Appendix
Introduction
This tutorial describes how to use SAFR to detect and create an alert if a person who is recorded as a known threat appears at an entrance monitored by video cameras. It describes how to use SAFR to perform threat detection and take action, such as alerting authorities or locking doors to prevent access.
You are the head of security for a small school district. The schools in this district are designed with two entrances in order to ensure all traffic into the school can be tightly managed. The goal is to detect and send an alert for anyone entering the courtyard and to prevent anyone who is banned from the school from entering the buildings.
In this tutorial, you will learn how to setup SAFR to perform the following useful functions:
- Train the system to recognize a specific person
- Register a person as a concern or threat
- Set up cameras to detect any registered persons under a wide range of lighting conditions
- Configure the system to send an alert should a person of concern enter the courtyard
- If cameras are located outside the entrance, configure the system to lock doors if a person of concern is detected as approaching
About cameras
When performing facial recognition, the best results are achieved when faces are clearly visible and have sufficient contrast (the visual difference between the darkest and lightest areas) and sharpness (the amount of relative focus of an image) to allow the recognition system to distinguish the facial features. This can be a challenge with bright backlighting or low light conditions. Cameras need to handle a wide range of challenging environmental conditions, such as direct sunlight during the day and poor lighting conditions at night. So, one of the most important components of a successful facial recognition system for threat detection is the camera. While SAFR works with almost any IP based or USB camera, selection of a high-quality camera is critical for threat detection. Review the Appendix for recommendations on environmental assessment and camera selection.
About SAFR applications
The SAFR Desktop and SAFR Recognition mobile app allow you to connect to a video feed and process video to perform face detection and recognition. The SAFR Desktop and SAFR Recognition mobile app only perform the face detection portion; recognition is done in the SAFR Cloud or SAFR Platform. Face detection involves locating faces within an image. Information about the face is determined, such as its orientation (the direction in which the face is pointed, often referred to in SAFR as Center Pose Quality or CPQ), the sharpness, contrast, and size. This information is used to determine if a suitable face has been found. If so, the face is submitted to SAFR Cloud or Local Server for recognition.
The SAFR Actions application is designed to allow one or more actions to be taken when a recognition event occurs. For example, SAFR Actions can be used to send an email if a stranger is observed at a particular camera. SAFR Actions can be installed on any computer and connects to the SAFR Cloud or Server to monitor for recognition events. Actions can be triggered based upon a wide range of attributes associated with each recognition event, such as person type, ID Class (threat, concern, stranger), site (building), and source (camera). Actions can include sending an email, unlocking a door, or triggering an alarm.
Plan your system
Before you get started, think about how you want to install SAFR. The following components are needed for threat detection:
- SAFR Cloud or Server can be used in the cloud or installed with the server on site; for this tutorial, the SAFR Cloud is used
- SAFR Desktop, SAFR Recognition mobile app, or Virgo for processing video feeds from cameras
- SAFR Actions for processing events
- One or more cameras
- Optionally, one or more door locks to control
The following illustration provides an example of how the components can be laid out:
The SAFR Desktop, SAFR Recognition mobile app, or Virgo and SAFR Actions can run on a single computer, but here they are shown separately. If you have many cameras, you may need to install more than one machine to process the video feeds. Typically one machine can handle up to eight (8) cameras.
In this tutorial, we assume one might want to lock doors and prevent access if a threat is detected.
System requirements
SAFR Desktop and SAFR Server require a fast computer to analyze the video feeds. SAFR Desktop is optimized to work with NVIDIA GPU cards and will support more cameras on a system with an NVIDIA GPU. See the download page for information on system requirements. When planning a system with more than ten cameras, speak to your sales associate for guidance on sizing your SAFR system.
Install and configure SAFR
The following activities need to take place:
- Install cameras
- Install and configure SAFR Edge
- Connect cameras
- Install and configure SAFR Actions
Install your cameras
Before you start, you should have your IP cameras installed and connected to your network. See the Appendix for information on camera selection and location for best performance. Later we'll describe how to get the information for how to connect SAFR to your camera.
Install and configure SAFR Edge
If you have not already done so, from SAFR download portal, download and install SAFR Edge for your operating system.
- For Windows machines, the full version is intended for machines with NVIDIA video cards. With an NVIDIA card installed, you can realize significantly improved performance.
- The SAFR Edge installer includes both SAFR Desktop and SAFR Actions apps.
After installation, do the following:
- Open SAFR Desktop from the Windows Start menu or Mac Applications folder.
- From the menu in the top-right corner of the main window, choose Secure Access or Secure Access with Smile. The modes associated with these menu items have default actions triggered when a user is recognized or smiles.
- In Windows, open SAFR > Tools > Preferences. On Mac, use SAFR > Preferences. Configure the following:
- On the Account tab, update the User Site and User Source fields to a suitable value for your environment. By default, the fields are shown with globally unique IDs.
- (Optional) On the Events tab:
- Check the event types you want to be triggered, or, under the Positive, Neutral and Negative Reply sections, select a voice that sounds when an event is triggered.
- On the Recognition Tab, if there are issues with recognition, adjust the following:
- Minimum required face size / To allow identification: set to 120 (this allows recognition when face image is <220 default)
- Minimum required center pose quality / To allow recognition: Decrease to allow non-frontal recognition
- Minimum required face contrast quality / To allow recognition: Increase if you have a very bright background
- Identify recognition threshold / Camera: Increase value to allow more lenient recognition
Choose a camera
On the SAFR Desktop, from the menu in the top left of the main window, select a camera. By default, the local camera is selected. At any time, you can select another camera from the menu to display its view.
Configure for a threat
Learn/register a person
From photos
To learn a person from a photo of that person, do the following:
- Open SAFR Desktop.
- Choose File > Import Faces.
- Open one or more photos.
- Any photo that has sufficient quality for recognition will show a purple oval around the face with option to click and add a name.
- In each learned photo, type a name if desired.
From video
To learn one or more people from a video, perform the following steps.
- Open SAFR Desktop.
- Choose File > Open.
- Open a video.
- Any face within the video that has sufficient quality for recognition be added to the database.
- Add Names as follows if desired:
- Choose Tools > People (or SAFR > People on Mac).
- Sort by Enrollment Data and set sort order to Descending in order to see the newly-added entries.
- Type names for each.
From SAFR Recognition person as threat
Take following steps to configure a person as a threat or concern:
- Choose Tools > People (or SAFR > People on Mac).
- If you want to see most recently added people, sort by Enrollment Data, and set sort order to Descending in order to see the newly added entries.
- You can remove people already marked as Threat or Concern by filtering on ID Class of No Concern.
- For each user you want to change:
- Double-click user.
- Choose Threat or Concern from the ID Class menu list.
- In the SAFR Desktop app, that person should now be marked with a red oval overlay to indicate a Threat or an amber oval overlay to indicate a Concern.
Adjust SAFR recognition preferences
At a high level, optimizing recognition involves the following steps:
- Adjust image quality for the image being produced by the camera to get the best possible image of a face
- Configure the camera to increase face size, sharpness, and contract
- Adjust image quality thresholds in SAFR for face recognition
- When it is not possible to improve image quality from camera beyond a certain point, adjust adjust SAFR thresholds for face size, center pose, sharpness, and contrast
- This may not be necessary if the image quality from the camera is already above the minimum thresholds set by default in SAFR
- Adjust Identity Recognition Threshold to optimize matching against existing faces
Test recognition results and optimize
The easiest way to determine the quality of the face images is to use the recognition details panel inside the SAFR Desktop application
- On Windows enable "Recognition Candidates" and "Recognition Details" via View menu
- On Mac enable "Recognition Candidates" via the View menu
Enabling this will display a cut-out of each face found in the video as follows:
You can use the information above for two purposes:
- Initially, configure the camera to increase face size, center pose quality, sharpness, and contract (larger values are better)
- Then, if necessary, adjust SAFR Thresholds so that SAFR will attempt to perform recognition for face images that do not meet minimum thresholds
The section below describe these two steps.
Optimize camera image
Using the face size, sharpness, and contrast in the previous display, try adjusting camera settings to improve the values reported by SAFR Desktop. See Camera Selection for information on optimizing camera settings. The important settings consider on the camera are:
- Backlight control (BLC)
- Wide (or High) Dynamic Range (WDR) mode
- Focus (typically let camera autofocus - you may want to set a region to focus thru the camera administration interface
- In extreme cases of lighting, you may need to set manual modes such as Shutter priority.
- For example, you may use manual Iris Priority to let more light in and allow the background to be "washed out" as long as face illumination is good
Adjust SAFR preferences
SAFR settings for face size, center pose quality, sharpness, and contrast are used to filter out certain images from attempting recognition. This is useful when you want to avoid spending processing time on faces that will not lead to satisfactory recognition results or if you want to recognize on people only with a certain range of the camera. These settings should be adjusted if needed to ensure SAFR is attempting to perform recognition.
In addition, there are settings that determine how close two face images need to be before SAFR will consider them matching. These settings are adjusted as described below.
The values reported in the Recognition Candidates panel as described above can be directly compared to the values in the Detection and Recognition Tabs of the SAFR Preferences. The following describes the most important settings to adjust in order to improve recognition results.
Tips
The section below describes adjusting these settings. Tuning for best results takes time. Modify settings methodically and note the results with each change. If results go far off track, reset to defaults and start over. Please keep in mind the following as you make these adjustments.
Most defaults are set intelligently for the mode selected. Change as few settings as you need to accomplish the goal.
Don't hesitate to click the Reset to Defaults button if things get too convoluted.
If the quality or size values are too small, the accuracy will suffer. In some cases, it is better to improve image quality by adjusting or upgrading camera than to lower the quality bar for recognition.
SAFR Allows detection and recognition settings to be customized per mode. There are several different modes in SAFR Desktop. The mode is selected in the upper right of the main window of SAFR Desktop (See below). Make sure you are adjusting settings for the mode you are currently using.
Before changing settings, Set SAFR Desktop to correct mode; in this case, Enrolled and Stranger Monitoring.
Camera Preferences
These settings are specific to each camera input used by SAFR.
Go to the Tools menu. Click Preferences and click the Cameras tab. Enter the following:
- Source: Name for camera (this is what appears in events Source attribute).
- URL: rasp://<username>:<password>@<ipaddress>/h264.
- Check Lens Correction. This straitens video warped due to fish eye distortion. This will improving recognition though usually only slightly.
- If there is not already warp in the video or only very slight warp, do not use this setting.
- Adjust K1 and K2 until straight edges appear straight.
- This is a trial and error process.
Note: These settings are camera-specific and are not available on webcams.
Detection Preferences
These settings that affect how SAFR behaves when searching for a face within each frame of the video.
Go to the Tools menu. Click Preferences and click the Detection tab. Enter the following:
- Reduce vertical input image size to: 1080 - Increasing this can improve ability to detect faces within the image but increases CPU usage. This increases CPU usage and if too large slows down time to detection. Set this to as large as possible w/o making CPU or GPU hit 100%. If your subject is moving through the video field of view too quickly this may give insufficient time for recognition to succeed.
- Minimum searched face size: 50 - Set to slightly smaller than Minimum required face size (e.g. 50) to avoid flipping between showing overlay and not when face in video is exactly same as Min required face size.
- Minimum required face size: 60 - The smallest face that recognition will be attempted on. 60 is typically as small as you want to go in most cases.
- Use this to avoid trying to recognize until a face is close enough thus avoiding false unrecognized events for known users.
- The smaller the value the more processing time to scan a frame for faces.
If you enable "Recognition candidates" from View Menu, SAFR Desktop will display the cropped face with the cropped face size to help in choosing a value.
Tracking Preferences
These settings affect how closely SAFR tracks a face in the video.
Go to the Tools menu. Click Preferences and click the Tracking tab. Enter the following:
- Position: Left default value of 115%. Increase if motion high or frame rate low to improve tracking.
- Size: Left default value of 50%. Increase if motion high or frame rate low to improve tracking.
- Failed recognition back-off interval: Set to .2 (default .34)
Recognition Preferences
These settings that affect when SAFR will attempt to recognize a face and how aggressively it will try to match:
Go to the Tools menu. Click Preferences and click the Recognition tab. Enter the following:
- Minimum required center pose quality: To allow recognition: 0
- Minimum required face sharpness quality: To allow recognition: 0
- Minimum required face contrast quality: To allow recognition: 0
- Proximity Threshold Allowance: 0.13 (increase to 0.2 or 0.3 if matching % is too low).
Settings for Center Pose Quality, Sharpness and Contrast:
Settings for Recognition (Matching):
User Interface Preferences
These settings that affect display in SAFR. For this tutorial the most relevant settings are making it easier to read overlays on the screen.
Go to the Tools menu. Click Preferences and click the User Interface tab. Enter the following:
- Video > Highlight border thickness (all modes): 19 (larger the value the larger the text thus making it easier to read from a distance).
- Video > Overlay text size (all modes): 6 (larger the value the larger the text thus making it easier to read from a distance).
- Video > Speak name display message: Checked (useful to hear when recognition occurs).
It is important to test each modification. This helps to know effect of each setting change. To do this, its helpful to have one person walk in front of the camera(s) while another person monitors the screen to confirm results. In this case, we are looking for location when person res recognized.
If you do not have a 2nd person to act as a subject, another good option is to use a web conferencing solution or screen capture software to record the SAFR Desktop application and then review. This also provides a nice record between each change.
Edit the SAFR Actions config file
These instructions assume you are creating a brand new SAFR Actions configuration file. If you already have modified SAFR Actions configuration file to perform other actions, you must add the Rules section to the file. Simply, these instructions also assume you are editing the configuration file instead of editing through following sample config contents.
- Save and close the file.
- Start SAFR Actions app; ensure the new config is loaded. If not, close SAFR and copy the config again.
- Set the following in SAFR Actions application (so the application performs encoding of the password for you).
- Change environment to your environment:
- If your download portal is safr.int2.real.com, set to INT2 (SAFR Partner Cloud).
- If your download portal is safr.real.com, set to PROD (SAFR Cloud).
- Set userId, userPwd to your SAFR account
- Choose File > Save to load and save changes. Changes should take effect immediately.
SAFR Actions sample config file
{ "directory": "main", "emailDef": [ { "attachments": [ "cvos:\/\/obj\/#x\/face|event_photo.jpg" ], "label": "threatDetected", "message": "<h1>Threat Detected</h1>Classification: #a<br\/>Site: #I<br\/>Camera: #S<br\/>", "recipients": [ "[email protected]" ], "subject": "Threat Detected" } ], "environment": "INT2", "rules": [ { "event": { "hasPersonId": true, "idClass": [ "Threat" ] }, "triggers": [ { "actions": [ "@emailSend threatDetected" ], "daysOfWeek": [ "Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun" ], "reply": { "message": "Threat Detected!" }, "timesOfDay": [ { "end": "24:00", "start": "00:00" } ] } ] } ], "userId": "USER", "userPwd": "PWD" } continues .details.
- Keep SAFR Desktop, Mobile, or Virgo running:
- The SAFR Desktop, Mobile, or Virgo connected to the camera must remain running for events to be generated. The app may be minimized, but do not quit the application or recognition stops.
Configure the outbound email server
Configure credentials to allow SAFR to send email through your email server:
- Obtain a SMTP server account you can use for sending emails.
- Important: It is recommended you create a new account with only its default settings, and avoid using an established or personal account. If you are using Gmail, avoid setting 2-step verification (also known as two-factor authentication).
- Open SAFR Actions > Tools > Configure Email Server.
- You are prompted to log into SAFR Cloud if you have not already:
- Choose SAFR Partner Cloud.
- Enter your SAFR cloud account username and password.
- Click Connect.
- In the Configure Email Server dialog, enter all requested info, read from your SAFR cloud account if previously filled out elsewhere:
- Sender Email - Email username for SMTP account, for example, [email protected]
- Sender Name - Display name for the From line
- From Email Address - Email for the From line; this does not work with all email servers; if not supported, the Sender Email is populated
Note: If using Gmail, you may need to change settings to use Less Secure App Access. In the SAFRActions log, if you receive an EMAIL SEND ERROR: <date> 500 Report server internal error, do the following:
- To confirm, log into your Gmail account you have configured to send SMTP. Look for an email with title Critical Security Alert, with a message body that states, Someone just used your password to try to sign in to your account from a non-Google app.
- When you click Check Activity and choose Yes to Were You Recently Prevented from Signing in to your Google Account? a message is displayed saying, Less Secure App Was Blocked.
To resolve this:
- Log into your Google Account. Click your avatar in upper-right, and choose Google Account.
- Go to Security > Less Secure App Access, and change Allow Less Secure Apps to On.
Test alerts
The easiest way to test your SAFR Actions is to use the mobile app to trigger an alert. You can install the application as described previously under From SAFR Mobile App. Register one or more test subjects, and mark the persons as a threat as described previously in Configure Threat Alerts.
Test the alert as follows:
- Open SAFR Mobile app.
- Hide your face from the camera, and then unhide and wait for face to be recognized.
- Upon recognition:
- A red oval should be drawn around your face if the event is triggered.
- You should see the message Threat Detected flash on the screen.
- You should see an email appear in your inbox.
If you do not get a message displayed,, the machine running the SAFR app and the machine running SAFR Actions may not be set to same time. Correct this and try again.
- If you see other errors, check the message and attempt to make corrections, or contact RealNetworks technical support.
Appendix
Camera selection
The goal is to detect a threat and perform alerting when a threat or concern is detected. In some cases, you may want automatic actions to take place upon detection of a threat; in particular, you may want doors to lock where detection occurred or potentially locking down the entire facility in addition to alerting authorities.
In this tutorial, cameras are positioned either inside the building pointed at external doors or mounted just outside external entrances pointed at approaching subjects. The major challenge here is lighting. Conditions can vary from rainy to sunny. You can face the following challenges in both conditions:
- Very high contrast - The subjects are incredibly dark with very bright background.
- Backlight conditions change when doors open.
- Opening doors can change lighting conditions right at the time the subject is walking past. The camera is still trying to adjust to the lighting conditions while the subject is walking past. This is especially true with automatic doors that open very wide and remain open for a period of time.
The application is for threat detection. It is important to have a camera attempt recognition on anyone entering the door.
- Camera recognition accuracy should be maintained under following conditions:
- Bright sunny daytime conditions.
- Cloudy daytime conditions.
- Nighttime conditions (inside of building well-lighted).
- Automatic sliding door:
- Double door slides open as person approaches door.
Camera recommendations
Camera positioning:
- If inside building, place camera 20-30 feet in front of door about 10-12 feet high.
- Subject should be facing nearly straight on.
- Camera resolution should be such that faces are at least 120 pixels high when at door, larger if possible.
Backlighting conditions can be typically overcome with appropriate adjustments to camera settings. The camera in use needs to be advanced enough to offer standard camera adjustments:
- Shutter Priority mode.
- Exposer Compensation.
- Manual Mode with full Shutter Speed, Iris (Aperture), and Gain adjustment.
The Sony 772R is an example of such a camera. In addition to these settings, the 772R also offers various digital tools to enhance the image:
- Visibility Enhancer.
- Backlight compensation.
- Highlight compensation.
- NR (Noise Reduction).
The basic steps in handling a backlight situation are as follows:
- Angle the camera slightly toward the floor to eliminate as much direct light as possible into the cameras sensor.
- Place the bound on the slowest shutter speed allowed. As people are moving through the field of view (FOV), maintaining high shutter speed is important to obtain blur-free images.
- Set the slowest shutter speed to at least 1/90 second.
- In some cameras, if the shutter speed lower bound is not settable for auto mode, you may need to use shutter priority mode.
- Sony 772R camera has both options available.
- Turn on Backlight compensation if available (it is on 772R), and adjust level.
- If faces are still dark, adjust Exposure compensation to brighten the faces. The background becomes overexposed, but that works for FR [What is FR? - sg].
- If there are specular reflections or faces are still too dark, turn on Highlight Compensation.
The previous approach is appropriate for the situation where varying outdoor conditions also vary the amount of light reflected from the face. Light intensity is simply boosted above what the cameras would choose automatically and enhancing the image to reduce exposer variance.
In cases where outdoor conditions only generate backlight (light from behind the subject's face) and there is minimal variation in lighting from inside (few windows so indoor illuminating on the face is mostly constant), it is more appropriate to place the camera in fully manual mode and set Shutter Speed, Iris, and Gain values manually to properly expose the face while allowing the background to be overexposed. In this mode, the camera makes no auto-adjustments and is not be thrown off by momentary bursts of light due to door opening or other momentary reflections. To do this:
- Set shutter speed to 1/90 or higher.
- Open the iris, increasing the f-stop for the iris (aperture) until the face is bright enough.
- Focus the camera on the sweet spot of the recognition where people are most likely to face towards the camera. [Not sure I understand the phrase "sweet spot of the recognition"; are you saying to adjust the camera focus until the facial image is sharpest (clearest?) at the place where most people are going to be directly facing toward the camera? - sg]
Increasing the iris [add "f-stop"? - sg] reduces the depth of field (distance during which face is in focus). Increasing the iris [add "f-stop"? - sg] increases the quality of the image but reduces the amount of time the image is in focus and viability for recognition. In either case, focus the camera on the sweet spot of the recognition [Same as previously noted - sg] where people are most likely to face toward the camera.
Anyone setting up cameras for facial recognition should be fully familiar with digital photography concepts described in the following video (15 minutes duration):
Specifics on backlight compensation:
SONY 772R User Guide:
Experimenting in a similar test environment is important to successfully overcome backlight situations.
Lighting considerations
Success with recognition indoors or outdoors depends on lighting conditions (amount of light being reflected from the face).
The light source:
- Should have a light source that hits the front of the face.
- Outdoors:
- Typically always has more light during daytime unless there is an awning blocking the light from above and front as the person approaches.
- At nighttime needs to have a light source behind the camera to illuminate the front of the face.
- Indoors:
- Still need to contend with the backlight illumination.
- Should have ample ambient lighting conditions and relatively uniform light on the face for best results.
Handling other light sources:
- Avoid a direct line between the sun and camera lens.
- Camera lens should have sun/rain shade for effective operation.
Attachments:
| https://docs.real.com/guides/solution/Threat-Detection.html | 2021-02-25T00:00:21 | CC-MAIN-2021-10 | 1614178349708.2 | [array(['attachments/68649956/68976824.png', None], dtype=object)
array(['attachments/67486990/69370092.jpg', None], dtype=object)
array(['attachments/68649956/69370942.png', None], dtype=object)
array(['attachments/68649956/69370946.png', None], dtype=object)
array(['attachments/68649956/69370945.png', None], dtype=object)
array(['images/icons/bullet_blue.gif', None], dtype=object)
array(['images/icons/bullet_blue.gif', None], dtype=object)
array(['images/icons/bullet_blue.gif', None], dtype=object)
array(['images/icons/bullet_blue.gif', None], dtype=object)
array(['images/icons/bullet_blue.gif', None], dtype=object)
array(['images/icons/bullet_blue.gif', None], dtype=object)] | docs.real.com |
Previous mechanism:
The only way we could have retrieved the number of files/objects in a directory or volume was to do a crawl of the entire directory/volume. That was expensive and was not scalable.
New Design Implementation:
The proposed mechanism will provide an easier alternative to determine the count of files/objects in a directory or volume.
The new mechanism will store count of objects/files as part of an extended attribute of a directory. Each directory extended attribute value will indicate the number of files/objects present in a tree with the directory being considered as the root of the tree.
Inode quota management
setting limits
Syntax: gluster volume quota <volname> limit-objects <path> <number>
Details: <number> is a hard-limit for number of objects limitation for path <path>. If hard-limit is exceeded, creation of file or directory is no longer permitted.
list-objects
Syntax: gluster volume quota <volname> list-objects [path] ...
Details: If path is not specified, then all the directories which has object limit set on it will be displayed. If we provide path then only that particular path is displayed along with the details associated with that.
Sample output:
Path Hard-limit Soft-limit Files Dirs Available Soft-limit exceeded? Hard-limit exceeded? --------------------------------------------------------------------------------------------------------------------------------------------- /dir 10 80% 0 1 9 No No
Deleting limits
Syntax: gluster volume quota <volname> remove-objects <path>
Details: This will remove the object limit set on the specified path.
Note: There is a known issue associated with remove-objects. When both usage limit and object limit is set on a path, then removal of any limit will lead to removal of other limit as well. This is tracked in the bug #1202244 | https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/quota-object-count/ | 2021-02-24T23:39:13 | CC-MAIN-2021-10 | 1614178349708.2 | [] | staged-gluster-docs.readthedocs.io |
OnCommand Workflow Automation (WFA) operates on data that is acquired from data sources. Various versions of OnCommand Unified Manager and VMware vCenter Server are provided as predefined WFA data source types. You must be aware of the predefined data source types before you set up the data sources for data acquisition.
A data source is a read-only data structure that serves as a connection to the data source object of a specific data source type. For example, a data source can be a connection to an OnCommand Unified Manager database of an OnCommand Unified Manager 6.3 data source type. You can add a custom data source to WFA after defining the required data source type.
For more information about the predefined data source types, see the Interoperability Matrix. | https://docs.netapp.com/wfa-42/topic/com.netapp.doc.onc-wfa-isg-rhel/GUID-CC311253-E4A2-4DC9-B24F-6EC1CCC66EA3.html | 2021-02-24T23:24:23 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.netapp.com |
Prudent Expenses
Prudent Expenses is a robust expense recording app for iOS, catering to busy people on the go, parents with family expenses and individuals looking to quickly record their daily expenses.
Privacy first policyPrivacy first policy
Prudent Expenses is designed with data privacy in mind. No data is stored in any cloud. Prudent Expenses does not collect any usage analytics data at all. You can use Prudent Analytics while completely offline.
You can still export expenses data in text format from Prudent Expenses to other apps or to store it securely at your own discretion.
How to useHow to use
Prudent Expenses is designed as the fastest way to enter expenses that can be exported to Ledger journals that can then be used for textual storage of your expenses or used in conjunction with the Prudent desktop app to support your holistic personal finance analysis and planning. It can also be used as a standalone app in everyday situations, for events and festivities or while travelling on holiday.
The minimalist design is easy to understand. You should start a fresh expense recording:
Every month if you're using it to track monthly expenses.
To track your expenses for special events such as Chinese New Year for example.
When on holiday, you can record all your expenses quickly.
To start a fresh expense recording, you can easily tap on the "Export & Delete All" button on the Expenses screen.
Designed for youDesigned for you
Prudent Expenses offers curated expense categories that had been designed to maximize coverage while reducing category clutter.
Actual usage is designed with shortest taps to record in mind.
The entry pad is practical without requiring you to tap 00s for decimals.
The Expenses are summed by category to give you a sense of how much you'd spent without clunky graphics on a mobile screen (you can still visualize with Prudent on Desktop).
Feedback & supportFeedback & support
If you have any questions or feedback, please e-mail [email protected].
Prudent's mission is to help you achieve optimal financial health! | https://docs.prudent.me/docs/expenses | 2021-02-25T01:05:03 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.prudent.me |
Working with risk trends in reports
Risks change over time as vulnerabilities are discovered and old vulnerabilities are remediated on assets or excluded from reports. As system configurations are changed, assets or sites that have been added or removed also will impact your risk over time. Vulnerabilities can lead to asset compromise that might impact your organization’s finances, privacy, compliance status with government agencies, and reputation. Tracking risk trends helps you assess threats to your organization’s standings in these areas and determine if your vulnerability management efforts are satisfactorily maintaining risk at acceptable levels or reducing risk over time.
A risk trend can be defined as a long-term view of an asset’s potential impact of compromise that may change over a time period. Depending on your strategy you can specify your trend data based on average risk or total risk. Your average risk is based on a calculation of your risk scores on assets over a report date range. For example, average risk gives you an overview of how vulnerable your assets might be to exploits whether it’s high or low or unchanged. Your total risk is an aggregated score of vulnerabilities on assets over a specified period. See Prioritize according to risk score for more information about risk strategies.
Over time vulnerabilities that are tracked in your organization’s assets indicate risks that may have be reflected in your reports. Using risk trends in reports will help you understand how vulnerabilities that have been remediated or excluded will impact your organization. Risk trends appear in your Executive Overview or custom report as a set of colored line graphs illustrating how your risk has changed over the report period.
See Selecting risk trends to be included in the report for information on including risk trends in your Executive Overview report.
Events that impact risk trends
Changes in assets have an impact on risk trends; for example, assets added to a group may increase the number of possible vulnerabilities because each asset may have exploitable vulnerabilities that have not been accounted for nor remediated. Using risk trends you can demonstrate, for example, why the risk level per asset is largely unchanged despite a spike in the overall risk trend due to the addition of an asset. The date that you added the assets will show an increase in risk until any vulnerabilities associated with those assets have been remediated. As vulnerabilities are remediated or excluded from scans your data will show a downward trend in your risk graphs.
Changing your risk strategy will have an impact on your risk trend reporting. Some risk strategies incorporate the passage of time in the determination of risk data. These time-based strategies will demonstrate risk even if there were no new scans and no assets or vulnerabilities were added in a given time period. For more information, see Selecting risk trends to be included in the report.
Configuring reports to reflect risk trends
Configure your reports to display risk trends to show you the data you need. Select All assets in report scope for an overall high-level risk trends report to indicate trends in your organization’s exploitable vulnerabilities. Vulnerabilities that are not known to have exploits still pose a certain amount of risk but it is calculated to be much smaller. The highest-risk graphs demonstrate the biggest contributors to your risk on the site, group, or asset level. These graphs disaggregate your risk data, breaking out the highest-risk factors at various asset collection methods included in the scope of your report.
The risk trend settings in the Advanced Properties page of the Report Configuration panel will not appear if the selected template does not include ‘Executive overview’ or ‘Risk Trend’ sections.
You can specify your report configuration on the Scope and Advanced Properties pages of the Report Configuration panel. On the Scope page of the report configuration settings you can set the assets to include in your risk trend graphs. On the Advanced Properties page you can specify on which asset collections within the scope of your report you want to include in risk trend graphs. You can generate a graph representing how risk has changed over time for all assets in the scope of the report. If you generate this graph, you can choose to display how risk for all the assets has changed over time, how the scope of the assets in the report has changed over time or both. These trends will be plotted on two y-axes. If you want to see how the report scope has changed over the report period, you can do this by trending either the number of assets over the report period or the average risk score for all the assets in the report scope. When choosing to display a trend for all assets in the report scope, you must choose one or both of the two trends.
You may also choose to include risk trend graphs for the five highest-risk sites in the scope of your report, or the five highest-risk asset groups, or the five highest risk assets. You can only display trends for sites or asset groups if your report scope includes sites or asset groups, respectively. Each of these graphs will plot a trend line for each asset, group, or site that comprises the five-highest risk entities in each graph. For sites and groups trend graphs, you can choose to represent the risk trend lines either in terms of the total risk score for all the assets in each collection or in terms of the average risk score of the assets in each collection.
You can select All assets in report scope and you can further specify Total risk score and indicate Scope trend if you want to include either the Average risk score or Number of assets in your graph. You can also choose to include the five highest risk sites, five highest risk asset groups, and five highest risk assets depending on the level of detail you want and require in your risk trend report. Setting the date range for your report establishes the report period for risk trends in your reports.
Tip: Including the five highest risk sites, assets, or asset groups in your report can help you prioritize candidates for your remediation efforts.
Asset group membership can change over time. If you want to base risk data on asset group membership for a particular period you can select to include asset group membership history by selecting Historical asset group membership on the Advanced Properties page of the Report Configuration panel. You can also select Asset group membership at the time of report generation to base each risk data point on the assets that are members of the selected groups at the time the report is run. This allows you to track risk trends for date ranges that precede the creation of the asset groups.
Selecting risk trends to be included in the report
You must have assets selected in your report scope to include risk trend reports in your report. See Selecting assets to report on for more information.
To configure reports to include risk trends:
- Select the Executive Overview template on the General page of the Report Configuration panel. (Optional) You can also create a custom report template to include a risk trend section.
- Go to the Advanced Properties page of the Report Configuration panel.
- Select one or more of the trend graphs you want to include in your report: All assets in report scope, 5 highest-risk sites, 5 highest-risk asset groups, and 5 highest-risk assets. To include historical asset group membership in your reports make sure that you have selected at least one asset group on the Scope page of the Report Configuration panel and that you have selected the 5 highest-risk asset group graph.
- Set the date range for your risk trends. You can select Past 1 year, Past 6 months, Past 3 months, Past 1 month, or Custom range. (Optional) You can select Use the report generation date for the end date when you set a custom date range. This allows a report to have a static custom start date while dynamically lengthening the trend period to the most recent risk data every time the report is run.
Your risk trend graphs will be included in the Executive Overview report on the schedule you specified. See Selecting risk trends to be included in the report for more information about understanding risk trends in reports.
Use cases for tracking risk trends
Risk trend reports are available as part of the Executive Overview reports. Risk trend reports are not constrained by the scope of your organization. They can be customized to show the data that is most important to you. You can view your overall risk for a high level view of risk trends across your organization or you can select a subset of assets, sites, and groups and view the overall risk trend across that subset and the highest risk elements within that subset.
Overall risk trend graphs, available by selecting All assets in report scope, provide an aggregate view of all the assets in the scope of the report. The highest-risk graphs provide detailed data about specific assets, sites, or asset groups that are the five highest risks in your environment. The overall risk trend report will demonstrate at a high level where risks are present in your environment. Using the highest-risk graphs in conjunction with the overall risk trend report will provide depth and clarity to where the vulnerabilities lie, how long the vulnerabilities have been an issue, and where changes have taken place and how those changes impact the trend.
For example, Company A has six assets, one asset group, and 100 sites. The overall risk trend report shows the trend covering a date range of six months from March to September. The overall risk graph has a spike in March and then levels off for the rest of the period. The overall report identifies the assets, the total risk, the average risk, the highest risk site, the highest risk asset group, and the highest risk asset.
To explain the spike in the graph the 5 highest-risk assets graph is included. You can see that in March the number of assets increased from five to six. While the number of vulnerabilities has seemingly increased the additional asset is the reason for the spike. After the asset was added you can see that the report levels off to an expected pattern of risk. You can also display the Average risk score to see that the average risk per asset in the report scope has stayed effectively the same, while the aggregate risk increased. The context in which you view changes to the scope of assets over the trend report period will affect the way the data displays in the graphs. | https://docs.rapid7.com/nexpose/working-with-risk-trends-in-reports/ | 2021-02-24T22:47:22 | CC-MAIN-2021-10 | 1614178349708.2 | [array(['/areas/docs/_repos//product-documentation__master/e72e7043e22ca9f0d57e195a9d1aa23334384c48/nexpose/images/s_report_config_risk_trends.jpg',
None], dtype=object) ] | docs.rapid7.com |
Shadow Cascades help solve a problem called perspective aliasing, where real-time shadows from Directional Lights appear pixelated when they are near the Camera.
Shadow Cascades only work with Directional Lights.
A Directional Light typically simulates sunlight, and a single Directional Light can illuminate the entire Scene. This means that its shadow map covers a large portion of the Scene, which can lead to a problem called perspective aliasing. Perspective aliasing means that shadow map pixels close to the Camera look enlarged and chunky compared to those farther away.
Perspective aliasing occurs because.
In this simplified example, the distant end of the frustum is covered by 20 pixels of shadow map, while the near end is covered by only 4 pixels. However, both ends appear the same size on-screen. The result is that the resolution of the map is effectively much less for shadow areas that are close to the Camera.
Perspective aliasing is less noticeable when you use Soft Shadows, and when you use a higher resolution for the shadow map. However, these solutions use more memory and bandwidth while rendering.
When using Shadow Cascades, Unity splits the frustum area into two zones based on distance from the Camera. The zone at the near end uses a separate shadow map at a reduced size (but with the same resolution). These staged reductions in shadow map size are known as cascaded shadow maps (sometimes called Parallel Split Shadow Maps).
When you configure Shadow Cascades in your Project, you can choose to use 0, 2 or 5 cascades. Unity calculates the positioning of the cascades within the Camera’s frustum.
The more cascades you use, the less your shadows are affected by perspective aliasing. Increasing the number increases the rendering overhead. However, this overhead is still less than it would be if you were to use a high resolution map across the whole shadow.
In the Built-in Render Pipeline, configure Shadow Cascades per quality level property in your Project’s Quality Settings.
In the Universal Render Pipeline (URP), configure Shadow Cascades in the Universal Render Pipeline Asset.
In the High Definition Render Pipeline (HDRP), configure Shadow Cascades for each Volume. | https://docs.unity3d.com/es/2020.2/Manual/shadow-cascades.html | 2021-02-25T00:21:12 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.unity3d.com |
Changes related to "iPi Mocap Studio"
This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold.
24 February 2021
- (diff | hist) . . iPi Mocap Studio Release Notes; 10:28 . . (+432) . . Andrew (Talk | contribs) (4.4.1.243) | http://docs.ipisoft.com/index.php?title=Special:RecentChangesLinked&hideanons=0&days=14&limit=500&target=iPi_Mocap_Studio | 2021-02-24T23:53:31 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.ipisoft.com |
The DHTMLX library supplies customizable components to help you build interfaces of different kinds, nicely present data and work with it. There are layouts, data-processing components, typical form-inhabitants, handy navigation elements for surfing an app and fairly all-sufficient macro widgets.
The Suite package contains a large set of components your need to create a user-friendly and attractive application.
This is documentation for Suite v6.0 and upper! You can also read documentation for version 5.X.
Task-oriented complex UI components will help you to accomplish a particular goal much easier.
These are separate components that are not included into the Suite library. | https://docs.dhtmlx.com/ | 2021-02-24T23:06:54 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.dhtmlx.com |
Install Agent Helper in Salesforce Console
Agent Helper is an addon. It enables support reps to solve cases quickly by presented four key pieces of information right on a case page:
- Cases: A list of related cases.
- Agents: The reps who worked on related cases.
- Articles: A set of help articles that reps shared with customers.
- User Journey: A summary of user activities.
The information is found based on a set of keywords. As an admin, you can select the Salesforce objects and fields where keyword matches occur. For instance, you can configure Agent Helper to look into the title of the current case page and find cases with similar titles. For increased efficiency, you can further select object properties or fields where searches are performed. Finally, a powerful feature named Filter Condition to select data allows you to keep information overload at bay; especially if your org has millions of records.
Installing Agent Helper
- Go to Addons and open Add New SearchUnify Addon.
- Install Agent Helper.
- Go to Search Clients and open any Service Console search client for editing. If you can see a new tab (Agent Helper), then the add-on was successfully installed.
Activating Agent Helper
- Go to Search Clients and open a Salesforce Service Console by clicking
.
- Navigate to Agent Helper.
- Select a Salesforce object from the Source Object dropdown.
- The dropdown will expand into a form after your have inserted the object. As can be seen in the next image, an object case has been selected for demonstration. Using the Input Fields dropdown, select object properties. You can select more than one property.
- OPTIONAL. Enter your company's domain to identify agents. All comments from the entered domain will be agent comments.
- OPTIONAL. Click Filter Condition to select data to open a window where you can further refine training data. For instance, in the next image you can see the settings if your goal is to train Agent Helper only on cases from SearchUnify or cases whose account ID series is less than 1932093420 (probably another indication that the case is from SearchUnify.)
- Click Update to start training.
Log into Salesforce Service Console to use agent helper.
Last updated: Friday, September 25, 2020 | https://docs.searchunify.com/Content/Addons/Agent-Helper.htm?TocPath=Addons%7C_____5 | 2021-02-24T23:27:26 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.searchunify.com |
Use the Firmware Version Checker provides a list of all firmware, including status of Mismatched (out-of-date) highlighted in yellow, Unknown highlighted in gray, and OK (not highlighted). It also displays corresponding information for firmware that can be flashed through server management.command to check the status of firmware (for example, out-of-date firmware) and also to flash firmware.
The Firmware Version Checker feature only reports on versions that can be flashed through server management. You can obtain the firmware version information of chassis flashed with other applications through Asset information.
You can also flash firmware from the SMClient Functions menu. | https://docs.teradata.com/r/ULK3h~H_CWRoPgUHHeFjyA/AC9ke8NKJ2uuwywszZuBAg | 2021-02-24T23:56:30 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.teradata.com |
Verify the recovery plan for the operations management layer.
Performing a test recovery of the operations management recovery plan ensures that the virtual machines are being replicated correctly and the power on order is accurate with the correct timeout values and dependencies. Site Recovery Manager runs the analytic cluster nodes and the vRealize Suite Lifecycle Manager on an isolated test network using a temporary snapshot of replicated data while performing test recovery.
Procedure
- Log in to the Management vCenter Server by using the vSphere Client.
- Open a Web browser and go to.
- Log in using the following credentials.
- From the Home menu, select Site Recovery.
- Click Open Site Recovery for the sfo01m01vc01.sfo01.rainpole.local site.
A new Site Recovery page opens for the sfo01m01vc01.sfo01.rainpole.local site.
- If you are logging in for the first time, enter the following credentials and click Login.
- On the Site Recovery page, click the number link next to Recovery Plans.
- On the Recovery Plans page, click the SDDC Operations Management RP recovery plan.
- On the SDDC Operations Management RP page, click the Test Recovery Plan icon to run a test recovery.
- On the Confirmation options page of the Test wizard, leave the Replicate recent changes to recovery site check box selected and click Next.
- On the Ready to complete page, click Finish to start the test recovery.
- Click the Recovery Steps tab and follow the progress of the test recovery.
- After the test recovery completes, click the Cleanup Recovery Plan icon to clean up all the created test VMs.
Note:
- On the Confirmation options page of the Cleanup wizard, click Next.
- On the Ready to complete page, click Finish to start the clean-up process.
If the protected vRealize Operations Manager and vRealize Suite Lifecycle Manager virtual machines are located in Region B, log in to the lax01m01vc01.lax01.rainpole.local vCenter Server and follow the procedure.
What to do next
If you encounter issues while performing this procedure, use the following troubleshooting tips: | https://docs.vmware.com/en/VMware-Validated-Design/5.0/com.vmware.vvd.sddc-verify.doc/GUID-A0216791-C5A5-4C8B-AED3-E566A256DC1B.html | 2021-02-25T00:26:33 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.vmware.com |
WikkaWiki 1.4.2 Features
See also:
General
- Programming language: PHP (PHP 7.0 required; see SystemRequirements).
- Storage: MySQL (MySQL 5.7+ required; see SystemRequirements).
- Content can be displayed as rendered, source, and raw source.
Installation
- 100% web-based upgrading from WakkaWiki.
- Shell/root access not required;
Security and antispam features
- Fine-grained access control through:
- folder-level .htaccess files.
- Advanced referrer management with blacklist support.
- More secure password hashing using PHP password_hash (since 1.4.2)
Page editing
- A fast, fully redesigned editor toolbar (WikkaEdit) with more Wikka-specific functionality and improved cross-browser compatibility.
- Page preview.
- Edit notes support.
- Search & replace handler
- Universal Edit Button support.
Formatting
- A large selection of wiki markup options, with support for
- text styling;
- headings;
- multiple types of lists;
- code blocks;
- floats;
- threaded comments;
- notes.
- HTML can be embedded in pages in a safe manner.
Links
- Support for:
- automatically parsed links;
- image-links;
- forced links (with link text);
- plain HTML <a href="..."> links.
- Several shortcuts for interwiki linking.
- CSS-driven link rendering.
Media & files
Advanced features
- A large selection of plugins and user contributions, including: calendar, feedback form, Google search form among others.
- Advanced code highlighting (using GeSHi):
- support for several languages.
- Various formatters to format output (currently CSV (since 1.4.2) and HTML)
User-related features
- Configurable user login and registration screens.
- Password retrieval utilities.
- Lists of pages owned or recently edited by specific users.
Statistics and information
- Detailed system information.
- Tools for displaying statistics on pages and users.
- User statistics on edits, comments and page creation
Revision control tools
- Revision management tools:
- Page history easily viewed at a glance.
- Full comparison of page revisions (between any two versions) with highlighting of differences.
- Fast switch between simple and contextual diff mode
- RSS feeds for global changes and for page revisions.
- Tools to track pages without links and missing pages.
Administration tools
- Modules for user- and page administration.
- A large number of system configuration settings.
- Admin users and advanced ACL support for managing user privileges at individual page-level (read/write/comment).
- Comments can be deleted by page owner, comment poster, or admins.
- Optional gzip page compression.
CategoryEN | http://docs.wikkawiki.org/WikkaFeatures/show?time=2020-04-20+02%3A22%3A07 | 2021-02-24T23:53:43 | CC-MAIN-2021-10 | 1614178349708.2 | [Screenshots: Search and Replace handler (http://docs.wikkawiki.org/images/features/sr.jpg); RSS autodiscovery support (http://docs.wikkawiki.org/images/features/rss.jpg)] | docs.wikkawiki.org
Step 3: Creating a configuration and a configuration profile
A configuration is a collection of settings that influence the behavior of your application. For example, you can create and deploy configurations that carefully introduce changes to your application or turn on new features that require a timely deployment, such as a product launch or announcement. Here's a very simple example of an access list configuration.
{ "AccessList": [ { "user_name": "Mateo_Jackson" }, { "user_name": "Jane_Doe" } ] }
A configuration profile enables AWS AppConfig to access your configuration from a source location. You can store configurations in the following formats and locations:
YAML, JSON, or text documents in the AWS AppConfig hosted configuration store
Objects in an Amazon Simple Storage Service (Amazon S3) bucket
Documents in the Systems Manager document store
Parameters in Parameter Store
Any integration source action supported by AWS CodePipeline
A configuration profile includes the following information.
The URI location where the configuration is stored.
The AWS Identity and Access Management (IAM) role that provides access to the configuration.
A validator for the configuration data. You can use either a JSON Schema or an AWS Lambda function to validate your configuration profile. A configuration profile can have a maximum of two validators.
For configurations stored in the AWS AppConfig hosted configuration store or SSM documents, you can create the configuration by using the Systems Manager console at the time you create a configuration profile. The process is described later in this topic.
For configurations stored in SSM parameters or in S3, you must create the parameter or object first and then add it to Parameter Store or S3. After you create the parameter or object, you can use the procedure in this topic to create the configuration profile. For information about creating a parameter in Parameter Store, see Creating Systems Manager parameters in the AWS Systems Manager User Guide.
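If you prefer to create a configuration profile programmatically rather than through the console or CLI, the following sketch shows one way to do it with the AWS SDK for Python (boto3). It targets the AWS AppConfig hosted configuration store and attaches a JSON Schema validator; the application ID, profile name, and the schema itself are illustrative assumptions, not values from this guide.

# Illustrative sketch only: creates a configuration profile for the hosted
# configuration store and attaches a JSON Schema validator. The application
# ID, profile name, and schema below are assumptions for demonstration.
import json
import boto3

appconfig = boto3.client("appconfig")

access_list_schema = {
    "$schema": "http://json-schema.org/draft-04/schema#",
    "type": "object",
    "properties": {
        "AccessList": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {"user_name": {"type": "string"}},
                "required": ["user_name"],
            },
        }
    },
    "required": ["AccessList"],
}

response = appconfig.create_configuration_profile(
    ApplicationId="abc1234",   # assumption: replace with your application ID
    Name="AccessListProfile",
    Description="Access list configuration",
    LocationUri="hosted",      # hosted configuration store; no retrieval role needed
    Validators=[{"Type": "JSON_SCHEMA", "Content": json.dumps(access_list_schema)}],
)
print(response["Id"])          # the ID of the new configuration profile

Because the profile uses the hosted store, no RetrievalRoleArn is passed; for S3, Parameter Store, or SSM document sources you would add that parameter as described in the procedures below.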
About configuration store quotas and limitations
AWS AppConfig-supported configuration stores have the following quotas and limitations.
About the AWS AppConfig hosted configuration store
AWS AppConfig includes an internal or hosted configuration store. You can store configurations in YAML, JSON, or as text documents.
There is no cost to use the store.
You can create a configuration and add it to the store when you create a configuration profile.
Creating a configuration and a configuration profile
Before you begin
Read the following related content before you complete the procedure in this section.
The following procedure requires you to specify an IAM service role so that AWS AppConfig can access your configuration data in the configuration store you choose. This role is not required if you use the AWS AppConfig hosted configuration store. If you choose S3, Parameter Store, or the Systems Manager document store, then you must either choose an existing IAM role or choose the option to have the system automatically create the role for you. For more information, about this role, see About the configuration profile IAM role.
If you want to create a configuration profile for configurations stored in S3, you must configure permissions. For more information about permissions and other requirements for using S3 as a configuration store, see About configurations stored in Amazon S3.
If you want to use validators, review the details and requirements for using them. For more information, see About validators.
Creating an AWS AppConfig configuration profile (console)
Use the following procedure to create an AWS AppConfig configuration profile and (optionally) a configuration by using the AWS Systems Manager console.
To create a configuration profile
Open the AWS Systems Manager console at
.
On the Applications tab, choose the application you created in Create an AWS AppConfig configuration and then choose the Configuration profiles tab.
Choose Create configuration profile.
For Name, enter a name for the configuration profile.
For Description, enter information about the configuration profile.
On the Select configuration source page, choose an option.
If you selected AWS AppConfig hosted configuration, then choose either YAML, JSON, or Text, and enter your configuration in the field. Choose Next and go to Step 10 in this procedure.
If you selected Amazon S3 object, then enter the object URI. Choose Next.
If you selected AWS Systems Manager parameter, then choose the name of the parameter from the list. Choose Next.
If you selected AWS CodePipeline, then choose Next and go to Step 10 in this procedure.
If you selected AWS Systems Manager document, then complete the following steps.
In the Document source section, choose either Saved document or New document.
If you choose Saved document, then choose the SSM document from the list. If you choose New document, the Details and Content sections appear.
In the Details section, enter a name for the new application configuration.
For the Application configuration schema section, either choose the JSON schema using the list or choose Create schema. If you choose Create schema, Systems Manager opens the Create schema page. Enter the schema details in the Content section, and then choose Create schema.
For Application configuration schema version either choose the version from the list or choose Update schema to edit the schema and create a new version.
In the Content section, choose either YAML or JSON and then enter the configuration data in the field.
Choose Next.
In the Service role section, choose New service role to have AWS AppConfig create the IAM role that provides access to the configuration data. AWS AppConfig automatically populates the Role name field based on the name you entered earlier. Or, to choose a role that already exists in IAM, choose Existing service role. Choose the role by using the Role ARN list.
On the Add validators page, choose either JSON Schema or AWS Lambda. If you choose JSON Schema, enter the JSON Schema in the field. If you choose AWS Lambda, choose the function Amazon Resource Name (ARN) and the version from the list.
Important
Configuration data stored in SSM documents must validate against an associated JSON Schema before you can add the configuration to the system. SSM parameters do not require a validation method, but we recommend that you create a validation check for new or updated SSM parameter configurations by using AWS Lambda.
In the Tags section, enter a key and an optional value. You can specify a maximum of 50 tags for a resource.
Choose Create configuration profile.
If you created a configuration profile for AWS CodePipeline, then after you create a deployment strategy, as described in the next section, you must create a pipeline in CodePipeline that specifies AWS AppConfig as the deploy provider. For information about creating a pipeline that specifies AWS AppConfig as the deploy provider, see Tutorial: Create a Pipeline That Uses AWS AppConfig as a Deployment Provider in the AWS CodePipeline User Guide.
Proceed to Step 4: Creating a deployment strategy.
Creating an AWS AppConfig configuration profile (commandline)
The following procedure describes how to use the AWS CLI (on Linux or Windows) or AWS Tools for PowerShell to create a AWS AppConfig configuration profile.
To create a configuration profile step by step
Install and configure the AWS CLI or the AWS Tools for PowerShell, if you have not already.
For information, see Install or upgrade AWS command line tools.
Run the following command to create a configuration profile.
- Linux: aws appconfig create-configuration-profile --application-id The_application_ID --name "A_name_for_the_configuration_profile" --description "Description_of_the_configuration_profile" --location-uri A_URI_to_locate_the_configuration --retrieval-role-arn The_ARN_of_the_IAM_role_with_permission_to_access_the_configuration_at_the_specified_LocationUri --tags User_defined_key_value_pair_metadata_of_the_configuration_profile --validators "Content=JSON_Schema_content_or_the_ARN_of_an_AWS_Lambda_function,Type=validators_of_type_JSON_SCHEMA_and_LAMBDA"
- Windows: the same aws appconfig create-configuration-profile command and parameters as on Linux, entered on one line or with ^ as the line-continuation character.
- PowerShell
New-APPCConfigurationProfile ` -Name
A_name_for_the_configuration_profile` -ApplicationId
The_application_ID` -Description
Description_of_the_configuration_profile` -LocationUri
A_URI_to_locate_the_configuration` -RetrievalRoleArn
The_ARN_of_the_IAM_role_with_permission_to_access_the_configuration_at_the_specified_LocationUri` -Tag
Hashtable_type_user_defined_key_value_pair_metadata_of_the_configuration_profile` -Validators "Content=
JSON_Schema_content_or_the_ARN_of_an_AWS_Lambda_function,Type=
validators_of_type_JSON_SCHEMA_and_LAMBDA"
The system returns information like the following.
- Linux
{ "ApplicationId": "The application ID", "Id": "The configuration profile ID", "Name": "The name of the configuration profile", "Description": "The configuration profile description", " } ] }
- Windows
{ "ApplicationId": "The application ID", "Id": "The configuration profile ID", "Name": "The name of the configuration profile", "Description": "The configuration profile description", "Id": "The configuration profile ID", " } ] }
- PowerShell
ApplicationId : The application ID ContentLength : Runtime of the command Description : The configuration profile description HttpStatusCode : HTTP Status of the runtime Id : The configuration profile ID LocationUri : The URI location of the configuration Name : The name of the configuration profile ResponseMetadata : Runtime Metadata RetrievalRoleArn : The ARN of an IAM role with permission to access the configuration at the specified LocationUri Validators : {Content: The JSON Schema content or the ARN of an AWS Lambda function, Type : Validators of type JSON_SCHEMA and LAMBDA} | https://docs.aws.amazon.com/appconfig/latest/userguide/appconfig-creating-configuration-and-profile.html | 2021-02-24T23:42:18 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.aws.amazon.com |
🚀New (ES) 7.11.0, 7.10.2, 6.8.14 support
🧐Enhancement (KBN) X-Forwarded-For copied from incoming request (or filled with source IP) before forwarding to ES
🧐Enhancement (KBN) Kibana logout event generates a special audit log entry in ROR audit logs index
🧐Enhancement (KBN) ROR panel shows "reports" button if kibana:management app is hidden
🐞Fix (ES) SQL API - better handling of invalid query
🐞Fix (ES) wrong behaviour of kibana_access rule for ROR actions when ADMIN value is set
🚨Security Fix (ES) CVE-2020-35490 & CVE-2020-35490 (removed Jackson dependency from ROR core)
🚀New (ES) New response_fields rule
🧐Enhancement (ES) Full support for ILM API
🧐Enhancement (KBN) Enforce read-after-write consistency between kibana nodes
🧐Enhancement (KBN ENT) OIDC custom claims incorporated in "assertion" claim
🧐Enhancement (KBN ENT) OIDC support for configurable kibanaExternalHost (good for Docker)
🧐Enhancement (KBN ENT) ROR adds "ror-user_" class to "body" tag for easy per-user CSS/JS
🧐Enhancement (KBN ENT/PRO) ROR adds "ror-group_" class to "body" tag for easy per-group CSS/JS
🐞Fix (ES) ROR authentication endpoint action
🐞Fix (ES) "username" in audit entry when request is rejected
🐞Fix (ES) removed verbose logging
🚨Security Fix (ES) CVE-2020-25649
🚀New (ES) 7.10.1 support
🚨Security Fix (ES) Common Vulnerabilities and Exposures (CVE)
🚀New (ES) 7.10.0 support
🚀New (ES) auth_key_pbkdf2 rule
🧐Enhancement (ES) Fields rule performance improvement
🧐Enhancement (ES) Resolved index API support
🐞Fix (ES) index resolve action should be treated as readonly action
🐞Fix (ES) /_snapshot and /_snapshot/_all should behave the same
🚨Security Fix (ES) search template handling fix
🚀New (ES) 7.9.3 & 6.8.13 support
🧐Enhancement (ES) full support for ES Snapshots and Restore APIs
🐞Fix (KBN) fix crash in error handling
🐞Fix (ES) don't remove ES response warning headers
🐞Fix (ES) issue when entropy of /dev/random could have been exhausted when using JwtToken rule
🚀New (ES) 7.9.2 support
🐞Fix (KBN) fix code 500 error on login in Kibana
🚀New (ES) introduced must_involve_indices option for indices rule
🧐Enhancement (ES) negation support in headers rules
🧐Enhancement (ES) x-pack rollup API handling
🐞Fix (KBN) deep links query parameters are now handled
🐞Fix (KBN) make sure default kibana index is always discovered (fixes reporting in 6.x)
🐞Fix (ES) /_cluster/allocation/explain request should not be forbidden if matched block doesn't have indices rules
🐞Fix (ES) remote address extracting issue
🐞Fix (ES) fixed TYP audit field for some request types
🐞Fix (ES) missing handling of aliases API for ES 7.9.0
🚀New (ES) 7.9.0 support
🧐Enhancement (ES) aliases API handling
🧐Enhancement (ES) dynamic variables support in fields rule
🐞Fix (ES) adding aliases issue
🐞Fix (ES) potential memory leak for ES 7.7.x and above
🐞Fix (ES) cross cluster search issue fix for X-Pack _async_search action
🐞Fix (ES) XFF entry in audit issue
🐞Fix (KBN) SAML certificate loading
🐞Fix (KBN) SAML loading groups from assertion
🐞Fix (KBN) fix reporting in pre-7.7.0
🧐Enhancement (ES) cluster API support improvements
🐞Fix (ES) X-Pack _async_search support
🐞Fix (ES) _rollover request handling
🐞Fix (KBN) multitenancy+reporting regression fix (for 7.6.x and earlier)
🐞Fix (KBN) "x-" headers should be forwarded in /login route when proxy passthrough is enabled
🐞Fix (KBN) SAML metadata.xml endpoint not responding
🐞Fix (KBN) NAT/reverse proxy support for SAML
🐞Fix (KBN) SAML login redirect error
🐞Fix (ES) _readonlyrest/metadata/current_user should be always allowed by filter/fields rule
🚀New 7.7.1, 7.8.0 support
🧐Enhancement (KBN) tidy up audit page
🧐Enhancement (KBN FREE) clearly inform when features are not available
🧐Enhancement (KBN) ship license report of libraries
🧐Enhancement (ES) filter rule performance improvement
🐞Fix (KBN) proxy_auth: avoid logout-login loop
🐞Fix (KBN) 404 error on font CSS file
🐞Fix (ES) wildcard in filter query issue
🐞Fix (ES) forbidden /_snapshot issue
🐞Fix (ES) /_mget handling by indices rule when no index from a list is found
🐞Fix (ES) available groups order in metadata response should match the order in which groups appear in ACL
🐞Fix (ES) .readonlyrest and audit index - removed usage of explicit index type
🐞Fix (ES) tasks leak bug
🚀New 7.7.0, 7.6.2, 6.8.9, 6.8.8 support
🧐Enhancement (ES/KBN) kibana_access can be explicitly set to unrestricted
🧐Enhancement (ES) LDAP connection pool improvement
🐞Fix (ES) better LDAP request timeout handling
🐞Fix (ES) remote indices searching bug
🐞Fix (ES) cross cluster search support for _field_caps request
🚨Security Fix (ES) create and delete templates handling
🐞Fix (KBN) Regression in proxy_auth_passthrough
🧐Enhancement (KBN) whitelistedPaths now accepts basic auth credentials
🧐Enhancement (KBN) Dump logout button, new ROR Panel
🧐Enhancement (KBN) removed ROR from Kibana sidebar. Admins have a link in new panel.
🧐Enhancement (KBN) avoid show login form redirecting from SAML IdP
🚀New (KBN) OpenID Connect (OIDC) authentication connector
🚨Security Fix (KBN) server-side navigation prevention to hidden apps
🐞Fix (ES) Interpolating config with environment variables in SSL section
🐞Fix (KBN Ent 6.x) Fixed default space creation in
🐞Fix (KBN 6.x) Fixed error toast notification not showing
🐞Fix (KBN Ent) Fixed missing Axios dependency
🐞Fix (KBN Ent) Fixed SAML connector
🐞Fix (KBN) Toast notification overlap with logout bar
🧐Enhancement (KBN) Restyled logout bar
🧐Enhancement (KBN) Configurable periodic session checker
🚀New (ES/KBN) 7.6.1 compatibility
🚀New (ES) customizable name of settings index
🧐Enhancement (KBN) configurable ROR cookie name
🧐Enhancement (ES/KBN) handling of encoded ROR headers in Authorization header values
🧐Enhancement (KBN) user feedback on why login failed
🐞Fix (ES) support for multiple header values
🐞Fix (ES) releasing LDAP connection pool on reloading ROR settings
🐞Fix (KBN) multitenancy issue with 7.6.0+
🐞Fix (KBN) creation of default space for new tenant
🐞Fix (KBN 6.x) in RO mode, don't hide add/remove over fields in discovery
🐞Fix (KBN 6.x) index template & in-index session manager issues
🚀New (KBN) 7.6.0 support
🧐Enhancement (KBN) less verbose info logging
🧐Enhancement (KBN) start up time semantic check for settings
🐞Fix (KBN Free) missing logout button
🐞Fix (KBN) error message creating internal proxy
🐞Fix (KBN 6.x) add field to filter button invisible in RO mode
🎁Product (KBN) Launched ReadonlyREST Free for Kibana!
🚀New (ES) 7.6.0 support, Kibana support coming soon
🚀New (KBN) Audit log dashboard
🚀New (KBN) Template index can now be declared per tenant instead of globally
🚀New (ES) custom trust store file and password options in ROR settings
🧐Enhancement (ES) When "prompt_for_basic_auth" is enabled, ROR is going to return 401 instead of 404 when the index is not found or a user is not allowed to see the index
🧐Enhancement (ES) literal ipv6 with zone Id is acceptable network address
🧐Enhancement (ES) LDAP client cache improvements
🐞Fix (ES) /_all/_settings API issue
🐞Fix (ES) Index stats API & Index shard stores API issue
🐞Fix (ES) readonlyrest.force_load_from_file setting decoding issue
🐞Fix (KBN) allowing user to be logged in in two tabs at the same time
🐞Fix (KBN) logging with JWT parameter issue
🐞Fix (KBN) parsing of sessions fetched from ES index
🐞Fix (KBN) logout issue
🚀New (KBN) Configurable option to delete docs from tenant index when not present in template
🧐Enhancement (ES) Less verbose logging of blocks history
🧐Enhancement (ES) Enriched logs and audit with attempted username
🧐Enhancement (ES) Better settings validation - only one authentication rule can be used in given block
🧐Enhancement (ES/KBN) Plugin versions printing in logs on launch
🧐Enhancement (ES) When user doesn't have access to given index, ROR pretends that the index doesn't exist and return 404 instead of 403
🐞Fix (ES) Searching for nonexistent/forbidden index with wildcard mirrors default ES behaviour instead of returning 403
🐞Fix (KBN) Switching groups bug
🚀New (ES/KBN) Support v6.8.6, v7.5.0, v7.5.1
🚀New (KBN) Group names can now be mapped to aliases
🚀New (ES) New, more robust and simple method of creating custom audit log serializers
🚀New (ES) Example projects with custom audit log serializers
🐞Fix (KBN) Prevent index migration after kibana startup
🧐Enhancement (KBN) If default space doesn't exist in kibana index then copy from default one
🧐Enhancement (KBN) Crypto improvements - store init vector with encrypted data as base64 encoded json.
🧐Enhancement (ES) Better settings validation - prevent duplicated keys in readonlyrest.yml
🚀New (ES/KBN) Support v7.4.1, v7.4.2
🚀New (KBN) Kibana sessions stored in ES index
🐞Fix (ES) issue with in-index settings auto-reloading
🐞Fix (ES) _cat/indices empty response when matched block doesn't contain 'indices' rule
🚀New (ES/KBN) Support v7.4.0
🚀New (ES) Elasticsearch SQL Support
🚀New (ES) Internode ssl support for es5x, es60x, es61x and es62x
🚀New (ES) new runtime variable @{acl:current_group}
🚀New (ES) namespace for user variable and support for both versions: @{user} and @{acl:user}
🚀New (ES) support for multiple values in uri_re rule
🧐Enhancement (ES) more reliable in-index settings loading of ES with ROR startup
🧐Enhancement (ES) less verbose logs in JWT rules
🧐Enhancement (ES) Better response from ROR API when plugin is disabled
🧐Enhancement (ES) Splitting verification ssl property to client_authentication and certificate_verification
🐞Fix (ES) issue with backward compatibility of proxy_auth settings
🐞Fix (ES) /_render/template request NPE
🐞Fix (ES) _cat/indices API bug fixes
🐞Fix (ES) _cat/templates API return empty list instead of FORBIDDEN when no indices are found
🐞Fix (ES) updated regex for kibana access rule to support 7.3 ES
🐞Fix (ES) proper resolving of non-string ENV variables in readonlyrest.yml
🐞Fix (ES) lang-mustache search template handling
🚀New (ES) Field level security (FLS) supports nested JSON fields
🐞Security Fix (ES) Authorization headers appeared in clear in logs
🧐Enhancement (KBN) Don't logout users when they are not allowed to search a index-pattern
🧐Enhancement (ES) Headers obfuscation is now case insensitive
🚀New (ES/KBN) Support v7.3.1, v7.3.2
🚀New (ES) Configurable header names whose value should be obfuscated in logs
🚀New (KBN) Dynamic variables from user identity available in custom_logout_link
🧐Enhancement (ES) Richer logs for JWT errors
🧐Enhancement (ENT) nextUrl works also with SAML now
🧐Enhancement (ENT) SAML assertion object available in ACL dynamic variables
🧐Enhancement (KBN) Validate LDAP server(s) before accepting new YAML settings
🧐Enhancement (KBN) Ensure a read-only UX for 'ro' users in older Kibana
🐞Fix (ES) Fix memory leak from dependency (snakeYAML)
🐞Security Fix (ES) indices rule can now properly handle also the templates API
🧐Enhancement (ES) Array dynamic variables are serialized as CSV wrapped in double quotes
🧐Enhancement (ES) Cleaner debug logs (no stacktraces on forbidden requests)
🧐Enhancement (ES) LDAP debug logs fire also when cache is hit
🚀New (ES/KBN) Support v7.2.1, v7.3.0
🐞Fix (PRO) PRO plugin crashing for some Kibana versions
🐞Fix (ENT) SAML library wrote a too large cookie sometimes
🐞Fix (ENT) SAML logout not working
🐞Fix (ENT) JWT fix exception "cannot set requestHeadersWhitelist"
🐞Fix (PRO/ENT) Hide more UI elements for RO users
🐞Fix (PRO/ENT) Sometimes not all the available groups appear in tenancy selector
🐞Fix (PRO/ENT) Feature "nextUrl" broke
🐞Fix (PRO/ENT) prevent user kick-out when APM is not configured and you are not an admin
🚀New (PRO/ENT) Kibana request path/method now sent to ES (good for policing dev-tools)
🚀New (ES) User impersonation API
🚀New (ES) Support latest 6.x and 5.x versions
🐞Security Fix (ES) filter/fields rules leak
🐞Fix (KBN/ENT) allow more action for kibana_access, prevent sudden logout
🐞Fix (KBN/ENT) temporarily roll back "support for unlimited tenancies"
🚀New Support added for ES/Kibana 6.8.1
🧐Enhancement (ES) Crash ES on invalid settings instead of stalling forever
🧐Enhancement (ES) Better logging on JWT, JSON-paths, LDAP, YAML errors
🧐Enhancement (ES) Block level settings validation to user with precious hints
🧐Enhancement (ES) If force_load_from_file: true, don't poll index settings
🧐Enhancement (ES) Order now counts declaring LDAP Failover HA servers
🐞Fix (ES) "EsIndexJsonContentProvider" had a null pointer exception
🐞Fix (ES) "es.set.netty.runtime.available.processors" exception
🧐Enhancement (KBN) Collapsible logout button
🧐Enhancement (KBN) ROR App now uses a HA http client
🧐Enhancement (KBN) Automatic logout for inactivity
🧐Enhancement (KBN) Support unlimited amount of tenancies
🐞Fix (KBN/ENT) concurrent multitenancy bug
🐞Fix (KBN) Avoid sporadic errors on Save/Load buttons
🚀New Support for Elasticsearch & Kibana 7.2.0
🐞Fix (ES) restore indices ("IDX") in audit logging
🧐Enhancement (ES) New algorithm of setting evaluation order
🚀New (ES) JWT claims as dynamic variables. I.e. "@{jwt:claim.json.path}"
🚀New (ES) "explode" dynamic variables. I.e. indices: ["@explode{x-indices}"]
🐞Fix (PRO/Enterprise) preserve comments and formatting in YAML editor
🐞Fix (PRO/Enterprise) Print error message when session is expired
🐞Regression (PRO/Enterprise) Redirect to original link after login
🐞Regression (PRO/Enterprise) Broken CSV reporting
🧐Enhancement (PRO/Enterprise) Prevent navigating away from YAML editor w/ unsaved changes
🐞Fix (Enterprise) Exception when SAML connectors were all disabled
🐞Fix (Enterprise) Concurrent tenants could mix up each other kibana index
🐞Fix (Enterprise) Cannot inject custom JS if no custom CSS was also declared
🐞Fix (Enterprise) Injected JS had no effect on ROR logout button
🐞Fix (Enterprise) On narrow screens, the YAML editor showed buttons twice
🐞Fix (Elasticsearch) Reindex requests failed for a regression in indices extraction
🐞Fix (Elasticsearch) Groups rule erratically failed
🐞Fix (Elasticsearch) JWT claims can now contain special characters
🧐Enhancement (Elasticsearch) Better ACL History logging
🧐Enhancement (Elasticsearch) QueryLogSerializer and old custom log serializers work again
🐞Fix (PRO/Enterprise) ReadonlyREST icon in Kibana was white on white
🐞Fix (Enterprise) SAML connectors could not be disabled
🐞Fix (Enterprise) SAML connector "buttonName" didn't work
🚀New Support for Elasticsearch & Kibana 7.0.1
🧐Enhancement (Elasticsearch) empty array values in settings are invalid
🐞Security Fix (Elasticsearch) arbitrary x-cluster search referencing local cluster
🐞Fix (Elasticsearch) ArrayOutOfBoundException on snapshot operations
🧐Enhancement (PRO/Enterprise) History cleaning can now be disabled ("clearSessionOnEvents")
🚀New Support for Elasticsearch 7.0.0 (Kibana is coming soon)
🧐Enhancement (Elasticsearch) rewritten LDAP connector
🧐Enhancement (Elasticsearch) new core written in Scala is now GA
🐞Fix (Enterprise) devtools requests now honor the currently selected tenancy
🐞Security Fix (Enterprise/PRO) Fix "connectorsService" error in installation
🚀New Support for Kibana/Elasticsearch 6.7.1
🧐Enhancement (Enterprise >= Kibana 6.6.0) Multiple SAML identity provider
🐞Security Fix (Enterprise/PRO) Don't pass auth headers back to the browser
🐞Fix (Enterprise/PRO) Missing null check caused error in reporting (CSV)
🐞Fix (Enterprise) Don't reject requests if SAML groups are not configured
🐞Fix filter/fields rules not working in msearch (in 6.7.x)
🧐Enhancement Print whole LDAP search query in debug log
🚀New Support for Kibana/Elasticsearch 6.7.0
🧐Enhancement (PRO/Enterprise) JWT query param is the preferred credentials provider
🧐Enhancement (PRO/Enterprise) admin users can use indices management
🧐Enhancement (PRO/Enterprise) ro users can dismiss telemetry form
🐞Fix Audit logging in 5.1.x now works again
🐞Fix unpredictable behaviour of "filter" and "fields" when using external auth
🐞Fix LDAP ConcurrentModificationException
🐞Fix Audit logging in 5.1.x now works again
🐞Fix (PRO/Enterprise) JWT deep-link works again
1.17.2 went unreleased, all changes have been merged in 1.17.3 directly
🐞Fix (Enterprise) Tenancy selector showing if user belonged to one group
🐞Fix (PRO/Enterprise) RW buttons not hiding for RO users in React Kibana apps
🐞Fix (Enterprise) Tenancy templating now works much more reliably
🐞Fix (Enterprise) Missing tenancy selector icon after switching tenancy
🐞Fix (PRO/Enterprise) barring static files requests caused sudden logout
🐞Fix Numerous fixes to better support Kibana 6.6.x
🐞Fix Critical fixes in new Scala core
🐞Fix Exception in reindex requests caused tenancy templating to fail
🧐Enhancement Bypass cross-cluster search logic if single cluster
🐞Fix (PRO/Enterprise) SAML now works well in 6.6.x
🐞Fix (PRO/Enterprise) "undefined" authentication error before login
🐞Fix (Enterprise) Default space creation failures for new tenants
🐞Fix (Enterprise) Icons/titles CSS misalignment in sidebar (Firefox)
🧐Enhancement(Enterprise) UX: Larger tenancy selector
🐞Security Fix (Enterprise) Privilege escalation when changing tenancies under monitoring
🐞Fix (Elasticsearch) compatibility fixes to support new Kibana features
🧐Enhancements (Elasticsearch) New core and LDAP connector written in Scala is finished, now under QA.
🚀New Feature Support for Kibana/Elasticsearch 6.6.0, 6.6.1
🚀New Feature Internode SSL (ES 6.3.x onwards)
🧐Enhancement(PRO/Enterprise) UI appearance
🧐Enhancement Made HTTP Connection configurable (PR #410)
🐞Fix slow boot due to SecureRandom waiting for sufficient entropy
🐞Fix Enable kibana_access:ro to create short urls in es6.3+ (PR #408)
🧐Enhancement X-Forwarded-For header in printed es logs ("XFF")
🧐Enhancement kibanaindex: ".kibana@{user}" when user is "John Doe" becomes .kibana_john_doe
🐞Fix (Enterprise) parse SAML groups from assertion as array of strings
🐞Fix (Enterprise) SAMLRequest in location header was URLEncoded twice, broke on some IdP
🐞Fix (PRO/Enterprise) "cookiePass" works again, no more need for sticky cookies in load balancers!
🐞Fix (PRO/Enterprise) fix redirect loop with JWT deep linking when JWT token expires
🧐Enhancement (PRO/Enterprise) fix audit demo page CSS
🧐Enhancement (Enterprise) SAML more configuration parameters available
🚀New Feature (PRO/Enterprise) set ROR to debug mode (readonlyrest_kbn.logLevel: "debug")
🐞Fix (PRO/Enterprise) compatibility problems with older Kibana versions
🐞Fix (PRO/Enterprise) compatibility problems with OSS Kibana version
🚀New Feature "kibanaIndexTemplate": default dashboards and spaces for new tenants
🧐Enhancement Support for ES/Kibana 6.5.4
🧐Enhancement Upgraded LDAP library
🧐Enhancement (Enterprise) Now tenants save their CSV exports in their own reporting index
🐞Fix(PRO/Enteprise) Support passwords that start and/or end with spaces
🐞Fix (PRO/Enterprise) Now reporting works again
🧐Enhancement Support for ES/Kibana 6.5.2, 6.5.3
🚧WIP: Laid out the foundation for LDAP HA support
🧐Enhancement Support for ES/Kibana 6.4.3
🚀New Feature (PRO/Enterprise) configurable server side session duration
🚀New Feature [LDAP] High Availability: Round Robin or Failover
🧐Enhancement Support for ES/Kibana 6.4.2
🐞Fix (Enterprise) Multi tenancy: sometimes changing tenancy would not change kibana index
🐞Security Fix (Enterprise/PRO) Avoid echoing Base64 encoded credentials in login form error message
🧐Enhancement (Enterprise/PRO) Remove latest search/visualization/dashboard history on logout
🧐Enhancement (Enterprise/PRO) Clear transient authentication cookies on login error to avoid authentication deadlocks
🐞Fix: External JWT verification may throw ArrayOutOfBoundException
🚧WIP: Laid out the foundation for internode SSL transport (port 9300)
🚀New Feature [JWT] external validator: it's now possible to avoid storing the private key in settings
🧐Enhancement Support for ES/Kibana 6.4.1
🧐Enhancement Rewritten big part of ES plugin documentation
🧐Enhancement SAML Single log out flow
🐞Fix (Enterprise/PRO) cookiePass works again, but only for Kibana 5.x. Newer Kibana needs sticky sessions in LB.
🧐Enhancement (Enterprise/PRO) much faster logout
🐞 Fix (PRO/Enterprise) bugs during plugin packaging and installation process
🚀New Feature Users rule: easily restrict external authentication to a list of users
🧐Enhancement Support for ES 5.6.11
🐞Hot Fix (Enterprise/PRO) Error 404 when logging in with older versions of Kibana
🚀New Feature (Enterprise) SAML Authentication
🚀New Feature Support for Elasticsearch and Kibana 6.4.0
🚀New Feature Headers rule now split in headers_or and headers_and
🧐Enhancement Headers rule now allows wildcards
🚀New Feature (Enterprise) Multi-tenancy now works also with JSON groups provider
🐞 Fix Multi-tenancy (Enterprise) incoherent initial kibana_index and current group
🧐Enhancement Support for Elastic Stack 6.3.1 and 5.6.10
🚀New Feature (Enterprise) Custom CSS injection for Kibana
🚀New Feature (Enterprise) Custom Javascript injection for Kibana
🚀New Feature (PRO/Enterprise) access paths without need to login (i.e. /api/status)
🐞Fix (PRO/Enterprise) Navigating to X-Pack APM caused hidden Kibana apps to reappear
🚀New Feature: map LDAP groups to local groups (a.k.a. role mapping)
🐞 Fix (Elasticsearch) wildcard aliases resolution not working in "indices" rule.
🧐Enhancement: it is now possible now to use JDK 9 and 10
🐞 Fix (PRO/Enterprise) wait forever for login request (i.e. slow LDAP servers)
🐞 Fix (PRO/Enterprise) add spinner and block UI if login request is being sent
🐞 Fix (PRO/Enterprise) if user is logged out because of LDAP cache expiring + slow authentication, redirect to login.
🐞 Fix (PRO/Enterprise) let RO users delete/edit search filters
🚀New Feature: Introducing support for Elasticsearch and Kibana v6.3.0
🐞 Fix (Enterprise) multi tenancy - switching tenancy does not always switch kibana index
🧐 Enhancement: when login, forward "elasticsearch.requestHeadersWhitelist" headers. (useful for "headers" rule and "proxy_auth" to work well.)
🚀 New feature: Field level security
🚀 New rules: Snapshot, Repositories, Headers
🧐 Enhancement: custom audit serializers: the request content is available
🐞 Fix readonlyrest.yml path discovery
🐞 Fix: LDAP available groups discovery (tenancy switcher) corner cases
🐞 Fix: auth_key_sha1, auth_key_sha256 hashes in settings should be case insensitive
🐞 Fix: LDAP authentication didn't work with local group | https://docs.readonlyrest.com/changelog | 2021-02-25T00:13:39 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.readonlyrest.com |
Use the SMWeb Services Bootp/DHCP Packet Monitor to obtain or verify the MAC address of a chassis or chassis component.
- Access the web page of the managing CMIC.
- From the web page of the managing CMIC, select The .Bootp/DHCP Packet Monitor appears on the CMIC web page.
- Power cycle the chassis.
- Write down or verify the primary and secondary MAC addresses (if applicable) displayed in the Bootp/DHCP Packet Monitor. | https://docs.teradata.com/r/EeEFmq4RLQF2eA6U5ZM6Qg/lj1pIhkqpRantvWlSQbDxQ | 2021-02-25T00:14:30 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.teradata.com |
WORM (Write Once Read Many)
This feature enables you to create a WORM volume using the gluster CLI.
Description
WORM (write once, read many) is a desired feature for users who want to store data such as log files, where the data is not allowed to be modified.
GlusterFS provides a new key, features.worm, which takes boolean values (enable/disable) for volume set. Internally, the volume set command with the 'features.worm' key will add the 'features/worm' translator to the brick's volume file. This change is reflected on a subsequent restart of the volume, i.e. a gluster volume stop followed by a gluster volume start.
With a volume converted to WORM, the changes are as follows:
- Reads are handled normally
- Only files with O_APPEND flag will be supported.
- Truncation and deletion won't be supported.
Volume Options
Use the volume set command on a volume and see if the volume is actually turned into WORM type.
# features.worm enable
Fully loaded Example
The WORM feature is supported from GlusterFS version 3.4. Start glusterd by using the command
# service glusterd start
Now create a volume by using the command
# gluster volume create <vol_name> <brick_path>
Start the volume created above by running the command below.
# gluster vol start <vol_name>
Run the command below to make sure that volume is created.
# gluster volume info
Now turn on the WORM feature on the volume by using the command
# gluster vol set <vol_name> worm enable
Verify that the option is set by using the command
# gluster volume info
User should be able to see another option in the volume info
# features.worm: enable
Now restart the volume for the changes to reflect, by performing volume stop and start.
# gluster volume <vol_name> stop # gluster volume <vol_name> start
Now mount the volume using fuse mount
# mount -t glusterfs <vol_name> <mnt_point>
create a file inside the mount point by running the command below
# touch <file_name>
Verify that user is able to create a file by running the command below
# ls <file_name>
How To Test
Now try deleting the above file which is been created
# rm <file_name>
Since WORM is enabled on the volume, it gives the following error message
rm: cannot remove '/<mnt_point>/<file_name>': Read-only file system
Put some content into the file created above; appending is allowed because WORM volumes support the O_APPEND flag.
# echo "at the end of the file" >> <file_name>
Now try editing the file in place by running the command below and verify that the edit is rejected with a 'Read-only file system' error.
# sed -i "1iAt the beginning of the file" <file_name>
Now read the contents of the file and verify that the file can be read.
# cat <file_name>
Note: If WORM option is set on the volume before it is started, then volume need not be restarted for the changes to get reflected. | https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/worm/ | 2021-02-24T23:09:19 | CC-MAIN-2021-10 | 1614178349708.2 | [] | staged-gluster-docs.readthedocs.io |
An internationally recognized pediatric ophthalmologist with expertise in strabismus, amblyopia, pediatric cataracts and glaucoma has joined CHOC Children’s. Dr. Rahul Bhola is the newest division chief of ophthalmology with CHOC Children’s Specialists.
“The biggest reason I was inspired to join CHOC was the mission of the hospital. I feel that CHOC’s mission to nurture, advance and protect the health and well-being of children is in close alignment with my personal goals as a physician,” Bhola says. “I seek to nurture the health care of children by delivering state-of-the-art ophthalmology care to our community. CHOC has the resources, reputation and experience to provide excellent care.”
Dr. Bhola comes from a family of physicians. His parents practiced internal medicine for more than 40 years in India, and the empathetic and holistic care they provided to their patients inspired him to pursue a career in medicine.
“Very early on in medical school, I developed a special interest in pediatrics, and the surgical finesse of ophthalmology later cemented my passion for pediatric ophthalmology. The gift of vision is the most important sense a child can have,” Dr. Bhola says. “Giving a ray of light to those who struggle with vision is very gratifying to me. Treating children is important to me because they have their entire lives ahead of them, and improving their vision positively impacts their entire family.”
Dr. Bhola attended medical school and completed an internship at University College of Medical Sciences in Delhi, India. He completed two residencies in ophthalmology at Maulana Azad Medical College in New Delhi, India and the University of Louisville, Kentucky. He pursued fellowships in pediatric ophthalmology at the Jules Stein Eye Institute at the University of California Los Angeles and the University of Iowa.
Dr. Bhola has received numerous awards both nationally and internationally and has extensively published in peer-reviewed journals. He has participated as an investigator in many NIH-sponsored trials and has been named to the “Best Doctors in America” and “America’s Top Ophthalmologists” lists consecutively for many years. Dr. Bhola recently started studying the ocular effect of excessive smart device usage in children. His research includes tear film composition in children who are consistently overexposed to smart devices, thereby establishing a link between dry eyes in children and excessive smart device usage.
At CHOC, Dr. Bhola will provide comprehensive eye care, treating patients with a variety of eye diseases and disorders. In addition to treating refractive errors (the need for glasses), Dr. Bhola will provide more specialized care for diseases like amblyopia (lazy eyes), pediatric and adult strabismus (crossing or drifting of eyes), blocked tear duct, diplopia (double vision), pediatric cataracts, pediatric glaucoma, tearing eyes, retinopathy of prematurity, ptosis (droopy eyelids), traumatic eye injuries and uveitis.
Dr. Bhola is among the very few surgeons nationally skilled in treating pediatric glaucoma surgically using the illuminated microcatheter. This highly-specialized, minimally-invasive approach of canaloplasty has been used for treating pediatric glaucoma only within the last few years. Childhood glaucoma, though uncommon, can be a blinding disease causing severe visual impairment if not detected early and treated promptly. The onset of juvenile glaucoma often occurs between the ages of 10 and 20 and can be multifactorial. Glaucoma in pediatric population can also be secondary to trauma occurring from any form of injury including sports injuries.
As a Level II pediatric trauma center, and the only one in Orange County dedicated exclusively for kids, CHOC’s trauma team treats a variety of critically injured children from across the region. This includes children who have sustained sports injuries, during which damage to the structure of the eye can cause glaucoma.
Dr. Bhola is very passionate about educating primary care physicians on the need for regular pediatric vision screenings. For example, children complaining of headaches may be taken to a neurologist. However, eye problems such as refractive errors, convergence insufficiency and strabismus can result in headache from excessive straining of the eyes, which may affect school performance and even social withdrawal in some children. These conditions are likely to be identified at regular vision screenings.
Dr. Bhola’s philosophy of care is to treat his patients as if they were his own children.
“My main philosophy is to deliver patient-centered care with compassion and excellence. I remember their life events and celebrate their achievements with them. It’s important that a patient remembers you in order to start to build trust with them. I love when my patients send me holiday cards and copies of their school photos and let me know how they are doing. They became part of my family. I always treat every patient like they are my own child,” Bhola says.
He also focuses on treating the whole person rather than the disease, and involving patients in their care.
“I don’t treat the disease, I treat the individual. Healing is more than treating the disease. I want to be at their level so I always talk to them directly and not only talk to their parents. I involve their entire group during treatment,” he says.
At CHOC, Dr. Bhola is eager to provide holistic eye care for his patients.
“My practice will offer complete comprehensive vision care to all patients, which includes both medical as well as surgical care. Our patients come to us for glasses, contacts, regular ocular screenings, and we also provide more specialized care like glaucoma, cataract and strabismus surgeries,” Bhola says. “A lot of systemic disorders such as diabetes, sickle cell anemia, juvenile rheumatic disease and lupus, have co-occurring eye issues that may go undetected if children aren’t seen for regular eye screenings. CHOC patients with systemic disorders such as diabetes now have better access to holistic care.”
As division chief for CHOC Children’s Specialists ophthalmology, Dr. Bhola is passionate about providing state-of-the-art care to patients and training the next generation of pediatric ophthalmologists.
“My main goal is to build a leading ophthalmology division, not only delivering excellent patient care but also engaging in cutting-edge research and disseminating education to the next generation of ophthalmologists and referring providers,” Bhola says.
When not treating patients, Dr. Bhola enjoys cooking, practicing yoga and meditation, and spending time with his wife and two daughters.
To contact Dr. Bhola or refer a patient, please call 888-770-2462.
Learn more about ophthalmology at CHOC Children’s.
Related posts:
CHOC Children's Grand Rounds Video: Optic Neuritis in Pediatric Patients. In this CHOC Children's grand rounds video, Dr. Chantal Boisvert, neuro-ophthalmologist, addresses optic neuritis in pediatric patients. Specifically, she discusses how the presentation and outcome can be different for children ... | https://docs.chocchildrens.org/tag/ophthalmology/ | 2019-03-18T20:12:28 | CC-MAIN-2019-13 | 1552912201672.12 | [Photo: Dr. Rahul Bhola (https://docs.chocchildrens.org/wp-content/uploads/2017/08/Bhola-Rahul_0128-411x576.jpg)] | docs.chocchildrens.org
pgr_apspJohnson - Deprecated function¶
Warning
This function is deprecated!!!
- It has been replaced by a new function, is no longer supported, and may be removed from future versions.
- All code that uses this function should be converted to use its replacement: pgr_johnson.
Synopsis¶
Johnson’s algorithm is a way to find the shortest paths between all pairs of vertices in a sparse, edge weighted, directed graph. Returns a set of pgr_costResult (seq, id1, id2, cost) rows for every pair of nodes in the graph.
pgr_costResult[] pgr_apspJohnson(sql text);
Description¶
Returns set of pgr_costResult[]:
History
- Deprecated in version 2.2.0
- New in version 2.0.0
Examples¶
SELECT * FROM pgr_apspJohnson( 'SELECT source::INTEGER, target::INTEGER, cost FROM edge_table WHERE id < 5' );
NOTICE: Deprecated function: Use pgr_johnson instead
 seq | id1 | id2 | cost
-----+-----+-----+------
   0 |   1 |   2 |    1
   1 |   1 |   5 |    2
   2 |   2 |   5 |    1
(3 rows)
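For comparison, a roughly equivalent query using the replacement function pgr_johnson might look like the following. This is only a sketch; the exact signature and the names of the returned columns can differ between pgRouting versions.

SELECT * FROM pgr_johnson(
    'SELECT source, target, cost FROM edge_table WHERE id < 5',
    true
);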
Both queries use the Sample Data network. | https://docs.pgrouting.org/2.3/en/src/apsp_johnson/doc/pgr_apspJohnson.html | 2019-03-18T19:22:17 | CC-MAIN-2019-13 | 1552912201672.12 | [] | docs.pgrouting.org
Availability: Macintosh.
This module provides an interface to the Macintosh Domain Name Resolver. It is usually used in conjunction with the mactcp module, to map hostnames to IP addresses. It may not be available in all Mac Python versions.
The macdnr module defines the following functions:
HInfo(hostname): Query for the hinfo record for host hostname. These records contain hardware and software information about the machine in question (if they are available in the first place). Returns a dnr result object of the ``hinfo'' variety. | http://docs.python.org/release/2.1.1/mac/module-macdnr.html | 2012-05-27T05:13:24 | crawl-003 | crawl-003-023 | [] | docs.python.org
Write a Python search command
This topic discusses how your Python script should handle inputs and arguments. The search command script should be located in
$SPLUNK_HOME/etc/apps/<app_name>/bin/ and named
<command>.py. Also, when naming the search command:
- Use only alphanumeric (a-z, A-Z, and 0-9) characters.
- Do not use a search command name that already exists.
Handling inputs
The input to the script should be in pure CSV format or in Intersplunk format, which is a header section followed by a blank line followed by pure CSV body.
To indicate whether or not your script expects a header, use the 'enableheader' key. The 'enableheader' key defaults to true, which means that the input will contain the header section and you are using the Intersplunk format.
If 'enableheader' is false, your script does not expect a header section and the input will be pure CSV.
The output of your script is expected to be pure CSV. For an error condition, simply return a CSV with a single "ERROR" column and a single row (besides the header row) with the contents of the message.
Another method of interpreting the input and producing output is to use the splunk.Intersplunk helper module that ships with Splunk, which provides functions for reading the header and CSV results and for writing results or error messages back to Splunk.
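As a concrete illustration, here is a minimal streaming command that uppercases a user-supplied field. It is only a sketch: it assumes the splunk.Intersplunk helper module is available, and the command name 'mytoupper' and script name are made up for this example.

# mytoupper.py -- illustrative example only
import sys
import splunk.Intersplunk as si

try:
    if len(sys.argv) < 2:
        si.generateErrorResults("Usage: mytoupper <fieldname>")
        sys.exit(0)
    field = sys.argv[1]
    # Reads the header section (when enableheader is true) and the CSV body,
    # returning the results as a list of dictionaries.
    results, dummyresults, settings = si.getOrganizedResults()
    for result in results:
        if field in result:
            result[field] = result[field].upper()
    # Writes the results back to Splunk as pure CSV.
    si.outputResults(results)
except Exception, e:
    # On error, emit an error message in the format described above.
    si.generateErrorResults("Unexpected error: %s" % (e,))

With a corresponding commands.conf entry, the command could then be invoked in a search, for example: ... | mytoupper host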
Handling arguments
The arguments that are passed to your script (in sys.argv) will be the same arguments that are used to invoke your command in the search language.
The exception is if supports_getinfo is set to true for your command in commands.conf. In that case, your script is first invoked with '__GETINFO__' (plus the user-supplied arguments) so it can report its capabilities, and is then invoked again with '__EXECUTE__' to actually process results.
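The snippet below sketches what that dispatch can look like. It assumes supports_getinfo is enabled for the command and that the outputInfo() helper described next is available from the splunk.Intersplunk module; both are assumptions to verify against your Splunk version.

# Illustrative getinfo handling (assumes supports_getinfo = true)
import sys
import splunk.Intersplunk as si

if len(sys.argv) > 1 and sys.argv[1] == '__GETINFO__':
    # streaming=True, generating=False, retevs=True, reqsop=False, preop=None
    si.outputInfo(True, False, True, False, None)
    sys.exit(0)
# Otherwise the script was invoked to process results (see the example above).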
Following is the definition of
outputInfo() and its arguments. You can also specify each of these arguments statically within commands.conf.
def outputInfo(streaming, generating, retevs, reqsop, preop, timeorder=False)
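For reference, a minimal commands.conf stanza that declares the same kind of settings statically might look like the following; the command and file names are assumptions carried over from the earlier example.

[mytoupper]
filename = mytoupper.py
enableheader = true
supports_getinfo = true
streaming = true
retainsevents = true
requires_preop = false

The arguments accepted by outputInfo() are described below.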
streaming
Is your command streamable?
streaming = true indicates that your script can be applied separately for each chunk of results processed in the search pipeline. Otherwise, your script will only be called a single time with all of the input results that it will ever see.
If your script is not streaming, you may specify a search (that contains only streaming commands) to be executed before your script, if your script is the very first non-streaming command in the pipeline or if you have
requires_preop set to 'true' (it's 'false' by default).
generating
Does you command generate new events? A generating command is one that must be the first command specified in a search. Generating commands do not expect any input and generating output that depends only on command line arguments.
retevs
Does your command retain events (sort, dedup, cluster) or does it transform (stats) them? This argument is
retainsevents in
commands.conf.
This argument indicates whether this script, if given 'events' as input will return 'events' as output. By default this is 'false', meaning that the timeline will never represent the output of this command. In general, if you retain the '_raw' and '_time' fields, you can set
retevs to 'true'.
reqsop
Does your command require pre-streaming operations? This argument is
requires_preop in
commands.conf.
Basically, this argument indicates whether the string in the 'preop' variable must be executed, regardless if this script is the first non-streaming command in a search pipeline or not.
preop
This argument is the
streaming_preop key in
commands.conf. If
reqsop = true, this argument is the string that denotes the requested pre-streaming search string.
timeorder
If
generating = true, does your command generate events in descending time order, or does your command change the time order of events?
This argument represents both
generates_timeorder and
overrides_timeorder in
commands.conf.
overrides_timeorder indicates whether or not, if the input to this script is in descending time order, the output will also be in descending time order.
generates_timeorder applies when this script is a generating command and indicates whether this script will issue output in descending time order. | http://docs.splunk.com/Documentation/Splunk/4.0.1/SearchReference/WriteaPythonsearchcommand | 2012-05-27T05:31:10 | crawl-003 | crawl-003-023 | [] | docs.splunk.com