Multiple news/weblog sections
Your project can host multiple independent news/weblog sections, each with their own items. For example, a company website might have a news section for publishing press releases, and a weblog for publishing more informal articles.
To do this, you need to create a django CMS page for each news/weblog section, and add an apphook for Aldryn News & Blog to each of them. You will also need to create a separate apphook configuration for each of them - apphook configurations cannot be shared between apphook instances.
Creating a new News & Blog section
The quickest way to do this is:
- Create the new page.
- In its Advanced settings, choose Newsblog in the Application field. A new field, Application configurations, will appear immediately below it.
- Add a new application configuration, by selecting the + icon.
Fields
Instance namespace - a unique (and slug-like) name for this configuration. Note that this cannot be subsequently changed.
Application title - A human-readable name for the configuration, that helps explain its purpose to the users of the system. For example, if this news section will publish press releases, call it Press releases. The name will be reflected in the django CMS toolbar when you’re on that page.
Permalink type - the format of canonical URLs for articles in this section.
Non-permalink-handling - For convenience, the system can optionally resolve URLs that are not in
the canonical format. For example, if the canonical URL is
2016/11/27/man-bites-dog, the URL
man-bites-dog can redirect to it. This behaviour is the default, but optional.
Prefix for template directories - If you’d like this news section to use custom templates, create
a set in a new directory. So for example, instead of using the default
aldryn_newsblog/article_list.html, it will look for
aldryn_newsblog/custom-directory/article_list.html.
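For instance, a minimal override layout in your project's templates directory might look like the sketch below. The directory name custom-directory is only an example; it should match whatever you enter in the Prefix for template directories field.

    templates/
        aldryn_newsblog/
            custom-directory/
                article_list.html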
Include in search index - see Per-apphook indexing.
Other fields are self-explanatory.
Apphook configurations can also be created and edited in other ways:
- from the Django Admin, in Aldryn News & Blog > Application configuration
- from the option Configure addon... in the apphook’s menu in the django CMS toolbar
Access to application configuration
Typically, you will not provide most content editors of your site with admin permissions to manage apphooks - this should be reserved for site managers.
If there is a specific call you're trying to review but can't find it, we're here to help! There are three methods you can employ to locate a specific call.
1. Was it a call or meeting that you owned?
The best place to locate calls you've personally had is the My Recordings page. This page will showcase any recordings you own, and will also display your upcoming meetings.
On the left of your My Recordings page, you will see the most recent meetings you have had. You can click into them from here to review.
To the right, you will see upcoming calls -- both those that are scheduled to be recorded, as well as those that are not scheduled to be recorded. Quickly take a peek by hovering over the toggle button to proactively detect why the call may not be scheduled to record.
If you swore your call should have been recorded, click here to read more about how to find it, or click here to read about what needs to be in place for your calls to record properly.
2. Was it a call that someone on your team had?
The best way to locate calls made by a team-member would be to go to the Recordings page. Here you can filter by different criteria such as Team, specific Rep, Date, and deal stage at time of call.
Pro Tip: Click "Save View" on the far right if this is a search you frequently make.
3. Do you only remember a couple key words?
For searches that rely more on keywords such as customer name or name of the meeting, we recommend using the magnifying glass search function. It can be used to search for customer and prospect names, rep names, and even the title of your calendar meeting. Note: this view is currently in Beta and will be released soon!
Advanced Synchronization Techniques
Multithreaded applications often use wait handles and monitor objects to synchronize multiple threads. These sections explain how to use the following .NET Framework classes when synchronizing threads: AutoResetEvent, Interlocked, ManualResetEvent, Monitor, Mutex, ReaderWriterLock, Timer, and WaitHandle. A thread that calls one of the wait methods on a wait handle either waits for a specified period (if a time-out is specified) or waits indefinitely (if no time-out is specified) for the other thread to release the wait handle. If a time-out is specified and the wait handle is released before the time-out expires, the wait method returns True. Three kinds of wait handles can be waited on with WaitOne, WaitAny, or WaitAll: mutex objects, ManualResetEvent, and AutoResetEvent. The last two are often referred to as synchronization events.
Mutex Objects
Mutex objects are synchronization objects that can be owned by only a single thread at a time. Synchronization events, by contrast, are used to notify other threads that something has occurred or that a resource is available. Despite the term including the word "event," synchronization events are unlike other Visual Basic events—they are really wait handles.
Threads that call one of the wait methods of a synchronization event must wait until another thread signals the event by calling the Set method. There are two synchronization event classes: ManualResetEvent and AutoResetEvent. Monitor objects can also be used to give one thread at a time access to a region of code; Visual Basic provides the SyncLock and End SyncLock statements to simplify access to monitor objects, and Visual C# provides the equivalent lock statement.
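To illustrate the synchronization events described above, here is a minimal C# sketch (not part of the original article) in which a worker thread blocks on an AutoResetEvent until the main thread signals it:

using System;
using System.Threading;

class Example
{
    // The event starts in the nonsignaled state.
    static readonly AutoResetEvent ready = new AutoResetEvent(false);

    static void Main()
    {
        var worker = new Thread(() =>
        {
            // Block until another thread calls Set, or give up after 5 seconds.
            if (ready.WaitOne(TimeSpan.FromSeconds(5)))
                Console.WriteLine("Signal received.");
            else
                Console.WriteLine("Timed out waiting for the signal.");
        });
        worker.Start();

        Thread.Sleep(1000);   // simulate some work before signaling
        ready.Set();          // release the waiting thread
        worker.Join();
    }
}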
ReaderWriter Locks
In some cases, you may want to lock a resource only when data is being written and permit multiple clients to simultaneously read data when data is not being updated. The ReaderWriterLock class enforces exclusive access to a resource while a thread is modifying it, but allows non-exclusive access when the resource is only being read.
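A short C# sketch of that pattern, assuming a single shared value guarded by a ReaderWriterLock (the class and field names are made up for illustration):

using System.Threading;

class SharedValue
{
    private static readonly ReaderWriterLock rwLock = new ReaderWriterLock();
    private static int value;

    public static int Read()
    {
        rwLock.AcquireReaderLock(Timeout.Infinite);   // many readers may hold this at once
        try { return value; }
        finally { rwLock.ReleaseReaderLock(); }
    }

    public static void Write(int newValue)
    {
        rwLock.AcquireWriterLock(Timeout.Infinite);   // exclusive while writing
        try { value = newValue; }
        finally { rwLock.ReleaseWriterLock(); }
    }
}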
Deadlocks.
See Also
Concepts
Advanced Multithreading with Visual Basic
Multithreaded Applications
Reference
Other Resources
Multithreading in Components
Install Visual C++ for Cross-Platform Mobile Development
Note
This article applies to Visual Studio 2015. If you're looking for the latest Visual Studio documentation, use the version selector at the top left. We recommend upgrading to Visual Studio 2019. Download it here
Visual C++ for Cross-Platform Mobile Development]() is an installable component of Visual Studio 2015. It includes cross-platform Visual Studio templates and installs the cross-platform tools and SDKs to get started quickly, without having to locate, download, and configure them yourself. You can use these tools in Visual Studio to easily create, edit, debug and test cross-platform projects. This topic describes how to install the tools and third-party software required to develop cross-platform apps using Visual Studio. For an overview of the component, see Visual C++ Cross-Platform Mobile
Requirements
Get the tools
Install the tools
Install tools for iOS
Install or update dependencies manually
Requirements
For installation requirements, see Visual Studio 2015 System Requirements.
Important
If you are using Windows 7 or Windows Server 2008 R2, you can develop code for Classic Windows applications, Android Native Activity apps and libraries, and apps and code libraries for iOS, but not Windows Store or Universal Windows apps.
To build apps for specific device platforms, there are some additional requirements:
Windows Phone emulators and the Microsoft Visual Studio Emulator for Android require a computer that can run Hyper-V. The Hyper-V feature in Windows must be enabled before you can install and run the emulators. For more information, see the emulator's system requirements.
The x86 Android emulators that come with the Android SDK work best on computers that can run the Intel HAXM driver. This driver requires an Intel x64 processor with VT-x and Execute Disable Bit support. For more information, see Installation Instructions for Intel® Hardware Accelerated Execution Manager - Microsoft Windows.
Building code for iOS requires an Apple ID, an iOS Developer Program account, and a Mac computer that can run Xcode 6 or later on OS X Mavericks or later versions. For simple installation steps, see Install tools for iOS.
Get the tools
Visual C++ for Cross-Platform Mobile Development is an installable component included in Visual Studio Community, Professional, and Enterprise editions. To get Visual Studio, go to the Visual Studio 2015 Downloads page and download Visual Studio 2015 with Update 2 or later.
Install the tools
The installer for Visual Studio 2015 includes an option to install Visual C++ for Cross-Platform Mobile Development. This installs the required C++ language tools, templates and components for Visual Studio, the GCC and Clang toolsets needed for Android builds and debugging, and components to communicate with a Mac for iOS development. It also installs all the third-party tools and software development kits that are required to support iOS and Android app development. Most of these third-party tools are open-source software required for Android platform support.
Android Native Development Kit (NDK) is required to build C++ code that targets the Android platform.
Android SDK, Apache Ant, and Java SE Development Kit are required for the Android build process.
Microsoft Visual Studio Emulator for Android is an optional high-performance emulator useful for testing and debugging your code.
To install Visual C++ for Cross-Platform Mobile Development and the third-party tools
Run the Visual Studio 2015 installer that you downloaded following the link in Get the tools. To install optional components, choose Custom as the type of installation. Choose Next to select the optional components to install.
In Select features, expand Cross Platform Mobile Development and check Visual C++ Mobile Development.
By default, when you select Visual C++ Mobile Development, the Programming Languages option is set to install Visual C++, and the Common Tools and Software Development Kits options are set to install required third-party components. You can choose additional components if you need them. By default, the Microsoft Visual Studio Emulator for Android is also selected. Components that are already installed appear inactive in the list.
To build Universal Windows apps and share code between them and your Android and iOS projects, in Select features, expand Windows and Web Development and check Universal Windows App Development Tools. If you don't plan to build Universal Windows apps, you can skip this option.
Choose Next to continue.
The third-party components have their own license terms. You can view the license terms by choosing the License Terms link next to each component. Choose Install to add the components and install Visual Studio and Visual C++ for Cross-Platform Mobile Development.
When installation is complete, close the installer and then restart your computer. Some setup actions for the third-party components do not take effect until the computer is restarted.
Important
You must restart to make sure everything is installed correctly.
If the Microsoft Visual Studio Emulator for Android component failed to install, your computer may not have Hyper-V enabled. Use the Turn Windows features on or off Control Panel app to enable Hyper-V, and then run the Visual Studio installer again.
Note
If your computer or your version of Windows does not support Hyper-V, you can't use the Microsoft Visual Studio Emulator for Android component. The Home Edition of Windows does not include Hyper-V support.
Open Visual Studio. If this is the first time that you have run Visual Studio, it may take some time to configure and sign in. When Visual Studio is ready, on the Tools menu, select Extensions and Updates, Updates. If there are Visual Studio updates available for Visual C++ for Cross-Platform Mobile Development or for Microsoft Visual Studio Emulator for Android, install them.
Install tools for iOS
You can use Visual C++ for Cross-Platform Mobile Development to edit, debug and deploy iOS code to the iOS Simulator or to an iOS device, but because of licensing restrictions, the code must be built remotely on a Mac. To build and run iOS apps using Visual Studio, you must set up and configure the remote agent on your Mac. For detailed installation instructions, prerequisites and configuration options, see Install And Configure Tools to Build using iOS. If you're not building for iOS, you can skip this step.
Install or update dependencies manually
If you decide not to install one or more third-party dependencies using the Visual Studio installer when you install the Visual C++ Mobile Development option, you can install them later by using the steps in Install the tools. You can also install or update them independently of Visual Studio.
Caution
You can install the dependencies in any order, except for Java. You must install and configure the JDK before you install the Android SDK.
Read the following information and use these links to install dependencies manually.
By default, the installer puts the Java tools in C:\Program Files (x86)\Java.
During the installation, update the APIs as recommended. Make sure that at least the SDK for Android 5.0 Lollipop (API level 21) is installed. By default, the installer puts the Android SDK in C:\Program Files (x86)\Android\android-sdk.
You can run the SDK Manager app in the Android SDK directory again to update the SDK and install optional tools and additional API levels. Updates may fail to install unless you use Run as administrator to run the SDK Manager app. If you have problems building an Android app, check the SDK Manager for updates to your installed SDKs.
To use some of the Android emulators that come with the Android SDK, you must install the optional Intel HAXM drivers. You may have to remove the Hyper-V feature from Windows to install the Intel HAXM drivers successfully. You must restore the Hyper-V feature to use the Windows Phone emulators and the Microsoft Visual Studio Emulator for Android.
By default, the installer puts the Android NDK in C:\ProgramData\Microsoft\AndroidNDK. You can download and install the Android NDK again to update the NDK installation.
By default, the installer puts Apache Ant in C:\Program Files (x86)\Microsoft Visual Studio 14.0\Apps.
Microsoft Visual Studio Emulator for Android
You can install and update the Microsoft Visual Studio Emulator for Android from the Visual Studio Gallery.
In most cases, Visual Studio can detect the configurations for the third-party software you’ve installed, and maintains the installation paths in internal environment variables. You can override the default paths of these cross-platform development tools in the Visual Studio IDE.
To set the paths for third-party tools
On the Visual Studio menu bar, select Tools, Options.
In the Options dialog box, expand Cross Platform, C++, and select Android.
To change the path used by a tool, check the checkbox next to the path, and edit the folder path in the textbox. You can also use the browse button (...) to open a Select location dialog to choose the folder.
Choose OK to save the custom tool folder locations.
See Also
Install And Configure Tools to Build using iOS
Visual C++ Cross-Platform Mobile | https://docs.microsoft.com/en-us/visualstudio/cross-platform/install-visual-cpp-for-cross-platform-mobile-development?view=vs-2015&redirectedfrom=MSDN | 2020-02-17T02:23:28 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.microsoft.com |
The live activity feed allows agents to keep up-to-date on what leads are currently doing on your website, any communication with you or your agents and any actions the Lead Manager may have automated on the leads. It is available on the Lead Manager's dashboard and everywhere else in the header bar.
Open the live activity feed by clicking on the 3 person icon.
There are three main parts to the live activity feed
Within an activity, there are four pieces of information.
The live activity feed allows you to display and receive certain activities based on their activity type. This can be accomplished by selecting any filter in the Filters select box.
Filters include:
Toggle between viewing activities from leads that are assigned to you and activities from all leads within your company. This feature is only available to Administrators and Team Leaders. This works hand-in-hand with the filters as well. So an Admin can view all Property Searches done by any leads for the company.
Learn how to schedule materialization refresh of a view to keep it in sync with the data that makes it up.
To keep the data in a view up-to-date, you can schedule periodic refreshes from the underlying table(s).
To schedule materialization of a view:
To find your view, click Data in the top menu, and choose Views.
Click the name of your view.
Click Schema.
Under Materialization, click the link next to Update Schedule.
In the Schedule Data Updates dialog, select an option for Repeats (Monthly, Weekly, or Daily).
Fill in the schedule details:
Click SCHEDULE.
Note: Refresh works only if it is scheduled in the refresh window set for the cluster (default: 8:00 PM - 4:00 AM). Only the start time of the refresh window is configurable, using the flag orion.materializationConfig.refreshWindowStartTime, which can be set to values such as 12:00PM or 01:00AM. Example: To set the cluster window from 2:00 AM to 10:00 AM, set the flag as orion.materializationConfig.refreshWindowStartTime "02:00AM".
This library provides a tool for changing parameters of different types at runtime. The parameter_handler allows you to
The ROS implementation additionally features:
The source code is available at.
This library is Free Software and is licensed under BSD 3-Clause.
Involved people:
Contact: Christian Gehring (gehrinch ( at ) ethz.ch)
ButtonBase.Image Property
Definition
Gets or sets the image that is displayed on a button control.
public: property System::Drawing::Image ^ Image { System::Drawing::Image ^ get(); void set(System::Drawing::Image ^ value); };
public System.Drawing.Image Image { get; set; }
member this.Image : System.Drawing.Image with get, set
Public Property Image As Image
Property Value

The image displayed on the button control.

Remarks

When the Image property is set, the ImageList property will be set to null, and the ImageIndex property will be set to its default, -1.
Note
If the FlatStyle property is set to FlatStyle.System, any images assigned to the Image property are not displayed.
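A typical assignment, as a brief C# sketch (the file path is only an example; System.Windows.Forms and System.Drawing are assumed to be referenced):

// Display an icon on the button instead of using an image list.
Button button1 = new Button();
button1.Image = Image.FromFile(@"c:\graphics\MyBitmap.bmp");
button1.ImageAlign = ContentAlignment.MiddleLeft;
button1.Text = "Open";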
Change the execution code for each state transition
For all state transitions, the same execution class is used (org.wso2.carbon.apimgt.impl.executors.APIExecutor). However, you can plug your own execution code when modifying the life cycle configuration. For example, if you want to add notifications for a specific state transition, you can plug your own custom execution class for that particular state in the API life cycle. Any changes are updated in the Lifecycle tab accordingly.
org.wso2.carbon.apimgt.impl.executors.APIExecutor
File:Win7nodeservice.jpg
From CSLabsWiki
Size of this preview: 800 × 515 pixels. Other resolutions: 320 × 206 pixels | 1,444 × 930 pixels.
Original file (1,444 × 930 pixels, file size: 159 KB, MIME type: image/jpeg)
File history
Click on a date/time to view the file as it appeared at that time.
- You cannot overwrite this file.
File usage
There are no pages that link to this file.
Extending delta data migration to include customizations
You can add custom forms to the Delta Data Migration package and migrate the data in these custom forms. You can migrate the data manually or with the Customer Form Instruction Generation tool.
You can also update a field mapping file to correct customer-defined fields in the BMC Remedy reserved range.
The following topics are provided:
Manually adding custom forms to package
- Open the Custom_Form_Instructions.xml instruction file in the <MigratorInstallDirectory>\DeltaDataMigration\Packages\Custom directory.
The file contains information that is similar to the instructions XML example.
- Provide your custom form name and the unique field IDs (unique index field IDs) in their respective tags.
Follow the same process for all of the forms that you want to add.
- Open the Custom_Form_Package.xml package file in the <MigratorInstallDirectory>\DeltaDataMigration\Packages\Custom folder.
Provide the instruction XML file names in the package XML file:
<instructions file="Custom_Form_Instructions.xml">
    <instruction name="Custom_Form_Instructions"/>
</instructions>
- Save the instruction and package XML files.
You are now ready to run the migrate and compare scripts for the custom package. The new package will run in parallel in a separate command window in the same way as the Delta Data Migration out-of-the-box package files.
Adding custom forms to package by using the Customer Form Instruction Generation tool
If you do not have the list of your custom (non-BMC) regular forms, run the following batch files as outlined in the procedure for adding custom forms to a package by using the Custom Form Instruction Generation tool.
- The migratorFindCustomForms.bat utility — Finds all of your custom forms on the AR System server that are not recognized as BMC Software forms. The utility generates a CSV file that includes a list of all custom form names with their unique indexes.
- The migratorCSV2Instructions.bat utility — Uses the generated CSV file as the input, and creates a Custom_Form_Instructions.xml file for the custom forms in the CSV file.
To add custom forms to the package by using a Customer Form Instruction Generation tool
- Navigate to <MigratorInstallDirectory>\Migrator\migrator\DeltaDataMigration\Utilities\migratorUtilities folder.
Run the migratorFindCustomForms utility by using the following syntax:
migratorFindCustomForms.bat -s <sourceARServerName> -u <adminUserID> -p <adminPassword> -P <ARServerPort>
For example:
migratorFindCustomForms.bat -s test.bmc.com -u Demo -p "" -P 2020
- Open the Customforms.csv output file in a text editor or spreadsheet application.
- If a form is included in the list but should not be migrated, remove the entire line.
Do not include forms that are used for testing or to keep temporary data. If you are not sure, it is better to include the form in the migration. Migrating a form multiple times is permitted.
Note
You can save the names of forms to be excluded in a separate file, and then use that file the next time you run migratorFindCustomForms.
- Save the changes you made to the Customforms.csv file.
Run the migratorCSV2Instructions utility by using the following syntax:
migratorCSV2Instructions.bat -i Customforms.csv
For example:
migratorCSV2Instructions.bat -i Customforms.csv
This utility generates an instruction file that BMC Remedy Migrator reads and uses for the migration.
- Verify that the output file is Custom_Form_Instructions.xml.
- Open the Custom_Form_Instructions.xml file, and ensure that the name inside of the xml file has the same name "Custom_Form_Instructions."
- Copy the Custom_Form_Instructions.xml files to the Packages\Custom directory. (You can overwrite the same file in the directory.)
The custom package is now ready to be used. On the Delta Data Migration Tool user interface, when you select Custom, this custom package is selected, and the migration for the custom forms is executed.
Updating a field mapping file if you ran ARCHGID.
- In the packages folder, open the custom package folder.
- Open the instruction file for which you will update form mapping information
If you have more than one instruction file, open the file that contains the form for which delta data mapping is available.
- Create a mapping (.arm) file and map the custom source field ID to the destination field ID (which has a new ID after you run the
ARCHGIDutility).
The following figure shows an example of a mapping file:
- Add the mapping file name to the form reference in the instruction file.
Getting Started with Exosuit
Installation
The Exosuit CLI currently only works on macOS. The Exosuit CLI can be installed using Homebrew.
$ brew tap jasonswett/exosuit && brew install exosuit
$ exo --version
Usage
As of this writing Exosuit can:
- Launch an EC2 instance
- SSH into an EC2 instance
- Terminate EC2 instances
- List EC2 instances
Exosuit does not yet do anything to do with installing Rails, although that of course is in the plans.
Launching an EC2 instance
You can launch an EC2 instance by using the
launch command:
$ exo launch
Within a few seconds, your EC2 instance will be ready.
SSHing into your EC2 instance
The next thing you might want to do is SSH into the instance you just created. You can do that like this:
$ exo ssh
Listing your EC2 instances
Want to see all the instances you’ve launched? Run this command.
$ exo instances
Terminating an EC2 instance
To terminate an instance, run:
$ exo terminate
You will be given a prompt of all your running instances. You can select any number of instances to terminate.
Getting help
If you run the
exo command, you’ll see some help output, which will look something like this:
Usage: exo [command] These are the commands you can use: launch Launch a new EC2 instance ssh SSH into an EC2 instance terminate Terminate an EC2 instance instances Show a summary of all running EC2 instances instances:all Show a summary of EC2 instances (all states) | https://docs.exosuit.io/getting-started.html | 2019-12-05T22:38:29 | CC-MAIN-2019-51 | 1575540482284.9 | [] | docs.exosuit.io |
Paginated collections
To reduce load on the service, list operations return a maximum number
of items at a time. The maximum number of items returned is determined
by the compute provider. To navigate the collection, the
limit and
marker parameters can be set in the URI. For example:
?limit=100&marker=1234
The
marker parameter is the ID of the last item in the previous
list. By default, the service sorts items by create time in descending order.
When the service cannot identify a create time, it sorts items by ID. The
limit parameter sets the page size. Both parameters are optional. If the
client requests a
limit beyond one that is supported by the deployment
an overLimit (413) fault may be thrown. A marker with an invalid ID returns
a badRequest (400) fault.
For convenience, collections should contain atom links. They may optionally also contain previous links, but the current implementation does not contain previous links. The last page in the list does not contain a link to the “next” page. The following examples illustrate three pages in a collection of servers. The first page was retrieved through a GET to. In these examples, the limit parameter sets the page size to a single item.
Subsequent links honor the initial page size. Thus, a client can follow
links to traverse a paginated collection without having to input the
marker parameter.
Example: Servers collection: JSON (first page)
{
    "servers_links": [
        {
            "href": "",
            "rel": "next"
        }
    ],
    "servers": [
        {
            "id": "fc55acf4-3398-447b-8ef9-72a42086d775",
            "links": [
                {
                    "href": "",
                    "rel": "self"
                },
                {
                    "href": "",
                    "rel": "bookmark"
                }
            ],
            "name": "elasticsearch-0"
        }
    ]
}
In JSON, members in a paginated collection are stored in a JSON array
named after the collection. A JSON object may also be used instead. The approach allows for extensibility of paginated collections by allowing them to be associated with arbitrary properties.
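To illustrate, here is a small Python sketch (endpoint and token handling are assumed, not part of the API guide) that walks a paginated servers collection by following the next links until the last page:

import requests

def list_all_servers(endpoint, token, page_size=100):
    """Collect every server by following the 'next' links in servers_links."""
    servers = []
    url = f"{endpoint}/servers?limit={page_size}"
    while url:
        resp = requests.get(url, headers={"X-Auth-Token": token})
        resp.raise_for_status()
        body = resp.json()
        servers.extend(body.get("servers", []))
        # The last page carries no 'next' link, which ends the loop.
        url = next((link["href"] for link in body.get("servers_links", [])
                    if link["rel"] == "next"), None)
    return servers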
Calico on OpenStack
There are many ways to try out Calico with OpenStack, because OpenStack is a sufficiently complex system that there is a small industry concerned with deploying it correctly and successfully.
We provide instructions for the following methods:
Package-based install for Ubuntu
RPM-based install for Red Hat Enterprise Linux (RHEL)
DevStack (for development purposes only—not recommended for production!)
In all cases, except DevStack, you will need at least two or three servers to get going: one OpenStack controller and one or more OpenStack compute nodes.
Icon:
Function: R_NNET
Property window:
Short description:
Compute a Neural Network Model or a Multi Nomial Logistic Model
Long Description:
Compute a Neural Network Model or a Multi Nomial Logistic Model
Neural Networks are popular in datamining and analytics, mainly thanks to some of its promoters (i.e. Google) and their increased popularity in image pattern recognition. There now exists a new type of Neural Network algorithms (that is named “deep neural network”) that seems to perform reasonably well on image classification and segmentation tasks.
The action described in this section is not a “deep” neural network: it’s an “old-school” Neural Network algorithm and it’s included in Anatella mainly because of completeness (and for explanatory/teaching purposes). More precisely, “old-school” Neural Network algorithms are usually not very useful because they are notoriously difficult to adjust properly to get a correct classification accuracy (although it’s sometimes possible to get good results, it’s quite difficult).
Parameters:
List of Predictors: Select independent variables
Target: Select the variable you want to predict
Model Output: Set the file name for the model results
Select Classification Model: either Multinomial Logit or Neural Network
Base: set the base category
Number of perceptrons for Neural Networks: Manually set the perceptrons. This implementation uses only one layer
Linear Model: Specify if the perceptrons should use linear models instead of logistics
Softmax: Switch to Log Likelihood criteria for more than two categories in the target
Upgrade process overview for the TrueSight Operations Management solution
Before starting the upgrade process, review the process flow diagram and the upgrade details described in the table that follows. Upgrade all components in your current environment before installing any new components that are licensed for your use.
Make sure that you back up the current version of the product before you start the upgrade process.
Related topics
Upgrade process
Upgrade process for TrueSight Operations Management.
Steps and references to upgrade the solution
Crate goblin
libgoblin
Example
use goblin::{error, Object};
use std::path::Path;
use std::env;
use std::fs::File;
use std::io::Read;

fn run() -> error::Result<()> {
    for (i, arg) in env::args().enumerate() {
        if i == 1 {
            let path = Path::new(arg.as_str());
            let mut fd = File::open(path)?;
            let mut buffer = Vec::new();
            fd.read_to_end(&mut buffer)?;
            match Object::parse(&buffer)? {
                Object::Elf(elf) => {
                    println!("elf: {:#?}", &elf);
                },
                Object::PE(pe) => {
                    println!("pe: {:#?}", &pe);
                },
                Object::Mach(mach) => {
                    println!("mach: {:#?}", &mach);
                },
                Object::Archive(archive) => {
                    println!("archive: {:#?}", &archive);
                },
                Object::Unknown(magic) => {
                    println!("unknown magic: {:#x}", magic)
                }
            }
        }
    }
    Ok(())
}
Feature Usage
libgoblin is engineered to be tailored towards very different use-case scenarios, for example:
- a no-std mode; just simply set default features to false
- an endian aware parsing and reading
- for binary loaders which don't require this, simply use elf32 and elf64 (and std)

If you want endian aware reading and you don't use default, then you need to opt in as normal via the endian_fd feature.
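As a sketch, opting into a trimmed-down build from Cargo.toml might look like this; only feature names mentioned above (elf64, endian_fd, std via default features) are used, and the exact feature list should be checked against the crate's manifest:

[dependencies.goblin]
version = "0.1"
default-features = false           # drop the default (std, all parsers) feature set
features = ["elf64", "endian_fd"]  # opt back in to just what you need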
URL Maps API
Last Updated: September 2019
A UrlMap is a mapping between a URL and a function or class that is responsible for handling a request. When a request is submitted to Tethys, it matches the URL of that request against a list of UrlMaps and calls the function or class that the matching UrlMap points to.

Tethys usually manages url_maps from the app.py file of each individual app using a UrlMap constructor. This constructor normally accepts a name, a url, and a controller. However, there are other parameters such as protocol, regex, handler, and handler_type. This section provides information on how to use the url_maps API.
URL Maps Constructor

class tethys_apps.base.url_map.UrlMapBase(name, url, controller, protocol='http', regex=None, handler=None, handler_type=None)

Abstract URL base class for Tethys app controllers and consumers.

__init__(name, url, controller, protocol='http', regex=None, handler=None, handler_type=None)

Constructor
- Parameters
name (str) -- Name of the url map. Letters and underscores only (_). Must be unique within the app.
url (str) -- Url pattern to map the endpoint for the controller or consumer.
controller (str) -- Dot-notation path to the controller function or consumer class.
protocol (str) -- 'http' for consumers or 'websocket' for consumers. Default is http.
regex (str or iterable, optional) -- Custom regex pattern(s) for url variables. If a string is provided, it will be applied to all variables. If a list or tuple is provided, they will be applied in variable order.
handler (str) -- Dot-notation path a handler function. A handler is associated to a specific controller and contains the main logic for creating and establishing a communication between the client and the server.
handler_type (str) -- Tethys supported handler type. 'bokeh' is the only handler type currently supported.
URL Maps Methods

The url_maps method is tightly related to the App Base Class API.

TethysBase.url_maps()

Override this method to define the URL Maps for your app. Your UrlMap objects must be created from a UrlMap class that is bound to the root_url of your app. Use the url_map_maker() function to create the bound UrlMap class. If you generate your app project from the scaffold, this will be done automatically. Starting in Tethys 3.0, the WebSocket protocol is supported along with the HTTP protocol. To create a WebSocket UrlMap, follow the same pattern used for the HTTP protocol. In addition, provide a Consumer path in the controllers parameter as well as a WebSocket string value for the new protocol parameter for the WebSocket UrlMap. Alternatively, Bokeh Server can also be integrated into Tethys using Django Channels and WebSockets. Tethys will automatically set these up for you if handler and handler_type parameters are provided as part of the UrlMap.

Returns: A list or tuple of UrlMap objects.

Return type: iterable
Example:
from tethys_sdk.base import url_map_maker

class MyFirstApp(TethysAppBase):

    def url_maps(self):
        """
        Example url_maps method.
        """
        # Create UrlMap class that is bound to the root url.
        UrlMap = url_map_maker(self.root_url)

        url_maps = (
            UrlMap(
                name='home',
                url='my-first-app',
                controller='my_first_app.controllers.home',
            ),
            UrlMap(
                name='home_ws',
                url='my-first-ws',
                controller='my_first_app.controllers.HomeConsumer',
                protocol='websocket'
            ),
            UrlMap(
                name='bokeh_handler',
                url='my-first-app/bokeh-example',
                controller='my_first_app.controllers.bokeh_example',
                handler='my_first_app.controllers.bokeh_example_handler',
                handler_type='bokeh'
            ),
        )

        return url_maps
Websockets

Tethys Platform supports WebSocket connections using Django Channels. The WebSocket protocol provides a persistent connection between the client and the server. In contrast to the traditional HTTP protocol, the WebSocket protocol allows for bidirectional communication between the client and the server (i.e. the server can trigger a response without the client sending a request). Django Channels uses Consumers to structure code and handle client/server communication in a similar way to how Controllers are used with the HTTP protocol.
Note
For more information about Django Channels and Consumers visit the Django Channels documentation.
Note
For more information on establishing a WebSocket connection see the JavaScript WebSocket API. Alternatively, other existing JavaScript or Python WebSocket clients can be used.
Tip
To create a URL mapping using the WebSocket protocol see the example provided in the App Base Class API documentation.
Tip
For an example demonstrating all the necessary components for integrating websockets into your app, see This Websockets Tutorial, and the consumer sketch below.
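As a rough sketch (the class name HomeConsumer comes from the url_maps example above; the behavior shown is made up for illustration), a consumer referenced by a WebSocket UrlMap could look like this:

from channels.generic.websocket import AsyncJsonWebsocketConsumer

class HomeConsumer(AsyncJsonWebsocketConsumer):
    """Handles the persistent connection behind the 'home_ws' UrlMap."""

    async def connect(self):
        # Accept the WebSocket handshake.
        await self.accept()

    async def receive_json(self, content):
        # Echo the message back; a real consumer would do app-specific work here.
        await self.send_json({"received": content})

    async def disconnect(self, close_code):
        pass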
Bokeh Integration
Bokeh Integration in Tethys takes advantage of WebSockets and Django Channels to leverage Bokeh's flexible architecture. In particular, the ability to sync model objects to the client allows for a responsive user interface that can receive updates from the server using Python. This is referred to as Bokeh Server in the Bokeh Documentation.

Tethys facilitates the use of the Bokeh Server component of Bokeh by taking care of creating the routings necessary to link the models and the front-end Bokeh models. This is done by providing a handler in addition to the other common parameters in a UrlMap.

Note

Interactive Bokeh visualization tools can be entirely created using only Python with the help of Bokeh Server. However, this usually requires the use of an additional server (Tornado). One of the alternatives to Tornado is using Django Channels, which is already supported with Tethys. Therefore, interactive Bokeh models along with all the advantages of using Bokeh Server can be leveraged in Tethys without the need of an additional server.
class MyFirstApp(TethysAppBase):

    def url_maps(self):
        """
        Example url_maps method.
        """
        # Create UrlMap class that is bound to the root url.
        UrlMap = url_map_maker(self.root_url)

        url_maps = (
            ...
            UrlMap(
                name='bokeh_handler',
                url='my-first-app/bokeh-example',
                controller='my_first_app.controllers.bokeh_example',
                handler='my_first_app.controllers.bokeh_example_handler',
                handler_type='bokeh'
            ),
        )

        return url_maps
A Handler in this context represents a function that contains the main logic needed for a Bokeh model to be displayed. It contains the model or group of models as well as the callback functions that will help link them to the client.

Handlers are added to the Bokeh Document, the smallest serialization unit in Bokeh Server. This same Document is later retrieved and added to the template variables in the Controller that will be linked to the Handler function using Bokeh's server_document function.

A Bokeh Document comes with a Bokeh Request. This request contains most of the common attributes of a normal HTTPRequest, and can be easily converted to HTTP using the with_request decorator from tethys_sdk.base. A second handler decorator named with_workspaces can be used to add user_workspace and app_workspace to the Bokeh Document. This latter decorator will also convert the Bokeh Request of the Document to an HTTPRequest, meaning it will do the same thing as the with_request decorator besides adding workspaces.

The example below adds a column layout containing a slider and a plot. A callback function linked to the slider value change event and a demonstration of how to use the with_workspaces decorator are also included.
from tethys_sdk.base import with_workspaces

...

@with_workspaces
def home_handler(doc):
    # create data source for plot
    data = {'x': [0, 1, 2, 3], 'y': [0, 10, 20, 30]}
    source = ColumnDataSource(data=data)

    # create plot
    plot = figure(x_axis_type="linear", y_range=(0, 30), title="Bokeh Plot")
    plot.line(x="x", y="y", source=source)

    # callback function
    def callback(attr: str, old: Any, new: Any) -> None:
        if new == 1:
            data['y'] = [0, 10, 20, 30]
        else:
            data['y'] = [i * new for i in [0, 10, 20, 30]]
        source.data = ColumnDataSource(data=data).data
        plot.y_range.end = max(data['y'])

    # create slider and add callback to it
    slider = Slider(start=1, end=5, value=1, step=1, title="Bokeh Slider")
    slider.on_change("value", callback)

    # attributes available when using "with_workspaces" decorator
    request = doc.request
    user_workspace = doc.user_workspace
    app_workspace = doc.app_workspace

    # add layout with bokeh models to document
    doc.add_root(column(slider, plot))
The controller from the same UrlMap where the handler is defined needs to provide a mechanism to load the Bokeh models to the client.
def home(request):
    ...
    script = server_document(request.build_absolute_uri())

    context = {
        'script': script
    }

    return render(request, 'test_app/home.html', context)
Tip
For more information regarding Bokeh Server and available models visit the Bokeh Server Documentation and the Bokeh model widgets reference guide.
Content
Element. Command Bindings Property
Definition
Gets a collection of CommandBinding objects that are associated with this element.
Remarks
A CommandBinding enables command handling of a specific command for this element and declares the linkage between a command, its events, and the handlers that are attached by this element.
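For instance, a short C# sketch that adds a binding for the standard Open command to an element (the element and handler names are arbitrary; the types come from System.Windows.Input):

// Bind ApplicationCommands.Open so this element handles it.
CommandBinding openBinding = new CommandBinding(
    ApplicationCommands.Open, OpenExecuted, OpenCanExecute);
myElement.CommandBindings.Add(openBinding);

void OpenExecuted(object sender, ExecutedRoutedEventArgs e)
{
    // Perform the Open action here.
}

void OpenCanExecute(object sender, CanExecuteRoutedEventArgs e)
{
    e.CanExecute = true;
}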
Product Updates - June 2018
Overview of Changes
New Features
Copy Dashboard
To help accelerate the creation of new dashboards, we built the new Copy Dashboard capability.
On any dashboard, click the overflow menu in the upper-right, and then click Copy Dashboard. A new dashboard opens with “Copy of” inserted into the title, and all of the widgets are copied as well. You can also copy any dashboard that other users have shared as Public.
Save Charts to New and Existing Dashboards
To facilitate the addition of new widgets onto new and existing dashboards, we have created the save chart capability.
On Data Visualization, KPI Visibility, Statistical Process Control, and Raw Data Monitoring pages, after creating a visualization, click the Save button. A new widget is automatically created when you click either:
- Add a New Dashboard
- Add to Existing Dashboard
Enhancements
AI Data Pipeline - Advanced Editor
In any model (Facility, Machine Type, or Machine) in the AI Data Pipeline, the new Advanced Editor button appears in the upper-right.
Click the Advanced Editor button to toggle the model editor into Advanced Mode.
On the Advanced Mode page, you can edit the configuration file directly.
When you make changes, several layers of validation run. The first checks that your configuration can be parsed properly. If it cannot be parsed, a JSON Error appears in the console, and the auto-save feature and Deploy button become disabled.
Additional layers of validation that flag issues in the configuration file allow for auto-saving, but disable the Deploy button.
Cycle Time Unit Conversion
You can now instantly convert Cycle Times to different units in the following locations:
- Data Visualizations: X-Axis and Y-Axis labels
- Data Visualizations: Add Filter menu
- Data Tables: Column header
On any Data Visualization page, on an axis label, click the displayed units (i.e., Seconds, Hours, etc.), and then select a new option.
On any Data Visualization page, on the Add Filter menu, click Cycle Time, and then make a selection in the new Units drop-down list.
On any Data Table page, under a Cycle Time column header, select the new units from the drop-down list.
Quick Start
Hover automates existing USSD sessions in the background of Android applications.
Our Android SDK can run virtually any USSD interaction on any mobile operator globally. This includes mobile money payments, banking services, airtime topup, bill pay, bundle purchases, and more. Hover enables developers to make USSD-based services more accessible to users with visual impairments or impairments related to literacy or numeracy.
Prerequisites
- Target Android API level 18 or higher.
- A USSD service you wish to integrate.
- A SIM card that can dial the USSD service.
If you are new to Android or starting your app from scratch, we have a pre-configured app on Github that can help you more quickly test and customize Hover. Follow the instructions in the README to get started.
1. Create an Action
Actions instruct the Hover SDK how to integrate with a USSD service. To create actions sign in to your Hover account then go to the actions tab and click on
+ New Action
Actions consist of
- Name of your choice, for example “Send Money”.
- Mobile networks/SIM cards that can run the USSD service.
- Root code, the shortcode used to dial the USSD service.
- The USSD menu steps. See below.
Your configuration should look something like this:
Steps can be one of three types
- Numbers for constant choices such as entering “1” to reach My Account.
- Variables for entries that change at runtime, such as amount to be sent.
- PIN prompt to display a PIN entry for the user.
- Press OK for menus where no choice or entry is made.
2. Install the SDK
As of the current version of the Hover SDK is
If you have issues with your app building and you are using Android X dependencies then we have an SDK variant for that: `-androidx`
Create an App
Create an API key for your app by clicking “New app” on the left side of your dashboard. Enter the app name, package name, and an optional webhook url. The package name MUST match the applicationId found in your app-level build.gradle file.
After you click save the new API token can be found under your app’s package name in the left-side of your dashboard:
Add the Hover repo to your root build.gradle repositories:
allprojects {
    repositories {
        ...
        mavenCentral()
        maven { url "" }
    }
}
Add the SDK to your app-level build.gradle dependencies:
dependencies { ... implementation 'com.hover:android-sdk:' }
Include your API token in your AndroidManifest.xml:
<application> ... <meta-data android: </application>
Initialize
Your actions are downloaded and the SDK is initialized by calling Hover.initialize(). This needs to be done once, ideally in your main launch activity. Please do not do this in your Application class.
import com.hover.sdk.api.Hover;
...
public class MainActivity extends AppCompatActivity {
    ...
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        ...
        Hover.initialize(this);
    }
    ...
}
3. Run your Action
Make the USSD request
When the user clicks a button or takes another action, start the USSD session. Specify the action_id, and names and values for any variables.
Button button = (Button) findViewById(R.id.action_button);
button.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        Intent i = new HoverParameters.Builder(this)
            .request("action_id")
            .extra("step_variable_name", variable_value_as_string) // Only if your action has variables
            .buildIntent();
        startActivityForResult(i, 0);
    }
});
The Hover SDK will only run an action on a SIM card of the network(s) it is configured for. We provide helper methods for checking a user's SIMs. The result of the USSD session is returned to your activity in onActivityResult:
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == 0 && resultCode == Activity.RESULT_OK) {
        String[] sessionTextArr = data.getStringArrayExtra("session_messages");
        String uuid = data.getStringExtra("uuid");
    } else if (requestCode == 0 && resultCode == Activity.RESULT_CANCELED) {
        Toast.makeText(this, "Error: " + data.getStringExtra("error"), Toast.LENGTH_LONG).show();
    }
}
4. (Optional) Parse the Result
To categorize transactions as “succeeded” or “failed”, you can parse the final USSD or SMS message using a regular expression. Hover provides a number of helpers for parsing the content and result of a USSD session, or you can do it yourself. To use Hover’s parsers add a parser to an action, set the status you want assigned to this particular type of result (e.g. status: “failed”, category: “wrong PIN”) and write the regular expression for the message as it appears in the final SMS or USSD message.
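If you prefer to categorize the result yourself in code, here is a rough Java sketch; the message wording and regex are purely hypothetical and depend on what your USSD service actually returns:

// Inside onActivityResult, after reading the session messages.
String[] messages = data.getStringArrayExtra("session_messages");
String lastMessage = messages.length > 0 ? messages[messages.length - 1] : "";

// Example pattern only; adjust it to the confirmation text your service sends.
java.util.regex.Pattern success = java.util.regex.Pattern.compile("(?i)confirmed|successful");
if (success.matcher(lastMessage).find()) {
    // treat the transaction as succeeded
} else {
    // treat it as failed or unknown
}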
This sample is a demonstration of how to configure SAML2 SSO with the WSO2 Identity Server. To avoid a class-loading conflict, add the following delegation configuration and restart the WSO2 Application Server.
<DelegatedEnvironment> <Name>Carbon</Name> <DelegatedPackages>,!org.springframework.,!org.slf4j.*</DelegatedPackages> </DelegatedEnvironment>
Configuring the web app
Check out the source from the repository location which contains the samples.
svn co
Go to <HOME>/sso/SSOAgentSample in the checked out folder and build the sample with the following command.
- Start Identity Server and access management console using
- Login to management console using default administrator credentials (the username and password are both "admin").
- In the management console found on the left of your screen, navigate to Main > Manage > SAML SSO.
- Click on Register New Service Provider.
- = wso2carbon)
- Enable Single Logout: Set this as true by selecting the checkbox
- After providing above values click Register.
After successfully registering the service provider, log.
Media Manager
Media Files
- Media Files
- Upload
- Search
Upload to [root]
Sorry, you don't have enough rights to upload files.
File
- Date:
- 2021/08/03 12:06
- Filename:
- parks1029.png
- Format:
- PNG
- Size:
- 67KB
- Width:
- 482
- Height:
- 894
- References for:
- Open Labs Layout
Applies to: Premium Members
Child's device: iOS or Android phone or tablet
Article type: Advanced options
At home, you may want to turn off filtering if your Child needs access to blocked content.
For example, some keywords associated with current events may be part of a broader group of blocked content. You may want to allow your Child to access the web pages and information to allow them to complete a homework assignment.
Turn Off Filtering for a Set Time
On your Child's phone or tablet:
- In the Connect App, tap the gear icon
- Enter in your 4-digit parent PIN
- At Family Zone enabled, tap the toggle until it is gray (off)
- Select how long you want filtering turned off
(30 Minutes, 1 Hour, 2 Hours, 4 Hours, 8 Hours)
- In the upper-left, tap the ← back arrow
You are done. When the timer runs out, Family Zone filtering will turn back on. One quick note, if your Child is connected to their School network, disabling the filtering does not override the Schools content filtering.
Turn Filtering Back On
The filtering will turn back on when the timer runs out. The filtering can be turned back on with your Child's phone or tablet.
On your Child's phone or tablet:
- Open the Connect App
A filtering off icon and countdown timer will be visible
Tap the gray banner Tap to reactivate
- At the confirmation message, tap YES
- Your Child's daily Routine is active
Your Child's filtering and rules return to the regularly scheduled access Routine (Play, Study, School or Sleep). | https://docs.familyzone.com/help/temporarily-turn-off-filtering | 2022-09-25T05:17:18 | CC-MAIN-2022-40 | 1664030334514.38 | [array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/617a6eb4f2fb8b7c317b2800/n/fza-adr-4-3-1-turn-filter-off-001.gif',
None], dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/617b8e8f4cd96ed0217b23c8/n/fza-adr-4-3-1-turn-filter-off-002.gif',
None], dtype=object) ] | docs.familyzone.com |
network interface migrate-all
Migrate all data logical interfaces away from the specified node
Availability: This command is available to cluster administrators at the admin privilege level.
Description
The
network interface migrate-all command migrates all data logical interfaces from the node you specify.
Parameters
-node <nodename>- Node
Use this parameter to specify the node from which all logical interfaces are migrated. Each data logical interface is migrated to another node in the cluster, assuming that the logical interface is configured with failover rules that specify an operational node and port.
[-port {<netport>|<ifgrp>}]- Port
Use this parameter to specify the port from which all logical interfaces are migrated. This option cannot be used with asynchronous migrations. If this parameter is not specified, then logical interfaces will be migrated away from all ports on the specified node.
Examples
The following example migrates all data logical interfaces from the current (local) node.
cluster1::> network interface migrate-all -node local
Overview
This article briefly explains the specifics of RadSpreadStreamProcessing - what is spread streaming, how it works compared to the RadSpreadProcessing library and when to use it.
RadSpreadStreamProcessing is part of the Telerik Document Processing libraries. The full documentation for this component is available at.
What is Spread Streaming?
Spread streaming is a document processing paradigm that allows you to create or read large spreadsheet documents with low memory use: content is written directly to a stream rather than being built up as an in-memory document model.
While reading, only the required chunk of information is parsed to ensure there are no application resources kept without user need.
Key Features
Some of the features you can take advantage of are:
Export to XLSX or CSV files
Import from XLSX or CSV files
Writing directly into a stream; or parsing required data only
Following are the main differences between the two spreadsheet processing libraries.
- RadSpreadStreamProcessing writes directly into a stream, unlike RadSpreadProcessing, which creates models for the elements in the document. This is why the memory used with the spread streaming library is significantly lower than when using RadSpreadProcessing.
- RadSpreadStreamProcessing does not perform any formula or other layout-related calculations, which makes its file generation performance much better compared to RadSpreadProcessing.
When to Use RadSpreadStreamProcessing
You can use the RadSpreadStreamProcessing library to create or read large amounts of data with a low memory footprint and great performance. You can also append data to an already existing document stream. The generated document can be exported directly to a file on the file system or to a stream (for example, to send it to the client).
(Figure: SpreadStreamProcessing Fast Export)
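As a rough sketch of the streaming pattern (the file name, sheet name, and using directives are illustrative; check the API reference for your version):
using System.IO;
using Telerik.Documents.SpreadsheetStreaming;

using (FileStream stream = File.OpenWrite("sample.xlsx"))
using (IWorkbookExporter workbook = SpreadExporter.CreateWorkbookExporter(SpreadDocumentFormat.Xlsx, stream))
using (IWorksheetExporter worksheet = workbook.CreateWorksheetExporter("Sheet1"))
using (IRowExporter row = worksheet.CreateRowExporter())
using (ICellExporter cell = row.CreateCellExporter())
{
    // Each exporter writes to the stream as soon as it is disposed,
    // so only the current row and cell are held in memory.
    cell.SetValue("Exported with RadSpreadStreamProcessing");
}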
Starting Unity for the First Time
Whenever you launch the Unity editor, the Home Screen displays. If you have no existing Unity projects on your computer, or Unity doesn’t know where they are, it asks you to create a project.
To get started, you can click on New project which will take you to the Home Screen’s Create Project view. See the section on this in Creating a Project to find out more. Alternatively, if you already have a Unity project on your computer, you can open it from this screen. See Opening a Project to find out more.
Creating a Project
Whenever you start the Unity editor, the Home Screen displays. From here, you can select NEW in the top right corner, to switch to the Home Screen’s Create Project view.
To bring up the Home Screen’s Create Project view when you are already in the Unity editor, select New Project… from the File menu.
From the Home Screen’s Create Project view, you can name, set options, and specify the location of your new project.
To create a new project:
The name defaults to New Unity Project but you can change it to whatever you want. Type the name you want to call your project into the Project name field.
The location defaults to your home folder on your computer but you can change it. EITHER (a) Type where you want to store your project on your computer into the Location field. OR (b) Click on the three blue dots ‘…’. This brings up your computer’s Finder (Mac OS X) or File Explorer (Windows OS).
Then, in Finder or File Explorer, select the project folder that you want to store your new project in, and select “Choose”.
Select 3D or 2D for your project type. The default is 3D, coloured red to show it is selected. (The 2D option sets the Unity Editor to display its 2D features, and the 3D option displays 3D features. If you aren’t sure which to choose, leave it as 3D; you can change this setting later.)
There is an option to select Asset packages… to include in your project. Asset packages are pre-made content such as images, styles, lighting effects, and in-game character controls, among many other useful game creating tools and content. The asset packages offered here are free, bundled with Unity, which you can use to get started on your project. EITHER: If you don’t want to import these bundled assets now, or aren’t sure, just ignore this option; you can add these assets and many others later via the Unity editor. OR: If you do want to import these bundled assets now, select Asset packages… to display the list of assets available, check the ones you want, and then click on Done.
Now select Create project and you’re all set!
Opening a Project
When you start the Unity editor, the Home Screen’s Open Project view displays. From here you can choose the project you want to open. To bring up the Home Screen’s Open Project view when you are already in the Unity editor, select Open Project from the File menu.
The Home Screen’s Open Project view lists all the projects the Unity editor knows about. (If the editor is newly installed and doesn’t know the location of your existing projects, it prompts you to create a new project. See Starting Unity for the First Time to find out more.)
Click on any of the projects listed to open them. If your project is not listed, you need to tell the editor where it is.
To locate and open an existing project which isn’t listed:
Select Open. This brings up your computer’s Finder (Mac OS X) or File Explorer (Windows OS).
In Finder or File Explorer, select the project folder that you want to open and select “Open”.
(NOTE: To open a Unity project, there is no specific Unity project file that you select. A Unity project is a collection of files, so you need to tell the Unity editor to open a folder, rather than a specific file.) | https://docs.unity3d.com/550/Documentation/Manual/GettingStarted.html | 2022-09-25T06:19:15 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.unity3d.com |
Sample data
Use sample data to familiarize yourself with time series data and InfluxDB. InfluxData provides many sample time series datasets to use with InfluxDB. You can also use the Flux InfluxDB sample package to view, download, and output sample datasets.
- Air sensor sample data
- Bird migration sample data
- NOAA sample data
- USGS Earthquake data
Air sensor sample data
Size: ~600 KB • Updated: every 15m
Air sensor sample data represents an “Internet of Things” (IoT) use case by simulating temperature, humidity, and carbon monoxide levels for multiple rooms in a building.
To download and output the air sensor sample dataset, use the
sample.data() function.
import "influxdata/influxdb/sample" sample.data(set: "airSensor")
Companion SQL sensor data
The air sensor sample dataset is paired with a relational SQL dataset with meta information about sensors in each room. These two sample datasets are used to demonstrate how to join time series data and relational data with Flux in the Query SQL data sources guide.
Download SQL air sensor data
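If you would rather not re-download the air sensor set on every query, you can write it into a bucket of your own; the bucket and organization names below are placeholders:
import "influxdata/influxdb/sample"

sample.data(set: "airSensor")
    |> to(bucket: "example-bucket", org: "example-org")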
Bird migration sample data
Size: ~1.2 MB • Updated: N/A
Bird migration sample data is adapted from the Movebank: Animal Tracking data set and represents animal migratory movements throughout 2019.
To download and output the bird migration sample dataset, use the
sample.data() function.
import "influxdata/influxdb/sample" sample.data(set: "birdMigration")
The bird migration sample dataset is used in the Work with geo-temporal data guide to demonstrate how to query and analyze geo-temporal data.
NOAA sample data
There are two National Oceanic and Atmospheric Administration (NOAA) datasets available to use with InfluxDB.
NOAA NDBC data
Size: ~1.3 MB • Updated: every 15m
The NOAA National Data Buoy Center (NDBC) dataset provides the latest observations from the NOAA NDBC network of buoys throughout the world. Observations are updated approximately every 15 minutes.
To download and output the most recent NOAA NDBC observations, use the
sample.data() function.
import "influxdata/influxdb/sample" sample.data(set: "noaa")
Store historical NOAA NDBC data
The NOAA NDBC sample dataset only returns the most recent observations;
not historical observations.
To regularly query and store NOAA NDBC observations, add the following as an
InfluxDB task.
Replace
example-org and
example-bucket with your organization name and the
name of the bucket to store data in.
import "influxdata/influxdb/sample" option task = { name: "Collect NOAA NDBC data" every: 15m, } sample.data(set: "noaa") |> to( org: "example-org", bucket: "example-bucket" )
NOAA water sample data
Size: ~10 MB • Updated: N/A
The NOAA water sample dataset is static dataset extracted from NOAA Center for Operational Oceanographic Products and Services data. The sample dataset includes 15,258 observations of water levels (ft) collected every six minutes at two stations (Santa Monica, CA (ID 9410840) and Coyote Creek, CA (ID 9414575)) over the period from August 18, 2015 through September 18, 2015.
Store NOAA water sample data to avoid bandwidth usage
To avoid having to re-download this 10MB dataset every time you run a query,
we recommend that you create a new bucket
(
noaa) and write the NOAA sample water data to it.
import "experimental/csv" csv.from(url: "") |> to(bucket: "noaa", org: "example-org")
The NOAA water sample dataset is used to demonstrate Flux queries in the Common queries and Common tasks guides.
USGS Earthquake data
Size: ~6 MB • Updated: every 15m
The United States Geological Survey (USGS) earthquake dataset contains observations collected from USGS seismic sensors around the world over the last week. Data is updated approximately every 15m.
To download and output the last week of USGS seismic data, use the
sample.data() function.
import "influxdata/influxdb/sample" sample.data(set: "usgs"). | https://test2.docs.influxdata.com/influxdb/v2.3/reference/sample-data/ | 2022-09-25T05:49:47 | CC-MAIN-2022-40 | 1664030334514.38 | [] | test2.docs.influxdata.com |
Why do we need AlphaDAO in the first place?
Dollar-pegged stablecoins have become an important part of crypto due to their lack of volatility as compared to tokens such as Bitcoin and Ether. Users are confident that their purchasing power will stay roughly the same from one day to the next. However, because these stablecoins are pegged to the US dollar, they also inherit the dollar's inflation and depend on monetary policy that the crypto community does not control.
AlphaDAO
aims to solve this by creating a free-floating reserve currency,
OX
, that is backed by a basket of assets. By concentrating on supply growth rather than price appreciation,
AlphaDAO
hopes that
OX
can function as a currency that is able to maintain its purchasing power despite the consequences of market volatility.
Is OX a stable coin?
No, OX is not a stable coin. Rather, OX aims to be a free-floating reserve currency backed by a basket of assets.
OX is backed, not pegged
You might say that the OX floor price or intrinsic value is 1 BUSD. We believe that the actual price will always be 1 BUSD + premium, but in the end that is up to the market to decide.
How does it work?
At a high level, AlphaDAO consists of its protocol managed treasury, protocol owned liquidity (POL), bond mechanism, and staking rewards that are designed to control supply expansion.
Bond sales generate profit for the protocol, and the treasury uses the profit to mint OX and distribute them to stakers. With liquidity bonds, the protocol is able to accumulate its own liquidity. Check out the entry below on the importance of POL.
What is the deal with (3,3) and (1,1)?
(3,3) is the idea that, if everyone cooperated in Alpha, it would generate the greatest gain for everyone (from a game theory standpoint). Currently, there are three actions a user can take:
· Staking (+2)
· Bonding (+1)
· Selling (-2)
Staking and bonding are considered advantageous to the protocol, while selling is considered detrimental. Staking and selling will also cause a price move, while bonding does not (we consider buying OX from the market as a prerequisite of staking, so technically it is also a price move).
Why is PCV important?
PCV:
Protocol Controlled Value
In Alpha, PCV refers to the assets owned by the protocol (smart contract’s code). They are primarily used to:
Back OX tokens in circulation
Provide a permanent source of liquidity for OX tokens
Generate passive income through deployment to other protocols
As the protocol controls the funds in its treasury, OX can only be minted or burned by the protocol. This also guarantees that the protocol can always back 1 OX with 1 BUSD. You can easily define the risk of your investment because you can be confident that the protocol will indefinitely buy OX back with its treasury assets whenever it trades below that backing.
Why is POL important?
Alpha owns most of its liquidity thanks to its bond mechanism. This has several benefits:
- Alpha does not have to pay out high farming rewards to incentivize liquidity providers, a.k.a. renting liquidity.
- Alpha guarantees the market that the liquidity is always there to facilitate sell or buy transactions.
- By being the largest LP (liquidity provider), it earns most of the LP fees, which represents another source of income to the treasury.
All POL can be used to back OX. The LP tokens are marked down to their risk-free value for this purpose. You can read more about the rationale behind this in this
What will happen if there is a bank run on Alpha?
Fractional reserve banking works because depositors don’t withdraw their funds all at once. A depositor’s faith in the banking system rests on regulations and agencies like Federal Deposit Insurance Corporation (FDIC).
OX does not have FDIC insurance but it has an incentive structure that protects stakers. Let’s take a look at how it performs during a hypothetical bank run. In this scenario, we assume the majority of stakers would panic and unstake their tokens from Alpha - the staking percentage which stands at 92% now quickly collapses to 3.3%, leaving only 55,000 OX staked.
Next, we assume the Risk-Free Value (RFV) inflows to the treasury completely dry up. For context, let's say the RFV is currently growing at about $1 million every 2 days. However, during a bank run this growth will likely stop.
Finally, we assume that those last standing stakers bought in at a price of $500 per OX. The initial investment of these stakers would be:
$500 / OX + 55,000 OX = $27.5 million
As an example, suppose the total OX supply is 2,082,553 and the RFV is $47,041,833. Remember that 1 OX is backed by 1 BUSD, so by subtracting these two numbers we know that 44,959,280 more OX will eventually get issued to the remaining stakers. In roughly a year, these stakers who are holding 55,000 OX will have:
55,000 + 44,959,280 = 45,014,280 OX
$27.5 million investment made by these stakers will turn into about $45 million based on cash flow alone if they stay staked (recall that 1 OX is backed by 1 BUSD). In this bank run scenario, the stakers who stay staked not only get their money back, but also make some profit. Therefore, (3,3) isn’t just a popular meme, it is actually a dominant strategy.
The above scenario is unlikely to play out because when other people find out that extremely high rewards are being paid to the stakers, they will copy the strategy by buying and staking OX. This is also why the percentage of OX staked in Alpha has consistently remained over 90% since launch.
Note: Most of the data referenced above are taken from this Dune Analytics page.
Why is the market price of OX so volatile?
It is extremely important to understand how early in development the AlphaDAO protocol is. A large amount of discussion has centered around the current price and expected a stable value moving forward. The reality is that these characteristics are not yet determined. The network is currently tuned for expansion of OX supply, which when paired with the staking, bonding, and yield mechanics of AlphaDAO, result in a fair amount of volatility.
OX could trade at a very high price because the market is ready to pay a hefty premium to capture a percentage of the current market capitalization. However, the price of OX could also drop to a large degree if the market sentiment turns bearish. We would expect significant price volatility during our growth phase so please
do your own research
whether this project suits your goals.
What is the point of buying it now when OX trades at a very high premium?
When you buy and stake OX, you capture a percentage of the supply (market cap) which will remain close to a constant. This is because your staked OX balance also increases along with the circulating supply. The implication is that if you buy OX when the market cap is low, you would be capturing a larger percentage of the market cap.
What is a Rebase?
Rebase is a mechanism by which your staked OX balance increases automatically. When new OX are minted by the protocol, a large portion of it goes to the stakers. Because stakers only see staked OX balance instead of OX, the protocol utilizes the rebase mechanism to increase the staked OX balance so that 1 staked OX is always redeemable for 1 OX.
What is reward yield?
Reward yield is the percentage by which your staked OX balance increases on the next epoch. It is also known as rebase rate. You can find this number on the Alpha staking page.
What is APY?
APY stands for annual percentage yield. It measures the real rate of return on your principal by taking into account the effect of compounding interest. In the case of AlphaDAO, your staked OX represents your principal, and the compound interest is added periodically on every epoch (2200 Ethereum blocks, or around 8 hours) thanks to the rebase mechanism.
One interesting fact about APY is that your balance will grow not linearly but exponentially over time! Assuming a daily compound interest of 2%, if you start with a balance of 1 OX on day 1, after a year, your balance will grow to about 1377. That is a lot!
How is the APY calculated?
The APY is calculated from the reward yield (a.k.a rebase rate) using the following equation:
APY = (1 + rewardYield)^1095
It is raised to the power of 1095 because a rebase happens 3 times daily; with 365 days in a year, this gives a rebase frequency of 365 * 3 = 1095.
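As a purely illustrative calculation, a reward yield of 0.5% per rebase would give:
APY = (1 + 0.005)^1095 − 1 ≈ 234, i.e. an APY of roughly 23,400%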
Reward yield is determined by the following equation:
rewardYield = OX distributed/OX totalStaked
The number of OX distributed to the staking contract is calculated from OX total supply using the following equation:
OX distributed = OX totalSupply x rewardRate
Note that the reward rate is subject to change by the protocol. For example, it has been revised due to this latest proposal.
Why does the price of OX become irrelevant in long term?
As illustrated above, your OX balance will grow exponentially over time thanks to the power of compounding. Let's say you buy an OX for $400 now and the market decides that in 1 year time, the intrinsic value of OX will be $2. Assuming a daily compound interest rate of 2%, your balance would grow to about 1377 OXs by the end of the year, which is worth around $2754. That is a cool $2354 profit! By now, you should understand that you are paying a premium for OX now in exchange for a long-term benefit. Thus, you should have a long time horizon to allow your OX balance to grow exponentially and make this a worthwhile investment.
What will be OX's intrinsic value in the future?
There is no clear answer for this, but the intrinsic value can be determined by the treasury performance. For example, if the treasury could guarantee to back every OX with 100 BUSD, the intrinsic value will be 100 BUSD. It can also be decided by the DAO. For example, if the DAO decides to raise the price floor of OX, its intrinsic value will rise accordingly.
How does the protocol manage to maintain the high staking APY?
Let’s say the protocol targets an APY range of 1,000% to 10,000% (see OIP-18 for more details), this would translate to a minimum reward yield of about 0.2105%, or a daily growth of about 0.6328%. Please refer to the equation above to learn how APY is calculated from the reward yield.
If there are 100,000 of OX staked right now, the protocol would need to mint an additional 632.8 OX to achieve this daily growth. This is achievable if the protocol can bring in at least $632.80 of daily revenue from bond sales. Even if the protocol doesn't bring in that much revenue, it can still sustain 1,000% APY for a considerable amount of time (see the runway chart for instance) due to the excess reserve in the treasury.
Do I have to unstake and stake OX on every epoch to get my rebase rewards?
No. Once you have staked OX with AlphaDAO, your staked OX balance will auto-compound on every epoch. That increase in balance represents your rebase rewards.
How do I track my rebase rewards?
You can track your rebase rewards by calculating the increase in your staked OX balance.
Record down the Current Index value on the staking page when you first stake your OX. Let's call this the Start Index.
After staking for some time, if you want to determine by how much your balance has increased, check the Current Index value again. Let's call this the End Index.
By dividing the End Index by Start Index, you would get the ratio by which your staked OX balance has increased.
Ratio = endIndex / startIndex
In this example, the OX balance has grown by 1.5 times.
Ratio = 13.2 / 8.8 =1.5
- Username
- New Password
- Real Name
- Company
- Website
- Location
- Date of Birth
Update a User’s Authentication Modes
Remove a User’s Authentication Modes
Update a User’s Alterego Preferences
Update a User’s Notifications
Update a User’s Expertise Topics
Edit a User’s External Application
Delete a User’s External Application
Disable or Enable a User’s Notifications
Add an External Application to a User
Suspend a User from the Profile User Interface | https://docs.dzonesoftware.com/articles/8024/update-a-users-details.html | 2022-09-25T05:11:23 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.dzonesoftware.com |
OKD is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for AWS Elastic Block Store (EBS).
Familiarity with persistent storage and configuring CSI volumes is recommended when working with a Container Storage Interface (CSI) Operator and driver.
To create CSI-provisioned PVs that mount to AWS EBS storage assets, OKD installs the AWS EBS CSI Driver Operator and the AWS EBS CSI driver by default in the
openshift-cluster-csi-drivers namespace.
The AWS EBS CSI Driver Operator provides a StorageClass by default that you can use to create PVCs. You also have the option to create the AWS EBS StorageClass as described in Persistent storage using AWS Elastic Block Store.
The AWS EBS CSI driver enables you to create and mount AWS EBS persistent volumes (PVs).
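For illustration, a claim against the Operator's default storage class might look like the sketch below; the storage class name (gp2-csi) and the requested size are assumptions that may differ in your cluster:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2-csi
  resources:
    requests:
      storage: 10Gi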
For information about dynamically provisioning AWS EBS persistent volumes in OKD, see Persistent storage using AWS Elastic Block Store.
Persistent storage using AWS Elastic Block Store | https://docs.okd.io/4.7/storage/container_storage_interface/persistent-storage-csi-ebs.html | 2022-09-25T05:02:58 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.okd.io |
Bash - Tests¶
Objectives: In this chapter you will learn how to:
work with the return code;
test files and compare them;
test variables, strings and integers;
perform an operation with numeric integers;
linux, script, bash, variable
Knowledge:
Complexity:
Reading time: 10 minutes
Upon completion, all commands executed by the shell return a return code (also called status or exit code).
- If the command ran correctly, the convention is that the status code will be zero.
- If the command encountered a problem during its execution, its status code will have a non-zero value. There are many reasons for this: lack of access rights, missing file, incorrect input, etc.
You should refer to each command's manual (man page) to know the different return code values provided by its developers.
The return code is not visible directly, but is stored in a special variable:
$?.
mkdir directory
echo $?
0
mkdir /directory
mkdir: unable to create directory
echo $?
1
command_that_does_not_exist
command_that_does_not_exist: command not found
echo $?
127
Note
The display of the contents of the
$? variable with the
echo command is done immediately after the command you want to evaluate because this variable is updated after each execution of a command, a command line or a script.
Tip
Since the value of
$? changes after each command execution, it is better to put its value in a variable that will be used afterwards, for a test or to display a message.
ls no_file
ls: cannot access 'no_file': No such file or directory
result=$?
echo $?
0
echo $result
2
It is also possible to create return codes in a script.
To do so, you just need to add a numeric argument to the
exit command.
bash # to avoid being disconnected after the "exit"
exit 123
echo $?
123
In addition to the correct execution of a command, the shell offers the possibility to run tests on many patterns:
- Files: existence, type, rights, comparison;
- Strings: length, comparison;
- Numeric integers: value, comparison.
The result of the test:
$?=0: the test was correctly executed and is true;
$?=1: the test was correctly executed and is false;
$?=2: the test was not correctly executed.
Testing the type of a file¶
Syntax of the
test command for a file:
test [-d|-e|-f|-L] file
or:
[ -d|-e|-f|-L file ]
Note
Note that there is a space after the
[ and before the
].
Options of the test command on files:
-d: the file is a directory
-e: the file exists
-f: the file is a regular file
-L: the file is a symbolic link
Example:
test -e /etc/passwd
echo $?
0
[ -w /etc/passwd ]
echo $?
1
An internal command to some shells (including bash) that is more modern, and provides more features than the external command
test, has been created.
[[ -s /etc/passwd ]]
echo $?
1
Note
We will therefore use the internal command for the rest of this chapter.
Compare two files¶
It is also possible to compare two files:
[[ file1 -nt|-ot|-ef file2 ]]
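For example (the result depends on the actual modification times of the files on your system):
[[ /etc/passwd -nt /etc/group ]]   # true if /etc/passwd is newer than /etc/group
echo $?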
Testing variables¶
It is possible to test variables:
[[ -z|-n $variable ]]
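For example:
var="Rocky"
[[ -n "$var" ]]   # true if the variable is not empty
echo $?
0
[[ -z "$var" ]]   # true if the variable is empty
echo $?
1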
Testing strings¶
It is also possible to compare two strings:
[[ string1 =|!=|<|> string2 ]]
Example:
[[ "$var" = "Rocky rocks!" ]] echo $? 0
Comparison of integer numbers¶
Syntax for testing integers:
[[ "num1" -eq|-ne|-gt|-lt "num2" ]]
Example:
var=1
[[ "$var" -eq "1" ]]
echo $?
0
var=2
[[ "$var" -eq "1" ]]
echo $?
1
Note
Since numeric values are treated by the shell as regular characters (or strings), a test on a character can return the same result whether it is treated as a numeric or not.
test "1" = "1" echo $? 0 test "1" -eq "1" echo $? 0
But the result of the test will not have the same meaning:
- In the first case, it will mean that the two characters have the same value in the ASCII table.
- In the second case, it will mean that the two numbers are equal.
Combined tests¶
The combination of tests allows you to perform several tests in one command. It is possible to test the same argument (file, string or numeric) several times or different arguments.
[ option1 argument1 [-a|-o] option2 argument 2 ]
ls -lad /etc
drwxr-xr-x 142 root root 12288 sept. 20 09:25 /etc
[ -d /etc -a -x /etc ]
echo $?
0
With the internal command, it is better to use this syntax:
[[ -d "/etc" && -x "/etc" ]]
Tests can be grouped with parentheses ( ) to give them priority.
(TEST1 -a TEST2) -a TEST3
The
! character is used to perform the reverse test of the one requested by the option:
test -e /file    # true if file exists
! test -e /file  # true if file does not exist
Numerical operations¶
The
expr command performs an operation with numeric integers.
expr num1 [+] [-] [\*] [/] [%] num2
Example:
expr 2 + 2
4
Warning
Be careful to surround the operation sign with a space. You will get an error message if you forget.
In the case of a multiplication, the wildcard character
* is preceded by
\ to avoid a wrong interpretation.
The
typeset command¶
The
typeset -i command declares a variable as an integer.
Example:
typeset -i var1
var1=1+1
var2=1+1
echo $var1
2
echo $var2
1+1
The
let command¶
The
let command tests if a character is numeric.
Example:
var1="10" var2="AA" let $var1 echo $? 0 let $var2 echo $? 1
Warning
The
let command does not return a consistent return code when it evaluates the numeric
0.
let 0
echo $?
1
The
let command also allows you to perform mathematical operations:
let var=5+5
echo $var
10
let can be substituted by
$(( )).
echo $((5+2))
7
echo $((5*2))
10
var=$((5*3))
echo $var
15
Author: Antoine Le Morvan
Contributors: Steven Spencer | https://docs.rockylinux.org/books/learning_bash/05-tests/ | 2022-09-25T04:06:09 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.rockylinux.org |
All Stock Items in Vinsight have a Units in Stock UOM (unit of measure) that defines the unit the stock is measured in. Vinsight is also able to convert quantities of stock into different units of measure to give you extra flexibility in your sales, operations and reporting. For example, you can count your stock in 12x750ml units, but sell it in 750ml units. Similarly you can sell stock in 330ml, 6x330ml and 12x330ml units but view a Sales report in litre quantities. While Vinsight comes with default Units of Measure set up, you are also able to add your own. Read more about adding, setting and changing units of measure here.
In this document:
You can find the Units of Measure list at Settings > Count > Units of Measure. All units of measure that you intend to use in Vinsight will need to be in this list.
When you start a new Vinsight subscription, a default list of Units of Measure will be provided. You can add to this list by inserting a new Unit of Measure, together with its Conversion Rate at the bottom of the page.
Vinsight units of measure are based on ‘mililitres’ (ml), ‘grams’ (gm) and ‘each’. These basic units have a conversion rate of 1 and are used to determine the conversion rate of any new units of measure that you add.
When determining the Conversion Rate for a new unit of measure, looking at the existing conversion rates should help, as you should be able to see the pattern they follow. For example, a 1.5L unit (equivalent to 1500mls) would have a conversion rate of 1500 and a 750ml unit a conversion rate of 750. Similarly, a 1 kg unit (equivalent to 1000 grams) would have a conversion rate of 1000 and a 500gm unit, a conversion rate of 500.
In the following example list, I have a 500gm bag with a conversion rate of 500 and a 1 kg bag with a conversion rate of 1000. However there is also a generic ‘bag’ with a conversion rate of 1. This generic ‘bag’ unit is added so that the unit is available to be customised within different stock items. You can see an example of this being done in ‘Stock Item Conversion Rates‘ below.
In the following example, I am adding a new unit of 24x330ml. This equates to 24 x 330 = 7920mls, so the conversion rate will be 7920. First, enter the new unit into the ‘Code’ field. If you are lucky you may find that Vinsight has auto-generated the Conversion Rate for you. (This happens when the format of the code gives Vinsight some clues at to the likely conversion rate required). If it hasn’t been auto-generated, add the Conversion rates to the ‘Conversion Rate’ field
Note that you can type simple equations into the ‘Conversion Rate’ field (e.g. you can type ’24 x 330′ and Vinsight will calculate the result for you).
You will need to click ‘insert’ to add the new unit of measure to the list.
If you are not using metric measurements, some units of measure that may be useful, together with their conversions, are listed below. Again, the simple rule of thumb is that for volume, convert the amount to millilitres and for weights convert the amount to grams. For example: 1 US gallon = 3785.41 (ml), 1 quart = 946.35 (ml), 1 pint = 473.18 (ml), 1 fl oz = 29.57 (ml), 1 lb = 453.59 (gm), 1 oz = 28.35 (gm).
There may be a situation where you want to remove a Unit of Measure from the list. This issue often arises when you have two uoms in the list that are essentially used to describe the same thing. For example, the 12-pack unit listed below may have made sense when you only sold 12-pack cartons of wine, but now you sell other things in 12-packs and the unit causes confusion. Sometimes the same unit has been added twice but described slightly differently (e.g. 750ML and 750ml) and you just want to clean up the list for consistency.
If the unit has recently been added and has not yet been used, you can delete it by selecting it from the Units Of Measure list and clicking ‘Delete’.
However, if you have already used the unit, you will likely get a message like the following.
This is because Vinsight will not allow you to delete a Unit of Measure that is being used. In this situation, the best way to get rid of the unit of measure is to ‘Merge’ it with another equivalent unit on this list. In the following example, I can see that the ’12-pack’ unit is equivalent to the ’12x750ml’ unit, as they both relate to items measured in mls and have a Conversion Rate of 9000.
After clicking ‘Merge’, ensure that the unit you want to keep is displayed in the ‘Winner’ box. If it is not, click the ‘Swap’ button to makes sure that you keep the right one. Then click ‘Merge’ to complete the process.
All Stock Items that previously had ’12-pack’ as their Units In Stock UOM, will now have ’12x750ml’ instead.
Note, if you simply want to re-name an existing Unit of Measure, create a new unit with the new name, then merge the old unit into it.
Every stock item in Vinsight is required to have a Units in Stock Unit of Measure. This is what you measure your units in stock in, if you were counting your stock in the warehouse. For example, you would probably measure your stock of additives such as PMS in kg rather than grams. Your stock of Finished Items to Sell should be measured in bottles (750mL) or cases (12x750mL) rather than single litres.
The Base UOM, Inner UOM, Outer UOM and Pallet Count describe how a finished product is packaged, from the single sale unit up to a full pallet. So for beer, your UOMs might be:
Base: 330ml, Inner:6x330ml, Outer: 24x330ml, Pallet Count: 50
whereas for Wine, they might be:
Base: 750ml, Inner: 12x750ml, Outer: 12x750ml. Pallet Count: 56
It is fine for the Inner and Outer UOM to be the same value. 12x750ml cartons of wine may not be generally bundled or shrink-wrapped into groupings of cartons in the way that beer might be.
Ensure that you add the units you use for the Base, Inner and Outer UOMs to the conversion rates table described in this next section.
This section of the Stock Item is where you list the conversion rates you require for your use of this stock item. The two examples below help illustrate how this works:
Example 1: The stock item in question is 2021 Chardonnay which you sell in individual 750ml bottles but also in cases of 6 and 12 bottles.
In this instance, you Units in Stock UOM may be 12x750ml. However you will need to insert each of these variants into the ‘Conversion Rate’ table to make them available for selection when creating sales orders for this stock item.
As you select each unit from the drop-down list, the Conversion rate will auto-populate with the value set in the Units of Measure list. If you change any of the conversion rates here, those changes will override the default values in the Units of Measure list for that Stock Item.
To view a brewery example see Example Stock Item Brewery Conversion.
Note: You should not include an item in the conversion rate table for a stock item unless it is substitutable for that product. For example, while a customer may be happy to receive two 6 x 750ml cases as a substitute for one 12 x 750ml case, they would not accept two 12 x 375ml cases, even though the volume would be the same. You would accordingly need to set up a separate stock item to deal with 375 ml bottles and their pack size variants.
In addition, you should keep the stock items separate if you want to track them separately. For example, if you want to be able to bring up a report that shows you have exactly ten 6x750mls units in stock and five 12x750ml units, you will need to keep these as separate stock items. If have this stock but are using a single stock item with conversion rates, the report would be able to show that you had 120 750ml bottles or 10 12x750ml units or 20 6x750ml units, depending on which unit you select in the report, but it could not show you that ten are pre-packaged as 6-packs and five as 12-packs.
Example 2: The stock item in question is yeast, which you commonly purchase in 250g bags and add to your tanks in various quantities, measured by weight.
In this case, your Units In Stock UOM may be ‘kg’. However, you will want to additionally enter conversion rates to grams and milligrams or ppm, as you use these rates to add or quantify the product in operations (See Stock Item Operation UOMS). As you select these units from the drop down list, the conversion rate will be automatically suggested to you.
Here, we also want to define a custom measurement. I selected ‘bag’ from the drop-down list. This unit has a default conversion of ‘1’ in my main Units of Measure list. However, for this particular stock item, I want a ‘bag’ to mean 250 grams. I accordingly add 250 to the ‘Conversion Rate’ for bag.
The last column (with heading “1 Kg =” in above example) will then be auto-generated. It is always worth looking at this column for a quick sanity check. Here it states that 1 Kg of yeast will be the equivalent of 4 bags. This sounds right, so I can be confident I entered the conversion rate correctly.
This means that if I create a purchase order for 4 bags of yeast and receive it into stock, this will equate to an increase of 1 Kg in stock inventory for that product.
The final place you use units of measure in the Stock items is in the ‘Operations’ area. This section is really only relevant to stock items that are additives or packaging materials as it deals with the units of measure required to add this stock item to a vessel or product in an operation.
The Default Rate UOM is the Unit of Measure that is applied when you are adding this item to a vessel or product. For example if adding PMS (Potassium Metabisulphite) to a vessel, you might want to add it at a rate of 1 mg per litre. If so, you should choose ‘mg’ as your Default Rate UOM. You will need to ensure whichever unit you choose is present in the conversion table for that stock item.
The Default Quantity UOM is the unit of measure in which you will view the total amount added. For example, if you selected ‘gram’ as the Default Quantity UOM for your PMS, when you add 1 mg per litre to a 1000L tank, the operation will show that 1 gram has been added in total (rather than 1000mg).
For something like caps, you may choose ‘each’ for both the Default Rate UOM and the Default Quantity UOM. This would mean when packaging twenty 12x750ml stock items, you would add 12 ‘each’ to every stock item being created and the total used for the operation would be recorded as 240 ‘each’.
You need to take special care when editing the Units in Stock UOM in a Stock Item as changing the unit could impact your stock quantities. Before changing the Units in Stock UOM, you need to understand whether you want the units in stock to change with the change in units of measure or not.
Scenario 1: Current stock levels are accurate in their current UOM and you simply want to measure the stock in a different UOM going forward.
In the following example, it is correct that I currently have 504 12x 750ml units in stock. However I want to change the Units In Stock UOM from 12x750ml to 750ml and convert my stock quantities to reflect this new UOM.
To do this, click the ‘Edit’ button at the bottom of the page and select the new Unit of Measure you require from the drop-down list. A pop-up message will appear asking you whether you want to update your standard cost based on the new unit of measure. In this situation, I would select the green ‘Keep Updated Standard Cost’ option, so that my standard cost changes to reflect the new uom.
A further pop-up message will then appear requiring you to decide whether to ignore the conversion and keep the existing stock numbers or convert to the new measure and change the stock quantity to 6048 units. Again, in this situation I would select the green ‘Convert to 6048 750ml units in stock’ option.
Once you have made these selections, you need to click ‘Save’ at the bottom of the page to lock in the changes made.
Scenario 2: Current Stock Levels are Inaccurate “because of” the Existing Unit of Measure.
You may encounter a situation where due to an input error, the current stock levels are wrong because of the unit of measure. For example, in your initial stock item import, you may have recorded 11.8 grams of sugar when you meant to record 11.8 kg. In this situation you want to change the Units In Stock UOM without changing current stock quantities recorded.
Here you go through the same process of clicking ‘Edit’ and then selecting the desired new Unit of Measure from the Units In Stock UOM drop down list. However, this time, I select the grey options to keep the standard cost at $3 per kg rather than changing it to $3000.
Again, in this case, I would choose to ‘ignore the conversion’ and keep 11.8 Kg units in stock.
Please note, it is important to read the message options carefully, and click the one that makes sense for your situation. Just because you chose the grey option for the first question does not necessarily mean you will select the grey option for the second. It really depends on the state your stock item was in before the change in uom.
Again, you will need to click the ‘Save’ button to lock in any changes. If your changes result in a significant change in the Standard Cost, once you click ‘Save’ a pop-up will appear, giving you the option to update your cost usages in current vessels and historical operations.
Once you have ‘ignored’ or updated the cost usages, you should be able to save the Stock Item.
You can bulk edit units of measure, but you need to be careful as (as with individual stock item edits as described above) changing the unit of measure of an item can impact stock quantities. Before starting, you need to understand whether you want the units in stock to change with the change in units of measure or not. When bulk editing you cannot decide this stock item by stock item. Either they will all change or they will all stay the same.
If you have items in both categories, you will need to do these as separate bulk edits, or simply edit them individually in the stock item (Editing Units In Stock UOM).
In the following example, all finished item stock item units of measure have been mistakenly entered as ‘each’. We want to change this so that the units of measure reflect the actual volume of the stock produced. This will give us much more flexibility in our reporting as we can look at total volume of product sold as well as individual units.
We start by downloading a spreadsheet of the relevant items from the Stock Items list.
We then change the Units in Stock UOM and the Base UOM from ‘each’ to the correct unit. In the following example, we are changing the ‘2021 Plaintree Chard 750ml’ to ‘750mL’ and the ‘2021 Plaintree Chard Magnum’ to ‘1500mL’.
When we upload this back into Vinsight, we get the following message. . . .
In this situation, we select the option that changes units but does not convert the quantity. Note, while the message includes an example relating to one of the two lines we are importing in this example, this decision will be made the same for all stock items being imported with UOM changes.
We then click the ‘Import to Vinsight’ button to import the changed data back into Vinsight.
This will change the Units In Stock UOM while keeping the quantity in stock the same.
As with any bulk editing job, it is always a good idea to just import one or two rows from the spreadsheet as a first stop to ensure that you are comfortable with what it is doing. | https://docs.vinsight.net/units-of-measure | 2022-09-25T04:56:42 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.vinsight.net |
Keyboard Support
This article explains the keyboard shortcuts present in RadNumericUpDown as well as the properties that can be used for keyboard navigation.
Keyboard Shortcuts
In order to change the value without clicking on the repeat buttons of the control, you can also use the following keyboard shortcuts:
MouseWheel: whenever a RadNumericUpDown is focused you can change its value by using the mouse wheel. The step change will be equal to the value of SmallChange.
PageUp: increments the value by one step equal to LargeChange.
PageDown: decrements the value by one step equal to LargeChange.
Up: increments the value by one step equal to SmallChange.
Down: decrements the value by one step equal to SmallChange.
Tab Navigation
TabNavigationExtensions.IsTabStop attached property indicates whether RadNumericUpDown is included in the tab navigation cycle. Example 1 illustrates how to set that property in order to exclude the control from the tab navigation. The property is available since R3 2016.
Example 1: RadNumericUpDown with TabNavigationExtensions.IsTabStop
<telerik:RadNumericUpDown telerik:TabNavigationExtensions.IsTabStop="False" />
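Example 2: The same setting applied from code-behind (a minimal sketch; it assumes the control is named radNumericUpDown and uses the attached property's generated setter):
// Exclude the control from the tab navigation cycle at runtime.
TabNavigationExtensions.SetIsTabStop(this.radNumericUpDown, false);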
Linker Tools Error LNK2011
precompiled object not linked in; image may not run
If you use precompiled headers, LINK requires that all of the object files created with precompiled headers must be linked in. If you have a source file that you use to generate a precompiled header for use with other source files, you now must include the object file created along with the precompiled header.
For example, if you compile a file called STUB.cpp to create a precompiled header for use with other source files, you must link with STUB.obj or you will get this error. In the following command lines, line one is used to create a precompiled header, COMMON.pch, which is used with PROG1.cpp and PROG2.cpp in lines two and three. The file STUB.cpp contains only
#include lines (the same
#include lines as in PROG1.cpp and PROG2.cpp) and is used only to generate precompiled headers. In the last line, STUB.obj must be linked in to avoid LNK2011.
cl /c /Yccommon.h stub.cpp
cl /c /Yucommon.h prog1.cpp
cl /c /Yucommon.h prog2.cpp
link /out:prog.exe stub.obj prog1.obj prog2.obj
Activate client software distribution
Client software distribution requires the Orchestration - Client Software Distribution plugin (com.snc.orchestration.client_sf_distribution), which is available by request with a subscription to Orchestration.
About this task
The Orchestration - Client Software Distribution plugin activates the Orchestration - System Center Configuration Manager plugin that contains the custom SCCM activities used to deploy or revoke software using an SCCM server. For additional plugin dependencies, see Plugins installed with client software distribution.
Note: The Orchestration - Client Software Distribution plugin runs in its own application scope.
Procedure
1. In the HI Service Portal, click Service Requests > Activate Plugin.
2. On the form, fill in the fields.
Table 1. Plugin activation form
Target Instance: Instance on which to activate the plugin.
Plugin Name: Name of the plugin to activate.
Specify the date and time you would like this plugin to be enabled: Date and time that you would like the plugin to be enabled.
Reason/Comments: Information that would be helpful for the ServiceNow personnel activating the plugin. For example, if you need the plugin activated at a specific time instead of during one of the default activation windows.
3. Click Submit.
Viewing the audit log
The following procedure describes how to view the audit log for a job.
DMT users can view audit log records for their jobs that provide information about when (date and time) the following events have occurred:
- A job was created.
- The job engine resequenced the order of steps.
- A job ran and completed.
- A user clicked Continue.
- Job notifications were sent for a job.
- The Wait state was enabled or removed for a step.
- A step caused an error.
- Notifications were sent for a step.
- A job schedule was created, modified, or deleted.
- For Error Management, changes were made using Field Maintenance functionality.
- For Error Management, tree data records were abandoned at the time of automated cleanup (based on application preferences).
- For staging forms, staging records were abandoned at the time of automated cleanup (based on application preferences).
To view the audit log
- In the Job Console, double-click the job for which you want to view the audit log.
- In the navigation pane of the Job window, select Links > View Audit Log. The Job Audit Log window is displayed.
- Select the Notification Audits tab to view the audit log records.
- If you are a DMT Admin, you can select the Execution Audit tab to view the sequence of steps for your job (when jobs started and ended). | https://docs.bmc.com/docs/itsm81/viewing-the-audit-log-229802476.html | 2019-09-15T17:20:57 | CC-MAIN-2019-39 | 1568514571651.9 | [] | docs.bmc.com |
On the block configure page, click on DrupalExp Block Settings => Animate
Block appears effect: Choose the animation you want to apply to the block
Animate duration: The duration of the animation in milliseconds. Default is 1000 (1s)
Animate delay: The delay time before the animation starts. It's helpful when you have several blocks in a row and want them to appear one by one.
This page describes how to integrate Live Forms with a Microsoft IIS web server using the IIS-to-Tomcat (AJP) connector.
Install and Configure the Form Server
IIS configuration is a complex task. The integration steps below relating to your IIS web server should be performed by your IIS Web System Administrator.
It is very important that you first follow the basic Live Forms Quick Start Guide and verify that Live Forms runs correctly on Tomcat before you begin the IIS integration.
- Start Tomcat
- Check that the configured AJP port is enabled and listening for requests. On a windows command prompt:
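For example, assuming the default AJP port 8009 (use whatever port you configured in server.xml):
netstat -nao | findstr 8009
A line in the LISTENING state for that port indicates the AJP connector is up and accepting requests.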
When the tomcat server configuration is completed, install the IIS to Tomcat Connector.
Install the IIS to Tomcat Connector
- Download the connector project from: BonCode Connector website. Click on the link to download the latest version.
- To avoid multiple issues with using the zip file content, unblock the package before unzipping. Simply right click on the zip file and click “Unblock” on the “General” tab:
- Extract the project zip file.
- Execute the Connector_Setup.exe file.
- Accept the License Agreement and click Next.
- Follow the prompts and in the Apache Tomcat location details enter correct AJP port as configured in the tomcat server.xml file above:
- Click Next and use default values in all the prompts until you reach the Select Handler Mapping prompt.
- In Select Handler Mapping prompt select the option Servlet (add wildcard reference and pass all traffic to tomcat)
- Continue to the final prompt and click Install to finish the connector installation.
- Test the configuration by browsing
Troubleshooting
- If you see an access error like the one below, make sure that the Application Pool Identity (the security account set for the Application Pool) also has write permissions to C:\WINDOWS\Microsoft.NET directory.
- To check the currently set identity, open IIS and right click on your Application Pool, then click on the Identity tab:
- Then in your Windows explorer, right click on C:\WINDOWS\Microsoft.NET directory and click on Properties. Check if the identity user set of your Application Pool has the ‘write’ permissions for this directory. If not, then add the user permissions.
- Test the configuration again by browsing
2. Customers using LDAP SSO that see a "Value update failed" error intermittently occur on forms should reconfigure the AJP (IIS to Tomcat) connector with these settings to resolve the error: | https://docs.frevvo.com/d/display/frevvo90/Integrating+with+IIS | 2019-09-15T17:00:11 | CC-MAIN-2019-39 | 1568514571651.9 | [] | docs.frevvo.com |
Advanced request throttling with Azure API Management
Being able to throttle incoming requests is a key role of Azure API Management. Either by controlling the rate of requests or the total requests/data transferred, API Management allows API providers to protect their APIs from abuse and create value for different API product tiers.
Product-based throttling
To date, the rate throttling capabilities have been limited to being scoped to a particular Product subscription, defined in the Azure portal. This is useful for the API provider to apply limits on the developers who have signed up to use their API, however, it does not help, for example, in throttling individual end users of the API. It is possible that for single user of the developer's application to consume the entire quota and then prevent other customers of the developer from being able to use the application. Also, several customers who might generate a high volume of requests may limit access to occasional users.
Custom key based throttling
NOTE: The
rate-limit-by-keyand
quota-by-keypolicies are not available when in the Consumption tier of Azure API Management.
The new rate-limit-by-key and quota-by-key policies provide a more flexible solution to traffic control. These new policies allow you to define expressions to identify the keys that are used to track traffic usage. The way this works is easiest illustrated with an example.
IP Address throttling
The following policies restrict a single client IP address to only 10 calls every minute, with a total of 1,000,000 calls and 10,000 kilobytes of bandwidth per month.
<rate-limit-by-key calls="10"
      renewal-period="60"
      counter-key="@(context.Request.IpAddress)" />

<quota-by-key calls="1000000"
      bandwidth="10000"
      renewal-period="2629800"
      counter-key="@(context.Request.IpAddress)" />
If all clients on the Internet used a unique IP address, this might be an effective way of limiting usage by user. However, it is likely that multiple users are sharing a single public IP address due to them accessing the Internet via a NAT device. Despite this, for APIs that allow unauthenticated access the
IpAddress might be the best option.
User identity throttling
If an end user is authenticated, then a throttling key can be generated based on information that uniquely identifies that user.
<rate-limit-by-key calls="10"
      renewal-period="60"
      counter-key="@(context.Request.Headers.GetValueOrDefault("Authorization","").AsJwt()?.Subject)" />
This example shows how to extract the Authorization header, convert it to a JWT object, and use the subject of the token to identify the user and use that as the rate limiting key. If the user identity is stored in the JWT as one of the other claims, then that value could be used in its place.
Combined policies
Although the new throttling policies provide more control than the existing throttling policies, there is still value combining both capabilities. Throttling by product subscription key (Limit call rate by subscription and Set usage quota by subscription) is a great way to enable monetizing of an API by charging based on usage levels. The finer grained control of being able to throttle by user is complementary and prevents one user's behavior from degrading the experience of another.
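As a sketch of how the two can sit side by side in a single policy document (the numbers are illustrative only):
<rate-limit calls="1000" renewal-period="60" />
<rate-limit-by-key calls="10"
      renewal-period="60"
      counter-key="@(context.Request.IpAddress)" />
This would cap each subscription at 1,000 calls per minute overall while still limiting any single client IP to 10 calls per minute.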
Client driven throttling
When the throttling key is defined using a policy expression, then it is the API provider that is choosing how the throttling is scoped. However, a developer might want to control how they rate limit their own customers. This could be enabled by the API provider by introducing a custom header to allow the developer's client application to communicate the key to the API.
<rate-limit-by-key calls="10"
      renewal-period="60"
      counter-key="@(context.Request.Headers.GetValueOrDefault("Rate-Key",""))" />
This enables the developer's client application to choose how they want to create the rate limiting key. The client developers could create their own rate tiers by allocating sets of keys to users and rotating the key usage.
Summary
Azure API Management provides rate and quote throttling to both protect and add value to your API service. The new throttling policies with custom scoping rules allow you finer grained control over those policies to enable your customers to build even better applications. The examples in this article demonstrate the use of these new policies by manufacturing rate limiting keys with client IP addresses, user identity, and client generated values. However, there are many other parts of the message that could be used such as user agent, URL path fragments, message size.
Next steps
Please give us your feedback in the Disqus thread for this topic. It would be great to hear about other potential key values that have been a logical choice in your scenarios.
Feedback | https://docs.microsoft.com/en-us/azure/api-management/api-management-sample-flexible-throttling | 2019-09-15T16:00:50 | CC-MAIN-2019-39 | 1568514571651.9 | [] | docs.microsoft.com |
Export-Module
Member
Syntax
Export-ModuleMember [[-Function] <String[]>] [-Cmdlet <String[]>] [-Variable <String[]>] [-Alias <String[]>] [<CommonParameters>]
Description and aliases in the script module are exported, but the variables.
Examples
Example 1: Export functions and aliases in a script module
Export-ModuleMember -Function * -Alias *
This command exports all the functions and aliases defined in the script module.
Example 2: Export specific aliases and functions
Export-ModuleMember -Function Get-Test, New-Test, Start-Test -Alias gtt, ntt, stt
This command exports three aliases and three functions defined in the script module.
You can use this command format to specify the names of module members.
Example 3: Export no members: Export a specific variable: Multiple export commands
# From TestModule.psm1 Function New-Test { Write-Output 'I am New-Test function' } Export-ModuleMember -Function New-Test function Validate-Test { Write-Output 'I am Validate-Test function' } function Start-Test { Write-Output 'I am Start-Test function' } Set-Alias stt Start-Test Export-ModuleMember -Function Start and the alias would be exported. With the Export-ModuleMember commands, only the New-Test and Start-Test functions and the STT alias are exported.
Example 6: Export members in a dynamic module.
Example 7: Declare and export a function in a single command
# From TestModule.psm1 {Write-Output 'I am New-Test function'} function helper {Write-Output 'I am helper function'}.
Parameters
Specifies the aliases that are exported from the script module file. Enter the alias names. Wildcard characters are permitted..
Specifies the functions that are exported from the script module file. Enter the function names. Wildcard characters are permitted. You can also pipe function name strings to Export-ModuleMember.
Specifies the variables that are exported from the script module file. Enter the variable names, without a dollar sign. Wildcard characters are permitted.
Inputs
System.String
You can pipe function name strings to this cmdlet.
Outputs
None
This cmdlet does not generate any output.
Notes
- To exclude a member from the list of exported members, add an Export-ModuleMember command that lists all other members but omits the member that you want to exclude.
Related Links
Feedback | https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/export-modulemember?view=powershell-6 | 2019-09-15T17:37:19 | CC-MAIN-2019-39 | 1568514571651.9 | [] | docs.microsoft.com |
You are viewing the RapidMiner Server documentation for version 9.2 - Check here for latest version
REST API
The new version of RapidMiner Server offers a REST API to manage it programmatically. In order to use the API, you are required to be authenticated and authorized via JSON Web Token (JWT).
Create a JWT
Use the following RapidMiner Server REST endpoint to create a JWT:
[GET] $RMServerHost/api/rest/tokenservice
where
$RMServerHost is the IP Address or host name of your RapidMiner Server instance.
The request needs to be authorized via
Basic Auth, use your RapidMiner Server credentials to do so.
If the request was successfully executed, you will get the following response body:
{ "idToken": "<tokenContent>", "expirationDate": "<epochSeconds>" }
Use the value of
idToken to authorize your requests. The
expirationDate indicates how long the JWT is valid,
the default is 5 minutes. Once the token expires, you will need to create a new one by repeating the steps
described above or you'll receive a 401 (unauthorized) response.
curl -u username:password $RMServerHost/api/rest/tokenservice
where
$RMServerHostis the address of your RapidMiner Server instance
usernameis the name of the user which should be used
passwordis the password of the user
RapidMiner Server API
The REST API documentation of RapidMiner Server is available as
OpenApi 3.0 specification and is published on
SwaggerHub.
For all further requests to the API you need to have
- the HTTP method of your request (GET, POST, PUT, PATCH or DELETE),
- the content-type header attribute,
- the authorization header attribute which includes the
idTokenfrom the token service and is prefixed with
Bearer(e.g.
Authorization: Bearer <tokenContent>),
- the RapidMiner Server url and
- the route you like to make a request to.
curl -H "Content-Type: application/json" -H "Authorization: Bearer $idToken" $RMServerHost/executions/queues | https://docs.rapidminer.com/9.2/server/use/rest-api/ | 2019-09-15T16:28:01 | CC-MAIN-2019-39 | 1568514571651.9 | [] | docs.rapidminer.com |
Deprecation: #68141 - typo3/ajax.php¶
See Issue #68141
Description¶
The ajax.php entry-point has been marked as deprecated. All AJAX requests in the Backend using the Ajax API are not affected as they automatically use index.php.
Affected Installations¶
Installations with custom extensions that call typo3/ajax.php without using proper API calls from
BackendUtility. | https://docs.typo3.org/c/typo3/cms-core/master/en-us/Changelog/7.4/Deprecation-68141-Typo3ajaxphp.html | 2019-09-15T17:05:10 | CC-MAIN-2019-39 | 1568514571651.9 | [] | docs.typo3.org |
Welcome to Tigase Messenger for iOS
1. Welcome
Welcome to the documentation for Tigase Messenger for iOS.
1.1. Minimum Requirements
Tigase Messenger for iOS requires an apple device running iOS v10 or later. Compatible devices are listed below:
- iPod Touch (6th generation)
iPad
- iPad (4th generation)
- iPad (5th generation)
- iPad Air
- iPad Air 2
- iPad Mini 2
- iPad Mini 3
- iPad Mini 4
- iPad Pro
1.2. Installation
Tigase Messenger for iOS can be installed the same way any apple approved app can be found: through the appstore. Search for Tigase in the store search function and then tap install and follow the prompts to install Tigase Messenger.
1.3. Account Setup
Upon running Tigase Messenger for iOS for the first time, you will be greeted with the following screen:
1.3.1. Registering for a New Account
The application supports creating a new account registration using in-band registration. This means that on servers supporting it, you can sign up for a new account straight from the client! A list of servers that support this is located here. We have provided quick-links to Tigase maintained servers where you can register an account. However, you may use another domain if you wish. If you wish to use a custom domain, enter the domain address in the top bar, the application will then check with the server to ensure registration is supported.
You will be presented with an error message if it is not supported.
If registration is supported, you will see the following prompts:
Fill out the fields for username, password, and E-mail. You do not need to add the domain to your username, it will be added for you so your JID will look like
[email protected]
An E-mail is required in case a server administrator needs to get in contact with you, or you lose your password and might need recovery.
Once you tap Register, the application will connect and register your account with the server.
1.3.2. Use an Existing Account
If you already have an XMPP account on a server, select this option to login using Tigase Messenger for iOS. Enter your username and password as normal and tap Save to add the account.
1.3.3. Certificate Errors
You may receive certificate errors from servers that may not have certificate chains installed, invalid, or expired certificates. You will receive an unable to connect to server error, however servers with these errors will ask the user to accept or deny these security exceptions but they will show up at system notifications.
After doing so you may reattempt the connection to the server.
2. Tigase Messenger for iOS Interface
The menu interface for Tigase Messenger for iOS is broken up into three main panels; Recent, Contacts and More. This can be brought up from any screen by swiping right from the left side of the screen, or tapping the back option on the top left.
2.1. Recent
The recent menu displays recent conversations with other users, and also serves as a way to navigate between multi-user chatrooms (MUCs). Each conversation will be displayed here along with an icon indicating user or room status.
Tapping one of these conversations will bring up the chat, whether it is MUC or one on one. This panel also serves as an archive of sorts, and previous conversations with users will be accessible in this panel.
You may clear conversations from the archive by dragging the name or MUC conversation to the left and selecting delete. If you are removing a MUC chat, you will leave the chatroom.
2.1.1. New/Join MUC
Tapping the plus button on the top right will bring up the new/join muc panel. This interface will allow you to either join an existing or create a new MUC on your chosen server.
Account: This is the account that will handle data for the MUC chatroom. This is available for users who have multiple accounts logged in.
Server: The server the chatroom is located on, in many cases the muc server will be muc.servername.com, but may be different.
Room: The name of the chatroom you wish to create or join.
Nickname: Your name for use inside the MUC. This will become
[email protected]. MUC conversations do not leak your XMPP account, so a nickname is required.
Password: The password for the MUC room. If you are creating a new chatroom, this will serve as the chat room password.
Once you are finished, tap Join and you will join, or the room will be opened for you.
The recent panel will now display the chatroom, you may tap it to enter the MUC interface.
When in a chatroom, you may view the occupants by tapping Occupants, and will be given a list and statuses of the room participants.
2.2. Contacts
The contacts panel serves as your Roster, displaying all the contacts you have on your roster, and displaying statuses along with their names. Tigase Messenger for iOS supports vCard-Temp Avatars and will retrieve them if they are uploaded by a user.
Contacts with green icons are available or free to chat status.
Contacts with yellow icons are away or extended away.
Contacts with red icons are in do not disturb status.
Contacts with gray icons are offline or unavailable.
Note that contacts will remain gray if you decide not to allow presence notifications in the settings.
You may remove or edit contacts by dragging a contact to the left and tapping Delete. You also have the ability to edit a contact, explained in the next section. Deleting the contact will remove them from your roster, and remove any presence sharing permissions from the contact.
You may also filter contacts by status by selecting All to display all users, or Available to hide users that are offline or unavailable.
2.2.1. Editing a contact
When editing a contact, you may chose to change the account that has friended the user, XMPP name, edit a roster name (which will be shown on your roster). Here, you may also decide to selectively approve or deny subscription requests to and from the user. If you do not send presence updates, they will not know whether you are online, busy, or away. If you elect not to receive presence updates, you will not receive information if they are online, busy or away.
2.2.2. Adding a contact
To add a contact, tap the plus button in the upper left and the add contact screen will show.
First, select the account friends list you wish the new contact to be added too. Then type in the JID of the user, do not use resources, just bare JID. You may enter a friendly nickname for the contact to be added to your friend list, this is optional. When adding users, you have two options to select:
Send presence updates - This will allow sending of presence status and changes to this user on your roster. You may disable this to reduce network usage, however you will not be able to obtain status information.
Receive presence updates - Turning this on will enable the applications to send presence changes to this person on the roster. You may disable this to reduce network usage, however they will not receive notifications if you turn off the phone
If you do decide to receive presence updates when adding a new contact, you will be presented with this screen when they add you back:
By tapping yes, you will receive notifications of presence changes from your contact. This subscription will be maintained by the server, and will stay active with your friends list.
2.3. More
The more panel is your program and account settings panel, from here you can change program settings and general account information.
2.3.1. Accounts
This will list your current accounts, if an avatar has been defined for the account, it will show on the left side but by default the Tigase logo will be used.
2.3.2. vCard data
You can set and change vCard data for your account. Tap the account you wish to edit and you will be presented with a number of fields that may be filled out. There is a blank space in the upper left corner where you may upload a photo as your avatar.
2.3.3. Badge descriptions
We have included a badging system on accounts to help indicate if connections issues are present with any account setup.
2.3.4. Delete an account
If you wish to remove an account, swipe left and select Delete. You will be asked for a confirmation whether you want to remove it from the application, and if the server supports it, you may delete it from the server removing roster, presence subscriptions, and potentially saved history.
You may also add multiple XMPP accounts from this screen. The add account screen looks identical to the one seen in the existing account section.
To change settings for an individual account, tap that account name. Those options are covered under Account Settings section.
2.3.5. Status
Below accounts is a status setting for all connected and online accounts.
To save data usage, your account status will be managed automatically using the following rules by default
However, you may override this logic by tapping Automatic and selecting a status manually.
2.3.6. Settings
Below are settings for the operation and behavior of the application.
Chats
List of Messages
Lines of preview:
Sets the lines of preview text to keep within the chat window without using internal or message archive.
Sorting:
Allows sorting of recent messages by Time, or by status and time (with unavailable resources at the bottom).
Messages
Send messages on return:
If you are offline or away from connection, messages may be resent when you are back online or back in connection if this option is checked.
Clear chat on close:
If this is enabled, when you close chats from the recent screen, all local history on the device will be deleted. This does not affect operation of offline or server-stored message archives.
Message carbons:
Enables or disables message carbons to deliver to all resources. This is on by default, however some servers may not support this.
Request delivery receipts:
Whether or not to request delivery receipts of messages sent.
Attachments
File sharing via HTTP:
This setting turns on the use of HTTP file sharing using the application. The server you are connected too must support this component to enable this option.
Simplified link to HTTP file:
This creates a simplified link to the file after uploading rather than directly sending the file. This may be useful for intermittent communications.
Max image preview size:
Sets the maximum size of image previews to download before fully downloading files. Setting this at 0 prevents previews from retrieving files.
Clear cache:
This clears the devices cache of all downloaded and saved files retrieved from HTTP upload component.
Contacts
Display
Contacts in groups:
Allows contacts to be displayed in groups as defined by the roster. Disabling this will show contacts in a flat organization.
"Hidden" group:
Whether or not to display contacts that are added to the "hidden" group.
General
Auto-authorize contacts:
Selecting this will automatically request subscription to users added to contacts.
Notifications
This section has one option: Whether to accept notifications from unknown. If left disabled, notifications from unknown sources (including server administrators) will not be sent to the native notification section of the device. Instead, you will have to see them under the Recent menu.
3. Advanced Options
This section contains information about advanced settings and options that are available to the application, but may not be typically considered for users.
3.1. Account Settings
For each connected account, there are sever-specific settings that are available. This may be brought up by selecting More… and then choosing the account you wish to edit.
General
Enabled:
Whether or not to enable this account. If it is disabled, it will be considered unavailable and offline.
Change account settings:
This screen allows changing of the account password if needed.
Push Notifications Tigase Messenger for iOS supports XEP-0357 Push Notifications which will receive notifications when a device may be inactive, or the application is closed by the system. Devices must be registered for push notifications and must register them VIA the Tigase XMPP Push Component, enabling push components will register the device you are using.
Enabled:
Enables Push notification support. Enabling this will register the device, and enable notifcations.
When in Away/XA/DND state:
When enabled, push notifications will be delivered when in Away, Extended away, or Do not disturb statuses which may exist while the device is inactive.
Message Archiving
Enabled:
Enabling this will allow the device to use the server’s message archive component. This will allow storage and retrieval of messages.
Automatic synchronization:
If this is enabled, it will synchronize with the server upon connection, sharing and retrieving message history.
Synchronization:
Choose the level of synchronization that the device will retrieve and send to the server. | http://docs.tigase.net.s3-website-us-west-2.amazonaws.com/tigase-ios/master-snapshot/Tigase_Swift_Guide/html/ | 2019-09-15T16:11:01 | CC-MAIN-2019-39 | 1568514571651.9 | [array(['./images/home.png', 'home'], dtype=object)
array(['./images/regfailure.png', 'regfailure'], dtype=object)
array(['./images/registernew.png', 'registernew'], dtype=object)
array(['./images/recent.png', 'recent'], dtype=object)
array(['./images/delchat.png', 'delchat'], dtype=object)
array(['./images/join.png', 'join'], dtype=object)
array(['./images/occu.png', 'occu'], dtype=object)
array(['./images/roster.png', 'roster'], dtype=object)
array(['/images/edituser.png', 'edituser'], dtype=object)
array(['./images/adduser.png', 'adduser'], dtype=object)
array(['./images/presreq.png', 'presreq'], dtype=object)
array(['./images/settings.png', 'settings'], dtype=object)
array(['./images/delacct.png', 'delacct'], dtype=object)
array(['./images/setstatus.png', 'setstatus'], dtype=object)
array(['/images/chatsettings.png', 'chatsettings'], dtype=object)
array(['./images/acctsetting.png', 'acctsetting'], dtype=object)] | docs.tigase.net.s3-website-us-west-2.amazonaws.com |
Celery 1.0.6 (stable) documentation
You can schedule tasks to run at intervals like cron.)
If you want a little more control over when the task is executed, for example, a particular time of day or day of the week, you can use crontab to set the run_every property:
from celery.task import PeriodicTask from celery.task.schedules import crontab class EveryMondayMorningTask(PeriodicTask): run_every = crontab(hour=7, minute=30, day_of_week=1) def run(self, **kwargs): logger = self.get_logger(**kwargs) logger.info("Execute every Monday at 7:30AM.")
If you want to use periodic tasks you need to start the celerybeat service. You have to make sure only one instance of this server is running at any time, or else you will end up with multiple executions of the same task.
To start the celerybeat service:
$ celerybeat
or if using Django:
$ python manage.py celerybeat
You can also start celerybeat with celeryd by using the -B option, this is convenient if you only have one server:
$ celeryd -B
or if using Django:
$ python manage.py celeryd -B | https://docs.celeryproject.org/en/1.0-archived/getting-started/periodic-tasks.html | 2019-09-15T16:40:47 | CC-MAIN-2019-39 | 1568514571651.9 | [] | docs.celeryproject.org |
Security considerations for TrueSight Operations Management
Each TrueSight Operations Management component requires some degree of security planning. Although each component planning section contains planning information for the applicable component product, this topic provides a consolidated list of all of the security planning information for all TrueSight Operations Management components.
Presentation Server
Infrastructure Management
Securing the Integration Service
PATROL Agent
PATROL Agent 11.3
App Visibility Manager
See also:
- Applying private certificates between the App Visibility portal and the Presentation Server (TrueSight Application Management 11.3)
- Applying private certificates to App Visibility components (TrueSight Application Management 11.3)
- Concealing sensitive data recorded by the App Visibility agents (TrueSight Application Management 11.3)
- Hiding sensitive information with an App Visibility confidentiality policy file (TrueSight Application Management 11.3)
- Implementing private certificates for App Visibility components and Synthetic TEA Agents (TrueSight Application Management 11.3)
Synthetic Transaction Monitoring
See also:
- Applying private certificates to Synthetic TEA Agents (TrueSight Application Management 11.3)
Real End User Experience Monitoring Software Edition
Related topics
Implementation phases for Operations Management
Remedy SSO security planning
Was this page helpful? Yes No Submitting... Thank you | https://docs.bmc.com/docs/TSOperations/113/security-considerations-for-truesight-operations-management-843619596.html | 2019-09-15T17:10:16 | CC-MAIN-2019-39 | 1568514571651.9 | [] | docs.bmc.com |
Although you hear ring-back tone, the callee side is not ringing
Please check if the ring tone of the callee is set to off or the volume is too low.
Do you see current session display on the Session List at Brekeke SIP Server?
If yes,
- click on [Session ID] and you will be automatically led to [Session Detail] page.
- Then check [to-ip] for callee’s IP address, [session-status] for current phase (check [phase] if before version 1.5.1.3) and see if it is routed to the correct destination.
- If [session-status] indicates “Inviting” or “Provisioning”, it will not give a ringing tone to the callee phone. | https://docs.brekeke.com/sip/although-you-hear-ring-back-tone-the-callee-side-is-not-ringing | 2019-09-15T16:46:06 | CC-MAIN-2019-39 | 1568514571651.9 | [] | docs.brekeke.com |
Versions Compared
Old Version 26
changes.mady.by.user Stephen Kairys
Saved on
New Version Current
changes.mady.by.user Stephen Kairys
Saved on
Key
- This line was added.
- This line was removed.
- Formatting was changed.
The following pages are intended for organizations that place bids via the PulsePoint exchange.
Note to Developers:
The PulsePoint exchange supports two APIs:
PulsePoint RTB, a custom specification created by our in-house development team, and
OpenRTB, the standard specification created by the Interactive Advertising Bureau (IAB). | https://docs.pulsepoint.com/pages/diffpagesbyversion.action?pageId=5309327&selectedPageVersions=26&selectedPageVersions=27 | 2019-09-15T15:55:30 | CC-MAIN-2019-39 | 1568514571651.9 | [] | docs.pulsepoint.com |
Contents IT Operations Management Previous Topic Next Topic Create actual application services during planning Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Create actual application services during planning The service deployment owner creates an actual application service based on the planned one. Before you beginRole required: sm_admin About this taskFresh install deployments do not have the Service Map Planner module. The improved planning functionality is provided as an integral part of the new Service Mapping user interface. For more information, see KB0689681: Features replacing deprecated Service Planner in Service Mapping.The actual application service inherits all information defined for the planned service. This new actual application service appears in the list of all application services and has the Non-Operational state. Ideally, only the service deployment owner should create the actual application service, however, any user with the sm_admin role can perform this task.When Service Mapping creates an actual application service during planning, it performs the following validation: Checks that the entry point is not used for another application service. Checks that the application service name is unique and does not exist in the CMDB. Procedure Navigate to Service Mapping > Service Map Planner > Phases. Select the phase which contains the planned application service. Click the name of the planned application service. Make sure that the name and the service deployment owner are defined. Click Create and Discover. Service Mapping creates the actual application service and starts the process of discovery and mapping. When the discovery process completes, the View map link appears on the page. If necessary, troubleshoot validation errors: Click the Validation Errors tab. Review error messages and fix the issues that caused them. Enter the name of the application owner. The application service owner is a user who is familiar with the infrastructure and applications making up the service. This user is the application service SME who provides information necessary for a successful creation of an application service. Click Send for Review. At this stage, you let the application owner review mapping results and provide feedback in the form of notes on the planned application service page. Previous topicCheck entry point connectivityNext topicReview application services On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/madrid-it-operations-management/page/product/service-mapping/task/create-business-service-planning.html | 2019-09-15T16:49:46 | CC-MAIN-2019-39 | 1568514571651.9 | [] | docs.servicenow.com |
Cement is an advanced CLI Application Framework for Python. Its goal is to introduce a standard, and feature-full platform for both simple and complex command line applications as well as support rapid development needs without sacrificing quality.
Core features include (but are not limited to):
-
- Controller handler supports sub-commands, and nested controllers
- Zero external dependencies* (ext’s with dependencies ship separately)
- 100% test coverage
- Extensive documentation
- Tested on Python 2.6, 2.7, 3.1, and 3.2
Note that argparse is required as an external dependency for Python < 2.7 and < 3.2.
- DOCS:
- CODE:
- PYPI:
- SITE:
- T-CI:
- HELP: [email protected] - #cement | http://cement.readthedocs.org/en/stable-2.0.x/index.html | 2013-05-18T10:52:50 | CC-MAIN-2013-20 | 1368696382261 | [] | cement.readthedocs.org |
Amazon CloudFront can serve both compressed and uncompressed files from an origin server. CloudFront relies on the origin server either to compress the files or to have compressed and uncompressed versions of files available; CloudFront does not perform the compression on behalf of the origin server. With some qualifications, CloudFront can also serve compressed content from Amazon S3. For more information, see Choosing the File Types to Compress.
Serving compressed content makes downloads faster because the files are smaller—in some cases, less than half the size of the original. Especially for JavaScript and CSS files, faster downloads translates into faster rendering of web pages for your users. In addition, because the cost of CloudFront data transfer is based on the total amount of data served, serving compressed files is less expensive than serving uncompressed files.
CloudFront can only serve compressed data if the viewer (for example, a web browser or media player) requests
compressed content by including
Accept-Encoding: gzip in the request header. The content must be compressed
using gzip; other compression algorithms are not supported. If the request header includes additional content
encodings, for example,
deflate or
sdch, CloudFront removes them before forwarding the request
to the origin server. If
gzip is missing from the
Accept-Encoding field,
CloudFront serves only the uncompressed version of the file. For more information about the
Accept-Encoding request-header
field, see "Section 14.3 Accept Encoding" in Hypertext Transfer Protocol -- HTTP/1.1 at.
For more information, see the following topics:
Topics
Here's how CloudFront commonly serves compressed content from a custom origin to a web application:
You configure your web server to compress selected file types. For more information, see Choosing the File Types to Compress.
You create a CloudFront distribution.
You program your web application to access files using CloudFront URLs.
A user accesses your application in a web browser.
CloudFront directs web requests to the edge location that has the lowest latency for the user, which may or may not be the geographically closest edge location.
At the edge location, CloudFront checks the cache for the object referenced in each request.
If the browser included
Accept-Encoding: gzip in the request header, CloudFront
checks for a compressed version of the file. If not, CloudFront checks for an uncompressed version.
If the file is in the cache, CloudFront returns the file to the web browser. If the file is not in the cache:
CloudFront forwards the request to the origin server.
If the request is for a type of file that you want to serve compressed (see Step 1), the web server compresses the file.
The web server returns the file (compressed or uncompressed, as applicable) to CloudFront.
CloudFront adds the file to the cache and serves the file to the user's browser.
By default, IIS does not serve compressed content for requests that come through proxy
servers such as CloudFront. If you're using IIS and if you configured IIS to compress content by using the
httpCompression element, change the values of the
noCompressionForHttp10 and
noCompressionForProxies attributes to false.
In addition, if you have compressed objects that are requested less frequently than every few seconds,
you may have to change the values of
frequentHitThreshold and
frequentHitTimePeriod.
For more information, refer to the IIS documentation on the Microsoft website.
Some versions of NGINX require that you customize NGINX settings when you're using CloudFront to serve compressed files.
In the documentation for your version of NGINX, see the documentation for the
HttpGzipModule for
more information about the following settings:
gzip_http_version: CloudFront sends requests in HTTP 1.0 format. In some versions of NGINX, the
default value for the
gzip_http_version setting is 1.1. If your version of NGINX includes this setting,
change the value to
1.0.
gzip_proxied: When CloudFront forwards a request to the origin server,
it includes a
Via header. This causes NGINX to interpret the request as proxied and, by default,
NGINX disables compression for proxied requests. If your version of NGINX includes the
gzip_proxied
setting, change the value to
any.
If you want to serve compressed files from Amazon S3:
Create two versions of each file, one compressed and one uncompressed. To ensure that the compressed and uncompressed versions of a file don't overwrite one another in the CloudFront cache, give each file a unique name, for example, welcome.js and welcome.js.gz.
Open the Amazon S3 console at.
Upload both versions to Amazon S3.
Add a
Content-Encoding header field for each
compressed file and set the field value to
gzip.
For an example of how to add a
Content-Encoding header field using the AWS SDK for PHP,
see Upload an Object Using the AWS SDK for PHP in the
Amazon Simple Storage Service Developer Guide. Some third-party tools are also able to add this field.
To add a
Content-Encoding header field and set the field value using the Amazon S3 console, perform the following
procedure:
In the Amazon S3 console, in the Buckets pane, click the name of the bucket that contains the compressed files.
At the top of the Objects and Folders pane, click Actions and, in the Actions list, click Properties.
In the Properties pane, click the Metadata tab.
In the Objects and Folders pane, click the name of a file for which you want to add a
Content-Encoding header field.
On the Metadata tab, click
Add More Metadata.
In the Key list, click
Content-Encoding.
In the Value field, enter
gzip.
Click Save.
Repeat Step 4d through 4h for the remaining compressed files.
When generating HTML that links to content in CloudFront (for example, using php, asp, or jsp),
evaluate whether the request from the viewer includes
Accept-Encoding: gzip
in the request header. If so, rewrite the corresponding link to point to the compressed object name.
Some types of files compress well, for example, HTML, CSS, and JavaScript files. Some types of files may compress a few percent, but not enough to justify the additional processor cycles required for your web server to compress the content, and some types of files even get larger when they're compressed. File types that generally don't compress well include graphic files that are already compressed (.jpg, .gif), video formats, and audio formats. We recommend that you test compression for the file types in your distribution to ensure that there is sufficient benefit to compression. | http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html | 2013-05-18T10:52:40 | CC-MAIN-2013-20 | 1368696382261 | [] | docs.aws.amazon.com |
After a small business or individual creates a website, the next step is to drive traffic to the website. Driving traffic to a website increases revenue through advertisements, affiliates, and mass exposure. Search Engine Optimization (SEO), affiliate advertising, blogging, and social networking are successful tools that are proven to increase website traffic. This guide is ideal for individuals or small businesses that want to increase revenue by driving traffic to a particular website.
Get Unlimited Access to Our Complete Business Library
Plus
Would you be interested in taking a longer survey for a chance to win a 1-month free subscription to Docstoc Premium? | http://premium.docstoc.com/docs/9692950/Guide-to-Driving-Traffic-to-Your-Site | 2013-05-18T10:21:03 | CC-MAIN-2013-20 | 1368696382261 | [] | premium.docstoc.com |
Welcome to CKAN’s Documentation¶
This documentation covers how to set up and manage CKAN. For high-level information on what CKAN is, see the CKAN website.
Structure & Audiences
These docs are ordered with the beginner documentation first, and the most advanced documentation last:
- Installing CKAN and Getting Started walk you through installing CKAN and setting up your own CKAN site with some basic customizations. These docs are for sysadmins who’re new to CKAN and want go get started with it.
- The sections under Features cover setting up and using CKAN features, beyond those that just work out of the box. These are for sysadmins who want to learn how to manage and get more out of their CKAN site.
- Writing Extensions, Theming and The CKAN API are advanced docs for developers who want to develop an extension, theme or API app using CKAN.
- Contributing to CKAN and Testing CKAN are for testers, translators and core developers who want to contribute to CKAN.
- Finally, Config File Options and CKAN Releases are reference docs covering CKAN’s config file options and the differences between CKAN releases.
- Installing CKAN
- Getting Started
- Features
- Writing Extensions
- Theming
- The CKAN API
- Contributing to CKAN
- Testing CKAN
- Config File Options
- CKAN Releases | https://docs.ckan.org/en/942-documentation-guidelines/index.html | 2020-05-25T01:04:23 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.ckan.org |
Public Key Infrastructure (PKI) Tool
Overview
As described in the pki-guide, a certificate hierarchy with certain properties is required to run a Corda network. Specifically, the certificate hierarchy should include the two main CENM entities - the Identity Manager and the Network Map - and ensure that all entities map back to one common root of trust. The key pairs and certificates for these entities are used within the Signing Service to sign related network data such as approved CSRs, CRRs, Network Map and Network Parameter changes.
This certificate hierarchy should be generated prior to setting up the network as it must be provided to the Signing Service during setup of the CENM services. Generation of all key pairs and certificates in the hierarchy is left up to the discretion of the network operator. That is, provided the resulting keys and certificates adhere to the requirements set out in the linked guide above, an operator is free to use any generation process/method they wish.
The security requirements of the PKI will depend on the requirements of the underlying Corda network and the levels of assumed trust throughout the system. The security around the PKI is ultimately up to the operator, however some general recommendations are:
- Using locally generated Java key stores for anything other than a toy/test network is strongly discouraged.
- All keys should be generated and stay within the confines of a Hardware Security Module (HSM).
- Due to its importance, the root key should be protected by extra security measures. This could range from living in a separate HSM to only existing as offline shards distributed across different geographical locations.
The PKI Tool is a CENM provided utility that can be used to generate a Corda compliant hierarchy.
Features
- Allows a user to define their desired certificate hierarchy via a configuration file.
- Ability to generate private and public key pairs along with accompanying X509 certificates for all entities.
- Supports local key and certificate generation as well as HSM integration for Utimaco, Gemalto, Securosys and Azure Key Vault.
- Supports ‘additive’ mode, allowing a user to use existing keys to generate key pairs and certificates for entities further down the chain.
- Certificate Revocation List (CRL) file generation.
Running the PKI Tool
The tool is designed to be executed from the command line, where the entire certificate hierarchy is specified in the configuration file:
java -jar pkitool.jar --config-file <CONFIG_FILE>
Generating Certificates for non-Production Deployments
By default, a check will be done on the proposed certificate hierarchy before any generation steps to ensure that CRL
information is present for all entities. If this is not required then this check can be disabled by passing the
--ignore-missing-crl or
-i startup flag:
java -jar pkitool-<VERSION>.jar --ignore-missing-crl --config-file <CONFIG_FILE>
Configuration
Default Configuration
The PKI tool comes with the default configuration which can be used for testing. This configuration resembles a basic
version of the Corda Network certificate hierarchy, with the key omission of any CRL information. To generate the
certificate hierarchy using the default configuration, omit the
--config-file argument:
java -jar pkitool.jar --ignore-missing-crl
The output of this command are a set of local key stores within the generated
key-stores/ directory along with the
network trust store within the generated
trust-stores/ directory. Note these directories are generated relative to
the working directory that the tool is run from.
--ignore-missing-crlflag.
Custom Configuration
For anything other than a simple test, a custom configuration file can be created to define the hierarchy. Along with other parameters, the configuration is composed of three main sections:
- Key Stores Configuration: This is the list of key stores (local or HSM) that are used by the PKI tool. Each key store is referenced by a certificate configuration and is used to either generate and store the key pair along with the accompanying X509 certificate (if it does not currently exist). Alternatively, if the key pair and certificate has been previously generated then the existing values will be used to generated the remaining entities in the configured hierarchy.
- Certificates Stores Configuration: This is a list of certificates stores (in the form of Java key stores) that are used by the tool for storing generated certificates.
- Certificates Configurations: This is the list of configurations for the entities that form the desired certificate hierarchy.
Key Stores Configuration
This configuration block defines all key stores that should be used by the PKI Tool. Each key store can be either local (backed by a Java key store file) or HSM (backed by a LAN HSM device). For HSM key stores, the available options and authentication methods will depend on the HSM being used. See config-pki-tool-parameters for more details.
A mixture of key store types is allowed. That is, it is possible to generate some key pairs within a HSM device and others locally. Note that mixing key store types is not supported for a given entity.
Certificates Stores Configuration
This configuration block defines all certificate stores that will contain generated certificates. All certificate stores take the form of locally stored Java key store files, and contain no private keys.
includeInconfig parameter, or alternatively via the
defaultCertificatesStoreconfig parameter.
Certificates Configurations
The certificates configuration block defines the actual entities that form the desired hierarchy, It is expressed as a map from the user-defined alias to certificate configuration. The alias serves two purposes. Firstly, it can be used to reference the given entity throughout the rest of the PKI Tool config. Secondly, it also defines the alias for the generated (or existing) certificate entry in the corresponding certificate store. The certificate configuration defines the entity specific properties of both the X509 certificate and associated key pair. See config-pki-tool-parameters for more information.
If the desire is to use the resultant certificate hierarchy in a Corda network, this configuration block must define a
set of certificates that meet some basic requirements. In addition to the hierarchy having to be under a single trust
root (excluding SSL keys), it must include an entry for the Identity Manager CENM service, with the accompanying
certificate having the
DOORMAN_CA role. It also must include an entry for the Network Map CENM service, with the
accompanying certificate having the
NETWORK_MAP role. These certificate roles are validated by Corda nodes when they
receive a response from the CENM services, so failure to set the roles will result in a hierarchy incompatible with
Corda. CRL information is also needed if revocation is being used (see the
Certificate Revocation List Information
section below).
NETWORK_PARAMETERScertificate role is available which can be used to create a different entity, separate from the Network Map entity, that is responsible for signing Network Parameter changes. This can be useful as a network operator will often want to have the Network Map signing task run automatically on a schedule. Having a different PKI entity for each task allows the operator to keep the process of signing the high risk and infrequent Network Parameter changes isolated from the low risk and frequent process of signing Network Map changes.
NETWORK_PARAMETERSrole is only supported in Corda nodes running platform version 4+. Therefore, this should only ever be used in a network with
minimumPlatformVersion>= 4.
Certificate Templates
Out of the box, the PKI Tool comes with some predefined certificate templates that can be used to generate a basic, Corda compliant certificate hierarchy. Each template defines all necessary parameters, such as certificate subject, role and signedBy attributes, and greatly reduces the size of the configuration file.
The following certificate templates are available:
CORDA_TLS_CRL_SIGNER|Certificate for signing the CRL for the Corda Node’s TLS-level certificate|cordatlscrlsigner| |
CORDA_ROOT|Corda Root certificate|cordarootca| |
CORDA_SUBORDINATE|Corda Subordinate certificate|cordasubordinateca| |
CORDA_IDENTITY_MANAGER|Corda Identity Manager certificate|cordaidentitymanagerca| |
CORDA_NETWORK_MAP|Corda Network Map certificate|cordanetworkmap| |
CORDA_SSL_ROOT|Corda SSL Root certificate|cordasslrootca| |
CORDA_SSL_IDENTITY_MANAGER|Corda SSL Identity Manager certificate|cordasslidentitymanager| |
CORDA_SSL_NETWORK_MAP|Corda SSL Network Map certificate|cordasslnetworkmap| |
CORDA_SSL_SIGNER|Corda SSL Signer certificate|cordasslsigner|
Each certificate type has a default key store associated with it that is protected with the default password “password”.
Similarly, the root and tls crl signer certificates are preconfigured to be stored in a default
“network-root-truststore” with the default password “trustpass”. As a result, there is no need to specify the
keyStores or
certificatesStore configuration block.
A certificate template can be used in the
certificates configuration block, in place of the certificate aliases,
by prepending the template name with
::. A basic example of this is:
Customising The Templates
Customisation of the templates is supporting, allowing the default values within each template to be overridden. This can be achieved by extending the template:
certificates = { ... "::CORDA_SUBORDINATE" { subject = "CN=Custom Subordinate Certificate, O=Custom Company, L=New York, C=US" }, ... }
In this scenario, the
CORDA_SUBORDINATE certificate and key pair will be generated using the defaulted values from
the template for all parameters apart from the explicitly set subject.
The certificate alias can also be overridden by prepending the template notation with the chose custom alias. For example, a custom alias for the root entity can be defined by:
certificates = { "customrootca::CORDA_ROOT", ... }
signedByproperty set to the alias of the signing certificate, which is assumed to be the default one. As such, whenever the default alias of a certificate changes, all the certificates (configurations) being signed by this certificate needs to be updated by overriding the
signedByproperty. Following is the example of that.
Free-form Certificates
As an alternative to using the templates, each key pair and certificate can defined using the standard configuration options. See the config-pki-tool-parameters documentation for all possible parameters, and see below for examples that use this approach. Note that the templates only support local key stores - using a HSM requires the certificate hierarchy to be defined without templates.
Certificate Revocation List Information
Unless explicitly set, all configurations will be generated without CRL information. That is, unless the configuration
explicitly defines all necessary CRL file configurations or all CRL distribution URLs, all certificates will be
generated without the
Certificate Revocation List Distribution Point extension and will therefore be incompatible
with any network using strict revocation checking.
As outlined in the config-pki-tool-parameters doc, this extension is defined using the following logic:
- If the certificate configuration has the
crlDistributionUrlparameter set then use this.
- Otherwise take the
crlDistributionUrlvalue from the parent entities CRL file configuration (if exists).
CRL File Configuration
As referenced above, the PKI Tool can be configured to generate an accompanying CRL file for each CA entity via the
crl configuration block. This configuration determines the resulting CRL file for that entity as well as, by
association, the CRL endpoint configuration for any child entities in the hierarchy.
For example, a CRL file for the root can be defined:
certificates { ... "example-subordinate-alias" { ... crl = { crlDistributionUrl = "" file = "./crl-files/subordinate.crl" } }, "example-networkmap-alias" { ... issuesCertificates = false signedBy = "example-subordinate-alias" } ... }
This will result in the encoded CRL file
crl-files/subordinate.crl being created and will also result in any other
certificates that have been configured to be signed by “example-subordinate-alias” having the crlDistributionPoint
extension set to. Note that this means that the
subordinate’s CRL file must be hosted, and available, on this endpoint.
crl.revocationsparameter. See config-pki-tool-parameters for more information.
crlDistributionUrlparameter. This allows a certificate hierarchy to be defined that can use previously or externally generated CRL files. The above configuration snippet can be defined without CRL file configurations as follows:
certificates { ... "example-subordinate-alias" { ... crlDistributionUrl = "" }, "example-networkmap-alias" { ... issuesCertificates = false signedBy = "example-subordinate-alias" crlDistributionUrl = "" } ... }
As previously mentioned, it is up to the network operator to ensure that any configured CRL endpoints are available. The Identity Manager supports hosting of these CRL files (see the the “CRL Configuration” section of the identity-manager doc).
HSM Libraries
If using the PKI Tool with a HSM then, due to the proprietary nature of the HSM libraries, the appropriate jars need to be provided separately and referenced within the configuration file. The libraries that are required will depend on the HSM that is being used.
An example configuration block for a PKI Tool integrating with a Utimaco HSM is:
hsmLibraries = [ { type = UTIMACO_HSM jars = ["/path/to/CryptoServerJCE.jar"] } ]
Some HSMs (e.g. Gemalto Luna) also require shared libraries to be provided. An example configuration block for this is:
hsmLibraries = [ { type = GEMALTO_HSM jars = ["/path/to/LunaProvider.jar"] sharedLibDir = "/path/to/shared-libraries/dir/" } ]
See the example configurations below to see these config blocks being used in a complete file.
Azure Key Vault
To keep inline with the other HSMs, the Azure Key Vault client jar needs to provided as above. Unlike the other HSMs,
there are many dependent libraries. The top-level dependencies are
azure-keyvault and
adal4j, however these both
have transitive dependencies that need to be included. That is, either all jars need to be provided separately (via a
comma-separated list) or an uber jar needs to be provided.
The gradle script below will build an uber jar. First copy the following text in to a new file called build.gradle anywhere on your file system. Please do not change any of your existing build.gradle files.
plugins { id 'com.github.johnrengelman.shadow' version '4.0.4' id 'java' } repositories { jcenter() } dependencies { compile 'com.microsoft.azure:azure-keyvault:1.2.1' compile 'com.microsoft.azure:adal4j:1.6.4' } shadowJar { which can be referenced in the config.
Generating SSL Keys
As outlined in the enm-with-ssl doc, all inter-service CENM communication can be configured to encrypt their messages via SSL. This feature requires the operator to provide a set of SSL key pairs and certificates to each service, which can be generated using the PKI tool.
The template values described above can be used to generate these if required, or alternatively they be configured by the operator. Note that these keys are only to establish trust between services and should be completely separate from the main certificate hierarchy. Further more, in most cases, there should not be a need for CRL information - if they key pairs need to be rotated for any reason then an operator can just regenerate a new set of trusted key pairs and reconfigure the CENM services to use these.
A basic example configuration using the templates is:
certificates = { "::CORDA_SSL_ROOT" "::CORDA_SSL_IDENTITY_MANAGER" "::CORDA_SSL_NETWORK_MAP" "::CORDA_SSL_SIGNER" } | https://docs.corda.net/docs/corda-enterprise/4.4/pki-tool.html | 2020-05-25T00:50:19 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.corda.net |
AWS Cognito allows your users to sign in directly to your website or app with a username or password. AWS Cognito has two main components called user pools and identity pools. User pools are directories which allow client app users to sign-up or sign-in by giving out user pool tokens. Identity pools enable client app users to access other AWS services by allowing exchange of user pool tokens between client app and other AWS services.
miniOrange Supports the following Usecases for AWS Cognito
miniOrange SAML plugin enables SSO into client applications using Cognito User Store as identity source. The plugin uses identity details from Cognito pool and provides SSO based access to client applications.
miniOrange IdP enables users to SSO into AWS Cognito. The end user first authenticates through miniOrange Idp by SSO using miniOrange Console, and is then redirected to his AWS account.
miniorange Single Sign On plugin can use AWS Cognito as Identity Provider. The miniOrange SSO plugin forwards user authentication requests to AWS Cognito. After successful authorization using AWS Cognito credentials, the user is given access to the requested resource.
1. An unknown user tries to access the resources.
2. miniOrange sends an authorization request to AWS Cognito.
3. AWS Cognito asks the user to login and authorizes the application.
4. User is redirected to the login page where the user logs in.
5. The AWS Cognito Server authenticates the user and sends the authorization code to miniOrange SSO Connector.
6. The OAuth Client sends its own client_id, client_secret with the authorization code that has received from AWS Cognito Server.
7. AWS Cognito Server authenticates the request and sends the Access token to miniOrange SSO Connector.
8. miniOrange SSO Connector uses the access token to access resources on the resource server.
9. AWS Cognito Application returns user information like first name, last name, Email & other attributes corresponding to the user to which access token was assigned.
10. miniOrange SSO Connector logs in the user with received attributes.
11. Now, the user authenticated and logged in. Thus, the application gives access to the resources.
With miniOrange Identity broker service you can delegate all your single sign on requirements, user management, 2 factor authentication and even risk based access at the click of a button and focus on your business case. We can integrate with any type of app even if it does not understand any standard protocol like SAML, OpenId Connect or OAuth. miniOrange Single Sign-On Service can establish trust between two apps via secure https endpoint and automated user mapping to achieve SSO.
You can configure any User store like AWS Cognito to single sign-on into applications which don’t support any protocol or supports protocols other than OAuth like SAML, WS-FED, JWT, etc. for single sign-on using miniOrange cross-protocol support.
For example, you can configure the miniOrange broker service to use AWS User Pool and single sign-on into an external application, such as mobile application based on Cordova platform. We will authenticate our mobile application through AWS User pool using JWT tokens.
1. An unknown user tries to access any external application.
2. The Application sends an authentication request to miniOrange broker service, using any protocol that the application supports.
3. The miniOrange broker service forwards the authentication request to AWS Cognito.
4. User is redirected to AWS Cognito login page, where the user enters their credentials to authorize the application.
5. The AWS Cognito Server authenticates the user and sends the response to miniOrange broker service.
6. miniOrange broker service sends an authentication response to the Application. This response contains the user’s information as well as the authentication status, based on which the user is given access to the resource.
7. Upon successful authentication, the user is given access to the resource.
AWS Cognito can be configured to use any SAML Identity Provider. miniorange SAML Identity Provider for user authentication. When a user requests access for a resource, Cognito sends a SAML authentication request to miniOrange IdP and the user has to login with their miniOrange account. On successful authentication, the user is provided access to the resource.
1. An unknown user tries to access AWS Cognito Application.
2. AWS Cognito creates a SAML authentication Request and sends it to the configured Identity Provider. The user is prompted to log in with their Identity Provider account.
3. The SAML Identity Provider sends back a SAML Response to the AWS Cognito application. This response contains the user’s information as well as the authentication status, based on which the user is given access to the resource.
4. Upon successful authentication, the user is given access to the site. | https://docs.miniorange.com/aws-cognito | 2020-05-25T00:45:37 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.miniorange.com |
Retrieves the definition of a specified database.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
get-database [--catalog-id <value>] --name <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--catalog-id (string)
The ID of the Data Catalog in which the database resides. If none is provided, the AWS account ID is used by default.
--name (string)
The name of the database to retrieve. For Hive compatibility, this should be all lowercase.
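For illustration, a call and an abridged response might look like the following; the database name and field values are placeholders, and the fields of the returned Database object are described below.

```
aws glue get-database --name mydatabase
```

```json
{
    "Database": {
        "Name": "mydatabase",
        "Description": "An example database",
        "LocationUri": "hdfs://namenode:8020/examples",
        "Parameters": {},
        "CreateTime": 1559319801.0
    }
}
```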
Database -> (structure)
The definition of the specified database in the Data Catalog.
Name -> (string)The name of the database. For Hive compatibility, this is folded to lowercase when it is stored.
Description -> (string)A description of the database.
LocationUri -> (string)The location of the database (for example, an HDFS path).
Parameters -> (map)
These key-value pairs define parameters and properties of the database.
key -> (string)
value -> (string)
CreateTime -> (timestamp)The time at which the metadata database was created in the catalog.
CreateTableDefaultPermissions -> (list)
Creates a set of default permissions on the table for principals.
(structure)
Permissions granted to a principal.
Principal -> (structure)
The principal who is granted permissions.
DataLakePrincipalIdentifier -> (string)An identifier for the AWS Lake Formation principal.
Permissions -> (list)
The permissions that are granted to the principal.
(string) | https://docs.aws.amazon.com/cli/latest/reference/glue/get-database.html | 2020-05-25T02:40:28 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.aws.amazon.com |
stable release of CKAN (CKAN 2.4
- site_url
Provide the site’s URL (used when putting links to the site into the FileStore, notification emails etc). For example:
ckan.site_url =
Do not add a trailing slash to the URL.
5. Setup Solr¶
CKAN uses Solr as its search platform, and uses a customized Solr schema file that takes into account CKAN’s specific search needs. Now that we have CKAN installed, we need to install and configure Solr.
Note
These instructions explain how to setup Solr with a single core. If you want multiple applications, or multiple instances of CKAN, to share the same Solr server then you probably want a multi-core Solr setup instead. See Multicore Solr setup.
Note
These instructions explain how to deploy Solr using the Jetty web server, but CKAN doesn’t require Jetty - you can deploy Solr to another web server, such as Tomcat, if that’s convenient on your operating system.
Edit the Jetty configuration file (/etc/default/jetty) and change the following variables:
NO_START=0 # (line 4) JETTY_HOST=127.0.0.1 # (line 15) JETTY_PORT=8983 # (line 18)
Start the Jetty server:
sudo service jetty start
You should now see a welcome page from Solr if you open http://localhost:8983/solr/ in your web browser (replace localhost with your server address if needed).
Replace the default schema.xml file with a symlink to the CKAN schema file included in the sources.
sudo mv /etc/solr/conf/schema.xml /etc/solr/conf/schema.xml.bak sudo ln -s /usr/lib/ckan/default/src/ckan/ckan/config/solr/schema.xml /etc/solr/conf/schema.xml
Now restart Solr:
sudo service jetty restart
and check that Solr is running by opening http://localhost:8983/solr/ in your browser.
Finally, change the solr_url setting in your CKAN configuration file to point to your Solr server, for example: solr_url = http://127.0.0.1:8983/solr | https://docs.ckan.org/en/2517-rtd-toc-redirects/maintaining/installing/install-from-source.html | 2020-05-25T02:29:57 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.ckan.org
Running our CorDapp
Now that we’ve written a CorDapp, it’s time to test it by running it on some real Corda nodes.
Deploying our CorDapp
Let’s take a look at the nodes we’re going to deploy. Open the project’s build.gradle file and scroll down to the task deployNodes section. This section defines three nodes. There are two standard nodes (PartyA and PartyB), plus a special network map/notary node that is running the network map service and advertises a validating notary service. (The notary doesn't need a web server.) After running the deployNodes Gradle task, start the generated nodes with the runnodes script; a terminal window opens for each node and for each node's web server - five terminal windows in all.
We start the IOUFlow on PartyA by typing:
start IOUFlow iouValue: 99, otherParty: "O=PartyB,L=New York,C=US"
This single command will cause PartyA and PartyB to automatically agree an IOU. This is one of the great advantages of the flow framework - it allows you to reduce complex negotiation and update processes into a single function call.
If the flow worked, it should have recorded a new IOU in the vaults of both PartyA and PartyB. Let's check.
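One way to check is from each node's interactive shell, using the vaultQuery command to list states of a given type. The invocation below is a sketch - the fully-qualified state class name depends on your package structure:

```
run vaultQuery contractStateType: com.template.IOUState
```

Running this on PartyA and PartyB should show the newly recorded IOU state.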
However, if we run the same command on the other node (the notary), no IOU states are returned, because the notary was not a party to the transaction and so has no record of it in its vault.
We have written a simple CorDapp that allows IOUs to be issued onto the ledger. Our CorDapp is made up of two key parts:
- The IOUState, representing IOUs on the blockchain
- The IOUFlow, orchestrating the process of agreeing the creation of an IOU on-ledger
After completing this tutorial, your CorDapp should look like this:
- Java:
- Kotlin:
Next steps. | https://docs.corda.net/docs/corda-os/4.0/hello-world-running.html | 2020-05-25T01:52:29 | CC-MAIN-2020-24 | 1590347387155.10 | [array(['/en/images/running_node.png', 'running node running node'],
dtype=object) ] | docs.corda.net |
Search engines indexation of your Gitea installation
By default your Gitea installation will be indexed by search engines. If you don’t want your repository to be visible for search engines read further.
Block search engines indexation using robots.txt
To make Gitea serve a custom robots.txt (default: empty 404) for top level installations, create a file called robots.txt in the custom folder or CustomPath
Examples of how to configure a robots.txt file are widely available online.
User-agent: * Disallow: /
If you installed Gitea in a subdirectory, you will need to create or edit the robots.txt in the top level directory.
User-agent: * Disallow: /gitea/ | https://docs.gitea.io/en-us/search-engines-indexation/ | 2020-05-25T02:09:04 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.gitea.io |
Setting up networked games for multiplayer.
Modern multiplayer experiences require synchronizing vast amounts of data between large numbers of clients spread around the world. What data you send and how you send it is extremely important to providing a compelling experience to users since it can drastically affect how your project performs and feels. In Unreal Engine, Replication is the name for the process of synchronizing data and procedure calls between clients and servers. The Replication system provides a higher-level abstraction along with low-level customization to make it easier to deal with all the various situations you might encounter when creating a project designed for multiple simultaneous users. | https://docs.unrealengine.com/en-US/Gameplay/Networking/index.html | 2020-05-25T01:29:25 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.unrealengine.com |
Tools.
- Click Connect to Cluster in the Cassandra Explorer menu, and then click Connect to Cluster as shown below.
Click the View more link associated with each row in the column family to navigate to a comprehensive column explorer with facility to search column data across the row. | https://docs.wso2.com/pages/viewpage.action?pageId=43997502 | 2020-05-25T01:33:02 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.wso2.com |
refnx - Neutron and X-ray reflectometry analysis in Python¶
refnx is a flexible, powerful, Python package for generalised curvefitting analysis, specifically neutron and X-ray reflectometry data.
It uses several scipy.optimize algorithms for fitting data, and estimating parameter uncertainties. As well as the scipy algorithms refnx uses the emcee Affine Invariant Markov chain Monte Carlo (MCMC) Ensemble sampler for Bayesian parameter estimation.
Reflectometry analysis uses a modular and object oriented approach to model parameterisation. Models are made up of sequences of components, frequently slabs of uniform scattering length density, but other components are available, including splines for freeform modelling of a scattering length density profile. These components allow the parameterisation of a model in terms of physically relevant parameters. The Bayesian nature of the package allows the specification of prior probabilities for the model, so parameter bounds can be described in terms of probability distribution functions. These priors are not only applicable to any parameter, but can also apply to any other user-definable knowledge about the system (such as adsorbed amount). Co-refinement of multiple contrast datasets is straightforward, with sharing of joint parameters across each model.
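A minimal sketch of that workflow is shown below; the data file name, SLD values and bounds are placeholders, and the refnx API reference should be consulted for details:

```python
from refnx.dataset import ReflectDataset
from refnx.reflect import SLD, ReflectModel
from refnx.analysis import Objective, CurveFitter

# Materials, specified by scattering length density (1e-6 / Angstrom**2)
air = SLD(0.0, name="air")
film = SLD(3.47, name="film")
si = SLD(2.07, name="Si substrate")

# air | 250 A film | Si substrate, with 3 A roughness at each interface
structure = air | film(250, 3) | si(0, 3)
structure[1].thick.setp(vary=True, bounds=(200, 300))  # prior on the film thickness

data = ReflectDataset("example.dat")             # placeholder data file
model = ReflectModel(structure, bkg=1e-7, scale=1.0)
objective = Objective(model, data)

fitter = CurveFitter(objective)
fitter.fit("differential_evolution")   # point estimate via scipy.optimize
fitter.sample(400)                     # MCMC sampling of the posterior with emcee
```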
Various tutorials are available from the refnx YouTube channel, and there are GUI programs available on github as well.
The refnx package is free software, using a BSD licence. If you are interested in participating in this project please use the refnx github repository, all contributions are welcomed.
- Installation
- Getting started
- GUI
- Co-refinement of multiple contrast datasets
- Inequality constraints with refnx
- Analysing lipid membrane data
- Frequently Asked Questions
- What’s the best way to ask for help or submit a bug report?
- How should I cite refnx?
- How is instrumental resolution smearing handled?
- What are the units of scattering length density?
- What are the ‘fronting’ and ‘backing’ media?
- How do I open the standalone app on macOS Catalina?
- Can I save models/objectives to file?
- Testimonials
- API reference | https://refnx.readthedocs.io/en/stable/ | 2020-05-25T00:19:23 | CC-MAIN-2020-24 | 1590347387155.10 | [] | refnx.readthedocs.io |
Adds a source identifier to an existing RDS event notification subscription.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
add-source-identifier-to-subscription --subscription-name <value> --source-identifier <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--subscription-name (string)
The name of the RDS event notification subscription you want to add a source identifier to.
--source-identifier (string)
The identifier of the event source to be added.
Examples
To add a source identifier to a subscription
The following add-source-identifier example adds another source identifier to an existing subscription.
aws rds add-source-identifier-to-subscription \ --subscription-name my-instance-events \ --source-identifier test-instance-repl
Output:
{ "EventSubscription": { "SubscriptionCreationTime": "Tue Jul 31 23:22:01 UTC 2018", "CustSubscriptionId": "my-instance-events", "EventSubscriptionArn": "arn:aws:rds:us-east-1:123456789012:es:my-instance-events", "Enabled": false, "Status": "modifying", "EventCategoriesList": [ "backup", "recovery" ], "CustomerAwsId": "123456789012", "SnsTopicArn": "arn:aws:sns:us-east-1:123456789012:interesting-events", "SourceType": "db-instance", "SourceIdsList": [ "test-instance", "test-instance. | https://docs.aws.amazon.com/cli/latest/reference/rds/add-source-identifier-to-subscription.html | 2020-05-25T03:10:53 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.aws.amazon.com |
The Quick Start application does not modify the core Alfresco product; it has simply been extended using the many standard hooks provided by the Alfresco product. As such, this user help is an extension of the core Share user help and does not replace it.
The content, layout, and design of the Quick Start demo website is managed by Alfresco Share. Alfresco Web Editor is available for in-context editing on some website pages.
This help assumes you are familiar with Alfresco Share. | https://docs.alfresco.com/4.2/concepts/qs-introduction.html | 2020-05-25T02:15:43 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.alfresco.com |
NETCONF Unified Translation Unit¶
Unified translation units are located in repository.
Kotlin is used as prefered programming language in NETCONF translation units because it provides type aliases and better null-safety.
Readers¶
- Readers are handlers responsible for reading and parsing the data coming from a device
- There are 2 types of readers: Reader and ListReader. Reader can be used to handle container or augmentation nodes, while ListReader handles list nodes.
Mandatory interfaces to implement¶
Each reader needs to implement one of these interfaces based on the type of the target node in YANG. For more information about methods please read the javadocs.
ConfigListReaderCustomizer - implement this interface if target composite node in YANG is list and represents config data.
ConfigReaderCustomizer - implement this interface if target composite node in YANG is container or augmentation and represents config data.
OperListReaderCustomizer - implement this interface if target composite node in YANG is list and represents operational data.
OperReaderCustomizer - implement this interface if target composite node in YANG is container or augmentation and represents operational data.
Base Readers¶
Each base reader for netconf readers should be generic. The generic marks the data element within device YANG that is being parsed into. The base reader should contain abstract methods:
- fun readIid(<args>): InstanceIdentifier<T> - each child reader should fill in the device specific InstanceIdentifier that points to the information needed for this reader. Arguments may vary and are used to build a more specific IID (e.g. when creating an IID to gather information about a specific interface, you may want to pass the interface name as an argument).
- fun readData(data: T?, configBuilder: ConfigBuilder, <args>) - this method transforms the device data (T) into Openconfig data, filling in the provided ConfigBuilder.
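As an illustration only - the base class name and the generated device bindings (DeviceInterfaces, DeviceInterface, DeviceInterfaceKey) below are hypothetical placeholders, not types from the real repository - a child reader built on such a base reader might look roughly like this:

```kotlin
// Schematic sketch: all device binding types are hypothetical placeholders.
class InterfaceConfigReader(underlayAccess: UnderlayAccess) :
    BaseInterfaceConfigReader(underlayAccess) {

    // Device-specific InstanceIdentifier pointing at the data this reader needs.
    override fun readIid(ifcName: String): InstanceIdentifier<DeviceInterface> =
        InstanceIdentifier.create(DeviceInterfaces::class.java)
            .child(DeviceInterface::class.java, DeviceInterfaceKey(ifcName))

    // Transform device data (T) into Openconfig data via the ConfigBuilder.
    override fun readData(data: DeviceInterface?, configBuilder: ConfigBuilder, ifcName: String) {
        data?.let {
            configBuilder.name = ifcName
            configBuilder.mtu = it.mtu
        }
    }
}
```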
Writers¶
- The framework provides safe methods to use when handling data on device:
- safePut deletes or adds managed data. Does not touch data that was previously on the device and is not handled by the writer.
- safeMerge stores just the changed data into device. Does not touch data that was previously on the device and is not handled by the writer.
- safeDelete removes data from the device only if the managed node does not contain any other information (even one not handled by the writer).
Mandatory interfaces to implement¶
Each writer needs to implement one of these interfaces based on the type of the target node in YANG. Unlike the mandatory interfaces for reading, only interfaces for writing config data are available (because it is not possible to write operational data). For more information about methods please read the javadocs.
ListWriterCustomizer - implement this interface if target composite node in YANG is list. An implementation needs to be registered as GenericListWriter.
WriterCustomizer - implement this interface if target composite node in YANG is container or augmentation. An implementation needs to be registered as GenericWriter.
Base Writers¶
Each base writer should be generic and contain abstract methods:
- fun getIid(id: InstanceIdentifier<Config>): InstanceIdentifier<T> - this method returns InstanceIdentifier that points to a node where data should be written
- fun getData(data: Config): T - this method transforms Openconfig data into device specific data (T)
TranslateUnit¶
Translate unit class must implement interface io.frinx.unitopo.registry.spi.TranslateUnit. Naming convention for translate unit class is just name Unit. Translate unit class is usually instantiated, initialized and closed from Blueprint.
Implementation of TranslateUnit must be registered into TranslationUnitCollector and must provide set of supported underlay YANG models. Snippet below shows registration of Unit for junos device version 17.3.
class Unit(private val registry: TranslationUnitCollector) : TranslateUnit {
    private var reg: TranslationUnitCollector.Registration? = null

    fun init() {
        reg = registry.registerTranslateUnit(this)
    }

    fun close() {
        reg?.let { reg!!.close() }
    }

    override fun getUnderlayYangSchemas() = setOf(
        UnderlayInterfacesYangInfo.getInstance())
}
Blueprint example of injecting TranslationUnitCollector to Juniper173InterfaceUnit:
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
    <reference id="unifiedTranslationRegistry" interface="io.frinx.unitopo.registry.api.TranslationUnitCollector"/>
    <bean id="juniper173InterfaceUnit" class="io.frinx.unitopo.unit.junos.interfaces.Unit" init-
        <argument ref="unifiedTranslationRegistry"/>
    </bean>
</blueprint>
Implementation of TranslateUnit must implement these methods:
toString(): String
- Return unique string among all translation units which will be used as ID for the translation unit (e.g. “IOS XR Interface (Openconfig) translate unit”)
getYangSchemas(): Set
- Return YANG models containing composite nodes handled by handlers(readers/writers). It must return empty Set if no handlers are implemented.
getUnderlayYangSchemas(): Set
- Return YANG module informations about underlay models used in the translation unit. These YANG modules describes configuration of NETCONF capable device.
getRpcs(underlayAccess: UnderlayAccess): Set>
- Return RPC services implemented in the translation unit. Default implementation returns an emptySet. Parameter underlayAccess represents object containing methods for communication with a device via NETCONF and should be passed to readers/writers.
provideHandlers(rRegistry: ModifiableReaderRegistryBuilder, wRegistry: ModifiableWriterRegistryBuilder, underlayAccess: UnderlayAccess): Unit
- Handlers(readers/writers) need to be registered in this method. underlayAccess represents object containing methods for communication with a device via NETCONF and should be passed to readers/writers.
- How to register readers/writers is described in CLI Translation Unit | https://docs.frinx.io/FRINX_ODL_Distribution/translation-framework/netconf-unified-translation-unit.html | 2020-05-25T01:25:52 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.frinx.io |
OENumericMolFunc2¶
Attention
This API is currently available in C++ and Python.
class OENumericMolFunc2 : public OEMolFunc2
The OENumericMolFunc2 class converts a OEMolFunc1 into a OEMolFunc2 by providing numeric hessians.
- The following methods are publicly inherited from OEFunc0.
- The following methods are publicly inherited from OEFunc1.
- The following methods are publicly inherited from OEFunc2.
- The following methods are publicly inherited from OEMolFunc.
- The OENumericMolFunc2 class defines the following public methods:
Constructors¶
OENumericMolFunc2(OEMolFunc1 &, bool own=false)
Default and copy constructors.
The molecule function used for function evaluation and to be converted must be provided as the first argument to the constructor. The second argument specifies whether the OENumericMolFunc2 object takes ownership of the memory of the molecule function instance. By default that does not happen, so the OENumericMolFunc2 destructor does not delete the molecule function instance. If ownership of the molecule function is transferred to the OENumericMolFunc2 instance, the molecule function's delete operator will be called in the OENumericMolFunc2 destructor.
Set¶
Predicates are passed directly to the molecule function that is to be converted.
'../../_images/OENumericMolFunc2.png'], dtype=object)] | docs.eyesopen.com |
Description
Rationalise the way we set compiler options.
Previous Release Behavior
Setting compiler options is a bit of a mixed bag.
Current Release Behavior
There are a number of options with which you can control the compiler. Some options control the operation of the compiler (including debug information), and some control the language aspect. This change rationalises the setting of options into a sane, useful and consistent manner.
The options you can set may be detailed by executing the command jpp2 -h. You can add 'no' to the option to negate the option.
The manner in which we decide options during compilation is now like this:
1) If the file name ends in .jabba, we set the jabba option and turn off case sensitivity (i.e. we become case insensitive for keywords and variables).
2) Any options in Config_EMULATE with the 'compiler_options = xxx' statement are processed. This was added for this change, and the Universe emulation now contains this
compiler_options = universe
See PN5_60769 for a description of what the universe option gives.
3) Any options in the 'JBC_JPP2' environment variable are now processed.
4) Any options in the source code are processed for example:
$option jabba,nocase
As an example, if you want to compile all the time without case sensitivity, this is the option you would add to Config_EMULATE
compiler_options = nocase
You could also achieve this by setting the following environment variable:
export JBC_JPP2=nocase [Unix] set JBC_JPP2=nocase [Windows]
Note that for D3 emulations, this has been added as a default option and it now replaces the defunct option compiler_case_insensitive_variables_keywords = true | https://docs.jbase.com/pn5_60770 | 2020-05-25T01:22:48 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.jbase.com |
The Cache-Control max-age defines the amount of time a file should be cached for.
The max-age response directive indicates that the response is to be considered stale after its age is greater than the specified number of seconds.
The max-age is expressed in seconds.
Common max-age values are:
Disabled cache: max-age=0
Five seconds: max-age=5
One minute: max-age=60
One hour: max-age=3600
One day: max-age=86400
One week: max-age=604800
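As delivered in an HTTP response, the directive is a single header line. For example, a file that may be cached for one hour is served with:

```
Cache-Control: max-age=3600
```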
When using max-age to define your cache times, consider how quickly your users need to receive updated files.
For example, for versions used by development teams, we recommend disabling the cache functionality.
For production versions we recommend a low cache value.
Remember: if you have a max-age of 604800, for example, your users could have to wait up to one week before receiving the updated file.
A red number in the left hand corner of a shift marks an edit in the schedule: either there is a proposed trade, or a staff member has newly applied for a shift. Click on the desired shift. Drag & drop employees into that shift slot, or click on the green "+" button to accept the edit. All relevant employees will be notified via email by clicking on "send changes".
| http://docs.staffomatic.com/staffomatic-help-center/shifts/how-do-i-accept-incoming-employee-applications-for-shifts | 2018-05-20T14:09:04 | CC-MAIN-2018-22 | 1526794863570.21 | [array(['https://uploads.intercomcdn.com/i/o/26032789/9cbf74bebbd3002c59d41b3f/Bildschirmfoto+2017-06-08+um+14.17.49.png',
None], dtype=object) ] | docs.staffomatic.com |
You can import VMs, templates and snapshots that have previously been exported and stored locally in XVA format (with a .xva file extension) or XVA version 1 format (ova.xml and associated files) using the XenCenter Import wizard.
Procedure
Click Next to continue.
The import progress is displayed. | https://docs.citrix.com/fr-fr/xencenter/6-5/xs-xc-vms-exportimport/xs-xc-vms-import.html | 2018-05-20T14:12:43 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.citrix.com |
Packet Forwarding with RPCAP
The ExtraHop Discover appliance can receive packets forwarded from a remote device through a software tap such as Remote Packet Capture (RPCAP).
Install rpcapd on the Linux or Windows device that you want to forward traffic from. You must modify the configuration file (rpcapd.ini) to specify device interfaces or to direct traffic to multiple Discover appliances. First, configure packet-forwarding rules so that the Discover appliance is ready to receive the forwarded packets.
You can configure up to 16 rules for packet forwarding in the Discover appliance; each rule must have a single TCP port over which the Discover appliance communicates with the remote devices. You can optionally add a filter to each rule in Berkeley Packet Filter (BPF) syntax. For example, you can type tcp port 80 to forward all traffic on TCP port 80 from your remote network device to the Discover appliance. For more information, see the Berkeley Packet Filter syntax documentation.
In addition, you can create multiple ActiveClient entries for multiple Discover appliances if your environment requires high availability.
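For instance, a sketch of an rpcapd configuration with two ActiveClient entries pointing at two different Discover appliances (all addresses, ports and interface names are placeholders):

```
ActiveClient = 10.10.6.45, 2003, ifname=eth0
ActiveClient = 10.10.7.88, 2003, ifname=eth0
NullAuthPermit = YES
```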
- -v
- Runs rpcap in active mode only instead of both active and passive modes.
- -d
- Runs rpcap as a daemon (in Linux) or as a service (in Windows).
- -L
- Sends log messages to a syslog server.
Install and start rpcapd on a Linux device
Before you begin: The minimum Linux kernel version required to run rpcapd is 2.6.32.
- In a web browser, navigate to https://<extrahop_management_ip>/tools, where the <extrahop_management_ip> is the IP address of your Discover appliance.
- Run the installation script. The command will be similar to the following example: sudo ./install.sh -k 172.18.10.25 2003, where 172.18.10.25 is the IP address of your Discover appliance. To monitor additional interfaces, edit the rpcapd configuration file by adding one of the following lines, and then restart the rpcapd service.
Example Linux configuration file (one ActiveClient line per monitored interface):
ActiveClient = 10.10.6.45, 2004, ifname=eth1
NullAuthPermit = YES
Install rpcapd on a Windows device with Powershell
- In a web browser, navigate to https://<extrahop_management_ip>/tools.
- Download and unzip the rpcapd file for Windows.
- Open PowerShell and navigate to the directory with the unzipped files.
- Run the following command, where <extrahop_rpcap_target_ip> is the IP address of the Discover appliance that you want to forward packets to and <extrahop_rpcapd_port> is the port the device should connect through: ./install-rpcapd.ps1 -InputDir . -RpcapIp <extrahop_rpcap_target_ip> -RpcapPort <extrahop_rpcapd_port>
- Specify an interface to monitor by adding an ActiveClient line with the appropriate ifname value to the rpcapd configuration file.
- For the lower end of the UDP port range, take the lowest TCP port listed in the set of rules on the Discover appliance.
array(['/images/6.2/rpcap_default.png', None], dtype=object)
array(['/images/6.2/rpcap_eth1.png', None], dtype=object)
array(['/images/6.2/rpcap_tcp80.png', None], dtype=object)
array(['/images/6.2/rpcap_eth1_tcp80.png', None], dtype=object)] | docs.extrahop.com |
Kentico 10 Documentation > Configuring Kentico > Managing sites > Configuring settings for sites > Settings - On-line marketing > Settings - Contact management > Settings - Activities
Last updated by Radka Uhlířová on October 25, 2016

General
- Log activities: If enabled, activities are logged for the website (according to the other settings in this category).
- Track file downloads (cms.file) for these extensions: The system can track file downloads as Page visit activities for files stored as CMS.File pages in the content tree of a website. This setting specifies which types of files the tracking includes. Enter the allowed file types as a list of extensions separated by semicolons, for example: pdf;docx;png. If left empty, the system tracks all file types.

Page
- Page visits: If enabled, the page visit activity is logged.
- Landing page: If enabled, the landing page activity is logged. A landing page is where the visitor comes first when viewing the website.

Membership
- User registration: If enabled, the user registration activity is logged.
- User login: If enabled, the user login activity is logged.

E-commerce
- Adding a product to shopping cart: If enabled, the activity of adding a product to the shopping cart is logged.
- Removing a product from shopping cart: If enabled, the activity of removing a product from the shopping cart is logged.
- Adding a product to wishlist: If enabled, the activity of adding a product to wishlist is logged.
- Purchase: If enabled, an activity is logged when a contact makes a purchase on the website (one record for the entire purchase).
- Purchased product: Indicates if an activity is logged individually for every purchase of a product.

Email marketing
- Newsletter subscription: If enabled, the activity of subscribing to a newsletter is logged.
- Email feed unsubscription: If enabled, the activity of unsubscribing from an email feed is logged.
- Opt out from all marketing emails: If enabled, the activity of opting out from all marketing emails is logged.
- Email opening: If enabled, the activity of opening a tracked marketing email is logged.
- Clickthrough tracking: If enabled, the activity of clicking a particular hyperlink in a marketing email is logged.

Search
- Search: If enabled, the internal search activity is logged (the site visitor uses a standard search web part).
- External search: If enabled, the external search activity is logged (the site visitor uses an external search engine that leads them to the website).

Subscriptions
- Blog post subscription: If enabled, the blog post subscription activity is logged.
- Blog post comments: If enabled, the activity of a visitor adding a new comment to a blog post is logged.
- Forum post subscription: If enabled, the forum post subscription activity is logged.
- Forum posts: If enabled, the activity of a visitor adding new forum posts is logged.
- Message board subscription: If enabled, the message board subscription activity is logged.
- Message board posts: If enabled, the activity of a visitor adding new message board posts is logged.

Other
- On-line form submission: If enabled, the activity of submitting an on-line form will be logged.
- Content rating: If enabled, the content rating activity is logged.
- Poll voting: If enabled, the activity of poll voting is logged.
- Custom table form submit: If enabled, the activity after submitting a custom table form is logged.
- Event booking: If enabled, the activity of event booking, i.e. an attendee subscribing to an event, is logged.
- Custom activities: If enabled, custom activities are logged.
| https://docs.kentico.com/k10/configuring-kentico/managing-sites/configuring-settings-for-sites/settings-on-line-marketing/settings-contact-management/settings-activities | 2018-05-20T13:49:28 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.kentico.com
With a keyboard shortcut, you can switch some of the Fusion power command options that appear in the Virtual Machine menu and the applications menu, from the default option.
About this task
The shortcut applies to the Shut Down/Power Off and Restart/Reset power-option pairs as listed in the Virtual Machine menu and the applications menu. Pressing the Option or Alt key does not affect the power buttons in the toolbar.
For example, if your virtual machine defaults to the soft options, Shut Down and Restart, holding down either the Option key or the Alt key changes the soft options to the hard options, Power Off and Reset, respectively.
Pressing the Option or Alt key has no effect on other power options.
You can also configure Fusion to permanently display the hard option or soft option of a power-option pair. Therefore, you can change Shut Down to Power Off and Restart to Reset. Later, when you access the Virtual Machine or application menu while the virtual machine is in a powered-on state, Fusion lists the Power Off option instead of the Shut Down option and the Reset option instead of the Restart option. See Configure Virtual Machine Power Options.
Procedure
- Select Virtual Machine to display the Virtual Machine menu.
- Hold down the Option key (Mac keyboards) or Alt key (PC keyboards) and select an alternative power option.
See Options for Fusion Power Commands for descriptions of the power commands. | https://docs.vmware.com/en/VMware-Fusion/10.0/com.vmware.fusion.using.doc/GUID-8C0FF5FC-2E13-4F6C-A42D-4F9FC3C472C4.html | 2018-05-20T14:03:24 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.vmware.com |
How and When Do Features Become Available?
Supported Browsers
Supported browsers for Salesforce vary depending on whether you use Salesforce Classic or Lightning Experience.
Salesforce Overall
Spring ’17 gives you more reasons to love Lightning Experience. Customize your navigation experience with favorites, see multiple records on one screen with console apps, and access more global actions from anywhere in Lightning Experience.
Lightning Experience
Lightning Experience is a completely reimagined interface. Even better, it’s built on our UI platform, so the experience can grow and evolve with your needs. Check out the new features and considerations in this release.
Sales
Advisors can now create, maintain, and visualize clients and households through new relationship groups. Get new client service enhancements, including alerts on a client’s profile page and financial accounts to help advisors keep up with changes to client’s financial accounts.
Health Cloud
Security Health Check offers custom baselines to streamline the job of setting up security for your users and customers. You can encrypt Chatter posts and attachments, and protect Internet of Things devices with OAuth 2.0.
Deployment
The “Modify All Data” permission is now selected automatically when the “Deploy Change Sets” permission is selected.
Development
Whether you’re using Lightning components, Visualforce, Apex, or our APIs with your favorite programming language, these enhancements to Force.com help you develop amazing applications, integrations, and packages for resale to other organizations. | https://releasenotes.docs.salesforce.com/en-us/spring17/release-notes/rn_feature_impact.htm | 2018-05-20T13:26:28 | CC-MAIN-2018-22 | 1526794863570.21 | [] | releasenotes.docs.salesforce.com |
Go to "employees" and click on the employee you want to edit. The employee profile will open in a new window. Click on "edit" in the top right corner:
A new window will open where you can edit the rights of an employee.
Click "save" to finish the process. | http://docs.staffomatic.com/staffomatic-help-center/employees/how-do-i-change-the-rights-of-an-employee | 2018-05-20T14:04:42 | CC-MAIN-2018-22 | 1526794863570.21 | [array(['https://downloads.intercomcdn.com/i/o/31107231/e0778105e414c44a5c7fb0a3/Bildschirmfoto+2017-08-14+um+16.16.43.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/31107260/308b9fa15b0da0abc02a18ff/Bildschirmfoto+2017-08-14+um+16.50.55.png',
None], dtype=object) ] | docs.staffomatic.com |
Troubleshooting¶
If you are migrating from an older version of the contrib notebook extensions repository, some old files might be left on your system. This can lead, for example, to having all the nbextensions listed twice on the configurator page.
Extensions Not Loading for Large Notebooks¶
If you have a large notebook, extensions can stop working after the notebook is loaded. Unfortunately, although this can be caused by nbextensions which take a long time to load, it’s also an issue with notebook itself. You can check #2075 for details.
To mitigate this issue, you can increase the timeout for requirejs by adding it in your custom.js:
// default is 30s, increase to 1 minute window.requirejs.config({waitseconds: 60});
You can find details of where to find/create a custom.js file at the notebook documentation about custom.js.
More details about the issue on the nbextensions side can be found in #1195.
Removing Double Entries¶
The nbextensions from older versions will be located in the
nbextensions
subdirectory in one of the
data directories that can be found using the
jupyter --paths
command. If you run your notebook server with the
--debug flag, you should
also be able to tell where the extra nbextensions are located from jupyter
server logs produced by the
jupyter_nbextensions_configurator serverextension:
jupyter notebook --debug
To remove the extensions, use
jupyter contrib nbextension uninstall --user
and possibly the system-wide ones as well:
jupyter contrib nbextension uninstall --system
(though that may need admin privileges to write to system-level jupyter dirs, not sure on windows).
If the above doesn’t work, the configurator serverextension should give warning logs about where duplicate files are found.
As a matter of interest, the possible install locations are, briefly:
- user’s jupyter data dir, on Windows ~.jupyter
- python sys.prefix jupyter data dir, in sys.prefix + /share/jupyter/nbextensions
- system-wide jupyter data dir, OS-dependent, but in Windows 7, I think they should be in ~\AppData\jupyter\nbextensions
To find all possible paths, you can use the jupyter command:
jupyter --paths
Generating Local Documentation¶
The documentation can be found online at readthedocs:
If you want to create documentation locally, use
$ tox -e docs
Display the documentation locally by navigating to build/html/index.html in your browser. Or alternatively you may run a local server to display the docs.
In Python 3:
python -m http.server 8000
Then, in your browser, go to http://localhost:8000.
If you want to avoid tox (if you are using conda for example), you can call sphinx-build directly:
sphinx-build -E -T -b readthedocs -c docs/source . docs/build
Then, start a local server
python -m http.server 8000
And go to http://localhost:8000.
If you have Ambari Solr installed, you must upgrade Ambari Infra after upgrading Ambari.
Steps
Make sure Ambari Infra services are stopped. From Ambari Web, browse to Services > Ambari Infra and select Stop from the Service Actions menu.
On every host in your cluster with an Infra Solr Client installed, run the following commands:
For RHEL/CentOS/Oracle Linux:
yum clean all
yum upgrade ambari-infra-solr-client
For SLES:
zypper clean
zypper up ambari-infra-solr-client
For Ubuntu/Debian:
apt-get clean all
apt-get update
apt-get install ambari-infra-solr-client
Execute the following command on all hosts running an Ambari Infra Solr Instance:
For RHEL/CentOS/Oracle Linux:
yum upgrade ambari-infra-solr
For SLES:
zypper up ambari-infra-solr
For Ubuntu/Debian:
apt-get install ambari-infra-solr
Start the Ambari Infra services.
From Ambari Web, browse to Services > Ambari Infra select Service Actions then choose Start. | https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.1.0/bk_installing-hdf-on-hdp/content/upgrade_ambari_infra.html | 2018-05-20T13:23:00 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.hortonworks.com |
Retire a knowledge article Retired knowledge articles are no longer available for users to view. A knowledge article has an associated retirement workflow, similar to the publishing workflow. This allows administrators to configure these workflows, defining an approval and review process for retiring knowledge if appropriate. When editing an article, click Retire to launch the retirement workflow associated with that article. Related Tasks: Select a knowledge article category; Move a knowledge article; Create knowledge from an incident or problem; Import a Word document to a knowledge base | https://docs.servicenow.com/bundle/geneva-servicenow-platform/page/product/knowledge_management/concept/c_RetiredKnowledgeArticles.html | 2018-05-20T14:10:04 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.servicenow.com
How to: Check what is being displayed in your BlackBerry Hub
- In the BlackBerry Hub, tap the menu icon.
- Tap the Settings icon.
- Tap Hub Management.
- Look at the Email Accounts area and make sure that the switch is set to On for every account that you want to display in the BlackBerry Hub
Make sure that this switch is set to On for every account that you want shown in your BlackBerry Hub.
| http://docs.blackberry.com/en/smartphone_users/deliverables/48934/rok1377718808072.jsp | 2013-12-05T04:09:06 | CC-MAIN-2013-48 | 1386163039753 | [array(['cag1379016236462_en-us.png', 'Hub management Hub management'],
dtype=object) ] | docs.blackberry.com |
This page represents the current plan; for discussion please check the tracker link above.
Back in March there was a discussion on the developers' list about splitting up the unsupported modules into new categories: incubator (active development); dormant (keep for a while); legacy (to be removed).
Now that we have a proposal to publish GeoTools modules to Maven Central, Ben has pointed out that there will be a problem with experimental unsupported modules that have dependencies not hosted in Maven Central. This seems like another good reason to split it up.
Incubator (active development)
Dormant (keep for the moment)
Legacy (to be removed)
This proposal is under construction.
Voting has not started yet: | http://docs.codehaus.org/plugins/viewsource/viewpagesrc.action?pageId=148866181 | 2013-12-05T04:06:35 | CC-MAIN-2013-48 | 1386163039753 | [] | docs.codehaus.org |
The information in this page is completely out of date, but another question is:
Is it relevant, given some of the other documentation on j1.6 unit testing?
If it's still judged relevant, I'll be happy to tackle editing the page, but I don't want to waste my time if the page is just going to be deleted in the end, in favor of other pages. | http://docs.joomla.org/index.php?title=Talk:Unit_Testing&oldid=31081 | 2013-12-05T04:11:21 | CC-MAIN-2013-48 | 1386163039753 | [] | docs.joomla.org |
This is a document that can be used by a tenant that has an existing lease with an option to purchase the property. Pursuant to the lease agreement, the tenant is electing to exercise the option to purchase the property under certain terms and conditions provided in the lease. This document should be used by an individual or entity that is currently a tenant of a lease agreement that grants them an option to purchase the property at a set price.
| http://premium.docstoc.com/docs/29702974/Notice-to-Exercise-Lease-Option | 2013-12-05T04:05:59 | CC-MAIN-2013-48 | 1386163039753 | [] | premium.docstoc.com
Help Center
Local Navigation
Banner elements include: wireless coverage indicator, Wi-Fi® connection indicator, and roaming indicator
- battery power indicator
- sound profile indicator
- search icon
The theme that users select on their BlackBerry® smartphone determines the appearance of these screen elements.
Pane manager
The pane manager provides filtered views of content and allows users to navigate content without leaving the context of the screen. You can filter content in two different ways.
In the scrollable view, users can move through each pane of content. Users can move through the panes continuously. Or, you can set a start and end point so that users know when they reach the first and the last pane. You can add hint text or arrows to the left side and the right side of the screen to indicate that other panes of content are available.
The tab view displays all available tabs on the screen. Users have immediate access to each tab.
You can also allow users to filter content within a specific pane. Users still have the ability to easily switch to other panes.
Users can perform the following action in a pane manager:
Best practice: Implementing pane managers
- Use a pane manager if users need to navigate filtered views of content frequently.
- Use the PaneManagerView class to create scrollable views or tab views. For more information about implementing pane managers, see the API reference guide for the BlackBerry® Java® SDK and the BlackBerry Java Application UI and Navigation Development Guide.
- Assign shortcut keys for switching to the next and previous panes. For example, in English, allow users to press "N" to switch to the next pane and "P" to switch to the previous pane.
- In most cases, allow users to close an application by pressing the Escape key. Do not display the previous pane. If users filter content within a specific pane, then display all of the content in the pane when users press the Escape key.
- Include no more than seven panes of content. The more panes of content, the more difficult it becomes for users to remember each pane. However, you can include more than seven panes if the panes are ordered in an obvious way, such as by date, by number, or in an alphabetical list.
- Add hint text to the left side and the right side of the screen to indicate that other panes of content are available. Use arrows instead of hint text if there is a large number of panes or to indicate that users can navigate in increments, such as by date.
- Allow users to scroll through the panes continuously if it is easy to distinguish the content in each pane. Otherwise, users might get lost. If there are only two panes, do not allow users to scroll continuously through the panes.
- Create a start and end point for the panes if the titles in each pane are similar or if the content in each pane is similar. A fixed start and end point allows users to easily find the first pane and the last pane. It also allows users to learn the order of the titles.
- Avoid using icons in titles, except for branding purposes.. | http://docs.blackberry.com/en/developers/deliverables/17965/Banner_and_title_bars_1123392_11.jsp | 2013-12-05T03:56:23 | CC-MAIN-2013-48 | 1386163039753 | [array(['banner_1131868_11.jpg',
'This illustration shows the items that can appear on a banner.'],
dtype=object)
array(['title_bar_1131874_11.jpg',
'This illustration shows a title bar in the Media application.'],
dtype=object)
array(['tab_title_view_1146985_11.jpg',
'This screen shows an example of a tab view.'], dtype=object)
array(['scrollable_title_filtered_pane_1155200_11.jpg',
'This screen shows an example of a scrollable view with the ability to filter content in a specific pane.'],
dtype=object) ] | docs.blackberry.com |
About the BlackBerry Desktop Software
The BlackBerry Desktop Software is designed to link the data, media files, and applications on your BlackBerry smartphone or your BlackBerry PlayBook tablet with your computer.
You can use the BlackBerry Desktop Software to do the following tasks with your smartphone or tablet:
- Synchronize your media files (music, pictures, and videos)
The home screen of the BlackBerry Desktop Software provides you with quick access to common tasks and provides information about your connected smartphone or tablet, such as the model information and the last dates that your data was backed up and synchronized. If you have used the BlackBerry Desktop Software with other smartphones or tablets, you can connect these smartphones or tablets and switch between them using the Device menu.
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/43033/1478827.jsp | 2013-12-05T03:54:52 | CC-MAIN-2013-48 | 1386163039753 | [] | docs.blackberry.com |
OSPF Interface Configuration¶
OSPF does not work across NAT. The presence of NAT prevents OSPF from properly communicating with neighbors to form a full adjacency.
To configure an interface for use with OSPF, start in config-ospf mode and use the interface <if-name> command to enter config-ospf-if mode.
tnsr(config-ospf)# interface <if-name> tnsr(config-ospf-if)#
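For orientation, a minimal session might look like the sketch below; the interface name and values are placeholders, and the per-address sub-command form follows the ip address description in the command list that follows:

```
tnsr(config-ospf)# interface GigabitEthernet0/a/0
tnsr(config-ospf-if)# ip address * area 0.0.0.0
tnsr(config-ospf-if)# ip address * cost 10
tnsr(config-ospf-if)# exit
```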
config-ospf-if mode contains the following commands:
- ip address (*|<ip4-address>)
These commands specify how OSPF will behave for all addresses on an interface (*) or for a specific IPv4 address on an interface. In most cases, the * form will be used here, but when there are multiple addresses available on an interface, a specific choice may be necessary.
- area <area-id>
This command defines the interface as a member of the given area. This is required to activate an interface for use by OSPF.
- authentication [message-digest|null]
Configures authentication for OSPF neighbors on this interface. All routers connected to this interface must have identical authentication configurations. This can also be enabled in the area settings.
When run without parameters, simple password authentication is used.
- message-digest
When set, enables MD5 HMAC authentication for this interface.
- null
When set, no authentication is used by OSPF on this interface. This is the default behavior, but may be explicitly configured with this command to override the authentication configured for this area.
- authentication-key <key>
Configures a simple password to use for authentication when that type of authentication is active. This password may only have a maximum length of 8 characters.
Warning
This method of authentication is weak, and MD5 HMAC authentication should be used instead if it is supported by all connected routers.
- cost <link-cost>
A manual cost value to apply to this interface, rather than allowing automatic cost calculation to take place.
In situations where multiple paths are possible to the same destination, this allows OSPF to prefer one path over another when all else is equal.
- dead-interval <time>
Time, in seconds from 1-65535, without communication from a neighbor on this interface before considering it dead. This is also known as the RouterDeadInterval timer in OSPF. Default value is 40. This timer should be set to the same value for all routers.
- dead-interval minimal hello <multiplier>
When active, the dead-interval is forced to a value of 1 and OSPF will instead send <multiplier> number of Hello messages each second. This allows for faster convergence, but will consume more resources.
Note
When set, this overrides the values of both dead-interval and hello-interval. Custom values configured with those commands will be ignored by OSPF.
- hello-interval <interval>
The interval, in seconds from 1-65535, at which this router will send hello messages. This is also known as the HelloInterval timer in OSPF. Default value is 10. This timer should be set to the same value for all routers.
A lower value will result in faster convergence times, but will consume more resources.
- message-digest-key key-id <id> md5-key <key>
Configures MD5 HMAC authentication for use with message-digest type authentication.
- key-id <id>
An integer value from 1-255 which identifies the secret key. This value must be identical on all routers.
- md5-key <key>
The content of the secret key identified by key-id, which is used to generate the message digest. Given as an unencrypted string, similar to a password. The maximum length of the key is 16 characters.
- mtu-ignore
When present, OSPF will ignore the MTU advertised by neighbors and can still achieve a full adjacency when peers do not have matching MTU values.
- retransmit-interval <interval>
The interval, in seconds from 1-65535, at which this router will retransmit Link State Request and Database Description messages. This is also known as the RxmtInterval timer in OSPF. Default value is 5.
- transmit-delay <delay>
The interval, in seconds from 1-65535, at which this router will transmit LSA messages. This is also known as the InfTransDelay timer in OSPF. Default value is 1.
- ip network (broadcast|non-broadcast|point-to-multipoint|point-to-point)
Manually configures a specific type of network used on a given interface, rather than letting OSPF determine the type automatically. This controls how OSPF behaves and how it crafts messages when using an interface.
- broadcast
Broadcast networks, such as typical Ethernet networks, allow multiple routers on a segment and OSPF can use broadcast and multicast to send messages to multiple targets at once. OSPF assumes that all routers on broadcast networks are directly connected and can communicate without passing through other routers.
- non-broadcast
Non-broadcast networks support multiple routers but do not have broadcast or multicast capabilities. Due to this lack of support, neighbors must be manually configured using the neighbor command. When using this mode, OSPF simulates a broadcast network using Non-Broadcast Multi-Access (NBMA) mode, but transmits messages to known neighbors directly.
- point-to-multipoint
Similar to non-broadcast mode, but connections to manually configured neighbors are treated as a collection of point-to-point links rather than a shared network. Similar to a point-to-point network, OSPF disables DR election.
- point-to-point
A point-to-point network links a single pair of routers. The interface is still capable of broadcast, and OSPF will dynamically discover neighbors. With this type of network, OSPF disables election of a DR. | https://docs.netgate.com/tnsr/en/latest/dynamicrouting/ospf/config-interface.html | 2019-11-12T03:33:59 | CC-MAIN-2019-47 | 1573496664567.4 | [] | docs.netgate.com |
StartMeetingTranscription
Starts transcription for the specified
meetingId.
Request Syntax
POST /meetings/MeetingId/transcription?operation=start HTTP/1.1
Content-type: application/json

{
   "TranscriptionConfiguration": {
      "EngineTranscribeMedicalSettings": {
         "ContentIdentificationType": "string",
         "LanguageCode": "string",
         "Region": "string",
         "Specialty": "string",
         "Type": "string",
         "VocabularyName": "string"
      },
      "EngineTranscribeSettings": {
         "ContentIdentificationType": "string",
         "ContentRedactionType": "string",
         "EnablePartialResultsStabilization": boolean,
         "LanguageCode": "string",
         "LanguageModelName": "string",
         "PartialResultsStability": "string",
         "PiiEntityTypes": "string",
         "Region": "string",
         "VocabularyFilterMethod": "string",
         "VocabularyFilterName": "string",
         "VocabularyName": "string"
      }
   }
}
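For illustration, a hypothetical request that starts transcription with the standard Amazon Transcribe engine might look like this (the meeting ID and settings values are placeholders):

```
POST /meetings/a1b2c3d4-5678-90ab-cdef-EXAMPLE11111/transcription?operation=start HTTP/1.1
Content-type: application/json

{
    "TranscriptionConfiguration": {
        "EngineTranscribeSettings": {
            "LanguageCode": "en-US",
            "Region": "us-east-1"
        }
    }
}
```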
URI Request Parameters
The request uses the following URI parameters.
- MeetingId
The unique ID of the meeting being transcribed. Required: Yes
Request Body
The request accepts the following data in JSON format.
- TranscriptionConfiguration
The configuration for the current transcription operation. Must contain EngineTranscribeSettings or EngineTranscribeMedicalSettings.
Type: TranscriptionConfiguration object
Required: Yes
Errors
- BadRequestException
The input parameters don't match the service's restrictions.
HTTP Status Code: 400
- ForbiddenException
The client is permanently forbidden from making the request.
HTTP Status Code: 403
- LimitExceededException
The request exceeds the resource limit.
HTTP Status Code: 400
- NotFoundException
One or more of the resources in the request does not exist in the system.
HTTP Status Code: 404
- ServiceUnavailableException
The service is currently unavailable.
HTTP Status Code: 503
- UnauthorizedException
The user isn't authorized to request a resource.
HTTP Status Code: 401
- UnprocessableEntityException
The request was well-formed but was unable to be followed due to semantic errors.
HTTP Status Code: 422
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/chime/latest/APIReference/API_meeting-chime_StartMeetingTranscription.html | 2022-01-16T23:39:55 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.aws.amazon.com |
MariaDB Galera Database Configuration
- Duplicate checking during deployment does not work if resources are deployed in a cluster concurrently.. | https://docs.camunda.org/manual/latest/user-guide/process-engine/database/mariadb-galera-configuration/ | 2022-01-16T22:00:30 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.camunda.org |
Manage New Account State
All new AWS accounts added on Druva CloudRanger will be configured in an Inactive state. Administrators may use the New Account State toggle to set the default status of new account(s) configured on Druva CloudRanger.
To modify the default state for new accounts:
- Log into your Druva CloudRanger console and select the Organization for which you wish to modify the default Account State.
- Click the gear icon on the top navigation bar.
- On the Organization Settings page, set the New Account State toggle to the appropriate status.
Note: Setting the default state to Active for all new accounts configured on Druva CloudRanger could mean additional cost implications, based on your current subscription plan. For more information, contact Support. | https://docs.druva.com/CloudRanger/Manage_Accounts_and_Organizations/010_Manage_Organizations/Manage_New_Account_State | 2022-01-16T22:08:32 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.druva.com |
geometry – Geometry and algebra¶
- class Matrix(rows)¶
Mathematical representation of a matrix. It supports common operations such as matrix addition (+), subtraction (-), and multiplication (*). A Matrix object is immutable.
- vector(x, y, z=None)¶
Convenience function to create a Matrix with the shape (3, 1) or (2, 1).
- class Axis¶
Unit axes of a coordinate system.
- X = vector(1, 0, 0)
- Y = vector(0, 1, 0)
- Z = vector(0, 0, 1)
- ANY = None
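A short illustrative MicroPython snippet using these classes; the import path follows this documentation's module name and the values are arbitrary:

```python
from pybricks.geometry import Matrix, vector, Axis

m = Matrix([[1, 0], [0, 2]])   # a 2x2 matrix
v = vector(3, 4)               # a column vector with shape (2, 1)

w = m * v      # matrix-vector multiplication
s = v + v      # element-wise addition
print(Axis.Z)  # the unit vector (0, 0, 1)
```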
Reference frames¶
The Pybricks module and this documentation use the following conventions:
X: Positive means forward. Negative means backward.
Y: Positive means to the left. Negative means to the right.
Z: Positive means upward. Negative means downward.
To make sure that all hub measurements (such as acceleration) have the correct value and sign, you can specify how the hub is mounted in your creation. This adjusts the measurements so that it is easy to see how your robot is moving, rather than how the hub is moving.
For example, the hub may be mounted upside down in your design. If you configure the settings as shown in Figure 4, the hub measurements will be adjusted accordingly. This way, a positive acceleration value in the X direction means that your robot accelerates forward, even though the hub accelerates backward.
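A hypothetical mounting configuration for such an upside-down hub might look like this; the hub class and keyword arguments follow the Pybricks hub API, but treat the exact axes as an assumption for your own build:

```python
from pybricks.hubs import PrimeHub
from pybricks.geometry import Axis

# Hub mounted upside down with its USB port facing the front of the robot:
# the hub's top points along -Z (down) and its front along X (forward).
hub = PrimeHub(top_side=-Axis.Z, front_side=Axis.X)

# Acceleration is now reported in the robot's frame, so driving forward
# gives a positive X acceleration even though the hub itself is flipped.
print(hub.imu.acceleration())
```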
Find your mobile devices version below and follow the listed steps to enable your camera for the Buddy Punch app.
Android 9.0
Tap your smartphone Settings app.
Tap Apps & notifications.
Tap See all apps.
Select the Buddy Punch app.
Tap Permissions.
Toggle the “Camera” option.
Android 8.0
Tap your smartphone Settings app.
Tap Apps & notifications.
Tap App Info.
Select the Buddy Punch app.
Tap Permissions.
Enable Camera.
Android 7.0
Go to Settings.
Tap on Apps.
Click on the Settings gear icon at the top right of the screen, beside the three vertical dots.
Tap on App permissions.
Select Camera.
Enable location permission for Buddy Punch.
Android 4.1 and Above
From the Home screen, touch Apps.
Select Settings.
Scroll to and select Camera.
Toggle the slider to turn Camera on.
iOS
Go to Settings.
Select the Buddy Punch app.
Toggle Camera.
Need to enable location services? Be sure to check out this article:
Visualize Applications
Epsagon enables you to understand complex and distributed applications. The Service Map decomposes applications into resources and services, drawing their observed connections in real-time, so you can identify errors, dependencies, or performance bottlenecks in your architecture.
Service Map example:
#Service Dependencies
Hover over a resource in the service map to understand which other resources are dependent on (connected to) it.
#Focus View
Focus View allows you to zoom in on individual resources, and see exactly which other resources are dependent on it.
Enter Focus View by right-clicking on the desired resource and selecting focus view.
#Detecting Resource Performance Issues
Looking for an errored resource or a performance bottleneck? Hover over a resource in the service map to see its duration breakdown and performance metrics.
1.6 Definitions
The define form defines an identifier to be a synonym for a value:
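For example (reconstructing the pi and tau definitions that the rest of this section refers to):

```racket
(define pi 3.14)

(define tau (+ pi pi))
```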
The define form can also define a function. The difference is that define for a function definition is followed by an open parenthesis, then the function name, a name for each argument, and a closing parenthesis. The expression afterward is the body of the function, which can refer to the function arguments and is evaluated when the function is called.
Since Getting Started, we have been evaluating forms only in DrRacket's bottom area, which is also known as the interactions area. Definitions normally go in the top area, which is known as the definitions area.
Put these two definitions in the definitions area:
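The two definitions could look like this (a sketch of the mutually recursive is-odd? and is-even? functions named below):

```racket
(define (is-odd? x)
  (if (zero? x)
      #f
      (is-even? (- x 1))))

(define (is-even? x)
  (if (zero? x)
      #t
      (is-odd? (- x 1))))
```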
Click Run. The functions is-odd? and is-even? are now available in the interactions area:
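For example, an interaction might look like this (as a sketch; plait prints the inferred type before the value):

```racket
> (is-odd? 12)
- Boolean
#f
```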
In our definitions of pi and tau, plait inferred that the newly defined names have type Number and that is-odd? has type (Number -> Boolean). Programs are often easier to read and understand if you explicitly write the type that would otherwise be inferred. Declaring types can sometimes help improve or localize error messages when plait's attempt to infer a type fails, since inference can otherwise end up depending on the whole program.
Declare a type for a constant by writing : followed by a type after the defined identifier:
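For example, a hypothetical constant with an explicit type annotation:

```racket
(define groceries : (Listof String) (list "milk" "cookies"))
```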
Alternatively, you can declare an identifier's type separately from its definition by using :.
The declaration can appear before or after the definition, as long as it is in the same layer of declarations as the definition. You can even have multiple type declarations for the same identifier, and the type checker will ensure that they're all consistent.
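Continuing the hypothetical groceries example, the standalone declaration and the definition can be written as separate forms:

```racket
(groceries : (Listof String))

(define groceries (list "milk" "cookies"))
```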
For a function, attach a type to an argument by writing square brackets around the argument name, :, and a type. Write the function’s result type with : and the type after the parentheses that group the function name with its arguments.
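A sketch of a function with annotated argument and result types (the function itself is made up for illustration):

```racket
(define (starts-milk? [items : (Listof String)]) : Boolean
  (equal? (first items) "milk"))
```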
Or, of course, declare the type separately:
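Or, for the same hypothetical function, as a separate declaration:

```racket
(starts-milk? : ((Listof String) -> Boolean))
```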
You can declare local functions and constants by using the local form as a wrapper. The definitions that appear after local are visible only within the local form, and the result of the local form is the value of the expression that appears after the definitions. The definitions must be grouped with square brackets.
The local form is most often used inside a function to define a helper function or to avoid a repeated computation involving the function arguments.
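For example, a discard-first-if-fruit function (referred to below) might use local to define a small helper:

```racket
(define (discard-first-if-fruit lst)
  (local [(define (fruit? s)
            (or (equal? s 'apple)
                (equal? s 'banana)))]
    (if (fruit? (first lst))
        (rest lst)
        lst)))
```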
The let and letrec forms are similar to local, but they are somewhat more compact by avoiding the requirement to write define. The discard-first-if-fruit example above can be equivalently written using let:
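A sketch of that equivalent let-based version:

```racket
(define (discard-first-if-fruit lst)
  (let ([fruit? (lambda (s)
                  (or (equal? s 'apple)
                      (equal? s 'banana)))])
    (if (fruit? (first lst))
        (rest lst)
        lst)))
```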
Zeek Fields¶
Zeek logs are sent to Elasticsearch where they are parsed using ingest parsing. Most Zeek logs have a few standard fields and they are parsed as follows:
The remaining fields in each log are specific to the log type. To see how the fields are mapped for a specific Zeek log, take a look at its ingest parser.
You can find ingest parsers in your local filesystem at
/opt/so/conf/elasticsearch/ingest/ or you can find them online at:
For example, suppose you want to know how the Zeek conn.log is parsed. You could take a look at
/opt/so/conf/elasticsearch/ingest/zeek.conn or view it online at:
You'll see that zeek.conn then calls the zeek.common pipeline (/opt/so/conf/elasticsearch/ingest/zeek.common), which in turn calls the common pipeline (/opt/so/conf/elasticsearch/ingest/common).
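As an illustrative sketch only (not the actual Security Onion parser), an Elasticsearch ingest pipeline that parses a Zeek log line and then hands off to a shared pipeline looks roughly like this:

```json
{
  "description": "zeek.conn (sketch)",
  "processors": [
    { "json": { "field": "message", "target_field": "zeek", "ignore_failure": true } },
    { "rename": { "field": "zeek.uid", "target_field": "log.id.uid", "ignore_missing": true } },
    { "pipeline": { "name": "zeek.common" } }
  ]
}
```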
Embedded Forms Reference
This reference covers the features of the Camunda Platform Forms SDK. The Forms SDK simplifies the implementation of user task forms in HTML5 / JavaScript based Applications. The Forms SDK itself is written in JavaScript and can be added to any JavaScript based Application.
The Forms SDK and Camunda Tasklist
Camunda Tasklist uses the Forms SDK to provide support for Embedded Forms. By default, the Tasklist uses the Forms SDK's AngularJS integration.
Features
The Forms SDK provides the following features:
- Form handling: attach to a form existing in the DOM or load a form from a URL.
- Variable handling: load and submit variables used in the form.
- Script handling: execute custom JavaScript in Forms
- Angular JS Integration: The Forms SDK optionally integrates with AngularJS to take advantage of AngularJS form validation and other AngularJS goodies.
The following is a simple example of a form with two input fields binding to process variables
CUSTOMER_ID and
CUSTOMER_REVENUE:
<form>
  <label for="customerId">Customer Id:</label>
  <input type="text" id="customerId" cam-variable-name="CUSTOMER_ID" cam-variable-type="String" />
  <label for="customerRevenue">Customer Revenue:</label>
  <input type="text" id="customerRevenue" cam-variable-name="CUSTOMER_REVENUE" cam-variable-type="Double" />
</form>
Anti Features
The Forms SDK is intended to be lean and small. By design it is not concerned with things like
- Form Validation: Instead, integrate with existing frameworks such as AngularJS.
- Components / Widgets: Instead, integrate nicely with existing component libraries like jQuery UI, Angular UI, …
- Form Generation: Instead, allow users to leverage the complete power of HTML and JavaScript to implement complex forms.
Each update depends on approval by Klarna. Expect that the update may be rejected in some cases. Fully captured orders cannot have their order amount updated.
The customer has contacted you to change or remove a product from the order and you need to update the total order amount.
Increasing the order amount is not allowed for all payment methods, see below for details on when it is allowed. Any update to order amount will override and replace the original order amount as well as any possible order lines you might have sent with the order.
Important note: Sometimes the increase might be rejected as we are not allowed to grant a customer an extended order amount. In these cases the customer should be asked to place a new order in your shop for the new items. Be aware that increasing the order amount will trigger a second risk assessment on the customer, sometimes even a credit lookup.
Please note that you will override and replace the original order amount as well as any possible order lines you might have sent with the order.
Fig.1 Update the total order amount flow
The updated amount can optionally be accompanied by descriptive text and new order lines. Supplied order lines will replace the existing order lines. If no order lines are provided in the call, the original order lines will be deleted.
We suggest that you always send updated order lines to improve the customer experience. These can later be used to visualize what the customer bought when Klarna sends settlement details to the customer.
The updated order_amount must not be negative, nor less than the current captured_amount. The order amount cannot be updated if the full order amount has been captured already.
Currency is inferred from the original order.
PATCH /ordermanagement/v1/orders/{order_id}/authorization
Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=
Content-Type: application/json

{
  "order_amount": 6000,
  "description": "",
  "order_lines": [
    {
      "type": "physical",
      ...
    }
  ]
}
Klarna will respond with 204 No Content if the server has fulfilled the request, or with an error message otherwise.
HTTP/1.1 204 No Content
Klarna Payments accepts orders only for customers with billing addresses in the markets that match the currency of the order, per:
Klarna Payments integration is an alternative payment provider in Shopify, and Shopify limits Klarna Payments, as for all alternative payments methods, to always using only the store’s base currency, per:
As such, if the store base currency is SEK, Klarna Payments will be able to accept orders for customers whose billing country is Sweden. Customers with addresses in other countries will receive an error message, e.g. as shown in the screenshot below. The exact error message may be dependent upon region & Klarna product, i.e. such as “Option not available” or “You need a US billing address to use Klarna”.
The exact error message may differ based on region and other data.
In the Klarna Merchant Portal Settings app, merchants can generate new Klarna API credentials and/or disable previous API credentials, as shown below:
If changing API credentials are necessary, updated credentials must be changed both for 1) the Klarna Payments app (under Apps in your Shopify store) and 2) the Klarna alternative payment method(s) (under Settings->Payments in your Shopify store). If the API credentials don’t exactly match between the app and the payment method, order updates will fail to update Klarna. You can verify order updates or manually update orders in the Klarna Merchant Portal, as needed, but note there may be a small time delay (~ 5 minutes) for Klarna to be updated after the Shopify order has been updated so take care to not duplicate order management updates (such as partial refunds).
When API credentials are changed but credentials use the same Klarna merchant id (i.e. K100123_26712af8ef91 to K100123_1d2d9e478472), pending fraud status notifications from Klarna (e.g. fraud pending to accepted) will not be able to modify previously placed orders in the Shopify store; those Shopify orders will stay stuck in Pending status. Other order management updates can still be successful as long as the API credentials match between both the Klarna Payments app and alternative payment methods (including inactive alternative payment methods).
If a store is required to change the Klarna contract (e.g. change in store ownership) resulting in a new Klarna merchant id (i.e. K100123_26712af8ef91 to K100789_83e99c12023), all order management updates will be unsuccessful for previously placed orders, and all order management for those orders must be done in the Klarna Merchant Portal.
Removing the Klarna Payments app from a Shopify store will prevent capture and cancel order management updates from being sent automatically from Shopify to Klarna; refunds should still update Klarna (though without refund order line items, as refund order line item data is not accessible without the Klarna Payments app).
If choosing Klarna from the Shopify checkout results in an Oops error page, per screenshot below, verify that the Klarna API credentials for the payment method (at Shopify admin Settings->Payments->Alternative Payment Methods) exactly match the Klarna API credentials used for the Klarna Payments app in the store (under Apps menu in Shopify Admin). The Klarna API credentials are entered in 2 places and both those entries must match exactly (without any extra whitespace, upper vs. lower case must match, etc.).
After Shopify redirects to a HPSDK alternative payment integration, such as Klarna Payments, when a customer places an order, stock inventory can no longer be guaranteed as the inventory is not locked in the Shopify store. To avoid oversells, the Klarna Payments integration again checks Shopify inventory when the Klarna Payments hosted payments page is loaded, reloaded, and when the customer places the order (if the Klarna Payments app is up to date with the required read_inventory scope). (To verify that the app is up to date, click the Klarna Payments app within the Shopify admin, and if an update is required, the update will be requested.) Even with these additional inventory checks, depending on store traffic and timing, oversells can still possibly occur.
To further prevent oversells, Klarna Payments offers a merchant-configurable setting “Minimum quantity of product's stock inventory required for a Klarna order” (accessible either via the Klarna Payments app in the Shopify store OR via) for merchants to limit Klarna orders based on stock inventory availability.
Additionally, each Shopify app, such as the Klarna Payments app, is limited in the rate of Shopify API calls it can make during a time period (see Shopify's API rate limit documentation). For stores with high-volume flash sales in a short time period, alternative payment methods like Klarna could result in rate-limit errors for customers.
Klarna Payments on Shopify is not compatible with the following apps:
There are two unrelated pending refund issues:
Refunds older than 60 days require the read_all_orders scope, as noted at. If some refunds are stuck in pending in your Shopify store and the order is older than 60 days, please make sure that the Klarna Payments app is up to date in your Shopify store to be able to access orders older than 60 days. To do so, a Shopify admin user with full admin permissions to the Shopify store, within the Shopify admin web page, can go to the Apps menu on the left hand side and click the "Klarna Payments" app. If the Klarna Payments app doesn’t have the necessary permissions, a web page will be presented stating "You are about to update Klarna Payments". After clicking "Update unlisted app" from the bottom right of the screen, future refunds should again update the corresponding Klarna order, including orders older than 60 days. For already processed refunds, the order can be updated through the Klarna Merchant Portal.
For stores that integrated Klarna Payments after July 2018, the required read_all_orders scope is requested when the Klarna Payments Shopify app is installed, and any refund can be processed successfully as long the refund is approved by Klarna.
When a refund is first made in Shopify, the Shopify order timeline will have an entry stating "A refund is pending". After a short delay (around 5 minutes), the Klarna order will be updated (if the refund is successful) and another order note will be added to the Shopify order timeline indicating if the refund was successful ("{refund amount} was refunded'") or not ("Unable to refund").
Note, even for successful refunds, the "Message" in the Shopify order timeline always states "Transaction pending".
The Klarna Payments for Shopify integration does not yet support Shopify’s edit orders functionality:
For edited orders, Klarna will block Klarna as a payment method upon redirect, and the customer can instead pay with a different supported payment method for the store.
Orders edited in Shopify will NOT update Klarna, and any order updates must be manually edited in the Klarna order via the Klarna Merchant Portal, separately from the Shopify order.
VDA registration
Introduction
Before a VDA can be used, it must register (establish communication) with one or more Controllers or Cloud Connectors on the Site. (In an on-premises Citrix Virtual Apps and Desktops deployment, VDAs register with Controllers. In a Citrix Virtual Apps and Desktops service deployment, VDAs register with Cloud Connectors.) Also, the VDA might reject session launches that were brokered by an unlisted Controller.
Citrix Virtual Apps and Desktops automatically tests the connectivity to configured Controllers or Cloud Connectors during VDA installation. Errors are displayed if a Controller or Cloud Connector cannot be reached. If you ignore a warning that a Controller cannot be contacted (or when you do not specify Controller or Cloud Connector addresses during VDA installation), reminder messages are displayed later.
Citrix recommends using GPO for initial VDA registration. It has the highest priority. (Although auto-update is listed as having the highest priority, auto-update is used only after the initial registration.) Group Policy-based registration uses the Virtual Delivery Agent Settings > Controllers setting. (If security is your top priority, use the Virtual Delivery Agent Settings > Controller SIDs setting.)
This setting is stored under
HKLM\Software\Policies\Citrix\VirtualDesktopAgent (ListOfDDCs).
Registry-based
To specify this method, complete one of the following steps:
- On the Delivery Controller page in the VDA installation wizard, select Do it manually. Then, enter the FQDN of an installed Controller and then click Add. If you’ve installed other Controllers, add their addresses.
- For a command-line VDA installation, use the /controllers option.
Active Directory OU-based
This legacy method can be used for Controller or Cloud Connector discovery.
FarmGUID is present if a site OU was specified during VDA installation. (This might be used in legacy deployments.)
Optionally, update the ListOfSIDs registry key (for more information, see ListOfSIDs below). To set up OU-based discovery, run the PowerShell script provided for this purpose (available on every Controller). Also, configure the FarmGuid registry entry on each VDA to point to the right OU. This setting can be configured using Group Policy.
For details, see Active Directory OU-based discovery.
MCS-based
If you plan to use only MCS to provision VMs, you can instruct MCS to set up the list of Controllers or Cloud Connectors. This feature is compatible with auto-update.
List more than one Controller in the ListOfDDCs registry key, separated by a space, to prevent registration issues if a Controller is not available.
Example:
DDC7x.xd.local DDC7xHA.xd.local
32-bit:
HKEY_LOCAL_MACHINE\Software\Citrix\VirtualDesktopAgent\ListOfDDCs
HKEY_LOCAL_MACHINE\Software\Citrix\VirtualDesktopAgent\ListOfDDCs (REG_SZ)
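For example, one hypothetical way to set this value from an elevated command prompt (substitute your own Controller FQDNs):

```
reg add "HKLM\Software\Citrix\VirtualDesktopAgent" /v ListOfDDCs /t REG_SZ /d "DDC7x.xd.local DDC7xHA.xd.local" /f
```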
- Ensure all values listed under ListOfDDCs map to a valid fully qualified domain name to prevent startup registration delays.
If a Controller or Cloud Connector was added or removed since the last check, or if a policy change occurred that affects VDA registration, the Controller sends an updated list to its registered VDAs and the cache is updated. The VDA accepts connections from all the Controllers in its most recently cached list.
- If a VDA receives a list that does not include the Controller (or Cloud Connector) it is registered with (in other words, that Controller was removed from the site), the VDA re-registers, choosing among the Controllers in the ListOfDDCs.
Consider the following when configuring items that can affect VDA registration, such as a Citrix ADC placed between the VDAs and the Controllers or Cloud Connectors.
Registration groups let a VDA register with only a subset of Controllers. These groups are intended for use within a single Site (not multiple Sites).
For XenDesktop 7.0 or higher, there is one more step you need to perform to use the Registration Groups feature: you need to prohibit the Enable Auto Update of Controllers policy from Citrix Studio.
- Reducing Active Directory load: Before the auto-update feature was introduced in XenApp and XenDesktop 7.6, the ListOfSIDs was used to reduce the load on domain controllers. By pre-populating the ListOfSIDs, the resolution from DNS names to SIDs can be avoided. This is not needed in newer environments; instead, use zones.
Controller search during VDA registration
When a VDA tries to register, the Broker Agent first performs a DNS lookup in the local domain to ensure that the specified Controller can be reached.
If that initial lookup doesn’t find the Controller, the Broker Agent can start a fallback top-down query in AD. That query searches all domains, and repeats frequently. If the Controller address is invalid (for example, the administrator entered an incorrect FQDN when installing the VDA), that query’s activity can potentially lead to a distributed denial of service (DDoS) condition on the domain controller.
The following registry key controls whether the Broker Agent uses the fallback top-down query when it cannot locate a Controller during the initial search.
HKEY_LOCAL_MACHINE\Software\Policies\Citrix\VirtualDesktopAgent
- Name:
DisableDdcWildcardNameLookup
- Type:
DWORD
- Value: 1 (default) or 0
When set to 1, the fallback search is disabled. If the initial search for the Controller fails, the Broker Agent stops looking. This is the default setting.
When set to 0, the fallback search is enabled. If the initial search for the Controller fails, the fallback top-down search is started.
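For example, a hypothetical command to set the default value explicitly from an elevated command prompt:

```
reg add "HKLM\Software\Policies\Citrix\VirtualDesktopAgent" /v DisableDdcWildcardNameLookup /t REG_DWORD /d 1 /f
```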
Troubleshoot VDA registration issues
As noted previously, a VDA must be registered with a Delivery Controller before it can be used when you create a Delivery Group.
Identifying issues during machine catalog creation: If information was not obtained about a machine (perhaps because it had never registered with a Delivery Controller), you might choose to add the machine anyway.
Identifying issues after creating Delivery Groups: After you create a Delivery Group, Studio displays details about machines associated with that group. The details pane for a Delivery Group indicates the number of machines that are expected to be registered but are not. In other words, there might be one or more machines that are powered on and not in maintenance mode, but are not currently registered with a Controller. When viewing a “not registered, but expected to be” machine, review the Troubleshoot tab in the details pane for possible causes and recommended corrective actions.
More information about troubleshooting VDA registration
For more information about functional levels, see VDA versions and functional levels.
For more information about VDA registration troubleshooting, see CTX136668.
You can also use the Citrix Health Assistant to troubleshoot VDA registration and session launch. For details, see CTX207624.
[Figure: Delivery Controller page in the VDA installation wizard]
[Figure: Example of a VDA's registration cache file]
[Figure: Example of different Controllers used for registration and brokering]
1 Introduction
This how-to will teach you how to go from a blank slate to an app running on a device.
The Mendix Native Mobile Builder is the UI tool to set up and build your Mendix Native Mobile Apps. It is directly accessible in Mendix Studio Pro v8.15 and above for all apps with a native mobile navigation profile.
The Mendix Native Mobile Builder does not currently support connections behind proxy servers. Please make sure you are not behind a proxy server and that your security rules allow access to the required services.
2 Prerequisites
Before starting this how-to, make sure you have completed the following prerequisites:
- Mendix Studio Pro v8.15 and above installed using the online installer. The offline installer does not include the Mendix Native Mobile Builder dependency.
- Read How to Get Started with Native Mobile to see how to create, style and debug an application with Mendix Studio Pro
- Deploy your native mobile app to the cloud via Studio Pro and have the cloud address of your deployed application available
- A GitHub account.
- An App Center account. We recommend a paid account if you will be building and deploying regularly.
2.1 Platform-Specific Prerequisites
If you plan to deploy your app for testing on an iOS device, make sure you have completed the following prerequisites:
- Register for an Apple Developer Account
- Have an iOS device for testing the iOS package that will be produced
- Have an iOS deployment certificate and a provisioning file for which your device is activated
- Have Xcode installed on your computer for deploying the iOS package to your test device
If you plan to deploy your app for testing on an Android device, make sure you have an Android device available.
3 Getting Your Tokens
To use the Mendix Native Mobile Builder, you will first need to get tokens to authenticate with GitHub and App Center. If you already have tokens for your GitHub and App Center, you do not need to complete the Getting Your Token sections.
3.1 GitHub Token
- Go to GitHub and sign in.
- Go to Settings by clicking your profile picture in the top-right corner of the page.
- Click Developer settings at the bottom of the left menu.
- Navigate to Personal access tokens and then click Generate new token to create a new personal access token.
- In the Note field, write Mendix Native Mobile Builder.
- Under Select scopes, select repo and workflows.
- Click Generate token.
- Store your token in a secure place. You will not be able to see it again. If you lose it, you will have to create a new token and delete your old one.
3.2 App Center Token
- Go to App Center and sign in.
- Click your profile icon in the top-right corner, then click Settings, and then Account Settings.
- In the API Tokens tab, click New API token.
- Add a description of your token, select Full Access, then click Add new API token, and then New API Token.
- Store this token in a secure place as well. You will not be able to see it again. If you lose it, you will have to create a new token and delete your old one.
4 Build Your Native App
The Mendix Native Mobile Builder needs to communicate with GitHub and App Center. Therefore, make sure your firewall permissions do not restrict the tool.
From Studio Pro:
Click App > Build Native Mobile App:
When Mendix Native Mobile Builder launches you will see the home screen:
Select Build app for distribution.
Fill in your app’s name and the app identifier. The wizard provides defaults, but you might want to align the app identifier to use your company’s reversed URL, or change the app name in some other way:
Click Next Step when ready.
In the Build Type screen fill in your GitHub and App Center API tokens. The tool will verify the tokens grant sufficient access to valid accounts and will notify you if they do not:
Click Next Step when ready.
Select Choose your icon if you already have an image you would like to use as an icon. If you continue without adding a custom image, your app will use the default images displayed below. You can change app icon later if you wish:
Click Next Step when ready.
Select Choose your splash screen if you already have an image you would like to use as a splash screen, or just continue if you are satisfied using the default image. You can change the splash screen later if you wish:
Click Next Step when ready.
Drag and drop your custom fonts onto the field if you already have a selection of fonts you would like to use, or continue if you do not need to add custom fonts. You can add custom fonts later if you wish:
Click Next Step when ready.
You have completed the mandatory basic app configuration required to build your app. Now you see the Build app for distribution screen:
Next, do the following:
Fill in an intentional version number. For defaults, we recommend you use these numbering guidelines:
- Versions lower than 0.5.0 for alpha releases
- Versions ranging from 0.5 to 0.9.x for beta releases
- Versions starting from 1.0.0 for release
Fill in your Runtime URL. It can be the IP of your local machine if you plan on testing against a locally-running Studio Pro installation. If you already deployed your app to Mendix Cloud, you can point it to the URL of the deployed runtime as found in the Cloud Portal.
Click the Build button to start the build.
The tool will set up your GitHub repository, commit your changes, configure App Center with two new apps (one for iOS and one for Android), and continue building your apps:
After the build completes you can scan the QR code provided to install the app on your device. Currently the QR code service is only supported for Android devices:
5 Signing Your Apps
By default, App Center builds are unsigned and cannot be released on the Google Play Store or the Apple App Store. To release your apps, you must provide your signature keys to Mendix Native Mobile Builder. Signature keys prove the authenticity of your app and prevent forgeries. For more information to how to acquire these keys, see the Managing App Signing Keys Reference Guide.
5.1 Set Up Signing for iOS
iOS supports two types of signing configurations: Development and Release. The type of the build depends on the type of provisioning file and certificate that was used for configuring the tool. To set up signing for iOS, follow these steps:
From within Mendix Native Mobile Builder, select iOS under Certificates:
Upload your provisioning file and P12 certificate, and then type in your password. The tool will verify that:
- The app identifier of the app is included in the Provisioning file
- The Certificate is included in the Provisioning file
- The password can unlock the certificate
If the tool errors, please correct the issue and try again:
Click Save.
With that you have completed setting up signing for iOS. Your next build will use the provided configuration to sign your iOS app.
5.2 Set Up Signing for Android
From within Mendix Native Mobile Builder, choose Android under Certificates:
Upload your keystore file and provide the keystore password, the key alias and the key password as defined when setting up the keystore. The tool will verify that:
- The keystore password is valid
- The key alias exists in the provided keystore
If it errors, please correct the issue and try again:
Click Save.
With that you have completed setting up signing for Android. The next build will use the provided configuration to sign your Android app.
6 Distributing
This section will guide you through setting up signing for iOS and Android using your release certificates and keystore, building your binaries, and distributing them.
For distributing to a specific platform, consult the appropriate section below:
6.1 Distribute the iOS app to App Store Connect
Depending on whether you chose to sign your iOS app or not, the output of the build will be an IPA or XCArchive file, respectively. IPA files can be directly distributed to App Store Connect for further processing. XCArchives require XCode to sign and generate an IPA before they can be further processed.
6.1.1 Distribute a Signed IPA
To be able to upload your app to App Store Connect, you will have to have set up a new app using the App Store Connect website. While there, use the app name and app id you used to build your app. For further instruction, see the App Store Connect Guide to adding a new app.
When signing your iOS app, an IPA file is generated. To upload an IPA to the Apple App Store, XCode includes a command line tool. Assuming XCode is installed and the extra command line tool is set up, the command to upload the IPA is the following:
xcrun altool --upload-app --type ios --file "path/to/application.ipa" --username "YOUR_APPSTORE_USER_EMAIL" --password "YOUR_APPSTORE_PASSWORD"
Replace
file "path/to/application.ipa" with the absolute path to your IPA file,
username with your developer app store email address, and
password with your Apple App Store password.
The command will first verify your IPA is packaged correctly and ready to be shipped, and will then upload it to TestFlight for further processing.
6.1.2 Distribute an Unsigned XCArchive
Local signing is useful if you only want to test your app on a device, or you do not have a distribution certificate and have run out of build minutes on App Center when signing with a developer certificate.
In order to deploy the nativeTemplate.xcarchive on a device or on the Apple App Store, an Apple developer account and development team is required. If one is available, do the following:
- Using Xcode, double-click the nativeTemplate.xcarchive file. It should open with the built-in Application Loader software.
Click the Distribute App button to start the local signing flow:
Select Development:
Choose a Development Team:
Configure your Development distribution options:
Select a re-signing option:
Review your .ipa content and click Export:
Congratulations. You now have a signed .ipa file:
6.2 Distribute the Android app to Google Play
A signed Android APK can be uploaded to Google Play store directly. For more info on setting up a new app and uploading your binaries follow Google's guide on Uploading an app.
The Pica8 PICOS open network operating system is based on Linux and consequently borrows technology from the Linux server realm that promotes ease of use, including zero touch provisioning (ZTP). Once a switch is physically connected to the network, ZTP enables the automation of provisioning and configuration processes, typically using a Dynamic Host Control Protocol (DHCP) server.
ZTP routines can also take advantage of open source Linux tools such as Puppet and Chef, which were originally tools to automate server configuration tasks. These tools have now been adapted by the Open Source community to provision switch configurations. So, just as racks of servers and VMs are added to a cluster using Puppet or Chef, network switches and routers can be configured in the cluster by the same tools.
This document describes how ZTP works in PICOS, how to enable or disable it, and the ZTP API.
public interface Joinpoint
A runtime joinpoint is an event that occurs on a static joinpoint (i.e. a location in a program).
@Nullable Object proceed() throws Throwable
Proceed to the next interceptor in the chain. The implementation and the semantics of this method depends on the actual joinpoint type (see the children interfaces).
Throws: Throwable - if the joinpoint throws an exception
@Nullable Object getThis()
Returns the object that holds the current joinpoint's static part. For instance, the target object for an invocation.
@Nonnull AccessibleObject getStaticPart()
Returns the static part of this joinpoint. The static part is an accessible object on which a chain of interceptors are installed.
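For example, MethodInvocation (a Joinpoint subinterface) is what an AOP Alliance interceptor receives; a minimal sketch of an interceptor that uses proceed() and getStaticPart() might look like this:

```java
import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;

public class TimingInterceptor implements MethodInterceptor {
    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        long start = System.nanoTime();
        try {
            // Continue down the interceptor chain toward the target method.
            return invocation.proceed();
        } finally {
            // getStaticPart() is the invoked Method (an AccessibleObject).
            System.out.println(invocation.getStaticPart() + " took "
                    + (System.nanoTime() - start) + " ns");
        }
    }
}
```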