Deprecated install support.
The PRE_INSTALL_SCRIPT and POST_INSTALL_SCRIPT properties are the old way to specify CMake scripts to run before and after installing a target. They are used only when the old INSTALL_TARGETS command is used to install the target. Use the install() command instead.
© 2000–2020 Kitware, Inc. and Contributors
Licensed under the BSD 3-clause License.
The Erlang SSL application implements the SSL/TLS/DTLS protocol for the currently supported versions; see the ssl(3) manual page.
By default SSL/TLS is run over the TCP/IP protocol even though you can plug in any other reliable transport protocol with the same Application Programming Interface (API) as the
gen_tcp module in Kernel. DTLS is by default run over UDP/IP, which means that application data has no delivery guarantees. Other transports, such as SCTP, may be supported in future releases.
If a client and a server want to use an upgrade mechanism, such as defined by RFC 2817, to upgrade a regular TCP/IP connection to a TLS connection, this is supported by the Erlang SSL application API. This can be useful, for example, for supporting HTTP and HTTPS on the same port and implementing virtual hosting. Note this is a TLS feature only.
To achieve authentication and privacy, the client and server perform a TLS/DTLS handshake procedure before transmitting or receiving any data. During the handshake, they agree on a protocol version and cryptographic algorithms, generate shared secrets using public key cryptographies, and optionally authenticate each other with digital certificates.
A symmetric key algorithm has one key only. The key is used for both encryption and decryption. These algorithms are fast, compared to public key algorithms (using two keys, one public and one private) and are therefore typically used for encrypting bulk data.
The keys for the symmetric encryption are generated uniquely for each connection and are based on a secret negotiated in the TLS/DTLS handshake.
The TLS/DTLS handshake protocol and data transfer is run on top of the TLS/DTLS Record Protocol, which uses a keyed-hash Message Authentication Code (MAC), or a Hash-based MAC (HMAC), to protect the message data integrity. From the TLS RFC: "A Message Authentication Code is a one-way hash computed from a message and some secret data. It is difficult to forge without knowing the secret data. Its purpose is to detect if the message has been altered."
A certificate is similar to a driver's license, or a passport. The holder of the certificate is called the subject. The certificate is signed with the private key of the issuer of the certificate. A chain of trust is built by having the issuer in its turn being certified by another certificate, and so on, until you reach the so called root certificate, which is self-signed, that is, issued by itself.
Certificates are issued by Certification Authorities (CAs) only. A handful of top CAs in the world issue root certificates. You can examine several of these certificates by clicking through the menus of your web browser.
Authentication of the peer is done by public key path validation as defined in RFC 3280. This means basically the following:
The server always sends a certificate chain as part of the TLS handshake, but the client only sends one if requested by the server. If the client does not have an appropriate certificate, it can send an "empty" certificate to the server.
The client can choose to accept some path evaluation errors, for example, a web browser can ask the user whether to accept an unknown CA root certificate. The server, if it requests a certificate, does however not accept any path validation errors. It is configurable if the server is to accept or reject an "empty" certificate as response to a certificate request.
Session data is by default kept by the SSL application in a memory storage, hence session data is lost at application restart or takeover. Users can define their own callback module to handle session data storage if persistent data storage is required. Session data is also invalidated 24 hours after it was saved, for security reasons. The amount of time the session data is to be saved can be configured.
By default the TLS/DTLS clients try to reuse an available session and by default the TLS/DTLS servers agree to reuse sessions when clients ask for it.
© 2010–2017 Ericsson AB
Licensed under the Apache License, Version 2.0.
Examples¶
Matrix multiplication¶
Here is a naive implementation of matrix multiplication using a HSA kernel:
@roc.jit
def matmul(A, B, C):
    i = roc.get_global_id(0)
    j = roc.get_global_id(1)

    if i >= C.shape[0] or j >= C.shape[1]:
        return

    tmp = 0
    for k in range(A.shape[1]):
        tmp += A[i, k] * B[k, j]
    C[i, j] = tmp

HSA provides a fast shared memory for workitems in a group to cooperatively compute on a task. The following implements a faster version of the square matrix multiplication using shared memory:

import numpy as np
from numba import roc, float32
from timeit import default_timer as timer

blocksize = 20
gridsize = 20

@roc.jit
def matmulfast(A, B, C):
    x = roc.get_global_id(0)
    y = roc.get_global_id(1)

    tx = roc.get_local_id(0)
    ty = roc.get_local_id(1)

    sA = roc.shared.array(shape=(blocksize, blocksize), dtype=float32)
    sB = roc.shared.array(shape=(blocksize, blocksize), dtype=float32)

    if x >= C.shape[0] or y >= C.shape[1]:
        return

    tmp = 0
    for i in range(gridsize):
        # preload
        sA[tx, ty] = A[x, ty + i * blocksize]
        sB[tx, ty] = B[tx + i * blocksize, y]
        # wait for preload to end
        roc.barrier(1)
        # compute loop
        for j in range(blocksize):
            tmp += sA[tx, j] * sB[j, ty]
        # wait for compute to end
        roc.barrier(1)

    C[x, y] = tmp

N = gridsize * blocksize
A = np.random.random((N, N)).astype(np.float32)
B = np.random.random((N, N)).astype(np.float32)
C = np.zeros_like(A)

griddim = gridsize, gridsize
blockdim = blocksize, blocksize

with roc.register(A, B, C):
    ts = timer()
    matmulfast[griddim, blockdim](A, B, C)
    te = timer()
    print("1st GPU time:", te - ts)

with roc.register(A, B, C):
    ts = timer()
    matmulfast[griddim, blockdim](A, B, C)
    te = timer()
    print("2nd GPU time:", te - ts)

ts = timer()
ans = np.dot(A, B)
te = timer()
print("CPU time:", te - ts)
np.testing.assert_allclose(ans, C, rtol=1e-5)
Because the shared memory is a limited resource, the code preloads a small block at a time from the input arrays. Then, it calls barrier() to wait until all threads have finished preloading before doing the computation on the shared memory. It synchronizes again after the computation to ensure all threads have finished with the data in shared memory before overwriting it in the next loop iteration.
The Clean Data Utility (DelOrpModItems.dll), also referred to as Delete Orphan Model Items, allows you to check for problems in your database records and, if required, to run clean-ups on those records. You should run this utility prior to upgrading your drawings.
It is recommended that you make regular back-ups of your database so that you can restore data in the event of database problems.
Database Report — Generates a report, written to the DBCleanup.txt file in your Temp folder, that helps you decide whether a manual cleanup alternative exists before using the Entire Database command to delete the problems from the database.
The report indicates the following problems:
Broken database relationships, such as those between the Equipment and Plant Item tables, or between the Symbol and Symbol Representation tables.
Orphan model items of the item types listed below under Model Items.
Entire Database — Runs the report and removes orphaned records from the plant database. This option includes all the clean-ups performed by the Model Items, OPCs, and Gaps options. Run this option only after running the Database Report option and examining the report.
Model Items — Finds and deletes any model item in the database that does not have a corresponding entry in the T_Representation table. The utility works on an item type basis and repairs the following model item types: Vessel, Mechanical, Exchanger, Equipment: Other, Equipment Component, Instrument, Nozzle, Piping Component, Ducting Component, Pipe Run, Signal Run, Duct Run, OPC, Item Note, Area Break, Room, and Room Component. Once the orphan model items for an item type are found, you can select any or all of the items and choose to delete them.
OPCs — Finds and repairs off-page connectors (OPCs) that have lost their associations with the OPC with which they were originally paired. If one OPC has lost the identity of its mated OPC, but the mated OPC still has the identity of the first OPC, then the OPC is considered repairable. To repair the OPC, the utility updates the identity information for the first OPC. However, if both the OPC and its mated OPC have lost the identities of each other, then the OPCs are considered non-repairable, and you are given the option to delete them.
Gaps — Repairs and updates gaps in the representation record with the proper item type. On rare occasions you will need to perform this operation if you have gapping problems in your drawings. Be careful not to select Yes for a symbol that is not a gap. If you select Yes for any symbol other than a gap, your data set may get corrupted.
Starting the Hive Metastore in CDH
Cloudera recommends that you deploy the Hive metastore, which stores the metadata for Hive tables and partitions, in "remote mode." In this mode the metastore service runs in its own JVM process and other services, such as HiveServer2, HCatalog, and Apache Impala communicate with the metastore using the Thrift network API.
After installing and configuring the Hive metastore, you can start the service.
To run the metastore as a daemon, the command is:
$ sudo service hive-metastore start
This section covers more involved topics such as mail configuration and setting up mail for an entire domain.

# host example.FreeBSD.org
example.FreeBSD.org has address 204.216.27.XX

In this example, mail sent directly to <[email protected]> should work without problems, assuming Sendmail is running correctly on example.FreeBSD.org.

For this example:

# host example.FreeBSD.org

When configuring Sendmail, the necessary D line is added to /etc/sendmail.cf.
To set up Fuse as a new network on Metamask. Click on the network selector at the top of the app and then choose "Custom RPC" from the list:
Then in the "New RPC URL" enter this address: New rpc URL:
Optionally you can add the full parameters:
Additional parameters:
ChainId: 0x7a
Explorer:
Symbol: Fuse
IPAM Guides Published
Hi there,
The IPAM team and I have just published design, deployment, and operations guidance for IP Address Management (IPAM) in Windows Server 2012.
- IPAM Overview : Is a brief description of IPAM in Server 2012.
- What's New in IPAM : Has recent changes in IPAM.
- Understanding IPAM : Provides an introduction with design and planning information.
- What is IPAM?
- IPAM Terminology
- Getting Started with IPAM
- IPAM Architecture
- IPAM Deployment Planning
- IPAM Deployment Guide : Provides detailed, click-by-click deployment instructions.
- Planning to Deploy IPAM
- Implementing Your IPAM Design Plan
- Checklist: Deploy IPAM
- Deploying IPAM Server
- Deploying IPAM Client
- Assigning IPAM Server and Administrator Roles
- Network Management with IPAM : Provides operational, troubleshooting, and best practices guidance.
- Using the IPAM Client Console
- Managing Server Inventory
- Managing IP Address Space
- Multi-server Management
- IP Address Tracking
- Operational Event Tracking
- Using Windows PowerShell with IPAM
- Best Practices
- Troubleshooting IPAM
- IPAM Backup and Restore
- Step-by-step: Configure IPAM to Manage Your IP Address Space : Provides procedures to set up IPAM in a test lab.
Note: If you’re looking for an IPAM quick start, see Getting Started with IPAM in the Understanding IPAM guide.
Greg Lindsay, Sr. Technical Writer, Windows Server (DNS, DHCP, IPAM).
Running a node with Docker
Running a Request Node with Docker is easy. There are only a few requirements:
- Docker installed on your system;
- A web3 provider (we recommend using a service like infura);
- An Ethereum wallet with some funds for gas (if you plan on creating requests through this node);
Launching the IPFS node
To launch the IPFS node run:
This command will launch the IPFS node with Request network configurations.
Launching the Request Node
To launch the Request node you can run:
The environment variables passed to the script are:
- MNEMONIC should be the node wallet mnemonic seed.
- WEB3_PROVIDER_URL should be the URL to your web3 provider.
- ETHEREUM_NETWORK_ID should be either 1 for Mainnet or 4 for Rinkeby.
- IPFS_HOST is the URL of your IPFS node. Here we use the Docker host URL.
That's it! Now your Node should be running and syncing to the network.
Give it some minutes to finish synchronizing and its API will be available.
If you want to know more about the available options you can pass to the node, you can check them here.
Using Docker Compose
We can (and should) use docker-compose to make it simpler to launch your local Request Node.
With Docker Compose installed, use the following docker-compose.yml file:
Now you can run:
Your node should start initializing.
Type Definition ed25519_dalek::SignatureError

type SignatureError = Error;
Errors which may occur while processing signatures and keypairs.
This error may arise due to:
Being given bytes with a length different to what was expected.
A problem decompressing r, a curve point, in the Signature, or the curve point for a PublicKey.
A problem with the format of s, a scalar, in the Signature. This is only raised if the high-bit of the scalar was set. (Scalars must only be constructed from 255-bit integers.)
Failure of a signature to satisfy the verification equation.
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Write-ECRImageScanningConfiguration
  -RepositoryName <String>
  -RegistryId <String>
  -ImageScanningConfiguration_ScanOnPush <Boolean>
  -Select <String>
  -PassThru <SwitchParameter>
  -Force <SwitchParameter>
If set to true, images will be scanned after being pushed. If this parameter is not specified, it will default to false and images will not be scanned unless a scan is manually started with the StartImageScan API.
AWS Tools for PowerShell: 2.x.y.z
The Clean Data utility allows you to check your database for broken database relationships or orphan drawing items and to clean up the database by deleting orphan model items. The Clean Data utility must be run from within the Smart P&ID modeler.
For easy access to this utility, you can create a custom menu in the Smart P&ID interface to run the Clean Data utility. For information about how to do this, see the topic Create a New Menu in the Smart P&ID User's Guide.
Log messages generated when orphaned records are deleted from the plant database are written to the DBCleanup.txt file in the folder assigned to the TEMP environment variable.
Log messages are placed in SPDelOrpModItems.log file in the folder assigned to the TEMP environment variable. The log file contains information about deleted items including the item type and SP_ID number.
Generate a Report
Open a drawing in Smart P&ID.
Click Tools > Custom Commands.
On the Custom Command dialog box, browse to ..\SmartPlant\P&ID Workstation\bin and double-click DelOrpModItems.dll.
On the Clean Data dialog box, click Database Report. The results are written to the DBCleanup.txt file in your Temp folder. This report helps you decide if a manual cleanup alternative exists before using the Entire Database command to automatically delete the problems from the database.
Perform Automatic Database Clean-Up

Before starting the clean-up, ensure that all other users have logged out of the plant.
Click Entire Database to generate the report and automatically delete the problems from the plant database. This command results in the deletion of items where there are problems of referential integrity or non-unique records and includes all the clean-ups performed by the Model Items, OPCs, and Gaps options.

Running this option will result in the deletion of many corrupted records in the database. To perform a less drastic set of clean-ups, skip this step and follow the rest of the steps in this procedure.
Click Model Items.
On the Delete Orphan Model Items dialog box, select each model item type from Item Type Names list to see if any orphan items exist in the database. The following model item types are checked: Vessel, Mechanical, Exchanger, Equipment: Other, Equipment Component, Instrument, Nozzle, Piping Component, Ducting Component, Pipe Run, Signal Run, Duct Run, OPC, Item Note, Area Break, Room, and Room Component.
In the List view, select the orphan model items to delete, and click Delete. Alternatively, click Delete All to select and delete all the items in the list view.
Click Close to return to the Clean Data dialog box.
On the Clean Data dialog box, click OPCs.
On the Repair OPCs dialog box, chose either repairable or non-repairable from the OPC Type list. Repairable OPC pairs retain one link out of two between the mates. Non-repairable OPC pairs retain neither link.
Choose the OPC pair you are interested in from the OPC list, and click Fix if it is a repairable pair or Delete if it is non-repairable.
Click Close to return to the Clean Data dialog box.
On the Clean Data dialog box, click Gaps to find and repair gaps that do not have the correct representation in the database. Be careful not to select Yes for a symbol that is not a gap. If you select Yes to any symbol other than a gap, you may corrupt your data set.
On the Clean Data dialog box, click Close to return to the design software.
Tabbed menus are a great way to add a multiset of information inside a mega menu. WP Mega Menu provides an easy implementation for this. Below you will find a step by step guide to creating a Tabbed Menu with WP Mega Menu.
Pre-requisites
Before getting started please make sure that you have the latest version of WP Mega Menu installed and activated. You can always find the latest version of WP Mega Menu from the WordPress.org plugin directory.
Step 1: Add WP Mega Menu To Your Site
At first, you need to create a Menu. To do that go to the Dashboard > Appearance > Menus
Check the enable option and choose a theme from the Mega Menu settings. After you add a new menu item, hover over the menu name and you will see a WP Mega Menu button. Click on it to go to the WP Mega Menu settings.
For reference, we named our menu as a “Tabbed Menu”. If you want to learn more about how to add WP Mega Menu to your menu, you can check out this part of the documentation.
Step 2: Add the ‘WPMM Grid Posts Widget’ to a Column
After clicking on the icon you will find the WP Mega Menu widget & configuration interface. Enable the Mega Menu option by turning on the toggle button. After that, you need to add a row. Choose your preferred row number and click on it to apply.
After you have added your row, from the widgets section of the left sidebar scroll down to the WPMM Grid Post widget. Drag & drop it to your preferred column section.
Step 3: Configure the WPMM Grid Posts Widget For Tabbed Menu
After you have added the WPMM Grid Post widget to your preferred row, you need to configure its settings by expanding the collapse button. Set the widget title, define the post order hierarchy, select the category of posts you want to show, define the number of columns & post per count.
Once you have finalized the settings, enable the Tabbed menu feature for this menu by clicking on the checkbox in the “Show Left Category of the Widget?” option. That’s it.
Now save your customization and once again save the WP Mega Menu to show it on your frontend.
When you begin using ThoughtSpot, the onboarding process starts automatically and guides you through a few basic scenarios.
Usually, you receive an email welcoming you to the onboarding process at ThoughtSpot.
To repeat user onboarding, see Revisit onboarding.
How onboarding works for the user
Your onboarding experience begins when you login for the first time.
Step 1: Get Started
The initial welcome screen provides an overview of the onboarding process, and offers a quick video.
Watch the video, and click Continue.
Alternatively, you can click Exit to homepage at the top right corner of the screen, and end the onboarding process. This option appears at this and every subsequent step.
Step 2: Recommended data source
This screen introduces the primary data source for onboarding. The administrator selects this source before you begin. Click Continue.
Step 3: Select a pinboard
Consider which of the initial pinboards to explore first, and click on it.
Step 4: View your insights
Examine the pinboard you selected, and learn what insights it provides.
Click Follow to receive periodic emails about this pinboard.
Repeating the onboarding process
[Optional] Any user can repeat their onboarding experience at any time. Simply select Profile from the user icon on the top right corner of the page, and under Preferences > New user onboarding, click Revisit Onboarding. See Revisit onboarding.
If you are a new user and you did not experience onboarding, please contact your administrator and request that they configure it for you and other new users.
You can always get additional help. As you start to use ThoughtSpot, we recommend that you review the Information Center.
Saia S-Bus OPC Server
Check out these articles and learn more details about the Saia S-Bus OPC Server Configuration tool and how to use it.
The configuration tool for the WEBfactory Saia S-Bus OPC Server is accessible via the Start Menu > Programs > WEBfactory 2010 > OPC Server Configuration > Saia S-Bus OPC Server.
Saia S-Bus OPC Server configuration tool
The Project menu features the following contextual options:
Project menu
The Edit menu features the following contextual options:
Edit menu
The Server menu features the following contextual options:
Server menu
The toolbar below the menu contains all commands that are accessible via the menus:
The Configurator toolbar
The Station properties section features the following settings:
Station properties panel
The Saia S-Bus OPC Server features the folowing filtering options:
Filter panel
The Item properties section features the following settings:
Item Properties panel
The General settings section features the following options:
General Settings panel
Smart Contracts¶
Contracts are special kinds of accounts that have code, an ABI, and state.
This is a technical description of the protocol and execution. It is recommended to interact with contracts using higher level API supported by aergocli and the various SDKs.
Deployment¶
Contracts are deployed by sending a transaction with the contract code as payload to the null receiver (empty receiver address). Upon execution, the contract state is created at an address calculated from the sender and the nonce of the deployment transaction. You can find out the created contract address in the transaction receipt.
See here for details about address generation.
Calling Contracts¶
To call contract methods, send a transaction to the contract’s address specifying the function name and arguments as JSON payload.
If you are a vendor, see documentation for using LAC to add products, templates, customers, create orders and activate licenses, as well as manage LAC administration, which includes:
If you are an end user, see documentation for using LAC to activate licenses, which includes:
If you have a question about using LAC, please contact our support team.
Other handy links:
Add Bugsnag to your EventMachine projects.
Bugsnag’s Ruby gem can be installed using Bundler by adding the gem to your
Gemfile:
gem 'bugsnag-em'
Don’t forget to run bundle install after updating your Gemfile.

To automatically capture unhandled errors in your EventMachine applications, you’ll need to implement
EventMachine.error_handler:
EventMachine.error_handler{|e| Bugsnag.notify(e) }
In order to correlate errors with customer reports, or to see a list of users who experienced each error, it is helpful to capture and display user information on your Bugsnag dashboard.
You can set this information using a
Bugsnag.with block as follows:
Bugsnag.with(user: {id: current_user.id, email: current_user.email}) do
  EM::next_tick do
    raise 'oops'
  end
end
For more information, see reporting handled errors.
In order to quickly reproduce and fix errors, it is often helpful to send additional application-specific diagnostic data to Bugsnag. This can be accomplished using the
Bugsnag.with call:
Bugsnag.with(user_id: '123') do
  # Your code here:
  http = EM::HttpRequest.new('').get
  http.errback { raise 'oops' }
  ...
end
Military veterans can be your most dedicated and hardest working employees if you give them a quality recruiting experience and fulfilling work to do. With over 20,000 veterans transitioning out of the service every month, this source of talent is completely renewable, year after year.
To help companies with the intricacies of recruiting this unique group of talented individuals, Candidit has developed a platform that verifies military skills, translates them to civilian terms, and quickly matches them to jobs based on their FiT analysis.
The process is as simple as 1-2-3, JST. Veterans upload their Joint Services Transcript, it is translated, and they are matched to jobs. The JST is a transcript of everything an individual did (both education and occupations) while serving that is currently being used by every branch of the military except the Air Force. It looks like this:
Once a JST has been uploaded to the system, it only take a minute or two for the user to start receiving their matches to a company's jobs. If they have civilian experience in addition to what can be found on their JST, they can add that to their profile via the resume builder to improve the quality of their matches.
To build this JST translator we explored over 6,000 military courses and occupations representing over 90,000 military competencies, and translated every one of them to our taxonomy of 30,000 competencies. From there we are able to match military Candidits to jobs based on their civilian competency set and the competencies required to do each job.
This unique translator is included as part of Candidit's Suite of Services.
The QUndoView class displays the contents of a QUndoStack. More...
This class was introduced in Qt 4.2.

Constructs a new view with parent parent and sets the observed group to group.

The view will update itself automatically whenever the active stack of the group changes.

Constructs a new view with parent parent and sets the observed stack to stack.
Returns the group displayed by this view.
If the view is not looking at group, this function returns
nullptr.
See also setGroup() and setStack().
Returns the stack currently displayed by this view. If the view is looking at a QUndoGroup, this the group's active stack.
See also setStack() and setGroup().
© The Qt Company Ltd
Licensed under the GNU Free Documentation License, Version 1.3.
Gemini allows you to map Gemini global groups with Active Directory groups. Users are then automatically added or removed from the Gemini groups according to the Active Directory groups. Note that you must have the Gemini scheduler service installed for the synchronization to work.
Enable and configure Active Directory integration in Gemin via the Administration –> Active Directory page:
You can choose to add new users automatically, so if a new user is added to an Active Directory group and that user is not in Gemini then the user will be added to Gemini automatically. Note that only users who have never logged in to AD will not be imported.
Once we have configured Gemini to synchronize with Active Directory and the Gemini scheduler service has been installed you can start mapping groups between Gemini and Active Directory via the Administration –> Global Groups page and click on the “Active Directory” tab:
You can add multiple Active Directory groups to a Gemini global group thus making users in these groups part of the Gemini group.
Introduction
Welcome to your Confluence toolbox! With ScriptRunner for Confluence you can do the following and more.
Maintain Confluence more easily
Make your Confluence maintenance a breeze with built-in Scripts that allow you to:
Automate everyday tasks
Create automations that makes your instance more efficient and powerful. Script Listeners & Script Jobs allow you to do things like:
Integrate with other apps
Automate functionally, promote good practice, and ensure tasks/content doesn’t slip through the cracks by integrating Confluence with your other apps.
ScriptRunner allows you to:
Automate your workflow by creating a Jira project whenever a Confluence page is created
Pull data from external systems displaying it in Confluence via the REST API, like Xero, SalesForce and Trello.
If you run into issues, contact our support portal
If you’re looking for help scripting or for examples from other users in the field, head over to the Atlassian Community.
Transforms position from world space to local space.
This function is essentially the opposite of Transform.TransformPoint, which is used to convert from local to world space.
Note that the returned position is affected by scale. Use Transform.InverseTransformDirection if you are dealing with direction vectors rather than positions.
// Calculate the transform's position relative to the camera.
using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour
{
    public Transform cam;
    public Vector3 cameraRelative;

    void Start()
    {
        cam = Camera.main.transform;
        Vector3 cameraRelative = cam.InverseTransformPoint(transform.position);

        if (cameraRelative.z > 0)
            print("The object is in front of the camera");
        else
            print("The object is behind the camera");
    }
}
Transforms the position x, y, z from world space to local space. The opposite of Transform.TransformPoint.
Note that the returned position is affected by scale. Use Transform.InverseTransformDirection if you are dealing with directions.
// Calculate the world origin relative to this transform.
using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour
{
    void Start()
    {
        Vector3 relativePoint = transform.InverseTransformPoint(0, 0, 0);

        if (relativePoint.z > 0)
            print("The world origin is in front of this object");
        else
            print("The world origin is behind of this object");
    }
}
32.2. ast — Abstract Syntax Trees¶
Source.
32.2.1. Node classes¶
- class ast.AST¶
This is the base of all AST node classes. The actual node classes are derived from the Parser/Python.asdl file, which is reproduced below. They are defined in the _ast C module and re-exported in ast.

There is one class defined for each left-hand side symbol in the abstract grammar (for example, ast.stmt or ast.expr). In addition, there is one class defined for each constructor on the right-hand side; these classes inherit from the classes for the left-hand side trees. For example, ast.BinOp inherits from ast.expr. For production rules with alternatives (aka “sums”), the left-hand side class is abstract: only instances of specific constructor nodes are ever created.
_fields¶
Each concrete class has an attribute _fields which gives the names of all child nodes.

Each instance of a concrete class has one attribute for each child node, of the type as defined in the grammar. For example, ast.BinOp instances have an attribute left of type ast.expr.

If these attributes are marked as optional in the grammar (using a question mark), the value might be None. If the attributes can have zero-or-more values (marked with an asterisk), the values are represented as Python lists. All possible attributes must be present and have valid values when compiling an AST with compile().
lineno¶
col_offset¶
Instances of ast.expr and ast.stmt subclasses have lineno and col_offset attributes. The lineno is the line number of source text (1-indexed so the first line is line 1) and the col_offset is the UTF-8 byte offset of the first token that generated the node. The UTF-8 offset is recorded because the parser uses UTF-8 internally.
The constructor of a class ast.T parses its arguments as follows:

- If there are positional arguments, there must be as many as there are items in T._fields; they will be assigned as attributes of these names.
- If there are keyword arguments, they will set the attributes of the same names to the given values.
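For example, to create and populate an ast.UnaryOp node, you could build it field by field or pass everything to the constructor at once; the following sketch mirrors the kind of example the library documentation gives:

node = ast.UnaryOp()
node.op = ast.USub()
node.operand = ast.Num()
node.operand.n = 5
node.operand.lineno = 0
node.operand.col_offset = 0
node.lineno = 0
node.col_offset = 0

or the more compact

node = ast.UnaryOp(ast.USub(), ast.Num(5, lineno=0, col_offset=0), lineno=0, col_offset=0)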
32.2.2. Abstract Grammar¶
The abstract grammar is currently defined as follows:
-- ASDL's 7 builtin types are:
-- identifier, int, string, bytes, object, singleton, constant
--
-- singleton: None, True or False
-- constant can be None, whereas None means "no value" for object.

module Python
{
    mod = Module(stmt* body)
        | Interactive(stmt* body)
        | Expression(expr body)

        -- not really an actual node but useful in Jython's typesystem.
        | Suite(stmt* body)

    stmt = FunctionDef(identifier name, arguments args,
                       stmt* body, expr* decorator_list, expr? returns)
          | AsyncFunctionDef(identifier name, arguments args,
                             stmt* body, expr* decorator_list, expr? returns)

          | ClassDef(identifier name,
             expr* bases,
             keyword* keywords,
             stmt* body,
             expr* decorator_list)
          | Return(expr? value)

          | Delete(expr* targets)
          | Assign(expr* targets, expr value)
          | AugAssign(expr target, operator op, expr value)
          -- 'simple' indicates that we annotate simple name without parens
          | AnnAssign(expr target, expr annotation, expr? value, int simple)

          -- use 'orelse' because else is a keyword in target languages
          | For(expr target, expr iter, stmt* body, stmt* orelse)
          | AsyncFor(expr target, expr iter, stmt* body, stmt* orelse)
          | While(expr test, stmt* body, stmt* orelse)
          | If(expr test, stmt* body, stmt* orelse)
          | With(withitem* items, stmt* body)
          | AsyncWith(withitem* items, stmt* body)

          | Raise(expr? exc, expr? cause)
          | Try(stmt* body, excepthandler* handlers, stmt* orelse, stmt* finalbody)
          | Assert(expr test, expr? msg)

          | Import(alias* names)
          | ImportFrom(identifier? module, alias* names, int? level)

          | Global(identifier* names)
          | Nonlocal(identifier* names)
          | Expr(expr value)
          | Pass | Break | Continue

          attributes (int lineno, int col_offset)

    expr = BoolOp(boolop op, expr* values)
         | BinOp(expr left, operator op, expr right)
         | UnaryOp(unaryop op, expr operand)
         | Lambda(arguments args, expr body)
         | IfExp(expr test, expr body, expr orelse)
         | Dict(expr* keys, expr* values)
         | Set(expr* elts)
         | ListComp(expr elt, comprehension* generators)
         | SetComp(expr elt, comprehension* generators)
         | DictComp(expr key, expr value, comprehension* generators)
         | GeneratorExp(expr elt, comprehension* generators)
         -- the grammar constrains where yield expressions can occur
         | Await(expr value)
         | Yield(expr? value)
         | YieldFrom(expr value)
         -- need sequences for compare to distinguish between
         -- x < 4 < 3 and (x < 4) < 3
         | Compare(expr left, cmpop* ops, expr* comparators)
         | Call(expr func, expr* args, keyword* keywords)
         | Num(object n) -- a number as a PyObject.
         | Str(string s) -- need to specify raw, unicode, etc?
         | FormattedValue(expr value, int? conversion, expr? format_spec)
         | JoinedStr(expr* values)
         | Bytes(bytes s)
         | NameConstant(singleton value)
         | Ellipsis
         | Constant(constant value)

         -- the following expression can appear in assignment context
         | Attribute(expr value, identifier attr, expr_context ctx)
         | Subscript(expr value, slice slice, expr_context ctx)
         | Starred(expr value, expr_context ctx)
         | Name(identifier id, expr_context ctx)
         | List(expr* elts, expr_context ctx)
         | Tuple(expr* elts, expr_context ctx)

         attributes (int lineno, int col_offset)

    expr_context = Load | Store | Del | AugLoad | AugStore | Param

    slice = Slice(expr? lower, expr? upper, expr? step)
          | ExtSlice(slice* dims)
          | Index(expr value)

    boolop = And | Or

    operator = Add | Sub | Mult | MatMult | Div | Mod | Pow | LShift
             | RShift | BitOr | BitXor | BitAnd | FloorDiv

    unaryop = Invert | Not | UAdd | USub

    cmpop = Eq | NotEq | Lt | LtE | Gt | GtE | Is | IsNot | In | NotIn

    comprehension = (expr target, expr iter, expr* ifs, int is_async)

    excepthandler = ExceptHandler(expr? type, identifier? name, stmt* body)
                    attributes (int lineno, int col_offset)

    arguments = (arg* args, arg? vararg, arg* kwonlyargs, expr* kw_defaults,
                 arg? kwarg, expr* defaults)

    arg = (identifier arg, expr? annotation)
          attributes (int lineno, int col_offset)

    -- keyword arguments supplied to call (NULL identifier for **kwargs)
    keyword = (identifier? arg, expr value)

    -- import name with optional 'as' alias.
    alias = (identifier name, identifier? asname)

    withitem = (expr context_expr, expr? optional_vars)
}
32.2.3. ast Helpers¶

Apart from the node classes, the ast module defines these utility functions and classes for traversing abstract syntax trees:
ast.parse(source, filename='<unknown>', mode='exec')¶

Parse the source into an AST node. Equivalent to compile(source, filename, mode, ast.PyCF_ONLY_AST).
Warning
It is possible to crash the Python interpreter with a sufficiently large/complex string due to stack depth limitations in Python’s AST compiler.
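A minimal added illustration: parse() pairs naturally with dump() when you want to inspect the tree it returns.

import ast

# Parse a one-line module and look at its structure.
tree = ast.parse("answer = 6 * 7")
print(type(tree).__name__)   # Module
print(ast.dump(tree))        # shows the nested Assign/BinOp nodes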
ast.literal_eval(node_or_string)¶

Safely evaluate an expression node or a string containing a Python literal or container display. The string or node provided may only consist of the following Python literal structures: strings, bytes, numbers, tuples, lists, dicts, sets, booleans, and None.
This can be used for safely evaluating strings containing Python values from untrusted sources without the need to parse the values oneself. It is not capable of evaluating arbitrarily complex expressions, for example involving operators or indexing.
Warning
It is possible to crash the Python interpreter with a sufficiently large/complex string due to stack depth limitations in Python’s AST compiler.
Changed in version 3.2: Now allows bytes and set literals.
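A short sketch of the intended use:

import ast

# Turn text from an untrusted source into Python data without eval().
config = ast.literal_eval("{'retries': 3, 'hosts': ['a', 'b']}")
print(config['retries'])   # 3

# Anything beyond plain literals is rejected rather than executed.
try:
    ast.literal_eval("__import__('os').remove('important.txt')")
except ValueError:
    print("rejected")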
ast.fix_missing_locations(node)¶

When you compile a node tree with compile(), the compiler expects lineno and col_offset attributes for every node that supports them. This is rather tedious to fill in for generated nodes, so this helper adds these attributes recursively where not already set, by setting them to the values of the parent node. It works recursively starting at node.
ast.increment_lineno(node, n=1)¶
Increment the line number of each node in the tree starting at node by n. This is useful to “move code” to a different location in a file.
ast.copy_location(new_node, old_node)¶

Copy source location (lineno and col_offset) from old_node to new_node if possible, and return new_node.
ast.iter_fields(node)¶

Yield a tuple of (fieldname, value) for each field in node._fields that is present on node.
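A small added example of what this yields:

import ast

assign = ast.parse("x = 1").body[0]        # an ast.Assign node
for name, value in ast.iter_fields(assign):
    print(name, type(value).__name__)      # "targets list" then "value Num"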
ast.iter_child_nodes(node)¶
Yield all direct child nodes of node, that is, all fields that are nodes and all items of fields that are lists of nodes.
ast.walk(node)¶
Recursively yield all descendant nodes in the tree starting at node (including node itself), in no specified order. This is useful if you only want to modify nodes in place and don’t care about the context.
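For instance, walk() makes it easy to gather simple statistics over a tree (a small added example):

import ast
from collections import Counter

tree = ast.parse("def f(x):\n    return x + 1\n")

# Count how many nodes of each type appear anywhere in the tree.
counts = Counter(type(node).__name__ for node in ast.walk(tree))
print(counts["FunctionDef"], counts["BinOp"])   # 1 1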
- class ast.NodeVisitor¶

A node visitor base class that walks the abstract syntax tree and calls a visitor function for every node found. This function may return a value which is forwarded by the visit() method.
This class is meant to be subclassed, with the subclass adding visitor methods.
visit(node)¶

Visit a node. The default implementation calls the method called self.visit_classname where classname is the name of the node class, or generic_visit() if that method doesn’t exist.
generic_visit(node)¶

This visitor calls visit() on all children of the node.

Note that child nodes of nodes that have a custom visitor method won’t be visited unless the visitor calls generic_visit() or visits them itself.
Don’t use the NodeVisitor if you want to apply changes to nodes during traversal. For this a special visitor exists (NodeTransformer) that allows modifications.
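As an added sketch, a subclass that records the names of called functions might look like this:

import ast

class CallCollector(ast.NodeVisitor):
    """Collect the names of plain calls such as foo(...)."""

    def __init__(self):
        self.calls = []

    def visit_Call(self, node):
        if isinstance(node.func, ast.Name):
            self.calls.append(node.func.id)
        # Keep walking so nested calls and arguments are visited too.
        self.generic_visit(node)

collector = CallCollector()
collector.visit(ast.parse("print(len('abc'))"))
print(collector.calls)   # ['print', 'len']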
- class ast.NodeTransformer¶

A NodeVisitor subclass that walks the abstract syntax tree and allows modification of nodes.
The NodeTransformer will walk the AST and use the return value of the visitor methods to replace or remove the old node. If the return value of the visitor method is None, the node will be removed from its location, otherwise it is replaced with the return value. The return value may be the original node in which case no replacement takes place.

Keep in mind that if the node you are operating on has child nodes, you must either transform the child nodes yourself or call the generic_visit() method for the node first. Usually you use the transformer like this: node = YourTransformer().visit(node).
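The transformer below is modelled on the usual example from the library documentation: it rewrites every name lookup foo into a dictionary lookup data['foo'].

import ast

class RewriteName(ast.NodeTransformer):

    def visit_Name(self, node):
        # Replace the Name node with data['<name>'], keeping its context.
        return ast.copy_location(
            ast.Subscript(
                value=ast.Name(id='data', ctx=ast.Load()),
                slice=ast.Index(value=ast.Str(s=node.id)),
                ctx=node.ctx,
            ),
            node,
        )

tree = ast.parse("x + y", mode="eval")
tree = ast.fix_missing_locations(RewriteName().visit(tree))
code = compile(tree, "<ast>", "eval")
print(eval(code, {"data": {"x": 1, "y": 2}}))   # 3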
ast.dump(node, annotate_fields=True, include_attributes=False)¶

Return a formatted dump of the tree in node. This is mainly useful for debugging purposes. The returned string will show the names and the values for fields. This makes the code impossible to evaluate, so if evaluation is wanted, annotate_fields must be set to False. Attributes such as line numbers and column offsets are not dumped by default. If this is wanted, include_attributes can be set to True.
See also
Green Tree Snakes, an external documentation resource, has good details on working with Python ASTs.
Combining JWT-based authentication with basic access authentication¶
In this example we will make a service with basic HTTP authentication for Haskell clients and other programs, as well as with JWT-based authentication for web browsers. Web browsers will still use basic HTTP authentication to retrieve JWTs though.
Warning: this is insecure when done over plain HTTP, so TLS should be used. See warp-tls for that.
While basic authentication comes with Servant itself, servant-auth and servant-auth-server packages are needed for the JWT-based one.
This recipe uses the following ingredients:
{-# LANGUAGE OverloadedStrings, TypeFamilies, DataKinds, DeriveGeneric, TypeOperators #-}

import Data.Aeson
import GHC.Generics
import Data.Proxy
import System.IO
import Network.HTTP.Client (newManager, defaultManagerSettings)
import Network.Wai.Handler.Warp
import Servant as S
import Servant.Client
import Servant.Auth as SA
import Servant.Auth.Server as SAS
import Control.Monad.IO.Class (liftIO)
import Data.Map as M
import Data.ByteString (ByteString)

port :: Int
port = 3001
Authentication¶
Below is how we’ll represent a user: usually user identifier is handy to keep around, along with their role if role-based access control is used, and other commonly needed information, such as an organization identifier:
data AuthenticatedUser = AUser { auID :: Int
                               , auOrgID :: Int
                               } deriving (Show, Generic)
The following instances are needed for JWT:
instance ToJSON AuthenticatedUser
instance FromJSON AuthenticatedUser
instance ToJWT AuthenticatedUser
instance FromJWT AuthenticatedUser
We’ll have to use a bit of imagination to pretend that the following
Map is a database connection pool:
type Login = ByteString
type Password = ByteString
type DB = Map (Login, Password) AuthenticatedUser
type Connection = DB
type Pool a = a

initConnPool :: IO (Pool Connection)
initConnPool = pure $ fromList
  [ (("user", "pass"), AUser 1 1)
  , (("user2", "pass2"), AUser 2 1)
  ]
See the “PostgreSQL connection pool” recipe for actual connection pooling, and we proceed to an authentication function that would use our improvised DB connection pool and credentials provided by a user:
authCheck :: Pool Connection
          -> BasicAuthData
          -> IO (AuthResult AuthenticatedUser)
authCheck connPool (BasicAuthData login password) =
  pure $ maybe SAS.Indefinite Authenticated $ M.lookup (login, password) connPool
Warning: make sure to use a proper password hashing function in functions like this: see bcrypt, scrypt, pgcrypto.
Unlike
Servant.BasicAuth,
Servant.Auth uses
FromBasicAuthData
type class for the authentication process itself. But since our
connection pool will be initialized elsewhere, we’ll have to pass it
somehow: it can be done via a context entry and
BasicAuthCfg type
family. We can actually pass a function at once, to make it a bit more
generic:
type instance BasicAuthCfg = BasicAuthData -> IO (AuthResult AuthenticatedUser)

instance FromBasicAuthData AuthenticatedUser where
  fromBasicAuthData authData authCheckFunction = authCheckFunction authData
API¶
Test API with a couple of endpoints:
type TestAPI = "foo" :> Capture "i" Int :> Get '[JSON] () :<|> "bar" :> Get '[JSON] ()
We’ll use this for server-side functions, listing the allowed
authentication methods using the
Auth combinator:
type TestAPIServer = Auth '[SA.JWT, SA.BasicAuth] AuthenticatedUser :> TestAPI
But
Servant.Auth.Client only supports JWT-based authentication, so
we’ll have to use regular
Servant.BasicAuth to derive client
functions that use basic access authentication:
type TestAPIClient = S.BasicAuth "test" AuthenticatedUser :> TestAPI
Client¶
Client code in this setting is the same as it would be with just
Servant.BasicAuth, using
servant-client:
testClient :: IO ()
testClient = do
  mgr <- newManager defaultManagerSettings
  let (foo :<|> _) = client (Proxy :: Proxy TestAPIClient)
                            (BasicAuthData "name" "pass")
  res <- runClientM (foo 42)
    (mkClientEnv mgr (BaseUrl Http "localhost" port ""))
  hPutStrLn stderr $ case res of
    Left err -> "Error: " ++ show err
    Right r -> "Success: " ++ show r
Server¶
Server code is slightly different – we’re getting
AuthResult here:
server :: Server TestAPIServer
server (Authenticated user) = handleFoo :<|> handleBar
  where
    handleFoo :: Int -> Handler ()
    handleFoo n = liftIO $ hPutStrLn stderr $
      concat ["foo: ", show user, " / ", show n]

    handleBar :: Handler ()
    handleBar = liftIO testClient
Catch-all for
BadPassword,
NoSuchUser, and
Indefinite:
server _ = throwAll err401
With
Servant.Auth, we’ll have to put both
CookieSettings and
JWTSettings into context even if we’re not using those, and we’ll
put a partially applied
authCheck function there as well, so that
FromBasicAuthData will be able to use it, while it will use our
connection pool. Otherwise it is similar to the usual way:
mkApp :: Pool Connection -> IO Application
mkApp connPool = do
  myKey <- generateKey
  let jwtCfg = defaultJWTSettings myKey
      authCfg = authCheck connPool
      cfg = jwtCfg :. defaultCookieSettings :. authCfg :. EmptyContext
      api = Proxy :: Proxy TestAPIServer
  pure $ serveWithContext api cfg server
Finally, the main function:
main :: IO ()
main = do
  connPool <- initConnPool
  let settings =
        setPort port $
        setBeforeMainLoop (hPutStrLn stderr ("listening on port " ++ show port)) $
        defaultSettings
  runSettings settings =<< mkApp connPool
Usage¶
Now we can try it out with
curl. First of all, let’s ensure that it
fails with
err401 if we’re not authenticated:
$ curl -v ''
…
< HTTP/1.1 401 Unauthorized

$ curl -v ':[email protected]:3001/bar'
…
< HTTP/1.1 401 Unauthorized
Now let’s see that basic HTTP authentication works, and that we get JWTs:
$ curl -v ':[email protected]:3001/bar'
…
< HTTP/1.1 200 OK
…
< Set-Cookie: XSRF-TOKEN=lQE/sb1fW4rZ/FYUQZskI6RVRllG0CWZrQ0d3fXU4X0=; Path=/; Secure
< Set-Cookie: JWT-Cookie; Path=/; HttpOnly; Secure
And authenticate using JWTs alone, using the token from
JWT-Cookie:
curl -v -H 'Authorization: Bearer' ''
…
< HTTP/1.1 200 OK
This program is available as a cabal project here.
Order Archive¶
Overview¶

The Order Archive extension makes order management easy and convenient. Admin will receive an email notification whenever an order is archived according to schedule. Archived orders are either kept visible or hidden in the customer's account, depending on the admin's settings. Furthermore, admins can also use the API to manage archived orders.
In addition, this extension is fully compatible with the Mageplaza Delete Order extension, so admins can delete an order even after it has been archived.
How to download and install¶
How to use¶
- Admin can transfer orders from the Grid order by default to the Grid Order Archive, perform mass actions like the default Grid Order
- Set the time to store orders automatically
- Edit, turn off the feature Send email notification when the order is archived according to schedule
- Check API
- Use the Command line
How to Configure¶
1. Configuration¶
From the Admin panel, go to Stores > Configuration > Mageplaza > Order Archive
1.1. General Configuration¶
- Enable: Select Enable = Yes to enable the module
- Show Archive Order for Customer(s): Select No to have the orders transferred to the Archive be hidden from My Orders in the Customer’s Dashboard
1.2. Schedule Configuration¶
- Includes settings related to automatic order storage. An order is only stored in schedule when and only if it satisfies all the conditions on Purchase Date, Order Status, Customer Group, Store View, Shipping Country and Order Total
- Schedule For:
- Set the cycle automatically according to daily, weekly or monthly
- With Weekly, schedule will run automatically on every Monday
- With Monthly, schedule will run automatically on the 1st of every month
- Start Time:
- Set the automatic storage time of each cycle
- By that time of day, the schedule will be run automatically
- Excluded Period:
- Set the time interval for automatic schedule application
- Orders older than the specified number of days (counted back from the current date) will be moved to the Archive
- For example, Period = 10, today is December 31st, 2018, all orders created from December 21st, 2018 and earlier will be transferred to Archive (if they meet the conditions below)
- Order Status:
- Orders with the selected statuses can be archived automatically according to schedule
- When selecting Please Select, no order can automatically be archived
- Customer Group(s): Automatic Schedule applies only to orders purchased by customers of the selected Customer group
- Store View(s): Select Store view where the order is placed
- Shipping Countries:
- All Countries: Check all Orders
- Specific Country: Check for orders with Shipping Address at Country selected
- Order Total less than: Order’s Maximum Total Paid
- In addition to the way the schedule is run automatically, Admin can also click the Run Manually button to store orders that meet conditions whenever they want.
1.3. Email Notification¶
- Enable: Select Yes to enable the email sending feature to admin every time an order is stored (including manual or automatic storage)
- Sender: There are 5 default types of Magento Sender for Admin to choose: General Contact, Sales Representative, Customer Support, Custom Email 1, Custom Email 2
- Send To:
- The chosen emails will receive a notification when the Archive is ordered
- Each email is separated by commas (,)
2. Order Archives Grid¶
- Similar to the default Magento Grid Order, the Order Archives Grid also has basic features such as Filter, Add Columns or Export and View Order
- In Grid, Admin can perform 3 main actions
- UnArchive: The selected orders will be transferred to the default Grid Order
- Delete: The selected Orders will be deleted from the database. This feature only works when the store owner installs the Mageplaza Delete Order module
- View: The Detail Order page will be displayed
3. Command line¶
Admins can use the following command to archive or unarchive any order that they want:
php bin/magento order:archive order_id
php bin/magento order:unarchive order_id
4. API¶
Order Archive features API integration with the Rest API commands of Magento. By using the available order structures to check the order information, invoice, credit memo of the order, the admin can quickly capture the details of an order. Details about Rest API Magento here
Instructions for using Postman to check order information with API
Step 1: Get Access Token¶
- Log in to Postman, in the Headers section select Key = Content-Type, Value = application/json
- At the Body tab, insert {"username": "demo", "password": "demo123"}, where demo/demo123 are the username/password used to log in to your backend
- Use the Post and Send method with the following command:
- Access Key will be displayed in the Body section
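For readers who prefer a script to Postman, the same two steps can be sketched in Python; the base URL and credentials below are placeholders, and the paths assume the standard Magento 2 REST endpoints:

import requests

BASE_URL = "https://your-store.example.com"   # placeholder: your store's URL

# Step 1: request an admin access token.
resp = requests.post(
    BASE_URL + "/rest/V1/integration/admin/token",
    json={"username": "demo", "password": "demo123"},
    headers={"Content-Type": "application/json"},
)
resp.raise_for_status()
token = resp.json()   # the token is returned as a plain JSON string

# Step 2: call the order APIs with the token as a Bearer credential.
order = requests.get(
    BASE_URL + "/rest/V1/orders/1",
    headers={"Authorization": "Bearer " + token},
)
print(order.status_code)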
- MSA-based observations are affected by slit losses due to off-center target placement in the shutter and throughput calibration inaccuracies derived from target acquisition.
Introduction
Parent articles: NIRSpec Operations → NIRSpec MOS Operations
One of the main observing modes of NIRSpec is multi-object spectroscopy (MOS) with the micro-shutter assembly (MSA). The MSA consists of a fixed grid of roughly a quarter million configurable shutters that are 0.20" × 0.46" in size. Its shutters can be opened in adjacent rows to create customizable and positionable spectroscopy slits (slitlets) on prime science targets of interest. At any given pointing, MSA targets will map to different positions within each target shutter. Properly accounting for the slit loss errors requires precise knowledge of those positions within the shutters. Because of the very small shutter size, NIRSpec MSA spectral data quality will benefit significantly from accurate catalog astrometry of planned science source positions.
NIRSpec MOS calibration slit loss uncertainties vs. planning catalog accuracy
See also: NIRSpec MOS Operations - Pre-Imaging Using NIRCam
Slit losses in the MSA shutter and planning constraints
The NIRSpec MSA is comprised of a fixed grid of micro-shutters; science sources of interest cannot all be perfectly centered within their configured spectral slits and, as a result of the very small MSA shutter aperture size, moderate flux can be lost outside of the slit. Slit throughput loss is a function of wavelength, and it also changes with relative placement of the science sources within the MSA shutters (e.g., see Table 1 in NIRSpec MPT - Planner article.).
Figure 1 presents the worst-case point source flux throughput in an MSA shutter as a function of wavelength for the different Source Centering Constraints 1 in NIRSpec MSA Planning Tool (MPT).
The Tightly Constrained curve (red) means that science sources will only be observed if their planned position within the shutter would result in observing at least 85% of the total possible flux that can be acquired through an open shutter (at 2.95 μm). The Constrained source placement (green) restricts sources to planned positions that result in at least 75% of the total possible flux throughput. The Midpoint position (blue) will have at least 62% of the throughput possible within an open shutter area. The Entire Open Shutter Area constraint (cyan) will plan the source anywhere within the open 0. 2 × 0.45 shutter region, and the Unconstrained plan constraint (magenta) could have a science source centered precisely behind the ~69 mas wide MSA mounting bars (recommended only for extended science objects).
The NIRSpec MSA science sources could be located anywhere within their planned shutter centering constraint. As a result, the actual slit throughput for a given source observed through the NIRSpec MSA can be in the range from the appropriate colored curve in Figure 1 to the perfectly centered point source curve presented in black. The worst case shutter throughput occurs when sources are centered near to the corners of the allowed shutter area, since the PSF is truncated on two sides. In the Unconstrained case (magenta), a science source of interest might be centered behind both the horizontal and vertical MSA bars in the shutter corner. In this situation only a small fraction of the flux of a point source will make it into the open shutter to be measured by the spectrograph. The Unconstrained position option is recommended only for spatially extended targets where additional flux would make it into the slit, compared to the point source calculation presented in Figure 1.
Slit loss uncertainty as a result of catalog astrometric accuracy
The NIRSpec calibration pipeline will apply a slit loss throughput correction for point sources based on the planned position within an MSA shutter. A perfectly centered point source with optimal TA astrometric accuracy (5 mas) will have no excess flux error; the slit loss can be calibrated at the optimal level achievable by the pipeline (estimated to be approximately a ~6% term). However, if the TA astrometric accuracy is relaxed, the slit loss correction will also carry a greater uncertainty.
Figure 2 presents the possible excess flux calibration uncertainty due to slit losses as a function of the astrometric accuracy of the catalog used for TA planning. These calculations are done assuming a point source, resulting in worst case slit loss throughput uncertainty. The curves are anchored to wavelength defining the source centering constraint: 2.95 μm. The figure describes the geometrical slit throughput only, and does not represent the total end-to-end instrument throughput or sensitivity. Note that the figure throughput curves represent the worst-case scenario of a point source observed in the corners (highest flux loss regions) of an MSA shutter margin.
Figure 2 shows that the accuracy of the flux calibration of the science sources will depend on the catalog relative astrometric accuracy. The ability to effectively limit slit losses using the different Source Centering Constraints in MPT will also depend on the catalog relative astrometric accuracy. These factors should be taken into consideration when deciding whether or not pre-imaging with NIRCam would benefit the spectroscopic observations.
The purpose of Figure 2 is to present the excess flux calibration uncertainties that can result from the imperfect knowledge of the position of point sources in NIRSpec spectral slits from relaxed TA catalog constraints. This “estimated worst case excess MSA slit loss error” is the excess calibration error that would result if a point source target is planned to be at the edge position in its Source Centering Constraint area, but uncertainties in target acquisition result in slightly offset placement compared to the planned position. For example, a point source planned using 20 mas relative astrometric catalog planning accuracy in a Tightly Constrained plan could have an excess MSA slit loss throughput calibration error of ~15% because of the position uncertainty from target acquisition. These worst-case excess calibration errors are very difficult to correct because of imprecise knowledge of the final source centering in a NIRSpec MSA slit.
For NIRSpec MSA science observations and target acquisition, very high quality planning astrometry will limit calibration errors that result from uncertain spectral source positioning. As seen from Figure 2, in field-relative astrometry of 5–10 mas or better is needed to limit the excess flux calibration error for point source observations.
References
Beck et al. 2016, SPIE, 9910, 12
Planning JWST NIRSpec MSA spectroscopy using NIRCam pre-images
This page has no comments. | https://jwst-docs.stsci.edu/display/JTI/NIRSpec+MOS+Operations+-+Slit+Losses?reload=true | 2019-05-19T13:10:03 | CC-MAIN-2019-22 | 1558232254882.18 | [] | jwst-docs.stsci.edu |
Games end when certain conditions are fulfilled: a player reaches a certain goal, time runs out or all players but one are eliminated.
Set End Conditions to end a game when a condition is fulfilled.
Machinations diagrams use End Conditions to specify end states. End Conditions are square Nodes with a smaller, filled square inside (the same symbol that is used to indicate the stop button on most audio and video players).
Machinations checks the End Conditions in a diagram at each time step and stops running immediately when any End Condition is fulfilled. In other words, if you have multiple End Conditions in a diagram, it will stop running as soon as any one of the End Conditions is met. For such an example, watch the video below.
End Conditions must be activated by an Activator. In this case, Activators are used to specify the end state of the game. | https://docs.machinations.io/nodes/end-conditions | 2019-05-19T12:18:13 | CC-MAIN-2019-22 | 1558232254882.18 | [] | docs.machinations.io |
Imaging Technology
At VRA, we utilize the most advanced retinal imaging available..
| https://www.retina-docs.com/technology | 2019-05-19T12:24:12 | CC-MAIN-2019-22 | 1558232254882.18 | [array(['https://static1.squarespace.com/static/5be3093b2971145c1371cf3e/t/5c2ec0268a922d8f0fe9b8d7/1546567729494/mac%2Bhole%2B3.jpg',
'mac+hole+3.jpg'], dtype=object)
array(['https://static1.squarespace.com/static/5be3093b2971145c1371cf3e/t/5c2ec03c4fa51a57280cf415/1546567970783/fa1.png',
'fa1.png'], dtype=object)
array(['https://static1.squarespace.com/static/5be3093b2971145c1371cf3e/t/5c2ec36a575d1f7eb19557c9/1546568560122/11179_006.jpg',
'11179_006.jpg'], dtype=object)
array(['https://static1.squarespace.com/static/5be3093b2971145c1371cf3e/t/5c2ec19e6d2a732a2d41ebcc/1546568100840/scan_image_Retinal--Choroidal-detachment.jpg',
'scan_image_Retinal--Choroidal-detachment.jpg'], dtype=object)
array(['https://static1.squarespace.com/static/5be3093b2971145c1371cf3e/t/5c2ec0f80e2e7230e30d08d7/1546567938153/hvf.jpg',
'hvf.jpg'], dtype=object) ] | www.retina-docs.com |
Render Modes
RadSocialShare an image sprite is used to create the icons. The HTML is as lightweight and semantic as possible and CSS3 is used for the
border-redius. improvesSocialShare,SocialShare uses internally RadComboBox and RadWindow for its CompactPopup and RadCaptcha for its SendEmail form. These controls inherit the RenderMode of the SocialShare.
Setting Render Mode
There are two ways to configure the rendering mode of the controls:
- The RenderMode property in the markup or in the code-behind that can be used for a particular instance:
<telerik:RadSocialShare</telerik:RadSocialShare>
RadSocialShare1.RenderMode = Telerik.Web.UI.RenderMode.Lightweight;
RadSocialShare1.RenderMode = Telerik.Web.UI.RenderMode.Lightweight
- A global setting in the web.config file that will affect the entire application, unless a concrete value is specified for a given control instance:
<appSettings> <add key="Telerik.Web.UI.SocialShare.RenderMode" value="lightweight" /> </appSettings> | https://docs.telerik.com/devtools/aspnet-ajax/controls/socialshare/mobile-support/render-modes | 2019-05-19T12:22:19 | CC-MAIN-2019-22 | 1558232254882.18 | [] | docs.telerik.com |
Delete Orders¶
Overview¶ receive an email notification when Order is Delete by schedule.
How to download and install¶
How to Configure¶
I.Configuration¶
From the Admin panel, go to
Stores > Configuration > Mageplaza > Delete Orders.
2. Automatic Delete Configuration¶
An Order can only be deleted automatically by schedule when and only if it satisfies all the conditions of Purchase Date, Order Status, Customer Group, Store View, Shipping Country, and Order Total.
- Schedule For:
- Set the schedule for delete order daily, weekly or monthly.
- With Daily, the schedule runs automatically by date.
- With Weekly, the schedule runs automatically on every Monday.
- With Monthly, the schedule runs automatically on the 1st of the month.
- Start Time:
- Set the starting time to delete order
- By that time of day, the schedule will be run automatically.
- Excluded Period:
- Enter the period to apply to delete order before it.
- For example, Period = 10, today is December 31st, 2018, all orders created before December 21st, 2018 will be deleted (if they meet the conditions below).
- Order Status: Select order status to be applied Delete order.
- Customer Group(s): Choose the customer groups whose orders will be deleted auto by schedule
- Store View(s): Select Store view where Order is purchased to apply for Delete Orders
- Shipping Countries:
- All Countries: Check all Orders.
- Specific Country: Check for orders with Shipping Address at Country selected.
- Order Total less than: Limit the order’s Maximum Value to apply to delete order.
- Besides delete orders automatically, Admin can also click the “Run Manually” button to delete specific orders that meet all conditions
- Note: Admin can also delete orders by using the command line
php bin / magento order: delete order_id. For example Admin wants to delete the order with ID = 15, admin on the command line running the command
php bin / magento order: delete 15.
3. Email Notification¶
- Enable: Select yes to enable email sending to Admin every time an Order is deleted (including manual or auto-deletion).
- Sender: There are 5 default types of Magento Sender for Admin to choose: General Contact, Sales Representative, Customer Support, Custom Email 1, Custom Email 2.
- Send To:
- Insert the email who receive notification when Order is Delete.
- Each email =must be separated by commas (,).
II. Grid¶
From Admin panel, go to
Sales > Orders.
- Admin can delete orders created by clicking on the order ID
- In case Admin wants to delete all order, click Select All, the system will select all created orders.
- After Select order, admin click
Action > Deleteto delete order.
- Also, Admin can delete order by clicking to View of the order.
- Then click Delete.
- The system will show a popup, click OK to delete order
| https://docs.mageplaza.com/delete-orders/index.html | 2019-05-19T12:54:20 | CC-MAIN-2019-22 | 1558232254882.18 | [array(['https://i.imgur.com/dFJpzZb.png',
'https://i.imgur.com/dFJpzZb.png'], dtype=object)
array(['https://i.imgur.com/1r50764.png',
'https://i.imgur.com/1r50764.png'], dtype=object)
array(['https://i.imgur.com/wsWjXc4.png',
'https://i.imgur.com/wsWjXc4.png'], dtype=object)
array(['https://i.imgur.com/1b3EGcY.png',
'https://i.imgur.com/1b3EGcY.png'], dtype=object)
array(['https://i.imgur.com/e3SrAHU.png',
'https://i.imgur.com/e3SrAHU.png'], dtype=object)
array(['https://i.imgur.com/kg4ikwL.png',
'https://i.imgur.com/kg4ikwL.png'], dtype=object)
array(['https://i.imgur.com/1NSnKah.png',
'https://i.imgur.com/1NSnKah.png'], dtype=object)
array(['https://i.imgur.com/iuFrIGv.png',
'https://i.imgur.com/iuFrIGv.png'], dtype=object)
array(['https://i.imgur.com/p7N4glD.png',
'https://i.imgur.com/p7N4glD.png'], dtype=object)
array(['https://i.imgur.com/SNDHFwT.png',
'https://i.imgur.com/SNDHFwT.png'], dtype=object)] | docs.mageplaza.com |
Configure a swapfile location for the host to determine the default location for virtual machine swapfiles in the vSphere Web Client.
About this task
By default, swapfiles for a virtual machine are located on a the virtual machine on a local datastore rather than in the same directory as the virtual machine swapfiles. If the virtual machine is stored on a local datastore, storing the swapfile with the other virtual machine files will not improve vMotion.
Prerequisites
Required privilege:
Procedure
- Browse to the host in the vSphere Web Client navigator.
- Select the Manage tab and click Settings.
- Under Virtual Machines, click Swap file location.
The selected swapfile location is displayed. Manage tab. To change the swapfile location for such a host, edit the cluster settings.
- Click Edit.
- Select where to store the swapfile.
- (Optional) If you select Use a specific datastore, select a datastore from the list.
- Click OK.
Results
The virtual machine swapfile is stored in the location you selected. | https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.resmgmt.doc/GUID-12B8E0FB-CD43-4972-AC2C-4B4E2955A5DA.html | 2019-05-19T12:20:22 | CC-MAIN-2019-22 | 1558232254882.18 | [] | docs.vmware.com |
class oeofstream : public oeostream
This class represents oeofstream.
Implements an output stream capable of writing to a specified file or to any generic data source with an associated file descriptor.
The following methods are publicly inherited from oeostream:
The following methods are publicly inherited from oestream:
oeofstream() oeofstream(int ofd) oeofstream(const char *filename) oeofstream(const std::string &filename)
Creates a new oeofstream. If a valid file descriptor (‘fd’) is passed to the constructor, the stream will use the associated data target for output. Otherwise, the constructors expect the name of file to be written to. If the named file does not exist, it will be created. If no parameter is passed to the constructor, the stream can later be opened using the open command defined in oeostream or the oeofstream.openfd command defined in this class.
bool append(const char *filename) bool append(const std::string &filename)
Opens the specified file to which data will be appended. Returns whether or not the file was successfully opened (or created as necessary).
int fd() const
Returns the system dependent file descriptor associated with the stream’s data target. | https://docs.eyesopen.com/toolkits/java/oechemtk/OEPlatformClasses/oeofstream.html | 2019-05-19T13:51:43 | CC-MAIN-2019-22 | 1558232254882.18 | [] | docs.eyesopen.com |
Start a baseline load-sharing set transfer
The snapmirror initialize-ls-set command initializes and updates a set of load-sharing mirrors. This command is usually used after the snapmirror create command is used to create a SnapMirror relationship for each of the destination volumes in the set of load-sharing mirrors. The initial transfers to empty load-sharing mirrors are baseline transfers done in parallel. During a baseline transfer Data ONTAP takes a Snapshot copy on the source volume to capture the current image of the source volume and transfers all of the Snapshot copies on the source volume to each of the destination volumes.
After the snapmirror initialize-ls-set command successfully completes, the last Snapshot copy transferred is made the exported Snapshot copy on the destination volumes.
The parameter that identifies the set of load-sharing mirrors is the source volume. Data and Snapshot copies are transferred from the source volume to all up-to-date destination volumes in the set of load-sharing mirrors.
This command is only supported for SnapMirror relationships with the field "Relationship Capability" showing as "Pre 8.2" in the output of the snapmirror show command.
To initialize the group of load-sharing mirrors for the source endpoint //vs1.example.com/dept_eng, type the following command:
cluster1::> snapmirror initialize-ls-set -source-path //vs1.example.com/dept_eng | https://docs.netapp.com/ontap-9/topic/com.netapp.doc.dot-cm-cmpr-950/snapmirror__initialize-ls-set.html | 2019-05-19T12:36:57 | CC-MAIN-2019-22 | 1558232254882.18 | [] | docs.netapp.com |
The OME team is committed to providing frequent, project-wide upgrades both with bug fixes and new functionality. We try to make the schedule for these releases as public as possible. You may want to take a look at the roadmap for exactly what will go into a release. We always inform our mailing lists of the development status.
See the full details of OMERO 5.1.4 features in the Announcements forum.
This guide aims to be as definitive as possible so please do not be put off by the level of detail; upgrading should be a straightforward process.
Before starting the upgrade, please ensure that you have reviewed and satisfied all the system requirements with correct versions for installation. In particular, ensure that you are running a suitable version of PostgreSQL to enable successful upgrading of the database, otherwise the upgrade script aborts with a message saying that your database server version is less than the OMERO prerequisite. If you are upgrading from a version earlier than OMERO 5.0 then first review the 5.0 upgrade notes regarding previous changes in OMERO.
You may wish to review the open file limits. Please consult the Too many open file descriptors section for further details.
The passwords and logins used here are examples. Please consult the Which user account and password do I use where? section for explanation. In particular, be sure to replace the values of db_user and omero_database with the actual database user and database name for your installation.
If you generated configuration stanzas using omero web config which enable OMERO.web via Apache or Nginx, they may include hard-coded links to your previous version of OMERO. We recommend using a future-proof symlink if possible, so that these stanzas do not need updating with each OMERO server upgrade. See also the OMERO.web deployment page.
OMERO.web plugins are very closely integrated into the webclient. For this reason, it is possible that an update of OMERO will cause issues with an older version of a plugin. It is best when updating the server to also install any available plugin updates according to their own documentation.
All cached Bio-Formats memoization files created at import time will be invalidated by the server upgrade. This means the very first loading of each image after upgrade will be slower. After re-initialization, a new memoization file will be automatically generated and OMERO will be able to load images in a performant manner again.
These files are stored under BioFormatsCache in the OMERO data directory, e.g. /OMERO/BioFormatsCache. You may see error messages in your log files when an old memoization file is found; to avoid these messages delete everything under this directory before starting the upgraded server.
If you encounter errors during an OMERO upgrade, database upgrade, etc. you should retain as much log information as possible and notify the OMERO.server team via the mailing lists available on the community page.
All OMERO products check themselves with the OmeroRegistry for update notifications on startup. If you wish to disable this functionality you should do so now as outlined on the OMERO upgrade checks page.
For all users, the basic workflow for upgrading your OMERO.server is listed below. Please refer to each section for additional details.
The first thing to do before any upgrade activity is to backup your database.
$ pg_dump -h localhost -U db_user -Fc -f before_upgrade.db.dump omero_database
Before copying the new binaries, stop the existing server:
$ cd OMERO.server $ bin/omero web stop $ bin/omero admin stop
Your OMERO configuration is stored using config.xml in the etc/grid directory under your OMERO.server directory. Assuming you have not made any file changes within your OMERO.server distribution directory, you are safe to follow the following upgrade procedure:
$ cd .. $ mv OMERO.server OMERO.server-old $ unzip OMERO.server-5.1.4-ice3x-byy.zip $ ln -s OMERO.server-5.1.4-ice3x-byy OMERO.server $ cp OMERO.server-old/etc/grid/config.xml OMERO.server/etc/grid
Note
ice3x and byy need to be replaced by the appropriate Ice version and build number of OMERO.server.
Warning
This section only concerns users upgrading from a 5.0 or earlier server. If upgrading from a 5.1 server, you do not need to upgrade the database.
OMERO 5.1 requires a Unicode-encoded database; without it, the upgrade script aborts with a message warning how the “OMERO database character encoding must be UTF8”. From psql:
# SELECT datname, pg_encoding_to_char(encoding) FROM pg_database; datname | pg_encoding_to_char ------------+--------------------- template1 | UTF8 template0 | UTF8 postgres | UTF8 omero | UTF8 (4 rows)
Alternatively, simply run psql -l and check the output. If your OMERO database is not Unicode-encoded with UTF8 then it must be re-encoded.
If you have the pg_upgradecluster command available then its --locale option can effect the change in encoding. Otherwise, create a Unicode-encoded dump of your database: dump it as before but to a different dump file and with an additional -E UTF8 option. Then, create a Unicode-encoded database for OMERO and restore that dump into it with pg_restore, similarly to effecting a rollback. If required to achieve this, the -E UTF8 option is accepted by both initdb and createdb.
You must use the same username and password you have defined during OMERO.server installation. The 5.1 upgrade script should execute in a short time.
$ cd OMERO.server $ psql -h localhost -U db_user omero_database < sql/psql/OMERO5.1__1/OMERO5.0__0.sql Password for user db_user: ... ... status --------------------------------------------------------------------- + + + YOU HAVE SUCCESSFULLY UPGRADED YOUR DATABASE TO VERSION OMERO5.1__1 + + + (1 row)
Note
If you perform the database upgrade using SQL shell, make sure you are connected to the database using db_user before running the script. See this forum thread for more information.
After you have run the upgrade script, you may want to optimize your database which can both save disk space and speed up access times.
$ psql -h localhost -U db_user omero_database -c 'REINDEX DATABASE "omero_database" FORCE;' $ psql -h localhost -U db_user omero_database -c 'VACUUM FULL VERBOSE ANALYZE;'
If any new official scripts have been added under lib/scripts or if you have modified any of the existing ones, then you will need to backup your modifications. Doing this, however, is not as simple as copying the directory over since the core developers will have also improved these scripts. In order to facilitate saving your work, we have turned the scripts into a Git submodule which can be found at.
For further information on managing your scripts, refer to OMERO.scripts. If you require help, please contact the OME developers.
If you changed the directory name where the 5.1.4 server code resides, make sure to update any system environment variables. Before restarting the server, make sure your PATH and PYTHONPATH system environment variables are pointing to the new locations.
Your memory settings should be copied along with etc/grid/config.xml, but you can check the current settings by running omero admin jvmcfg. See Memory configuration for more information.
The generated web-server configurations for Nginx and Apache have been revised in OMERO 5.1. It is highly recommended that you regenerate your FastCGI Configuration (Unix/Linux) or WSGI Configuration (Unix/Linux), remembering to merge in any of your own modifications if necessary. See What’s new for OMERO 5.1 for sysadmins for details of changes.
If necessary ensure you have set up a regular task to clear out any stale OMERO.web session files as described in OMERO.web Maintenance.
Following a successful database upgrade, you can start the server.
$ cd OMERO.server $ bin/omero admin start
If anything goes wrong, please send the output of omero admin diagnostics to [email protected].
Start OMERO.web with the following command:
$ bin/omero web start
If the upgraded database or the new server version do not work for you, or you otherwise need to rollback to a previous database backup, you may want to restore a database backup. To do so, create a new database,
$ createdb -h localhost -U postgres -E UTF8 -O db_user omero_from_backup
restore the previous archive into this new database,
$ pg_restore -Fc -d omero_from_backup before_upgrade.db.dump
and configure your server to use it.
$ bin/omero config set omero.db.name omero_from_backup | https://docs.openmicroscopy.org/omero/5.1.4/sysadmins/server-upgrade.html | 2019-05-19T12:40:42 | CC-MAIN-2019-22 | 1558232254882.18 | [] | docs.openmicroscopy.org |
This document describes the support for SQL foreign key constraints introduced in SQLite version 3.6.19 (2009-10-14).:
CREATE TABLE artist( artistid INTEGER PRIMARY KEY, artistname TEXT ); CREATE TABLE track( trackid INTEGER, trackname TEXT, trackartist INTEGER -- Must map to an artist.artistid! );:
CREATE TABLE track( trackid INTEGER, trackname TEXT, trackartist INTEGER, FOREIGN KEY(trackartist) REFERENCES artist(artistid) );:
trackartist IS NULL OR EXISTS(SELECT 1 FROM artist WHERE artistid=trackartist):'; (2009-10-14) -:
sqlite> PRAGMA foreign_keys = ON;! reported if:
The last bullet above is illustrated by the following:
CREATE TABLE parent2(a, b, PRIMARY KEY(a,b)); CREATE TABLE child8(x, y, FOREIGN KEY(x,y) REFERENCES parent2); -- Ok CREATE TABLE child9(x REFERENCES parent2); -- Error!).:
CREATE TABLE artist( artistid INTEGER PRIMARY KEY, artistname TEXT ); CREATE TABLE track( trackid INTEGER, trackname TEXT, trackartist INTEGER REFERENCES artist ); CREATE INDEX trackindex ON track(trackartist);:
CREATE TABLE album( albumartist TEXT, albumname TEXT, albumcover BINARY, PRIMARY KEY(albumartist, albumname) ); CREATE TABLE song( songid INTEGER, songartist TEXT, songalbum TEXT, songname TEXT, FOREIGN KEY(songartist, songalbum) REFERENCES album(albumartist, albumname) );: affect the operation of foreign key actions. It is not possible to disable recursive foreign key actions.
SQLite is in the Public Domain. | http://docs.w3cub.com/sqlite/foreignkeys/ | 2018-02-18T02:44:53 | CC-MAIN-2018-09 | 1518891811352.60 | [] | docs.w3cub.com |
- Run Forward Path tasks
- Virtualization solution
Forward Path tasks are typically used to automate the creation of production-ready App-V and XenApp packages, based on logic within the Forward Path report. However, Forward Path tasks can be configured to do many other tasks, such as copying files and sending emails. Forward Path tasks are controlled by Forward Path task scripts that are configured to run based on a value in the Outcome column in a Forward Path report. Forward Path reports are controlled by scenarios.
After you create or import Forward Path scenarios and task scripts, you can run tasks and monitor their status.
You can change the default active scenario in the Forward Path Logic Editor.
The lower part of the screen shows the progress and the error log. Some task scripts are dependent on the successful configuration of Install Capture and a virtual machine. See Install Capture for more information. | https://docs.citrix.com/de-de/dna/7-9/reporting/forward-path/dna-forward-path-tasks.html | 2018-02-18T03:13:47 | CC-MAIN-2018-09 | 1518891811352.60 | [] | docs.citrix.com |
Autoload¶
Core and most of plugins have been fitted with class autoloaders.
These autoloaders (where present) are located in
plugins/%plugin name%/include/autoload.php
These files are auto-generated so please do not edit!
When you add a new class, you need to regenerate the corresponding
autoload.php file.
To do so, run the following command from the Tuleap sources:
$ make autoload-docker
Hint
There is a tool that will remove all instances of
require_once from
all files in a given directory.
$ tools/utils/autoload/generate.sh plugin/%plugin name%/include/
Tip: run a
git diff to carefully check all changes made by the tool! | http://tuleap-documentation.readthedocs.io/en/latest/developer-guide/autoload.html | 2018-02-18T03:21:50 | CC-MAIN-2018-09 | 1518891811352.60 | [] | tuleap-documentation.readthedocs.io |
Channels
Channels allow users to subscribe to different release cadences for an app, be it by major version bumps or in-development releases.
You can provide users with automated upgrades for daily releases, beta, candidate or stable releases. They also let you manage major software series and provide fixes to users wishing to stay on a specific series.
Structure of a channel
A release channel is created and named using the following tuple:
<track>/<risk level>/<branch>
Syntax
- track (string): Indicates a release series with a free-form string such as “
1.0”, “
2”, “
trusty”, etc. When omitted, the default track (“
latest”) is assumed.
- risk level (string, “
stable”, “
candidate”, “
beta” or “
edge”): indicates the stability users can expect from the software.
- branch (optional, string): indicates a temporary branch derived from a risk level. Example:
fix-for-bug123.
In practice, you will use this syntax when releasing or installing software from the command-line.
At any time, you can see the channel mapping of your own published snaps with the
snapcraft status <snap> command.
See the command-line usage section for practical examples.
Discoverability of snaps
The snap search API (used by the
snap find command) only returns results for snaps in channels with a “stable” risk level and no branch. This ensures stability for users.
Nevertheless, if you know the name of a snap, the
snap info <snap> command will give you its complete tracks and risk levels map.
Track
Tracks allow you to publish different series of your software (1.0, 2.0, etc.) and let users follow and get automated upgrades from a specific one.
Overview
- By default, snaps are published to a track called
latest. This is also the default for users when they install a snap, for example
snap install my-app --betaimplies
--channel=latest/beta.
- Users do not get automatically moved between tracks. It is the user’s decision to not install snaps from the
latesttrack, or to move back to it or any other track. Installing a snap from a track is done with the
--channel=<track>/<risk level>flag.
- To create a track you need to request it and get it approved by the developer community. This process happens in the Snapcraft forum, see the requests guidelines.
Risk level
There are four risk levels available for snaps, that denote the stability of revisions they contain.
Overview
- Note that in developer discussions and some documentation, the “risk level” is often referred as the “channel”, since most snaps only use the default track and no branches.
- By default, the
snap installcommand installs snaps from the
stablelevel.
- When a channel is explicitly closed with the
snapcraft closecommand, users following this channel are automatically moved to the channel with the next safest risk level in the same track. For example, if you run
snapcraft close my-app 1.0/edge, users following this channel will be moved to the
1.0/betachannel.
Risk levels meaning
stable: what most users will consume and as the name suggests, should be your most polished, stable and tested versions.
candidate: used to vet uploads that should require no further code changes before moving to stable.
beta: used to provide preview releases of tested changes.
edge: for your most recent changes, probably untested and with no guarantees attached.
Make sure you follow these guidelines, as they have an impact on the discoverability of snaps in search results.
Risk level restrictions: confinement and grade
In your
snapcraft.yaml, you can declare the development status of your snap with the
grade keyword and its confinement policy with the
confinement one.
Depending on these, your snap can be restricted to certain levels.
Branch
Branches are temporary and created on demand when releasing a snap. Their purpose is to provide your users with an easy way to test your bug fixes.
Overview
- Branches are created with the
snapcraft releasecommand. See command-line usage for examples.
- Branches are automatically closed after 30 days without a new revision being released into them. The expiration date of a branch can be checked at any time by the publisher of a snap using the
snapcraft status <snap>command, which provides a complete map of tracks, risk levels, branches and their expiration dates.
- Users following a branch will be automatically moved to the risk level the branch is attached to. For example, if you tell users to install your snap with the following command
snap install my-app --channel=beta/fix-for-bug123, after 30 days, the branch will close and these users will be moved to the
betachannel, as if they had used
snap install my-app --channel=beta.
- Branches are not visible in the
snap infocommand output unless you are following one.
Command-line usage
Note: when using tracks and channels from the command-line, when track or channel is omitted,
latest or
stable is assumed.
Releasing a snap
To release a snap revision to a channel, the command to use is:
snapcraft release <snap> <revision> <channel>
Examples
Releasing revision 12 of “my-app” to the “beta” level of the “latest” default track:
Channel syntax:
latest/beta
$ snapcraft release my-app 12 beta Or $ snapcraft release my-app 12 latest/beta
Releasing revision 12 of “my-app” to the “stable” level of the “1.0” track:
Channel syntax:
1.0/stable
$ snapcraft release my-app 12 1.0 Or $ snapcraft release my-app 12 1.0/stable
Releasing revision 12 of “my-app” to the “fix-for-bug123” branch of the “stable” level of the “latest” track:
Channel syntax:
latest/stable/fix-for-bug123
$ snapcraft release my-app 12 stable/fix-for-bug123 Or $ snapcraft release my-app 12 latest/stable/fix-for-bug123
Installing a snap
To install a snap from a specific channel, the command to use is:
snap install <snap> --channel=<channel>
Examples
Installing the latest stable version of “my-app”:
Channel syntax:
latest/stable
$ snap install my-app Or $ snap install --stable Or $ snap install --channel=latest/stable
Installing the beta version of “my-app” from the 1.0 track:
Channel syntax:
1.0/beta
$ snap install my-app --channel=1.0/beta
Installing “my-app” from the fix-for-bug123 branch of the “stable” level of the “latest” track:
Channel syntax:
latest/stable/fix-for-bug123
$ snap install my-app --channel=stable/fix-for-bug123 Or $ snap install my-app --channel=latest/stable/fix-for-bug123
Viewing the channel mapping of a snap
To view the map of channels of a snap, the command to use is:
As a publisher (complete mapping, including branches expiration date):
snapcraft status <snap>
For example:
$ snapcraft status my-app Track Arch Channel Version Revision Expires at latest amd64 stable 2.2 155 candidate ↑ - beta 2.3 180 edge daily 187 edge/fix-for-bug123 daily 189 2017-09-16T14:03:06.079634 1.0 amd64 stable 1.9 88 candidate - - beta - - edge - -
As a user (public mapping: no branches):
snap info <snap>
For example:
$ snap info my-app [...] channels: stable: 2.2 (155) 43MB - candidate: ↑ beta: 2.3 (180) 37MB - edge: daily (187) 37MB devmode 1.0/stable: 1.9 (88) 42MB - 1.0/candidate: ↑ 1.0/beta: - 1.0/edge: - | https://docs.snapcraft.io/reference/channels | 2018-02-18T03:25:40 | CC-MAIN-2018-09 | 1518891811352.60 | [] | docs.snapcraft.io |
Robot Operating System (ROS)
Snapcraft builds on top of the
catkin tool, familiar to any ROS developer, to create snaps for people to install on Linux.
What problems do snaps solve for ROS applications?
ROS itself is distributed via the OSRF’s own Debian archive, along with many community-supported tools. It’s possible to get your own application into their archive as well, but it requires that the application is open-source. You’re also left with the question of how to update ROS and your application on a robotic platform that has already been shipped. With snapcraft it’s just one command to bundle a specific ROS version along with your application into a snap that works anywhere and can be automatically updated.
Here are some snap advantages that will benefit many ROS projects:
- Bundle all the runtime requirements, including the exact version of ROS, system libraries, etc.
- Expand the distributions supported beyond just Ubuntu.
- Directly control the delivery of application updates.
- Extremely simple creation of daemons.
Getting started
Let’s take a look at the talker and listener out of the ROS tutorials, and show how simple that system is to snap.
ROS Tutorials
Snaps are defined in a single yaml file placed in the root of your project. Here is the entire
snapcraft.yaml for this project. Let’s break it down.
name: ros-talker-listener version: 0.1 summary: ROS Talker/Listener Example description: | This example requires roscore as well as a talker and listener. grade: devel confinement: devmode parts: ros-tutorials: source: source-branch: kinetic-devel plugin: catkin rosdistro: kinetic source-space: roscpp_tutorials/ catkin-packages: [roscpp_tutorials] apps: run: command: roslaunch roscpp_tutorials talker_listener.launch-talker-listener version: 0.1 summary: ROS Talker/Listener Example description: | This example requires roscore as well as a: ros-tutorials. Parts can point to local directories, remote git repositories, or tarballs.
The Catkin plugin will bundle
roscore in the snap. It will also use
rosdep to determine the dependencies of the
catkin-packages provided, download them from the ROS archive, and unpack them into the snap. Finally, it will build the
catkin-packages specified.
Important note: Most ROS developers run out of the
devel space. As a result, it’s easy to forget the importance of good install rules, i.e. rules for installing every component of the package necessary to run, or every component necessary to use a given library. The Catkin packages you’re building must have good install rules, or Snapcraft won’t know which components to place into the snap. Make sure you install binaries, libraries, header files, launch files, etc.
parts: ros-tutorials: source: source-branch: kinetic-devel plugin: catkin rosdistro: kinetic source-space: roscpp_tutorials/ catkin-packages: [roscpp_tutorials]
Apps
Apps are the commands and services exposed to end users. If your command name matches the snap
name, users will be able run the command directly. If the names differ, then apps are prefixed with the snap
name (
ro.
Here we simply run a launch file that brings up roscore along with the talker and listener.
apps: run: command: roslaunch roscpp_tutorials talker_listener.launch-talker-listener snapcraft
The resulting snap can be installed locally. This requires the
--dangerous flag because the snap is not signed by the Snap Store. The
--devmode flag acknowledges that you are installing an unconfined application:
sudo snap install ros-talker-listener_*.snap --devmode --dangerous
You can then try it out:
$ ros-talker-listener.run <snip> SUMMARY ======== PARAMETERS * /rosdistro: kinetic * /rosversion: 1.12.7 NODES / listener (roscpp_tutorials/listener) talker (roscpp_tutorials/talker) auto-starting new master process[master]: started with pid [919] ROS_MASTER_URI= setting /run_id to a2132f48-a959-11e7-a19a-346895ed0f23 process[rosout-1]: started with pid [932] started core service [/rosout] process[listener-2]: started with pid [935] process[talker-3]: started with pid [936] [ INFO] [1507158810.260508402]: hello world 0 [ INFO] [1507158810.360553002]: hello world 1 [ INFO] [1507158810.460584229]: hello world 2 [ INFO] [1507158810.460985451]: I heard: [hello world 2] [ INFO] [1507158810.560586692]: hello world 3 [ INFO] [1507158810.560894817]: I heard: [hello world 3] [ INFO] [1507158810.660587011]: hello world 4
Removing the snap is simple too:
sudo snap remove ros-talker-listenerrossnap
Be sure to update the
name: in your
snapcraft.yaml to match this registered name, then run
snapcraft again.
Upload your snap
Use snapcraft to push the snap to the Snap Store.
snapcraft push --release=edge myross Catkin. - rosinstall-files: (list of strings) List of rosinstall files to merge while pulling. Paths are relative to the source. - 'target' attribute.)
You can view them locally by running:
snapcraft help catkin.
For example, while the ros_tutorials have proper install rules, say you were creating a snap of an upstream ROS application that didn’t, and you wanted a launch file out of it. You could make use of the
install keyword to get around the lack of install rules:
parts: ros-tutorials: source: plugin: catkin rosdistro: kinetic catkin-packages: [no_install_rules] install: | mkdir -p "$SNAPCRAFT_PART_INSTALL/opt/ros/kinetic/share/no_install_rules" cp -r no_install_rules/launch "$SNAPCRAFT_PART_INSTALL/opt/ros/kinetic/share/no_install_rules/" | https://docs.snapcraft.io/build-snaps/ros | 2018-02-18T03:20:59 | CC-MAIN-2018-09 | 1518891811352.60 | [] | docs.snapcraft.io |
Managing Imply Clusters
An Imply cluster consists of the processes that make up Druid, including data servers, query servers, and master servers, along with Pivot.
The way you create and scale a cluster differs depending on how you are running Imply. In an Imply Cloud installation, you can add clusters from the Imply Manager home page, as described in the Quickstart.
With Kubernetes, you can define the cluster to be created using a Helm chart. Servers are automatically added to the cluster when the Kubernetes scheduler creates new pods. For more information on cluster management with Kubernetes, see Deploy with Kubernetes.
Create a clusterCreate a cluster
Imply Manager users with the
ManageClusters permission can create clusters.
To create a cluster, from the Imply Manager home page, click the New cluster button. The new cluster configuration page appears, as follows:
Configure the general settings and choose the instance types for the cluster.
For a highly available cluster, you must have at least 3 master servers. For more high availability information, see Druid High availability.
In the Advanced config section, default settings appear for deep storage and metadata store settings. Zookeeper settings are configurable as well in Imply Private deployments. They are not available for modification in Imply Cloud.
For Imply Private, you can enter Druid server configuration settings in the Server properties fields. Available properties are in Druid configuration reference. In Imply Cloud and GCP enhanced, appropriate values are calculated for you at cluster creation time based on your chosen instance types.
When finished, click Create cluster and confirm the operation to submit the cluster for creation. It may take fifteen to twenty minutes to start up resources for the cluster.
You can track progress from the cluster Overview page for the new cluster. For details, click the number next to messages in the STATUS section of the page. The message log appears:
When the deployment is finished, Running appears next to the cluster state in the overview page. See Viewing cluster status and capacity usage for more information about cluster status.
With Imply Cloud, behind the scenes, clusters are created by a set of Amazon CloudFormation scripts built for you by the Imply team. You shouldn't need to access or change the scripts manually, but it can be helpful to know of their operation behind the scenes, particularly if asked to access the CloudFormation scripts in your AWS account by Imply support in certain troubleshooting scenarios.
Also note that Imply Cloud by default deploys servers across multiple availability zones (AZs) in a single region. This helps to ensure that the services are highly available.
Connect to your clusterConnect to your cluster
Accessing PivotAccessing Pivot
In Imply Cloud and Enhanced GCP, users can access Pivot by clicking Open from the cluster actions menu found on the cluster's overview page.
On Imply Private, users can access Pivot at.
Accessing Druid Web ConsoleAccessing Druid Web Console
Click Load Data in the Pivot window to open the Druid Web Console. See the Quickstart for information on using the Web Console.
API AccessAPI Access
In Imply Cloud and Enhanced GCP, to get API connection information for the cluster, from the respective cluster's overview page in the Imply Manager UI, click the API tab.
Note that if using Imply Cloud or have deployed to a cloud host, these can only be accessed from within the VPC.
For Imply Private deployments, see the Druid API reference for API access information.
Viewing cluster status and capacity usageViewing cluster status and capacity usage
The cluster overview page lets you view the status and health of the cluster at a glance. The top of the page shows the current status of the cluster and the server types that make up the cluster.
The disk utilization of the cluster appears in the Used column of the clusters list. The utilization value reflects the disk usage of the cluster, or more specifically, the segment cache usage compared to the segment cache size for the cluster as a whole. This information helps you determine when you need to scale the cluster or take other remediation actions.
In considering disk usage, note that a certain amount of disk space is reserved for internal Imply operations, and is therefore shown as occupied in the disk usage graph. Specifically, about 25 GB is reserved for purposes of temporary storage space for ingestion, heap dumps, log files, and for other internal processes.
For more information about cluster status and operations, you can access the process logs from across the cluster from the Manager UI:
Updating a ClusterUpdating a Cluster
Imply Manager users with the
ManageClusters permission can update clusters.
To update a cluster, navigate to the Setup tab of the corresponding cluster, and change the parameters that you would like. The type of updates permissible depends on the deployment mode. The following shows the upgrade options for AWS.
Click the Apply Changes button. Before any changes are made, the manager UI informs you of the type of update that will be made.
After confirming the change, you will be brought to the cluster overview page, where you can monitor the progress of the update. Click the Changes button to see the list of changes being made in this update.
When the update finishes successfully the cluster state will show as
RUNNING.
Stopping a ClusterStopping a Cluster
Imply Manager users with the
ManageClusters permission can stop clusters.
To stop a cluster, navigate to the corresponding cluster's overview page, and click the Stop button. You will be prompted to confirm the operation.
Once the operation completes, the cluster state will show as
STOPPED.
You can restart the cluster from the overview page by pressing the Start button
Terminating a ClusterTerminating a Cluster
Imply Manager users with the
ManageClusters permission can terminate clusters. Note, this operation is irreversible; once a cluster is terminated, it cannot be used again.
To terminate a cluster, navigate to the corresponding cluster's overview page, and click the Terminate button. You will be prompted to confirm the operation.
Once the operation completes, the cluster state will show as
TERMINATED. | https://docs.imply.io/2021.02/cluster-management/ | 2021-04-10T18:31:10 | CC-MAIN-2021-17 | 1618038057476.6 | [array(['/2021.02/assets/imply-manager-cloud-new-cluster.png',
'Imply manager cloud create cluster'], dtype=object)
array(['/2021.02/assets/imply-manager-new-cluster-status.png',
'Imply manager Create cluster progress'], dtype=object)
array(['/2021.02/assets/imply-manager-cluster-overview.png',
'Imply manager pivot access'], dtype=object)
array(['/2021.02/assets/imply-manager-server-logs.png',
'Imply manager new servers'], dtype=object)
array(['/2021.02/assets/imply-manager-cluster-update-data-instance-type.png',
'Imply manager cluster update data intstance type'], dtype=object) ] | docs.imply.io |
Configuring Apple's Classroom App with Jamf School
Apple's Classroom app and Jamf Teacher are both apps that teachers can use to manage student's school-issued devices. Each app has a different feature set. You can choose to distribute one or both apps, depending on the needs of your environment. Apple's Classroom app, see Intro to Classroom in Apple's Classroom User Guide. | https://docs.jamf.com/jamf-school/deploy-guide-docs/Configuring_Apple's_Classroom_App_with_Jamf_School.html | 2021-04-10T18:57:52 | CC-MAIN-2021-17 | 1618038057476.6 | [] | docs.jamf.com |
- Tap Servers (cloud icon) at the bottom of the window and tap the server icon.
- If prompted, supply your RSA user name and passcode, your Active Directory user name and password, or both.
- Touch and hold the desktop name until the context menu appears.
- Tap Log Off in the context menu.
Results
What to do next
Tap the Logout button in the upper-left corner of the window to disconnect from the server. | https://docs.vmware.com/en/VMware-Horizon-Client-for-iOS/4.6/com.vmware.horizon.ios-client-46-install/GUID-5C76B460-EAEA-4D6D-B03D-269BEFBEAB47.html | 2021-04-10T20:15:55 | CC-MAIN-2021-17 | 1618038057476.6 | [] | docs.vmware.com |
Customer Support Policy
1.:
*As used herein Bug shall mean an unexpected error, fault or flaw within the Software Ideas
- access to all relevant data required to pinpoint and solve the issue
This is in addition to any further assistance reasonably requested from the Acrolinx Support team.
Acrolinx assumes all data submitted to support follows all of your company's policies:
- Confidential information (such as nonpublic information, business secrets, trade secrets, payment-related information, cardholder information) generally requires an active nondisclosure agreement.
- Personal information (such as names, email, and IP addresses and employee-related information) generally requires an active data processing agreement.
If in doubt, please contact your compliance or legal department. Acrolinx's legal department ([email protected]) is also happy to help.
6. Service and Support Coverage
For SaaS customers, Acrolinx provides the following under support:
- Bug fixes
- New software versions/enhancements that are available to all customers at no additional fee
- Integration enhancements categorized as generally available
- Additional modifications and enhancements that aren’t related to integrations or the Acrolinx Private Cloud Platform
Not included are:
- Enhancements to linguistic writing guides or guidance
- Enhancements or specific actions related to customer-specific workflows or integrations are provided at the discretion of Acrolinx
We do our best to avoid bugs, and we apologize in advance if a bug affects you. However, we know that there will be bugs in our system, as in almost all IT systems, despite using industry best practices. We don 4 weeks’ written notice to [email protected] and [email protected] before commencing any penetration testing or related activities.
8. Personal Data
10. Data Processing Addendum
Please note the Acrolinx International Data Processing Addendum ("DPA") generally applies to Acrolinx's Services (as defined in the DPA). To the extent a DPA is required for lawful provision of our Services and not yet in place, the DPA may come into force automatically according to the DPA. You can download a copy and formally sign the DPA at acrolinx.com/dpa.
Related articles | https://docs.acrolinx.com/kb/en/customer-support-policy-13731247.html | 2022-08-08T04:28:34 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.acrolinx.com |
Localizing Applications
Introduction to International Applications Based on the .NET Framework
Discusses the concepts related to developing software for an international market using Visual Basic or Visual C#.
Globalizing Windows Forms
Provides links to pages about creating Windows applications that support multiple cultures.
ASP.NET Globalization and Localization
Provides links to pages about creating Web applications that support multiple cultures.
Best Practices for Developing World-Ready Applications
Provides information on programming for an international audience, such as design issues and terminology. | https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2013/z68135h5(v=vs.120) | 2022-08-08T05:48:51 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.microsoft.com |
Import Google Analytics data
Google Analytics is shutting down its Universal Analytics properties. To keep this data you can choose to import it into Simple Analytics. We built a simple import tool where you can import your data in just a few clicks.
With Simple Analytics you can import Universal Analytics properties and Google Analytics 4 properties.
Import Google Analytics 4 data
We filter the Google Analytics 4 data based on a match on
eventName. When this
eventName is
page_view, we import that event. All other events are discarded. Another filter we apply is a hostname filter. When you have multiple hostnames in your data, we ask you which hostnames to import. For example you might have and
m.example.com. You can pick the hostnames you want in our UI.
Dimensions
dateHour– The combined values of date and hour formatted as YYYYMMDDHH.
fullPageUrl– The hostname, page path, and query string for web pages visited; for example, the
fullPageUrlportion of.
sessionSource– The source that initiated a session on your website or app.
deviceCategory– The type of device: Desktop, Tablet, or Mobile.
countryId– The geographic ID of the country from which the user activity originated, derived from their IP address. Formatted according to ISO 3166-1 alpha-2 standard.
browser– The browsers used to view your website.
operatingSystem– The operating systems used by visitors to your app or website. Includes desktop and mobile operating systems such as Windows and Android.
eventName– The name of the event. For example,
page_view.
hostname– Includes the subdomain and domain names of a URL; for example, the Host Name of.
Metrics
screenPageViews– The number of app screens or web pages your users viewed. Repeated views of a single page or screen are counted. (
screen_view+
page_viewevents).
totalUsers– The number of distinct users who have logged at least one event, regardless of whether the site or app was in use when that event was logged.
userEngagementDuration– The total amount of time (in seconds) your website or app was in the foreground of users’ devices.
Use the GA4 query explorer to see which data we can import from Google Analytics 4. We prefilled the dimensions and metrics if you use our link to the query explorer.
Go to our Google Analytics importer in our dashboard to import your Google Analytics data.
Import Google Universal Analytics data
With Universal Analytics we have one filter we apply, which is the hostname filter. When you have multiple hostnames in your data, we ask you which hostnames to import. For example you might have and
m.example.com. You can pick the hostnames you want in our UI.
Dimensions
ga:dateHour– The combined values of date and hour formatted as YYYYMMDDHH.
ga:pagePath– A page on the website specified by path and/or query parameters. Use this with hostname to get the page’s full URL.
ga:fullReferrer– The full referring URL including the hostname and path.
ga:deviceCategory– The type of device: desktop, tablet, or mobile.
ga:countryIsoCode– Users’ country’s ISO code (in ISO-3166-1 alpha-2 format), derived from their IP addresses or Geographical IDs. For example, BR for Brazil, CA for Canada.
ga:browser– The name of users’ browsers, for example, Internet Explorer or Firefox.
ga:browserVersion– The version of users’ browsers, for example, 2.0.0.14.
Metrics
ga:pageviews– The total number of pageviews for the property.
ga:sessions– The total number of sessions.
ga:timeOnPage– Time (in seconds) users spent on a particular page, calculated by subtracting the initial view time for a particular page from the initial view time for a subsequent page. This metric does not apply to exit pages of the property.
Use the Universal Analytics Query Explorer to see which data we can import from Universal Analytics. With Universal Analytics you have to manually copy and paste the dimensions and metrics from above into the Query Explorer.
Go to our Google Analytics importer in our dashboard to import your Google Analytics data. | https://docs.simpleanalytics.com/import-google-analytics-data | 2022-08-08T05:06:21 | CC-MAIN-2022-33 | 1659882570765.6 | [array(['/images/pencil.svg', 'edit'], dtype=object)] | docs.simpleanalytics.com |
See “CREATE AUTHORIZATION” in Teradata Vantage™ - SQL Data Definition Language Detailed Topics , B035-1184 for more information about creating authorization objects.
See CREATE FUNCTION and REPLACE FUNCTION (External Form), CREATE FUNCTION and REPLACE FUNCTION (Table Form), and CREATE PROCEDURE and REPLACE PROCEDURE (External Form) for information about how authorization objects are used with external UDFs and external SQL procedures.
See the cufconfig utility in Teradata Vantage™ - Database Utilities , B035-1102 for information about how to configure external routine server processes and how to alter secure group membership. | https://docs.teradata.com/r/Teradata-VantageTM-SQL-Data-Definition-Language-Syntax-and-Examples/September-2020/Authorization-Statements-for-External-Routines/CREATE-AUTHORIZATION-and-REPLACE-AUTHORIZATION/Related-Topics | 2022-08-08T05:25:15 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.teradata.com |
Data Collection for IoT Security
Use Encapsulated Remote Switched Port Analyzer (ERSPAN) to collect IoT device data from switches.
Unless device traffic is visible to a firewall, the firewall cannot include it in the logs it forwards to IoT Security. When you need to collect data for devices whose traffic doesn't pass through a firewall, mirror their traffic on network switches and use Encapsulated Remote Switched Port Analyzer (ERSPAN) to send it to the firewall through a Generic Routing Encapsulation (GRE) tunnel. After the firewall decapsulates the traffic, it inspects it similar to traffic received on a TAP port. The firewall then creates enhanced application logs (EALs) and traffic, threat, WildFire, URL, data, GTP (when GTP is enabled), SCTP (when SCTP is enabled), tunnel, auth, and decryption logs. It forwards them to the logging service where IoT Security can access and analyze the IoT device data.
You can use this feature for any deployments where traffic from remote switches needs to be inspected. IoT Security is just one use case.
This feature requires switches that support ERSPAN such as Catalyst 6500, 7600, Nexus, and ASR 1000 platforms.
- Configure a switch that supports ERSPAN to mirror traffic on one or more source ports or VLANs, and forward it through a GRE tunnel to a destination port on a next-generation firewall.For configuration instructions, see the Cisco documentation for your switch.
- Enable ERSPAN support on the firewall.By default, ERSPAN support is disabled.
- Log in to the firewall and select(for Session Settings).DeviceSessionEdit
- Enable ERSPAN Supportand thenOK.The ERSPAN Support check box in the Session Settings section is now selected.
- Commityour change.
- Create a Layer 3 security zone specifically to terminate the GRE tunnel and receive mirrored IoT device traffic from the source port on the network switch.
- SelectandNetworkZonesAdda zone.
- Enter the following and leave the other settings at their default values:Name: Enter a meaningful name for the zone such asERSPAN-IoT-data.Log Setting: SelectIoT Security Default Profileor another log forwarding profile that sends the required types of logs to the logging service for IoT Security.Type:Layer3
- ClickOK.
- Create a Layer 3 interface and bind it to the zone you just created.
- Selectand then click the Ethernet interface on which you want to terminate the GRE tunnel from the switch. Optionally, use a subinterface.NetworkInterfacesEthernet
- Enter the following and leave the other settings at their default values:Comment: Enter a meaningful note about the interface for later reference.Interface Type:Layer3Virtual Router: Choose the virtual router you want to route to the interface. Consider using a separate virtual router exclusively for ERSPAN traffic.Security Zone: Choose the zone you just created.
- ClickIPv4, selectStaticfor the address type, andAddan IP address for the interface.The switch uses this in its GRE tunnel configuration as the IP address of its peer.
- ClickAdvancedand either add aNew Management Profileor select a previously defined profile that allows the Ethernet interface to accept different types of administrative traffic.
- ClickOKto save the new interface management profile and then clickOKagain to save the Ethernet interface configuration.
- Create a tunnel interface with an IP address in the same subnet as that of the corresponding tunnel interface on the switch and bind it to the zone you just created.
- SelectandNetworkInterfacesTunnelAddthe logical tunnel interface for the GRE tunnel from the switch.
- Enter the following and leave the other settings at their default values:Interface Name: The field on the left is read-only and contains the text “tunnel”. Enter a number in the field on the right to complete the name. For example, enter8to make the nametunnel.8.Virtual Router: Choose the same router you used for the Layer 3 interface.Security Zone: Choose the same zone to which you bound the Layer 3 interface.
- ClickIPv4andAddan IP address that’s in the same subnet as the IP address of the logical tunnel interface on the switch.
- ClickAdvancedand either add aNew Management Profile, or select a previously defined profile, to allow the tunnel interface to accept different types of administrative traffic.
- ClickOK.
- Configure static routes for the virtual router (VR) for ERSPAN.
- Selectand click the virtual router for ERSPAN.NetworkVirtual Routers
- ClickStatic Routesand then click+ Add.
- Enter the following and leave the other settings at their default values:Name: Enter a name for the static route.Destination:0.0.0.0/0If you know the subnets beyond the switch, create individual static routes for each of them. Otherwise, use a separate VR for ERSPAN and set a default route.Interface:ethernet1/3(the interface you previously configured)Next Hop:None
- ClickOK.
- Configure a GRE tunnel with ERSPAN enabled.
- Selectand clickNetworkGRE Tunnels+ Add.
- Enter the following and leave the other settings at their default values:Name: Enter a name for the GRE tunnel; for example,GRE-ESPAN-for-IoT-dataInterface: Choose the Layer 3 interface you configured for GRE tunnel termination.Local Address: ChooseIPand the IP address of the Layer 3 interface where the GRE tunnel terminates.Peer Address: Enter the IP address of the switch egress interface from which it initiates the GRE tunnel.Tunnel Interface: Choose the logical tunnel interface you configured for the GRE tunnel.ERSPAN: (select)
- ClickOK.The IP addresses of the Ethernet and tunnel interfaces in relation to each other and the rest of the network look like this.
- Commityour changes.
Most Popular
Recommended For You
Recommended Videos
Recommended videos not found. | https://docs.paloaltonetworks.com/pan-os/10-2/pan-os-new-features/iot-security-features/data-collection-for-iot-security | 2022-08-08T05:09:14 | CC-MAIN-2022-33 | 1659882570765.6 | [array(['/content/dam/techdocs/en_US/dita/_graphics/10-2/iot/con-switch-erspan-data-collection.png/jcr:content/renditions/original',
None], dtype=object) ] | docs.paloaltonetworks.com |
📄️ Introduction
What is Meta Box?
📄️ Installation
Requirements
📄️ Custom post types
When building a website, there may be sections on the website such as events and projects where the content and appearance are very different from posts and pages. That's when you need custom post types.
📄️ Custom. | https://docs.metabox.io/category/getting-started/ | 2022-08-08T05:10:34 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.metabox.io |
How to create.
- Press the + Create A New Queue button. A prompt will appear:
- Fill in the required information:
Queue Name
The only non-optional item is the name.
Description
This is optional. Please use a description that your agents will understand.
Rules
Here you can choose the rule in which the alerts created by said rule will filter into your new alert queue. This is optional but must be filled in later if omitted during queue creation (fill it in during rule creation).
Route alerts to a queue using the rule's logic (or by manually assigning it after the alert is generated).
ONLY 1 RULE CAN BE ASSOCIATED WITH A QUEUE.
If rule is already associated with another queue, it will get disassociated from that queue immediately.
Team
Here you can choose which team or teams can read alerts in this new queue.
Only agents who are assigned to a queue can investigate its alerts.
Order in which alerts are consumed from this queue
There are three ways to designate the order in which alerts are investigated:
- Click Create Queue.
Your new alert queue has been created.
Permissions required for Alert Queue Creation:
- To fully work with alert queues, you need at least the following permissions:
create/edit rules
create/edit alert queues
reassign queues
create/edit alerts
Updated 3 months ago | https://docs.unit21.ai/u21/docs/create-alert-queues | 2022-08-08T05:06:59 | CC-MAIN-2022-33 | 1659882570765.6 | [array(['https://files.readme.io/b8f8861-ezgif-4-2d52f77bd7.gif',
'ezgif-4-2d52f77bd7.gif 800'], dtype=object)
array(['https://files.readme.io/b8f8861-ezgif-4-2d52f77bd7.gif',
'Click to close... 800'], dtype=object)
array(['https://files.readme.io/9f38e00-Unit21-Queues-7.png',
'Unit21-Queues-7.png 5344'], dtype=object)
array(['https://files.readme.io/9f38e00-Unit21-Queues-7.png',
'Click to close... 5344'], dtype=object)
array(['https://files.readme.io/83d562a-Unit21-Queues-8.png',
'Unit21-Queues-8.png 5344'], dtype=object)
array(['https://files.readme.io/83d562a-Unit21-Queues-8.png',
'Click to close... 5344'], dtype=object)
array(['https://files.readme.io/730da14-Unit21-Queues-9.png',
'Unit21-Queues-9.png 5344'], dtype=object)
array(['https://files.readme.io/730da14-Unit21-Queues-9.png',
'Click to close... 5344'], dtype=object) ] | docs.unit21.ai |
You are looking at documentation for an older release. Not what you want? See the current release documentation. 4 via the following topics:
A list of things you need to do before the upgrade.
How to upgrade from eXo Platform 4.3 to eXo Platform 4.4.
Some tips that help you monitor the upgrade.
Common steps for upgrading your add-ons along with the new Platform version. | https://docs-old.exoplatform.org/public/topic/PLF44/PLFAdminGuide.Upgrade.html | 2022-08-08T04:36:58 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs-old.exoplatform.org |
Installation on Windows (fully manual)¶
Attention
This pages details all of the manual steps to install our software on Windows. However, do note that we recommend to use our scripted installation instead: Installation on Windows.
Preparation¶
Install dependency: Microsoft Visual C++ Redistributable¶
Executables and DLLs are self-contained and have no dependencies other than the Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019. Please install this package if you did not do so already:
Invoke-WebRequest -UseBasicParsing ` -Uri "" ` -OutFile vc_redist.x64.exe # After downloading the package, run it to install it .\vc_redist.x64.exe /install /quiet /norestart
Download and verify integrity of packages¶
You don't use a package manager to install our software on Windows, meaning you
will need to verify the integrity of the packages you download. You can do this
verify that the hash of a package corresponds with a checksum we provide for
each package. You can do this like so ((where the value for
$usp_tool should
be the 'name' in the zip that you downloaded):
$repo = "stable" $usp_tool = "mp4split" $usp_version = "1.11.14" $package = "$usp_tool-$usp_version-win64.zip" $baseurl = "" Invoke-WebRequest -UseBasicParsing -Uri "$baseurl/$package" -OutFile $package $checksum = (Invoke-WebRequest -UseBasicParsing ` -Uri "$baseurl/win.sha256sum").rawcontent ` -split "[`r`n]" | Select-String "$package" $package_checksum = ($checksum.ToString().ToUpper() -split " ")[0] (Get-FileHash $package).Hash -eq $package_checksum
This process verifies the integrity of the package against the checksum only. It doesn't verify that the checksum is authentic. If you want to check this as well, please see: Extra: How to verify the signature and prove the authenticity of our Windows packages.
Extra: How to verify the signature and prove the authenticity of our Windows packages¶
Verifying the integrity of our packages is relatively straightforward on Windows: Download and verify integrity of packages. However, if you also want to verify the authenticity of the checksums that can be used to check the integrity of our packages, things become more complicated because Windows does not offer the right tools for this.
You will need two Unix command-line tools to fully verify the authenticity of
our Windows packages:
openssl and
sha256sum. And you will also need the
following files from our repository:
- List of checksums (hashes) per zipped package, per version:
win.sha256sum
- Signature of
win.sha256sum:
win.sha256sum.asc
- Our public key to verify the signature's authenticity:
usp-win-2019-public.pem
You can download these files from the following locations:
Invoke-WebRequest -UseBasicParsing -Uri -OutFile win.sha256sum Invoke-WebRequest -UseBasicParsing -Uri -OutFile win.sha256sum.asc Invoke-WebRequest -UseBasicParsing -Uri -OutFile usp-win-2019-public.pem
As both
openssl and
sha256sum are shipped with most Linux distributions,
there are several options you have to check our Windows packages after you have
downloaded them:
- On a machine running Linux
- On a machine running Windows, by running Linux using Windows Subsystem for Linux
- On a machine running Windows, by running Cygwin (with both tools installed)
- On 'any' machine, by using Docker to spin up Linux container
In all of the above scenarios you need to make sure the directory you
downloaded the zipped package(s) also contains the file with checksums
(
win.sha256sum), the signature of that file (
win.sha256sum.asc) and our
public key (
usp-win-2019-public.pem).
In the first three scenarios run the following two commands in this directory:
#!/bin/bash openssl dgst -sha256 -verify usp-win-2019-public.pem -signature win.sha256sum.asc win.sha256sum sha256sum -c win.sha256sum 2>&1 | grep OK
In the fourth scenario using Docker, run the following in this directory:
$install_openssl = "apk add --no-cache openssl" $verify_signature = "openssl dgst -sha256 -verify usp-win-2019-public.pem -signature win.sha256sum.asc win.sha256sum" $verify_checksum = "sha256sum -c win.sha256sum 2>&1 | grep OK" docker run --rm -v ${pwd}:/data -w /data alpine:latest sh -c "$install_openssl && $verify_signature && $verify_checksum"
All scenarios should result in a
Verified OK message being printed to
confirm the authenticity of the signature, followed by the name(s) of the zipped
packages you have downloaded along with an
OK. For example, when you are
verifying
mp4split-1.10.18-win64.zip the result should be:
Verified OK mp4split-1.11.14-win64.zip: OK
Installation¶
Command-line tools 'mp4split', 'manifest_edit', 'unified_capture' and 'unified_remix'¶
To use Unified Packager or any of our other command-line tools when you don't need the Unified Origin or Unified Remix web server modules, install 'mp4split' and our other command-line tools will be installed as dependencies alongside it. Assuming you have already downloaded and verified the packages (see sections above), installing is done like so:
$usp_tool = "mp4split" $usp_version = "1.11.14" $package = "$usp-tool-$usp_version-win64.zip" $target_dir = "C:\Program Files\Unified Streaming" # Create a 'Unified Streaming' directory if it does not already exist if ( -Not (Test-Path -PathType Container -Path $target_dir)) { New-Item -ItemType Directory $target_dir } # Expand archive package into the 'Unified Streaming' directory Write-Host "Extracting $package into $target_dir" Expand-Archive $package -DestinationPath $target_dir -Force
Add 'mp4split' and other command-line tools to system-wide path settings¶
In order to run 'mp4split' and the other Unified Streaming command-line tools without explicitly specifying the paths to their executables, you can add the path to where they are installed to the system-wide 'path' settings. To do this, follow the instructions below (but be aware that these will make changes to your registry):
$usp_path = "C:\Program Files\Unified Streaming" # Get current system-wide 'path' settings, store them in variable and print result (Get-ItemProperty ` -Path 'Registry::HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\Environment' ` -Name PATH).path | ` Set-Variable -Name "old_path" -PassThru | ` Select-Object -ExpandProperty value # Add the path to where 'mp4split' was installed to the system-wide 'path' settings and print result Set-ItemProperty ` -Path 'Registry::HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\Environment' ` -Name PATH -Value "$old_path;$usp_path" -PassThru | ` Select-Object -ExpandProperty PATH # Reload 'path' settings while maintaining the same shell session $env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine") ` + ";" + [System.Environment]::GetEnvironmentVariable("Path","User")
Unified Origin web server module ('mp4split' and 'mod_smooth_streaming')¶
To run Unified Origin, you need to:
- Install 'mp4split' (which will also install our other command-line tools)
- Install Apache
- Install 'mod_smooth_streaming' module
- Install 'mod_unified_s3_auth' if you need Amazon S3 authentication
How to install 'mp4split' is described in the section above.
Then, to install Apache:
- Download the latest version (e.g.,)
- Verify its authenticity (i.e.,)
- Expand the downloaded package to
C:\, installing Apache in
C:\Apache24
Note
Apache Lounge blocks downloads from a shell.
Finally, assuming you have already downloaded and verified the 'apache_mod_smooth_streaming' package, run the following:
$usp_tool = "apache_mod_smooth_streaming"
And, for 'mod_unified_s3_auth':
$usp_tool = "apache_mod_unified_s3_auth"
Configure Apache¶
After you have installed our software and Apache you still need to configure several things before you can successfully stream video. Please see the How to Configure (Unified Origin) section for the necessary information on how to do this. | https://docs.unified-streaming.com/installation/distributions/windows-manual.html | 2022-08-08T03:43:04 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.unified-streaming.com |
2. Creating an Entity
Entities are the basing building block for Transaction Monitoring.
Entities are either businesses or users such as:
- User Name: Sarah Smith
- User ID: 9547-sngfB
- Business Name: JHK Corp
- Business ID automatically created by your company: ja-945Jgd92N
Create a Sample Entity:
Typically, an engineer from your company will automatically create entities for you using our APIs and sending data to our system. Let's go ahead and create one ourselves using the manual upload option:
- Log into your Dashboard.
- Navigate to the Data Explorer:
- Go to the Upload File tab:
- Download this
JSONfile on your computer.
[{ "user_data": { "first_name": "Larry", "last_name": "Baker", "gender": "male", "year_of_birth": 1962, "month_of_birth": 5, "day_of_birth": 21 }, "general_data": { "entity_id": "Baker01", "entity_type": "user" } }]
BONUS:
At this time we highly encourage you to change the name and ID of the user to create something unique.
There is an icon if you hover over the window that can be used to copy the text. You can also highlight the text and copy-paste it into a text editor like Notepad. Save the file as
Create_entity_baker01.json and make sure no extra text is added and that the file extension is JSON and not TXT.
This
JSON file will create an entity that is a user called
Larry Baker with the Entity ID
Baker01.
Unit21 also assigns a numeric ID (seen in the URL) called the Unit21 ID, in this example, it is:
719170349.
What is a JSON file?
JSON is a standard data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays such as our "entity" object.
This is the format your engineers or IT team will use to send data to Unit21.
- Drag and drop the
Create_entity_baker01.jsonfile to the Upload File widget.
- Make sure the file uploaded correctly, it should be labeled as
Awaiting Trigger:
- In the table of Recent Uploads, select the ... menu item (three dots on the right hand side) of your
Create_entity_baker01.jsonfile:
- Select Process in the menu:
Unit21 will process the file and add the
Baker entity to our system.
The Process & Execute rules option is for uploaded files that can also run workflows such as identity verification.
- Patiently wait while Unit21 validates and processes the file.
- Refresh the page until you see
Process Successunder the status of the file:
My Validation Failed!
If your file upload failed, it means that the text in the JSON file was corrupted. Try to copy it again and make sure nothing is deleted or added.
You can use this online tool to check the validity of the JSON file:
- Success! Now let's see the results by navigating to the Entities tab:
I don't see my Entity!
If you don't see Baker in the entity list, you may need to reset the search filters. You also need to make sure you have the right permissions.
- You can click on the
Bakerentity in the table to view more information:
Click on the Go to Detail Page -> to learn more about
Larry Baker:
Once there is more data on your Dashboard, you can use the menu items to view transactions and instruments related to your entity.
You can also look at more entity details that may be populated in the future, such as the entity status or risk score.
Next, we will see how you can search for your new entity in the system.
Updated 4 months ago | https://docs.unit21.ai/u21/docs/2-agent-creating-an-entity | 2022-08-08T03:36:52 | CC-MAIN-2022-33 | 1659882570765.6 | [array(['https://files.readme.io/0d05d9a-Unit21-Agent-Entity-1.png',
'Unit21-Agent-Entity-1.png 1200'], dtype=object)
array(['https://files.readme.io/0d05d9a-Unit21-Agent-Entity-1.png',
'Click to close... 1200'], dtype=object)
array(['https://files.readme.io/fb89310-Unit21-Agent-Entity-2.png',
'Unit21-Agent-Entity-2.png 1200'], dtype=object)
array(['https://files.readme.io/fb89310-Unit21-Agent-Entity-2.png',
'Click to close... 1200'], dtype=object)
array(['https://files.readme.io/76a4c58-Unit21-Agent-Entity-3.png',
'Unit21-Agent-Entity-3.png 1200'], dtype=object)
array(['https://files.readme.io/76a4c58-Unit21-Agent-Entity-3.png',
'Click to close... 1200'], dtype=object)
array(['https://files.readme.io/e668213-Unit21-Agent-Entity-4.png',
'Unit21-Agent-Entity-4.png 1200'], dtype=object)
array(['https://files.readme.io/e668213-Unit21-Agent-Entity-4.png',
'Click to close... 1200'], dtype=object)
array(['https://files.readme.io/dc048fe-Unit21-Agent-Entity-6.png',
'Unit21-Agent-Entity-6.png 1200'], dtype=object)
array(['https://files.readme.io/dc048fe-Unit21-Agent-Entity-6.png',
'Click to close... 1200'], dtype=object)
array(['https://files.readme.io/1200bbb-Unit21-Agent-Entity-7.png',
'Unit21-Agent-Entity-7.png 1200'], dtype=object)
array(['https://files.readme.io/1200bbb-Unit21-Agent-Entity-7.png',
'Click to close... 1200'], dtype=object)
array(['https://files.readme.io/b3a93e5-Unit21-Agent-Entity-8.png',
'Unit21-Agent-Entity-8.png 1200'], dtype=object)
array(['https://files.readme.io/b3a93e5-Unit21-Agent-Entity-8.png',
'Click to close... 1200'], dtype=object)
array(['https://files.readme.io/908858b-Unit21-Agent-Entity-9.png',
'Unit21-Agent-Entity-9.png 1200'], dtype=object)
array(['https://files.readme.io/908858b-Unit21-Agent-Entity-9.png',
'Click to close... 1200'], dtype=object)
array(['https://files.readme.io/b9984b4-Unit21-Agent-Entity-10.png',
'Unit21-Agent-Entity-10.png 1200'], dtype=object)
array(['https://files.readme.io/b9984b4-Unit21-Agent-Entity-10.png',
'Click to close... 1200'], dtype=object)
array(['https://files.readme.io/927b740-Unit21-Agent-Entity-11.png',
'Unit21-Agent-Entity-11.png 1200'], dtype=object)
array(['https://files.readme.io/927b740-Unit21-Agent-Entity-11.png',
'Click to close... 1200'], dtype=object) ] | docs.unit21.ai |
How do I create an Enterprise account in Sizzle?
Create an account by going to the upper right hand corner of the sizzlesells.com home page.
What?
How can I make sure I can sell my products, services or experiences in Sizzle.shop?
We have spelled out all of the various categories of goods that you are allowed to sell in Sizzle.shop in this overview.
What is Sizzle’s policy towards content?
There are many forms of content, conduct and behavior which all must be adhered to by every Sizzle Enterprise account and Merchant.
How can I best keep my Sizzle account secure?
Your account security is of the utmost importance. Please keep up to date on our policies and procedures. | http://docs.sizzle.network/faq-category/accounts/ | 2022-08-08T05:03:18 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.sizzle.network |
Change PO Number
Process
Pick Ticket/Z Slip
Invoice
To change the Customer Purchaser Order number on an invoice
- In the invoice screen recall the invoice using F3 then keying in the invoice number
- Select the Change option using “C”
- Press Enter through Payment Method, Invoice Discount, Freight
- Make Necessary changes to the PO Number and press enter
- Press Accept (F1) to complete the change.
- A Modified reprint appear if the parameters are configured to print Modified Invoices | https://docs.amscomp.com/books/counterpoint/page/change-po-number | 2022-08-08T03:42:27 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.amscomp.com |
78 commands
Nyx // Fun // Logging // Moderation // Music // Autorole // Settings // Utility
By default, the prefix is ;, or you can use "nyx (command)"
You can always change this by using nyx prefix <prefix>
Command usage:
- []: Optional argument.
- <>: Required argument.
NOTE: don't put the symbols inside of the actual command, or in other words, when purging 500 messages in a channel, don't use "purge <500>", use "purge 500"
This command measures the time between your message and Nyx's response, along with latency to Discord's API
Give you useful resources for usage with nyx, along with module/command details if provided.
Give you useful links regarding Nyx, such as commands list, Nyx's server, invite link, and Patreon!
Shows you the statistics of Nyx, and a few useful links.
All the fun commands!
Some commands require the "Embed Links" permission
Shake the magic 8ball and find an answer to a question
but on one condition, you really don't.
Get a rating on how gay someone is...a very trustworthy rating...yes...
TURN 👏 YOUR 👏 MESSAGE 👏 INTO 👏 THIS 👏 ABOMINATION 💪😔💦
OwO-ify your text
be cawefuw though, it's vewy bad
Search a term on the best dictionary on the planet
play a bruh sound effect in your voice channel
play a vine boom sound effect in your voice channel
Are you, by chance, sick of someone? t h e n k i l l e m .
Give someone a kiss!
Give someone a hug, they might need it
Cuddle up with someone, make them feel better,,!
Boop someone on the nose,,!!
Pat someone on the head!
play an oof sound effect in your voice channel
Wholesome time,,,send a picture of a cat!
Wholesome time,,,send a picture of a fox!
Wholesome time,,,send a picture of a dog!
Turn a specified image, or your profile picture, into a Nyx-like avatar!
This module is disabled by default. You can enable it using "nyx module enable logging"
Show the current logging settings, or change logging settings!
Set the server logs channel! [Nyx requires the create webhook permission for that channel.]
Commands can NOT be ran by members if Nyx does not identify them as a Server Moderator or Server Administrator.
Purge up to 1,000 messages in a channel! (or, purge messages found within a count of 1,000 messages)
Warn a user who did a bad, bad thing. [logs with "moderation" logging enabled]
Kick a user who did an even worse thing. [logs with "moderation" logging enabled]
Mute a user who just won't stop breaking rules. [logs with "moderation" logging enabled]
Ban a user who did something so bad, muting and kicking them was just not enough. [logs with "moderation" logging enabled]
Unmute a user that was muted using Nyx. [WON'T WORK IF NYX DID NOT MUTE THIS PERSON] [logs with "moderation" logging enabled]
Unban a user [logs with "moderation" logging enabled]
Show a list of warnings on a user.
Clear the warnings off of a user. [logs with "moderation" logging enabled]
Remove a specific warning off of a user. [logs with "moderation" logging enabled]
Show a list of moderation logs performed on a user, such as ban, unban, warn, etc.
KEEP IN MIND THIS IS STRONGLY IN BETA; if a function doesn't work as intended, please refer to our Support Server, and report the issue there.
Let Nyx join your voice channel to play music!
Play a song in your voice channel! (Supports SoundCloud, YouTube, and uploaded audio files!)
Show the current song that is playing, along with the duration!
Pause the currently playing song!
Set the bass multiplier for the playback of your music!
Set the treble multiplier for the playback of your music!
Resume the current song in the queue.
Voteskip the current track! [bypassable with Manage Server permission]
Remove every track after the current song! (using this command with the only song being the currently played will skip it)
Set the volume in the music session. (for those who just LOVE distortion)
Enable looping for either the song or the queue!
Show the current queue!
Shuffle the queue!
Replay the current song!
Remove a specific song from the queue!
Find the lyrics of a certain song, or the currently playing song!
Remove songs from people who have left the voice channel
Leave the channel, and clear the queue!
Search SoundCloud for a song!
Search YouTube for a song!
This module is disabled by default. You can enable it using "nyx module enable autorole"
Add or remove roles to be added in the autorole!
These are the server/user settings! (server settings mostly will require the Server Administrator permission)
View/change the settings for the server!
Enable a specific disabled command for your server!
Disable a specific command for your server!
Check module settings, or change them!
Add/remove a Server Moderator role!
Add/remove a Server Administrator role!
Set a Mute role, or remove the current one!
Set the server's prefix for Nyx to understand!
Set your own pronouns for the bot's usage! (typically used in Fun commands)
Going AFK for a bit? Let Nyx know, and they will tell whoever pings you that you're away!
Set your own nickname for the bot to use!
Set a reminder for yourself in the future!
Set a reminder for yourself in the specified time, and then every 24 hours!
Show your currently active reminders!
Delete a specific reminder on your list!
If a command is not working, use uptime to see if that service is online.
This command uses Nyx's API to create a shortened link of a long url.
Gives you the current information about a server, such as the server owner, the member count, server region, etc.
Gives the current information about a specified user, such as the Username, Discrim, ID, etc. along with details specific to the server.
This command lets you send a message as Nyx.
Grabs the URL of a mentioned user's avatar.
Catch a message that was recently deleted. | https://docs.nyx.bot/article/commands | 2022-08-08T04:39:05 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.nyx.bot |
Surface Tools
Surface
Surface Edit Tools >
Refit to Tolerance
The FitSrf command tries to reduce the number of surface control points while maintaining the surface's same general shape.
DeleteInput
Deletes the original geometry.
Retains the original geometry.
ReTrim
Trims the new fit surface with the original trimming curves.
UDegree/VDegree
The degree of the surface in the u or v direction..
Edit surfaces
Rhinoceros 7 © 2010-2022 Robert McNeel & Associates. 28-Jul-2022 | http://docs.mcneel.com/rhino/7/help/en-us/commands/fitsrf.htm | 2022-08-08T05:25:21 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.mcneel.com |
Data Migration Questionnaire
Abstract
This document is intended for those requesting data migration (DM) into WebChart ( WC ), and should be used for surveying any and all requirements for that purpose. Data migration is defined as the movement, or transference of data from one system to another. For example, the moving of data from a legacy application (i.e. Medgate, UL/OHM, etc.) or spreadsheet, to a solution such as WebChart , is understood as data migration.
I. Data Sources and Storage
This section is intended to provide an overview of the current state of data, its storage, and potential restrictions.
- Where do you store legacy data that is being considered for data migration?
- List and describe the purpose of all legacy EMRs or commercial applications in use.
- List and describe the purpose of all custom applications storing data that may need migrated to WC .
- List and describe the purpose of all spreadsheets storing data intended for migration.
- Are there any data shares of files outside of the legacy applications or data sources that should be considered? For example, scanned images, results, opinion letters, and so on may be stored outside of an application but requested for migration.
- For each of the legacy data sources, are the data or applications hosted locally or off-site? Specifically, do you perform your own data backups and extractions, or will a request need to be made from a third party that supports the application or data storage?
- If the legacy data above is in a database, what database version is used for each data application or source (e.g. Oracle 12c, MSSQL 13.0, MySQL 5.7.17, etc.)?
- How will the data be delivered? For example, previous clients have performed their own backup or data extraction, and via SFTP, the data is then uploaded to our secure data center or, if applicable, to standalone hardware provided by WC .
- Are there any restrictive permissions with sharing your data? Most clients provide full database backups of the legacy system(s) requested for migration; however, we have had instances where the client was severely restricted in what could be released. Please verify what data restrictions, if any, may be present for each data source or application, prior to the data migration kickoff call.
II. Who’s Who?
We’d like to know everyone that has a voice in this project. Everyone with decision-making ability needs to be trained in our product. This will help them in decision-making when we have questions about what data is required and how it will be migrated.
- Identify key stakeholders in the data migration project and describe their role(s) in the project.
- Identify any relevant Subject Matter Experts (SMEs) and/or any key personnel and their titles/roles, if not already listed as a key stakeholder. Examples include:
- Medical Director
- Clinical SMEs
- Report Writers (for each data source)
- Interface Engineers (for each data source)
- Any other SMEs requiring engagement for non-clinical workflows or data migrations
- Please provide an organizational chart, if possible. This allows for a better understanding of the parties involved, while reducing planning time and determining resource availability.
- Identify the individuals responsible for data validation. 6. Is there a user-acceptance process?
- Identify the SME for each data source, or the individual(s) best suited to address questions regarding the location and function of specific records or fields during the data migration. 7. What is their availability for potential questions or concerns?
III. Client & Workflow-Specific Requirements
- typically encounters the following common data migrations from legacy data source(s). Which of the following data will your migration require? Also briefly describe any relevant reports for the legacy data, including the frequency of use and whether or not you analyze trends in historical data. When considering reports, please consider those used by not only clinical teams, but also caseworkers, medical directors, executives, and anyone else who may be accessing this data.
- To ensure a full scope of the project is at hand, are there any additional migrations needing considered?
- Some followup questions regarding the above items:
- If you track cases, such as injuries, illnesses, or visits, and discrete data is intended to be moved, is there an established method for differentiating open and closed cases? For example, some workflows dictate that data is not entered into the system until a resolution is determined; whereas some workflows will begin a case as soon as the employee walks into a clinic.
- If you track Health Surveillance (HS) membership and due dates, please list and briefly discuss the HS programs you track.
- If you track HS membership and due dates, what determines the next due date? For example, previous clients have chosen a medical anniversary date corresponding to the employee’s date of birth, date of hire, Cost Center, or Organizational Unit. Most clients, however, use the last test date, and then schedule the next due date at the time of the last exam.
- Is there a need to store sensitive data, and if so, are there any controls that need considered in its migration? Examples may be, but are not limited to: Employee Assistance Program (EAP) information, Fit For Duty evaluations, Psychological notes, or data related to highly placed executives/officials.
- With regard to sensitive information, is there a need for relationship mapping, or security rules, intended to limit access to specific data?
- Identify and describe any custom reports or tools used to extract data for workflows.
- What interfaces (electronic or manual) interact with your legacy data source(s)? Are they inbound, outbound, or bidirectional? Briefly describe each interface. If there are any forms or requirements associated with these interfaces, please include examples.
- Will there be a Human Resources (HR) interface? If so, will there be any demographic information that will be required beyond what comes over the HR interface? Please consider dependents, applicants, contractors, and other non-employees that may be seen in a clinic, but not included on the HR feed.
- If the answer is Yes above, please consider the following questions around your HR Interface:
- What is your source system and version number?
- Do you host your HR application?
- Will you be providing the periodic data extraction? Or will there be a 3rd party?
- Data Format: CSV
- Confirm delimiter (comma, tab, and vertical bar are most common). Choose something not present within any of the data fields for the HR data file.
- What population will be included in each file? Everyone or only people who have had demographic updates since the last extraction (deltas)?
- Frequency of the data file: daily, weekly, etc.
- Standard connectivity for HR interfaces include MIE hosting FTPS (preferred) or SFTP.
- What IP Address or Range(s) will be used to connect to MIE’s interface server to deliver the data file?
- Please discuss any items from the Workflow Considerations section of the informational document provided with this questionnaire.
- Confirm that the Employee ID passed as the first Medical Record number is 100% populated, never changes, and is never reused.
- Termination procedure
- Applicant procedure
- File name convention
_YYYYMMDDHHMMSS.csv
- EG: eh_workday_dev_20170628095942.csv or eh_sap_prod_20170420000000.csv
- Are there any workflows that may be unique to your situation? Or do you have special input screens to facilitate a workflow in your legacy data systems?
- Will different employee/patient populations need to be restricted from certain clinical personnel? For example, would you want to restrict clinicians to work only with employees and personnel by country, person type (i.e. employee, applicant, contractor, etc), or employer organization (e.g. company, subsidiary, contractor, agency, prime, etc)?
IV. Next Steps
- What documentation exists for the legacy data sources? Please provide any documentation available (e.g. diagrams, schemas, videos, tutorials, screenshots, specifications, third-party interface requirements, etc.).
- If Crystal Reports or similar database connections are used to extract or mine data, or the database has a custom front-end (e.g. MSSQL or Access often have these), please provide the queries for common reports and key screens on the front end. These queries are used to identify key discrete data in your systems and understand workflow.
- Can you supply de-identified data of 15-20 complex or “interesting” patient/employee charts/histories/data, which can provide a good representation of all the different types of data intended for migration? This is crucial for understanding data early on as well as for validating data during the migration.
- Though this will be detailed outside of this questionnaire, it is important to understand that workflow discovery is critical to configuration and data migration. Please provide any available workflow diagrams or documentation, to allow for immediate review and preparation. Be sure to consider the workflows of all users of the system including clinicians, caseworkers, report writers, directors, and executives.
- What details/criteria may be needed to configure the employer organizations (e.g., companies, subsidiaries, contractors, agencies, primes, etc)?
- Please discuss, generally, not only what data will be extracted from your legacy source(s), but how much? For example, consider data retention rules around retirees, applicants who never became employees, test charts, etc., which may be unnecessary data to migrate. | https://docs.webchartnow.com/functions/system-administration/data-migration/data-migration-questionnaire.html | 2022-08-08T04:17:14 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.webchartnow.com |
You are looking at documentation for an older release. Not what you want? See the current release documentation.
eXo Web Service provides a framework to cross-domain AJAX. This section shows you how to use this framework.
You can checkout the source code at..
To describe the method for the cross-domain AJAX solution, consider the following scheme that contains 3 components:
1..
Place the
client.html and
xda.js files in ServerA.
Place the
server.html file in ServerB.
Declare
xda.js in>. | https://docs-old.exoplatform.org/public/topic/PLF44/WS.CrossDomainAJAX.html | 2022-08-08T05:05:04 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs-old.exoplatform.org |
An Act to create 20.445 (1) (c) and 106.45 of the statutes; Relating to: grants for certain University of Wisconsin and technical college graduates who paid nonresident tuition; granting rule-making authority; and making an appropriation. (FE)
Amendment Histories
Bill Text (PDF: )
Fiscal Estimates and Reports
AB888 ROCP for Committee on Workforce Development (PDF: )
LC Bill Hearing Materials
Wisconsin Ethics Commission information
2017 Senate Bill 732 - S - Universities and Technical Colleges | https://docs.legis.wisconsin.gov/2017/proposals/ab888 | 2022-08-08T04:31:48 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.legis.wisconsin.gov |
Subscription
Jmix Studio commercial subscription provides additional visual designers for convenient work with entities, screens, fetch plans, and roles.
You can open Jmix Studio Premium dialog from the toolbar of the Jmix Tool Window using Subscription Information… action.
In the dialog, you can enter Studio subscription key or request a free trial subscription.
When subscription is activated, you can see its details and the list of the add-ons included in your subscription.
Jmix Studio subscription unlocks the following premium functionality:
Entity designer
Enumeration designer
Screen designer
Fetch plan designer
Role designer
Visual editor for the theme variables
A trial subscription can be requested once by every new user. It allows a developer to evaluate full capabilities of the Studio for 28 days. Click Request Trial to check if you are eligible for the trial.
| https://docs.jmix.io/jmix/studio/subscription.html | 2022-08-08T05:04:53 | CC-MAIN-2022-33 | 1659882570765.6 | [array(['_images/open-subscription.png', 'open subscription'], dtype=object)
array(['_images/subscription.png', 'subscription'], dtype=object)
array(['_images/active-license.png', 'active license'], dtype=object)
array(['_images/subscription-trial.png', 'subscription trial'],
dtype=object) ] | docs.jmix.io |
We’re excited to announce the New Relic Infrastructure’s PowerDNS integration.
Customers with a subscription to New Relic One can now pull data from their PowerDNS servers directly into New Relic without installing any third-party software. This on-host integration allows you to track key metrics and gain critical insights into how your PowerDNS is performing, giving you improved visibility into the parts of your infrastructure served by PowerDNS.
New Relic offers observability for the two main products of the PowerDNS platform.
- Authoritative server
- Recursor
Troubleshoot your PowerDNS infrastructure faster.
Check the status of your PowerDNS infrastructure by accessing the Entity Explorer.
You will easily identify the actual status by reviewing the most important metrics on a summarized list.
View your authoritative server dashboard.
Check the health of your authoritative servers by looking at the information provided by the dashboard.
This integration is compatible with PowerDNS authoritative server and recursor 3.x and above.
For installation configuration and exact compatibility requirements, check our PowerDNS integration for New Relic documentation.
Suggest a change and learn how to contribute | https://docs.newrelic.com/whats-new/2021/11/whats-new-powerdns/ | 2022-08-08T04:34:38 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.newrelic.com |
Google Data Studio connector
Some customers use Google Data Studio for building reports. Because we care about privacy, we don’t store any personal data. That’s why we are comfortable enough to let customers choose to connect with Data Studio while Google can’t abuse our customers’ data or their visitors.
This connector lets customers query data from the API of Simple Analytics. It does need access to your Simple Analytics data. We need the Google scope “connect to an external service” for that.
- Go to…eEg
- Follow the steps in the video:
When using this connector you are subjected to our general terms and conditions and privacy policy.
The source code for the connector is hosted on GitHub. | https://docs.simpleanalytics.com/google-data-studio | 2022-08-08T05:00:17 | CC-MAIN-2022-33 | 1659882570765.6 | [array(['/images/pencil.svg', 'edit'], dtype=object)
array(['https://assets.simpleanalytics.com/docs/google-data-studio/connector.jpg',
'Simple Analytics connector for Google Data Studio'], dtype=object) ] | docs.simpleanalytics.com |
FuseSoC User Guide¶
The FuseSoC User Guide is aimed at hardware developers utilizing FuseSoC to build and integrate their hardware designs.
Learn how to use FuseSoC in an existing project.
Have you checked out a hardware design project that uses FuseSoC and are trying to understand how to build the design? Get started by installing FuseSoC, and then have a look at the usage documentation.
Add FuseSoC support to your hardware project.
If you are starting a new hardware design project, or already have source files and are looking for a better way to build your project and integrate third-party components? Get started by installing FuseSoC, read a bit about the concepts and terminology of FuseSoC, and then move on to add FuseSoC core description files to your project.
Inside this User Guide
- Why FuseSoC?
- Installing FuseSoC
- Understanding FuseSoC
- Running FuseSoC
- Building a design with FuseSoC
- The FuseSoC package manager
- Common Problems and Solutions | https://fusesoc.readthedocs.io/en/latest/user/index.html | 2022-08-08T05:16:01 | CC-MAIN-2022-33 | 1659882570765.6 | [] | fusesoc.readthedocs.io |
Customising FTGate Webmail
FTGate allows for a very simple method of customising the initial welcome screens and logos used in the user interface.
Process
- Locate the folder Webs5/assets
- Copy the contents to a new folder (this is to prevent your logos being overwritten if we update our logos)
- Replace the logo files with your own matching files. Keep the names and dimensions the same.
- In the Services/WebMail Interface/virtuals add a new entry
url: /assets
path: the path to your files (e.g. c:program filesftgate2009myassets)
- test your changes | http://docs.ftgate.com/ftgate-documentation/using-ftgate/management/customising-webmail/ | 2022-08-08T03:19:08 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.ftgate.com |
iRO.
…).
The number of parallel connections is controlled using the lower right stepper control in the Transfers window.
| https://docs.cyberduck.io/protocols/irods/?action=diff&version=13 | 2021-11-27T01:41:41 | CC-MAIN-2021-49 | 1637964358078.2 | [array(['../../_images/Use_parallel_transfer_option.png',
'Open multiple connections'], dtype=object)
array(['../../_images/Limit_Number_of_Transfers.png',
'Limit Number of Transfers'], dtype=object) ] | docs.cyberduck.io |
SSL
This is a guide to setting up SSL using the C/C++ driver. This guide will use self-signed certificates, but most steps will be similar for certificates generated by a certificate authority (CA). The first step is to generate a public and private key pair for all Cassandra nodes and configure them to use the generated certificate.
Some notes on this guide:
- Keystore and truststore might be used interchangeably. These can and often times are the same file. This guide uses the same file for both (keystore.jks) The difference is keystores generally hold private keys and truststores hold public keys/certificate chains.
- Angle bracket fields (e.g.
<field>) in examples need to be replaced with values specific to your environment.
- keytool is an application included with Java 6+
SSL can be rather cumbersome to setup; if assistance is required please use the mailing list or #datastax-drivers on
irc.freenode.net <> for help.
Generating the Cassandra Public and Private Keys
The most secure method of setting up SSL is to verify that DNS or IP address used to connect to the server matches identity information found in the SSL certificate. This helps to prevent man-in-the-middle attacks. Cassandra uses IP addresses internally so that’s the only supported information for identity verification. That means that the IP address of the Cassandra server where the certficate is installed needs to be present in either the certficate’s common name (CN) or one of its subject alternative names (SANs). It’s possible to create the certficate without either, but then it will not be possible to verify the server’s identity. Although this is not as secure, it eases the deployment of SSL by allowing the same certficate to be deployed across the entire Cassandra cluster.
To generate a public/private key pair with the IP address in the CN field use the following:
keytool -genkeypair -noprompt -keyalg RSA -validity 36500 \ -alias node \ -keystore keystore.jks \ -storepass <keystore password> \ -keypass <key password> \ -dname "CN=<IP address goes here>, OU=Drivers and Tools, O=DataStax Inc., L=Santa Clara, ST=California, C=US"
If SAN is preferred use this command:
keytool -genkeypair -noprompt -keyalg RSA -validity 36500 \ -alias node \ -keystore keystore.jks \ -storepass <keystore password> \ -keypass <key password> \ -ext SAN="<IP address goes here>" \ -dname "CN=node1.datastax.com, OU=Drivers and Tools, O=DataStax Inc., L=Santa Clara, ST=California, C=US"
NOTE: If an IP address SAN is present then it overrides checking the CN.
Enabling
client-to-node Encryption on Cassandra
The generated keystore from the previous step will need to be copied to all Cassandra node(s) and an update of the cassandra.yaml configuration file will need to be performed.
client_encryption_options: enabled: true keystore: <Path to keystore>/keystore.jks keystore_password: <keystore password> ## The password used when generating the keystore. truststore: <Path to keystore>/keystore.jks truststore_password: <keystore password> require_client_auth: <true or false>
NOTE: In this example keystore and truststore are identical.
The following guide has more information related to configuring SSL on Cassandra.
Setting up the C/C++ Driver to Use SSL
A
CassSsl object is required and must be configured:
#include <cassandra.h> void setup_ssl(CassCluster* cluster) { CassSsl* ssl = cass_ssl_new(); // Configure SSL object... // To enable SSL attach it to the cluster object cass_cluster_set_ssl(cluster, ssl); // You can detach your reference to this object once it's // added to the cluster object cass_ssl_free(ssl); }
Exporting and Loading the Cassandra Public Key
The default setting of the driver is to verify the certificate sent during the SSL handshake. For the driver to properly verify the Cassandra certificate the driver needs either the public key from the self-signed public key or the CA certificate chain used to sign the public key. To have this work, extract the public key from the Cassandra keystore generated in the previous steps. This exports a PEM formatted certificate which is required by the C/C++ driver.
keytool -exportcert -rfc -noprompt \ -alias node \ -keystore keystore.jks \ -storepass <keystore password> \ -file cassandra.pem
The trusted certificate can then be loaded using the following code: “`c int load_trusted_cert_file(const char* file, CassSsl* ssl) { CassError rc; char* cert; long cert_size;
FILE *in = fopen(file, “rb”); if (in == NULL) { fprintf(stderr, “Error loading certificate file ‘%s’\n”, file); return 0; }
fseek(in, 0, SEEK_END); cert_size = ftell(in); rewind(in);
cert = (char*)malloc(cert_size); fread(cert, sizeof(char), cert_size, in); fclose(in);
// Add the trusted certificate (or chain) to the driver rc = cass_ssl_add_trusted_cert_n(ssl, cert, cert_size); if (rc != CASS_OK) { fprintf(stderr, “Error loading SSL certificate: %s\n”, cass_error_desc(rc)); free(cert); return 0; }
free(cert); return 1; } “`
It is possible to load multiple self-signed certificates or CA certificate chains. In the event where self-signed certificates with unique IP addresses are being used this will be required. It is possible to disable the certificate verification process, but it is not recommended.
// Disable certifcate verifcation cass_ssl_set_verify_flags(ssl, CASS_SSL_VERIFY_NONE);
Enabling Cassandra identity verification
If a unique certificate has been generated for each Cassandra node with the IP address in the CN or SAN fields, identity verification will also need to be enabled.
NOTE: This is disabled by default.
// Add identity verification flag: CASS_SSL_VERIFY_PEER_IDENTITY cass_ssl_set_verify_flags(ssl, CASS_SSL_VERIFY_PEER_CERT | CASS_SSL_VERIFY_PEER_IDENTITY);
Using Cassandra and the C/C++ driver with client-side certificates
Client-side certificates allow Cassandra to authenticate the client using public key cryptography and chains of trust. This is same process as above but in reverse. The client has a public and private key and the Cassandra node has a copy of the private key or the CA chain used to generate the pair.
Generating and loading the client-side certificate
A new public/private key pair needs to be generated for client authentication.
keytool -genkeypair -noprompt -keyalg RSA -validity 36500 \ -alias driver \ -keystore keystore-driver.jks \ -storepass <keystore password> \ -keypass <key password>
The public and private key then need to be extracted and converted to the PEM format.
To extract the public:
keytool -exportcert -rfc -noprompt \ -alias driver \ -keystore keystore-driver.jks \ -storepass <keystore password> \ -file driver.pem
To extract and convert the private key:
keytool -importkeystore -noprompt -srcalias certificatekey -deststoretype PKCS12 \ -srcalias driver \ -srckeystore keystore-driver.jks \ -srcstorepass <keystore password> \ -storepass <key password> \ -destkeystore keystore-driver.p12 openssl pkcs12 -nomacver -nocerts \ -in keystore-driver.p12 \ -password pass:<key password> \ -passout pass:<key password> \ -out driver-private.pem
Now PEM formatted public and private key can be loaded. These files can be loaded using the same code from above in load_trusted_cert_file().
CassError rc = CASS_OK; char* cert = NULL; size_t cert_size = 0; // Load PEM-formatted certificate data and size into cert and cert_size... rc = cass_ssl_set_cert_n(ssl, cert, cert_size); if (rc != CASS_OK) { // Handle error } char* key = NULL; size_t key_size = 0; // A password is required when the private key is encrypted. If the private key // is NOT password protected use NULL. const char* key_password = "<key password>"; // Load PEM-formatted private key data and size into key and key_size... rc = cass_ssl_set_private_key_n(ssl, key, key_size, key_password, strlen(key_password)); if (rc != CASS_OK) { // Handle error }
Setting up client authentication with Cassandra
The driver’s public key or the CA chain used to sign the driver’s certificate will need to be added to Cassandra’s truststore. If using self-signed certificate then the public key will need to be extracted from the driver’s keystore generated in the previous steps.
Extract the public key from the driver’s keystore and add it to Cassandra’s truststore.
keytool -exportcert -noprompt \ -alias driver \ -keystore keystore-driver.jks \ -storepass cassandra \ -file cassandra-driver.crt keytool -import -noprompt \ -alias truststore \ -keystore keystore.jks \ -storepass cassandra \ -file cassandra-driver.crt
Client authentication in cassandra.yaml will also need to be enabled
require_client_auth: true | https://docs.datastax.com/en/developer/cpp-driver/2.3/topics/security/ssl/ | 2021-11-27T03:23:13 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.datastax.com |
IWorkbook.Range Property
Provides access to the cell range in the worbook.
Namespace: DevExpress.Spreadsheet
Assembly: DevExpress.Spreadsheet.v21.2.Core.dll
Declaration
Property Value
Remarks
A cell range is a rectangular block of cells that is specified by the CellRange object. The Range property returns the IRangeProvider object whose members can be used to obtain a cell range.
IRangeProvider.Item, IRangeProvider.Parse - obtain a cell range by its cell reference or name.
To access a cell range located in a specific worksheet of the workbook, specify this worksheet name before the cell reference, and separate them with an exclamation point (for example, workbook.Range.Parse(“Sheet2!C3:D9”)). If you do not specify the worksheet name explicitly, the cell range located on the currently active worksheet is returned.
IRangeProvider.FromLTRB - obtains a cell range by the indexes of its top left and bottom right cells.
This method returns a cell range located in the worksheet that is currently active.
In addition, to get a cell range located in a specific worksheet, you can use the Worksheet.Range property of the corresponding worksheet object.
Example
This example demonstrates how to access ranges of cells in a worksheet. There are several ways to accomplish this.
- The Worksheet.Item property obtains cell ranges defined by a cell reference (using A1 style) or a defined name.
- Ranges defined by a cell reference (using R1C1 or other reference styles), a defined name in a workbook, or by indexes of the bounding rows and columns - use the IRangeProvider.Item, IRangeProvider.Parse and IRangeProvider.FromLTRB members. Access the IRangeProvider object by the Worksheet.Range or IWorkbook.Range property.
// A range that includes cells from the top left cell (A1) to the bottom right cell (B5). CellRange rangeA1B5 = worksheet["A1:B5"]; // A rectangular range that includes cells from the top left cell (C5) to the bottom right cell (E7). CellRange rangeC5E7 = worksheet["C5:E7"]; // The C4:E7 cell range located in the "Sheet3" worksheet. CellRange rangeSheet3C4E7 = workbook.Range["Sheet3!C4:E7"]; // A range that contains a single cell (E7). CellRange rangeE7 = worksheet["E7"]; // A range that includes the entire column A. CellRange rangeColumnA = worksheet["A:A"]; // A range that includes the entire row 5. CellRange rangeRow5 = worksheet["5:5"]; // A minimal rectangular range that includes all listed cells: C6, D9 and E7. CellRange. CellRange rangeA1D3 = worksheet.Range.FromLTRB(0, 0, 3, 2); // A range that includes the intersection of two ranges: C5:E10 and E9:G13. // This is the E9:E10 cell range. CellRange rangeE9E10 = worksheet["C5:E10 E9:G13"]; // Create a defined name for the D20:G23 cell range. worksheet.DefinedNames.Add("MyNamedRange", "Sheet1!$D$20:$G$23"); // Access a range by its defined name. CellRange rangeD20G23 = worksheet["MyNamedRange"]; CellRange rangeA1D4 = worksheet["A1:D4"]; CellRange rangeD5E7 = worksheet["D5:E7"]; CellRange rangeRow11 = worksheet["11:11"]; CellRange rangeF7 = worksheet["F7"]; // Create a complex range using the Range.Union method. CellRange complexRange1 = worksheet["A7:A9"].Union(rangeD5E7); // Create a complex range using the IRangeProvider.Union method. CellRange complexRange2 = worksheet.Range.Union(new CellRange[] {; | https://docs.devexpress.com/OfficeFileAPI/DevExpress.Spreadsheet.IWorkbook.Range | 2021-11-27T02:45:56 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.devexpress.com |
$ZDATEH (ObjectScript)
Synopsis
$ZDATEH(date,dformat,monthlist,yearopt,startwin,endwin,mindate,maxdate,erropt,localeopt) $ZDH(date
The $ZDATEH function validates a specified date and converts it from any of the formats supported by the $ZDATE function to $HOROLOG format. The exact action $ZDATEH performs depends on the arguments.
Arguments argument. validDATE monthlist, yearopt, mindate and maxdate argument defaults. The date separator will always be a “-”. Current locale defaults are ignored.
If dformat is 16 or 17 (Japanese date formats), the date format is independent of the locale setting. You can use Japanese-format dates, 8, 9, 15, 18, or 20. If dformat is any other value $ZDATE date an <ILLEGAL VALUE> error is generated.
If you omit monthlist or specify a monthlist value of -1, $ZDATE
A numeric code that specifies whether to represent years as either two-digit values or four-digit values. Valid values are:
If you omit yearopt or specify a yearopt value of -1, $ZDATE the last day of the year (December 31) the $HOROLOG date (for example “62823,43200”), but only the date portion of mindate is parsed. Specifying a dateDATEH("05/29/1805" date values. Instead of generating <ILLEGAL VALUE> or <VALUE OUT OF RANGE> errors, the $ZDATEH function returns the erropt value.
InterSystems IRIS performs standard numeric evaluation on date, which must evaluate to an integer date. Errors generated due to invalid or out of range values of other arguments will always generate errors whether or not erropt has been supplied. For example, an <ILLEGAL VALUE> error is always generated when $ZDATE arguments. This sliding window can also be set for your locale.
The following example shows how the dformat argument
More information on locales in the section on “System Classes for National Language Support” in Specialized System Tools and Utilities. | https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=RCOS_FZDATEH&ADJUST=1 | 2021-11-27T02:42:22 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.intersystems.com |
Title
The 20 year evolution of an energy conversion course at the United States Military Academy
Document Type
Article
Publication Title
Energy Conversion and Management
Publication Date
3-1-2004
Abstract
Over the past several years, an energy conversion course offered by the Mechanical Engineering Program at the United States Military Academy in West Point, New York, has evolved into a cohesive series of lessons addressing three general topical areas: advanced thermodynamics, advanced mechanical system analysis, and direct energy conversion systems. Mechanical engineering majors enroll in Energy Conversion Systems (ME 472) during the fall semester of their senior year as an advanced elective. ME 472 builds directly on the material covered in Thermodynamics (EM 301) taken during the student's junior year. In the first segment of ME 472, the students study advanced thermodynamic topics including exergy and combustion analyses. The students then analyze various mechanical systems including refrigeration systems, internal combustion engines, boilers, and fossil fuel fired steam and gas turbine combined power plants. Exergetic efficiencies of various equipment and systems are determined. The final portion of the course covers direct energy conversion technology, including fuel cells, photovoltaics, thermoelectricity, thermionics and magnetohydrodynamics. Supplemental lessons on energy storage, semi-conductors and nonreactive energy sources (such as solar collectors, wind turbines, and hydroelectric plants) are included here. This paper discusses the evolution of ME 472 since its inception and explains the motivations for the course's progress. © 2003 Elsevier Ltd. All rights reserved.
Volume
45
Issue
4
First Page
495
Last Page
509
DOI
10.1016/S0196-8904(03)00161-4
Recommended Citation
Bailey, M., Arnas, A., Potter, R., & Samples, J. (2004). The 20 year evolution of an energy conversion course at the United States Military Academy. Energy Conversion and Management, 45 (4), 495-509.
ISSN
01968904 | https://docs.rwu.edu/seccm_fp/88/ | 2021-11-27T02:53:54 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.rwu.edu |
Support | Blog | Contact Us
Trifacta SaaS
Release 7.1
Release 6.8
Release 6.4
Release 6.0
Release 5.1
Release 5.0
Trifacta Wrangler free
Schedule Enterprise Demo
Feedback
Trifacta.com
Enterprise Release 5.1
Outdated release! Latest docs are Release 8.7: Discovery Tasks
Use the techniques in the following topics to identify patterns, inconsistencies, and issues in your datasets.
Topics:
Explore SuggestionsFind Missing DataManage Null ValuesFind Bad DataFilter DataLocate OutliersAnalyze across Multiple ColumnsCalculate Metrics across ColumnsImport Excel DataCreate Dataset with SQLParse fixed-width file and infer columnsCreate Dataset with Parameters
Topics:
Search Community:
Send Feedback
This page has no comments.
© 2013-2021 Trifacta® Inc. Privacy Policy | Terms of Use
This page has no comments. | https://docs.trifacta.com/pages/viewpage.action?pageId=118229322 | 2021-11-27T03:17:41 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.trifacta.com |
Entity—Boundary—Interactor¶
A modern application architecture
This repository contains a description and an example implementation examples of the Entity—Boundary—Interactor (EBI) application architecture, derived from ideas initially conceived by Uncle Bob in his series of talks titled Architecture: The Lost Years and his book.
The EBI architecture is a modern application architecture suited for a wide range of application styles. It is especially suitable for web application APIS, but the idea of EBI is to produce an implementation agnostic architecture, it is not tied to a specific platform, application, language or framework. It is a way to design programs, not a library.
The name Entity–Boundary—Interactor originates from a master’s thesis where this architecture is studied in depth. Names that are common or synonymous are EBC where C stands for Controller.
Examples of how to implement the architecture are given in this document and are written in Elixir, a dynamically typed language with a simple and powerful syntax.
Entity–Boundary–Interactor¶ | https://ebi.readthedocs.io/en/latest/index.html | 2021-11-27T03:00:13 | CC-MAIN-2021-49 | 1637964358078.2 | [] | ebi.readthedocs.io |
ebook deal of the week: Exam Ref 70-532 Developing Microsoft Azure Solutions, 2nd Edition
This offer expires on Sunday, May 20 at 7:00 AM GMT... | https://docs.microsoft.com/en-us/archive/blogs/microsoft_press/ebook-deal-of-the-week-exam-ref-70-532-developing-microsoft-azure-solutions-2nd-edition | 2021-11-27T01:51:38 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.microsoft.com |
Feature updates for Windows 10 and later policy in Intune
This feature is in public preview.
With Feature updates for Windows 10 and later in Intune, you can select the Windows feature update version that you want devices to remain at, like Windows 10 version 1909 or a version of Windows 11. Intune supports setting a feature level to any version that remains in support at the time you create the policy.
You can also use feature updates policy to upgrade devices that run Windows 10 to Windows 11.
Windows feature updates policies work with your Update rings for Windows 10 and later policies to prevent a device from receiving a Windows feature version that's later than the value specified in the feature updates policy.
When a device receives a policy for Feature updates:
The device updates to the version of Windows specified in the policy. A device that already runs a later version of Windows remains at its current version. By freezing the version, the devices feature set remains stable during the duration of the policy.
Note
A device won't install an update when it has a safeguard hold for that Windows version. When a device evaluates applicability of an update version, Windows creates the temporary safeguard hold if an unresolved known issue exists. Once the issue is resolved, the hold is removed and the device can then update.
Learn more about safeguard holds in the Windows documentation for Feature Update Status.
To learn about known issues that can result in a safeguard hold, see Windows release information and then reference the relevant Windows version from the table of contents for that page.
For example, for Windows version 2004, open Windows release information, and then from the left-hand pane, select Version 2004 and then Known issues and notifications. The resultant page details known issues for that Windows version that might result in safeguard hold.
Unlike using Pause with an update ring, which expires after 35 days, the Feature updates policy remains in effect. Devices won't install a new Windows version until you modify or remove the Feature updates policy. If you edit the policy to specify a newer version, devices can then install the features from that Windows version.
You can configure policy to manage the schedule by which Windows Update makes the offer available to devices. For more information, see Rollout options for Windows Updates.
Prerequisites
The following are prerequisites for Intune's Feature updates for Windows 10 and later:
In addition to a license for Intune, your organization must have one of the following subscriptions:
-
- Microsoft 365 Business Premium
Review your subscription details for applicability to Windows 11.
Devices must:
Run a version of Windows 10/11 that remains in support.
Be enrolled in Intune MDM and be Hybrid AD joined or Azure AD joined.
Have Telemetry turned on, with a minimum setting of Required.
Devices that receive a feature updates policy and that have Telemetry set to Not configured (off), might install a later version of Windows than defined in the feature updates policy. The prerequisite to require Telemetry is under review as this feature moves towards general availability.
Configure Telemetry as part of a Device Restriction policy for Windows 10/11. In the device restriction profile, under Reporting and Telemetry, configure the Share usage data with a minimum value of Required. Values of Enhanced (1903 and earlier) or Optional are also supported.
The Microsoft Account Sign-In Assistant (wlidsvc) must be able to run. If the service is blocked or set to Disabled, it fails to receive the update. For more information, see Feature updates aren't being offered while other updates are. By default, the service is set to Manual (Trigger Start), which allows it to run when needed.
Feature updates are supported for the following Windows 10/11 editions:
- Windows 10/11 Pro
- Windows 10/11 Enterprise
- Windows 10/11 Pro Education
- Windows 10/11 Education
Note
Unsupported versions and editions:
Windows 10/11 Enterprise LTSC: Windows Update for Business (WUfB) does not support the Long Term Service Channel release. Plan to use alternative patching methods, like WSUS or Configuration Manager.
Limitations for Feature updates for Windows 10 and later policy
When you deploy a Feature updates for Windows 10 and later policy to a device that also receives an Update rings for Windows 10 and later policy, review the update ring for the following configurations:
- The Feature update deferral period (days) must be set to 0.
- Feature updates for the update ring must be running. They must not be paused.
Tip
If you're using feature updates, we recommend you end use of deferrals as configured in your update rings policy. Combining update ring deferrals with feature updates policy can create complexity that might delay update installations.
For more information, see Move from update ring deferrals to feature updates policy
Feature updates for Windows 10 and later policies cannot be applied during the Autopilot out of box experience (OOBE). Instead, the policies apply at the first Windows Update scan after a device has finished provisioning, which is typically a day.
If you co-manage devices with Configuration Manager, feature updates policies might not immediately take effect on devices when you newly configure the Windows Update policies workload to Intune. This delay is temporary but can initially result in devices updating to a later feature update version than is configured in the policy.
To prevent this initial delay from impacting your co-managed devices, configure a Feature updates for Windows 10 and later policy and target the policy to your devices before you configure them for co-management or you shift the Windows Update workload to Intune. You can validate whether a device is enrolled for the feature update profile by checking the Windows feature update report under the Reporting node in the Microsoft Endpoint Management admin console.
When the device checks in to the Windows Update service, the device's group membership is validated against the security groups assigned to the feature updates policy settings for any feature update holds.
Managed devices that receive feature update policy are automatically enrolled with the Windows Update for Business deployment service. The deployment service manages the updates a device receives. The service is utilized by Microsoft Endpoint Manager and works with your Intune policies for Windows updates to deploy feature updates to devices.
When a device is no longer assigned to any feature update policies, Intune waits 90 days to unenroll that device from feature update management and to unenroll that device from the deployment service. This delay allows time to assign the device to a different policy and ensure that in the meantime the device doesn’t receive a feature update that wasn't intended.
This means that when a feature updates policy no longer applies to a device, that device won’t be offered any feature updates until one of the following happens:
- 90 days elapse.
- The device is assigned to a new feature update profile.
- The device is unenrolled from Intune, which unenrolls the device from feature update management by the Deployment Service.
- You use the Windows Update for Business deployment service graph API to remove the device from feature update management.
To keep a device at its current feature update version and prevent it from being unenrolled and updated to the most recent feature update version, ensure the device remains assigned to a feature update policy that specifies the devices current Windows version.
Create and assign Feature updates for Windows 10 and later policy
Select Devices > Windows > Feature updates for Windows 10 and later > Create profile.
Under Deployment settings:
Specify a name, a description (optional), and for Feature update to deploy, select the version of Windows with the feature set you want, and then select Next. Only versions of Windows that remain in support are available to select.
Configure Rollout options to manage when Windows Updates makes the update available to devices that receive this policy. For information about using these options, see Rollout options for Windows Updates.
Under Assignments, choose + Select groups to include and then assign the feature updates deployment to one or more device groups. Select Next to continue.
Under Review + create, review the settings. When ready to save the Feature updates policy, select Create.
Upgrade devices to Windows 11
You can use policy for Feature updates for Windows 10 and later to upgrade devices that run Windows 10 to Windows 11.
When you use feature updates policy to deploy Windows 11, you can target the policy to any of your Windows 10 devices and only devices that meet the Windows 11 minimum requirements will upgrade. Devices that don’t meet the requirements for Windows 11 won’t receive the update and remain at their current Windows 10 version.
When there are multiple versions of Windows 11 available, you can choose to deploy the latest build. When you deploy the latest build to a group of devices, those devices that already run Windows 11 will update while devices that still run Windows 10 will upgrade to that version of Windows 11 if they meet the upgrade requirements. In this way, you can always upgrade supported Windows 10 devices to the latest Windows 11 version even if you choose to delay the upgrade of some devices until a future time.
Prepare to upgrade to Windows 11
The first step in preparing for a Windows 11 upgrade is to ensure your devices meet the minimum system requirements for Windows 11.
You can use Endpoint analytics in Microsoft Endpoint Manager to determine which of your devices meet the hardware requirements. If some of your devices don't meet all the requirements, you can see exactly which ones aren't met. To use Endpoint analytics, your devices must be managed by Intune-managed, co-managed, or have the Configuration Manager client version 2107 or newer with tenant attach enabled.
If you’re already using Endpoint analytics, navigate to the Work from anywhere report, and select the Windows score category in the middle to open a flyout with aggregate Windows 11 readiness information. For more granular details, go to the Windows tab at the top of the report. On the Windows tab, you’ll see device-by-device readiness information.
Licensing for Windows 11 versions
Windows 11 includes a new license agreement, which can be viewed at. This license agreement is automatically accepted by an organization that submits a policy to deploy Windows 11.
When you use configure a policy in the Microsoft Endpoint Manager admin center to deploy any Windows 11 version, the Microsoft Endpoint Manager admin center displays a notice to remind you that by submitting the policy you are accepting the Windows 11 License Agreement terms on behalf of the devices, and your device users. After submitting the feature updates policy, end users won’t see or need to accept the license agreement, making the update process seamless.
This license reminder appears each time you select a Windows 11 build, even if all your Windows devices already run Windows 11. This prompt is provided because Intune doesn’t track which devices will receive the policy, and its possible new devices that run Windows 10 might later enroll and be targeted by the policy.
For more information including general licensing details, see the Windows 11 documentation.
Create policy for Windows 11
To deploy Windows 11, you’ll create and deploy a feature updates policy just as you might have done previously for a Windows 10 device. It’s the same process though instead of selecting a Windows 10 version, you’ll select a Windows 11 version from the Feature update to deploy dropdown list. The dropdown list displays both Windows 10 and Windows 11 version updates that are in support.
- Deploying an older Windows version to a device won’t downgrade the device. Devices only install an update when it's newer than the devices current version.
- Policies for Windows 11 and Windows 10 can exist side by side in Microsoft Endpoint Manager.
Manage Feature updates for Windows 10 and later policy
In the admin center, go to Devices > Windows > Feature updates for Windows 10 and later to view your profiles.
For each profile you can view:
Feature Update Version – The feature update version in the profile.
Assigned – If the profile is assigned to one or more groups.
Support: The status of the feature update:
- Supported – The feature update version is in support and can deploy to devices.
- Support Ending - The feature update version is within two months of its support end date.
- Not supported – Support for the feature update has expired and it no longer deploys to devices.
Support End Date – The end of support date for the feature update version.
Selecting a profile from the list opens the profiles Overview pane where you can:
- Select Delete to delete the policy from Intune and remove it from devices.
- Select Properties to modify the deployment. On the Properties pane, select Edit to open the Deployment settings or Assignments, where you can then modify the deployment.
- Select End user update status to view information about the policy.
Validation and reporting
There are multiple options to get in-depth reporting for Windows 10/11 updates with Intune. Windows update reports show details about your Windows 10 and Windows 11 devices side by side in the same report.
To learn more, see Intune compliance reports.
Next steps
- Use Windows update rings in Intune
- Use Intune compliance reports for Windows 10/11 updates | https://docs.microsoft.com/en-us/mem/intune/protect/windows-10-feature-updates | 2021-11-27T03:35:30 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.microsoft.com |
What the security styles and their effects are
Contributors
There are four different security styles: UNIX, NTFS, mixed, and unified. Each security style has a different effect on how permissions are handled for data. You must understand the different effects to ensure that you select the appropriate security style for your purposes.
It is important to understand that security styles do not determine what client types can or cannot access data. Security styles only determine the type of permissions ONTAP uses to control data access and what client type can modify these permissions.
For example, if a volume uses UNIX security style, SMB clients can still access data (provided that they properly authenticate and authorize) due to the multiprotocol nature of ONTAP. However, ONTAP uses UNIX permissions that only UNIX clients can modify using native tools.
FlexVol volumes support UNIX, NTS, and mixed security styles. When the security style is mixed or unified, the effective permissions depend on the client type that last modified the permissions because users set the security style on an individual basis. If the last client that modified permissions was an NFSv3 client, the permissions are UNIX NFSv3 mode bits. If the last client was an NFSv4 client, the permissions are NFSv4 ACLs. If the last client was an SMB client, the permissions are Windows NTFS ACLs.
Beginning with ONTAP 9.2, the
show-effective-permissions parameter to the
vserver security file-directory command enables you to display effective permissions granted to a Windows or UNIX user on the specified file or folder path. In addition, the optional parameter
-share-name enables you to display the effective share permission. | https://docs.netapp.com/us-en/ontap/smb-admin/security-styles-their-effects-concept.html | 2021-11-27T03:32:53 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.netapp.com |
Did you found an bug or you would like to suggest a new feature? I’m open for feedback. Please open a new issue and let me know what you think.
You’re also welcome to contribute with pull requests.
This document explains how to contribute changes to the pREST project. open discussion here. sending code out for review, run all the tests for the whole tree to make sure the changes don’t break other usage and keep the compatibility on upgrade. To make sure you are running the test suite exactly like we do - the tests are run in GitHub Actions, I recommend reading Development Guides that explains how to run the tests locally.:
This project and everyone participating in it are governed by the prestd code of conduct. By participating, you are expected to uphold this code. Please read the full text so that you can read which actions may or may not be tolerated.
In order to accept your pull request, we need you to submit a CLA. You only need to do this once. If you are submitting a pull request for the first time, you can complete your CLA here or just submit a Pull Request and our CLA Bot will ask you to sign the CLA before merging your Pull Request.
If you are making contributions to our repositories on behalf of your company, then we will need a Corporate Contributor License Agreement (CLA) signed. In order to do that, please contact us at opensource@prestd Github Discussions..
Since pREST is maintained by community and prestd (a company that supports the community, not owner, but helper),.
After the election, the new owners should proactively agree with our CONTRIBUTING (this page) requirements on the Github Discussions. Below are the words to speak:
Code that you contribute should use the standard copyright header:
// Copyright 2016 The prest. | https://docs.prestd.com/contribute/ | 2021-11-27T03:00:30 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.prestd.com |
The Administrative Console Guide
Full online documentation for the WP EasyCart eCommerce plugin!
Full online documentation for the WP EasyCart eCommerce plugin!
Google Merchant Options allow you to specify the google meta data that is required if going to move toward a google listed merchant product. Not all products sold have GTIN/MPN bar codes and are available to be listed on Google merchants, but if you do, then we give options to access here.
Special Note: These value are typically left blank and does not effect overall Search Engine Optmization. Your site and products will still be indexed as usual pages and searchable without values here.
To edit be sure to review the valid values available on Google’s website. They are consistently evolving so be sure to check there for status. | https://docs.wpeasycart.com/wp-easycart-administrative-console-guide/?section=google-merchant-options | 2021-11-27T03:33:27 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.wpeasycart.com |
SettingsController¶
Package: Controller
extends Omeka_Controller_AbstractActionController
- SettingsController::checkImagemagickAction()¶
Determine whether or not ImageMagick has been correctly installed and configured.
In a few cases, this will indicate failure even though the ImageMagick program works properly. In those cases, users may ignore the results of this test. This is because the ‘convert’ command may have returned a non-zero status code for some reason. Keep in mind that a 0 status code always indicates success. | https://omeka.readthedocs.io/en/stable-2.2/Reference/controllers/SettingsController.html | 2021-11-27T02:07:56 | CC-MAIN-2021-49 | 1637964358078.2 | [] | omeka.readthedocs.io |
Bearer provides 3 types of keys. Each of them serves different purposes to ensure your API calls are secured. Find below the documentation about the differences between your Secret Key, Publishable Key and Encryption Key
Your Bearer Secret Key is used to authenticate your application with Bearer when setting up the Bearer Agent. The Secret key is intended to used from server side only.
Your Secret Key must not be shared with anyone and never be exposed.
For some requests performed, you'll need to provide a Publishable Key that identifies your website to Bearer. For instance, you use that key with our Connect Button or our Setup Component.
In some very particular cases, you might want to perform frontend API calls. You'll use your Publishable Key for that. But takes extra security in doing so, as this key has someone with bad intentions could also perform API calls on your behalf.
By default, the Publishable Key has very limited access to your Bearer account. Which means that this key is safe to be dropped into your frontend code.
Refer to the JavaScript client to find out how to use your Publishable Key in different context.
At Bearer, we love webhooks and even more when webhooks are secure. For that reason, whenever you receive a webhook from Bearer, Bearer injects a specific header to the request containing the payload signature. This signature is generated using your Encryption Key and ensures the payload hasn't been compromised or changed.
Encryption Key must not be shared with anyone and never be exposed to the frontend
Refer to Webhooks section to learn how to use your Encryption Key to protect your application from receiving unexpected webhooks.
By default, Bearer provides 2 environments (Production and Sandbox) and each of them get its own credentials (developer keys).
For that purpose, all your developer keys are prefixed with the right environment they are intended to be used with. Some examples below: | https://docs.bearer.sh/dashboard/settings | 2020-02-17T00:53:49 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.bearer.sh |
Introduction to CDAP
The Cask Data Application Platform (CDAP) is a Unified Integration Platform for Big Data applications. It provides simple, easy-to-use abstractions to process and analyze huge datasets, without having to write and debug low-level code, leading to rapid time-to-value. It integrates the latest Big Data technologies from data integration, discovery and data science to app development, security, operations and governance. CDAP also provides familiar UIs that make Big Data accessible to anyone.
CDAP provides two visual interfaces for working with data:
- Data Preparation is an interactive UI that enables you to explore, cleanse, prepare, and transform your data using simple, point-and-click transformation directives. Learn more about Data Preparation.
- Data Pipelines allow to interactively build data pipelines by linking together data sources, transformations, and sinks. It also provides insights into logs, metrics and other management capabilities for pipelines that help to operationalize data processing in production-ready, revenue-critical environments. Learn more about Data Pipelines.
More about CDAP
Cask Data Application Platform (CDAP) is an open source framework to build and deploy data applications on Apache™ Hadoop®.
CDAP provides visual tools that help ease Data Scientists work with large datasets:
- Data Collection: Both Data Preparation and Data Pipelines provide simple UIs for gathering data that is stored in files, databases, or real-time streams.
- Data Exploration: Data Preparation allows you to view and explore your data in spreadsheet format. You can also apply simple transformations to your data before deploying in a pipeline.
- Data Processing: Your Data Preparation transforms and custom programmatic logic are automatically translated into Spark and MapReduce jobs when deployed in Pipelines. As a result, it simple to analyze vast quantities of data very quickly.
- Data Storage: CDAP uses internal datasets that provide a common visual and RESTful interfaces for accessing data. The abstraction of datasets makes it simple to work with several different data formats and database types -- for instance, Avro or RDBMS -- all in the same pipeline or program, without needing to worry about the low-level details of complex data formats.
CDAP makes it simple for data scientists and analysts to explore their data and deploy production-ready pipelines. To get started with CDAP, download CDAP here and follow this tutorial this tutorial to set up CDAP own your own machine.
Advanced Users
In addition to the capabilities provided by Data Preparation, Data Pipelines, and Metatada, CDAP is a full Unified Integration Platform that allows you to quickly develop and deploy custom Java applications. If you organization needs to develop a custom application, please visit the Developer Manual. | https://docs.cask.co/cdap/develop/en/user-guide/overview.html | 2020-02-17T01:13:48 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.cask.co |
TechEd EMEA for Developers is coming up next week – visit our Windows Mobile Talks!
PDC is barely over, and we already have TechED EMEA for Developer coming up next week, so some of us are heading over to Barcelona this weekend.
Maarten Struys and I will be giving the following sessions. Please make sure to visit us. Here are the details about our sessions:
MBL305
Creating Location Aware Applications for Windows Mobile Devices
November 14 10:45 - 12:00
Room 119 (DEV)
More and more Windows Mobile powered devices ship with integrated global positioning system (GPS) hardware. Since Windows Mobile devices are typically used on the road, it makes a lot of sense to add location awareness to your applications. In the upcoming future, Constanze Roman and Maarten Struys foresee location aware applications moving beyond traditional navigation software. Adding location awareness to all kinds of social networking applications could be the next big thing for Windows Mobile devices. In this sample filled session, Constanze and Maarten show you how you can make use of the GPS Intermediate Driver to retrieve GPS information from inside managed applications. You will learn how to use the FakeGPS utility to test location-enabled applications without needing access to a physical GPS device and you will also learn how to feed FakeGPS with your own recorded location information. Of course you will also see a real location-aware application on a Windows Mobile Device in action.
MBL303
Unit Testing for Devices: The Holy Grail or Something to Use in Your Day-to-Day Work?
November 12 09:00 - 10:15
Room 119 (DEV)
With Microsoft Visual Studio 2008, Unit Testing is now available for device developers as well. Is Unit Testing just a hype or can you increase the quality of your code by creating unit tests? In this sample-filled presentation, Maarten Struys and Constanze Roman explore Unit Testing for Devices. Not only do you learn how to create unit tests for your smart device application, you also learn how to create better applications by taking advantage of unit tests. By attending this session you get even more knowledge. How do you test your unit tests? Learn how to create unit tests for your application, and also find out how to debug your unit tests. Please join us for this fun-filled presentation around unit testing for devices. After attending this session, you can rest assured that your code works as intended.
MBL02-IS
Programming with State and Notification Broker in Windows Mobile 6.0
November 10 17:45 - 19:00
Room 134
Windows Mobile 6 and Windows Mobile 5.0 powered devices have an incredibly powerful collection of managed APIs. Attend this session to see how State and Notification Broker in Windows Mobile 6 can provide easy access to over 120 different hardware and system states, such as network connectivity and battery power, that are all consistently within reach by managed code. Discover how you can even extend State and Notification Broker by adding your own user-defined states. See how you can use State and Notification Broker to create really smart applications. Learn how to have your application react to state changes in your device and how to start your application automatically when a particular system state changes. Make sure to join Constanze Roman and Maarten Struys for a demo filled session that will teach you how to create better and more efficiently managed applications for Windows Mobile devices.
Also, on Friday, November 14, we’ll be hosting a Panel Discussion on Windows Mobile Application Development. Please make sure to visit us:
MBL01-PAN
Ask the Windows Mobile Panel
November 14 15:15 - 16:30
Room 119 (DEV)
In this panel session, you will have an opportunity to ask the team of Microsoft and community experts any burning questions related to Windows Mobile! Share industry experience, express your opinions and make your wishes known to Microsoft about what you want the future of Windows Mobile to look like. This is a great opportunity to drive dialogue among your peers, Microsoft and our community leads, to clarify technical understanding, provide ideas for moving ahead and just have fun networking with our Windows Mobile gurus.
And last, but not least, make sure to visit us for the Windows Mobile Smackdown on Tuesday, November 11. You’ll get to enjoy some cool demos and other surprises and you’ll even have a change to win some fun prizes. Here’s the info:
MBL201
Windows Mobile Smackdown!
November 11 13:30 - 14:45
Room 114
Do you have a Windows Mobile device? Ever wondered what else you can do with it apart from Email and talking to someone? Take a look at some of the coolest demos from the huge list of applications that exist for the Windows Mobile phone today. This will be a high-energy, fun session with lots of SWAGs and giveaways!
As always, we ask you to submit feedback on the sessions you attend, you may even have the chance to win a HTC Touch Dual phone by doing so!
C U in Barcelona!
Constanze | https://docs.microsoft.com/en-us/archive/blogs/croman/teched-emea-for-developers-is-coming-up-next-week-visit-our-windows-mobile-talks | 2020-02-17T02:37:02 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.microsoft.com |
I goofed on PSSUG June meeting
Mea culpa...I made a mistake when publicizing this month's Philadelphia SQL Server User Group meeting in the localized MSDN Flash. I misread their website, and inadvertently posted the data of the meeting as today, June 1st, with an incorrect speaker and topic.
The correct info is as follows:
Our next meeting is June 14 at the Microsoft office in Malvern PA, and our keynote speaker is Kevin Goff. He'll be presenting an interactive session called "T-SQL 2005 for Application Developers."
My apologies for any confusion or inconvenience. | https://docs.microsoft.com/en-us/archive/blogs/gduthie/i-goofed-on-pssug-june-meeting | 2020-02-17T02:41:05 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.microsoft.com |
In your data, you might have groups of events with related field values. To search more efficiently for these groups of event data, you can assign tags and aliases to your data.
If you tag tens of thousands of items, use field lookups. Using many tags will not affect indexing, but your search has better event categorization when using lookups. For more information on field lookups, see About lookups.
Tags enable you to assign names to specific field and value combinations, including event type, host, source, or source type.
You can use tags to help you track abstract field values, like IP addresses or ID numbers. For example, you could have an IP address related to your main office with the value 192.168.1.2. Tag that
IPaddress value as mainoffice, and then search for that tag to find events with that IP address.
You can use a tag to group a set of field values together, so that you can search for them with one command. For example, you might find that you have two host names that refer to the same computer. You could give both of those values the same tag. When you search for that tag, events that involve both host name values are returned.
You can give extracted fields multiple tags that reflect different aspects of their identity, which enable you to perform tag-based searches to help you narrow the search results.
Tags example
You have an extracted field called
IPaddress, which refers to the IP addresses of the data sources within your company intranet. You can tag each IP address based on its functionality or location. You can tag all of your routers' IP addresses as router, and, use the following search.
tag=router tag=SF NOT (tag=Building1)
Tags and the search-time operations sequence
When you run a search, Splunk software runs several operations to derive knowledge objects and apply them to events returned by the search. Splunk software performs these operations in a specific sequence.
Search-time operation order
Tags come last in the sequence of search-time operations.
Restrictions
The Splunk software applies tags to field/value pairs in events in ASCII sort order. You can apply tags to any field/value pair in an event, whether it is extracted at index time, search time, or added through some other method, such as an event type, lookup, or calculated field.
For more information
For more information about search-time operations, see search-time operations sequence.
Field aliases
Field aliases enable you to normalize data from multiple sources. You can add multiple aliases to a field name or use these field aliases to normalize different field names. The use of Field aliases does not rename or remove the original field name. When you alias a field, you can search for it with any of its name aliases. You can alias field names in Splunk Web or in props.conf. See Create field aliases in Splunk Web.
You can use aliases to assign different extracted field names to a single field name.
Field aliases for all source types are used in all searches, which can produce a lot of overhead over time.
Field Aliases example
One data model might have a field called
http_referrer. This field might be misspelled in your source data as
http_referer. Use field aliases to capture the misspelled field in your original source data and map it to the expected field name.
Field aliases and the search-time operations sequence
Search-time operations order
Field aliasing comes fourth in the search-time operations order, before calculated fields but after automatic key-value field extraction.
Restrictions
Splunk software processes field aliases belonging to a specific host, source, or sourcetype in ASCII sort order. You can create aliases for fields that are extracted at index time or search time. You cannot create aliases for fields that are added to events by search-time operations that come after the field aliasing process.
For more information
For more information about search-time operations, see search-time operations sequence.! | https://docs.splunk.com/Documentation/Splunk/6.6.2/Knowledge/Abouttagsandaliases | 2020-02-17T01:12:06 | CC-MAIN-2020-10 | 1581875141460.64 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
You can check current permissions for privacy-related privileges and request user permission to use specified privileges.
This feature is supported in mobile and wearable applications only. Privacy Privilege API include:
Checking privilege status. enumeration (in mobile and wearable callback used in the
requestPermission() method.
The user decision is returned in the first parameter of the callback as a value of the
PermissionRequestResult enumeration (in mobile and wearable);
PPM_ALLOW_FOREVERor
PPM_DENY_FOREVER, the decision is definitive and the application can react appropriately. It can finish its execution (if denied permission) or start to use protected APIs (if granted permission).
PPM_DENY_ONCE,:
PPM_ALLOW_FOREVER mobile and wearable and
requestPermissions mobile and wearable.. | https://docs.tizen.org/application/web/guides/security/privacy-related-permissions | 2020-02-17T00:36:01 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.tizen.org |
Building NSClient++##
The dependencies are different on different Linux systems so we will start with a section on installing dependencies on various platforms.
Dependencies on Ubuntu#
First we need to install a set of packages:
sudo apt-get install -y git sudo apt-get install -y build-essential sudo apt-get install -y cmake sudo apt-get install -y python python-dev sudo apt-get install -y libssl-dev sudo apt-get install -y libboost-all-dev sudo apt-get install -y protobuf-compiler python-protobuf libprotobuf-dev sudo apt-get install -y python-sphinx sudo apt-get install -y libcrypto++-dev libcrypto++ sudo apt-get install -y liblua5.1-0-dev sudo apt-get install -y libgtest-dev
Getting the code from github#
Next up we download the source code from github:
git clone --recursive
Building NSClient++#
Create a folder in which we will build the code:
Vagrant#
We provide a number of vagrant profiles which will built NSClient++ as well:
git clone --recursive
cd vagrant cd precise32 # Replace this with precise64 or oracle-linux-6.4_64 vagrant up -- provision
The resultiung packages will be found under packages | http://docs.nsclient.org/0.5.1/about/build/ | 2020-02-17T01:25:48 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.nsclient.org |
There is a management SSID that lets users know the current status when an access point connects to EnGenius Cloud. If an access point has lost its connection to the Internet but still receives power, it will broadcast a management service set identifier (SSID) that can be connected to for administrative tasks.
Connect to the default SSID by completing the following steps:
Physically check that the access point has power.
Check if a known default SSID is being broadcast.
If a management SSID is being broadcast, connect your device to it.
After connecting, check your gateway IP address to connect to the local status page. If you can't find the gateway IP, please make sure the access point is in NAT mode.
<EnMGMTxxxx>-SSID_name>-No_Eth
Cause: AP does not have Ethernet connection.
Solution: Check if the Ethernet cable is unplugged.
<EnMGMTxxxx>-No_IP
Cause: AP cannot get an IP address seems to be working, but a connection to EnGenius Cloud cannot be established.
Solution: Check EnGenius Cloud server status with EnGenius. | https://docs.engenius.ai/engenius-cloud/appendix/ssid-trouble-shooting-naming-rules | 2020-02-17T00:11:02 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.engenius.ai |
NATS account configurations are built using the
nsc tool. The NSC tool allows you to:
Create and edit Operators, Accounts, Users
Manage publish and subscribe permissions for Users
Define Service and Stream exports from an account
Reference Service and Streams from another account
Generate Activation tokens that grants access to a private service or stream
Generate User credential files
Describe Operators, Accounts, Users, and Activations
Push and pull account JWTs to an account JWTs server
Installing
nsc is easy:
curl -L | python
The script will download the latest version of
nsc and install it into your system.
Alternatively, you can use
nsc with the nats-box Docker image:
$ docker run --rm -it -v $(pwd)/nsc:/nsc synadia/nats-box:latest# In case NSC not initialized already:nats-box:~# nsc initnats-box:~# chmod -R 1000:1000 /nsc$ tree -L 2 nsc/nsc/├── accounts│ ├── nats│ └── nsc.json└── nkeys├── creds└── keys5 directories, 1 file
You can find various task-oriented tutorials to working with the tool here:
For more specific browsing of the tool syntax, check out the
nsc tool documentation. It can be found within the tool itself:
> nsc help
Or an online version here. | https://docs.nats.io/nats-tools/nsc | 2020-02-17T02:10:33 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.nats.io |
Advanced Restore Options (General)
Use this dialog box to access additional restore options.
Use hardware revert capability if available
Not applicable.
Impersonate User
Specifies whether to submit Windows logon information for the current restore operation. This information is needed only if you intend to restore data to a shared network drive or directory to which you have no write, create, or change privileges.
The user account specified must have permissions for the UNC path to which the data will be restored. This user should be allowed to create files in the destination folder of the virtual machine through the destination proxy computer. Without these permissions, the recovery operation will not complete successfully. | http://docs.snapprotect.com/netapp/v11/article?p=products/vs_rhev/help/restore_adv_general.htm | 2020-02-17T02:06:50 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.snapprotect.com |
Viewing Payouts
With the WP1099 plugin, you can view payouts made to each vendor or affiliate during the year.
Simply click on WP1099 in the left side menu, then click the Payouts tab at the top.
This page will show you each vendor who has reached the applicable minimum payout threshold, with the total of all the payouts made to them for the current tax year.
Payout Amounts
Payout amounts listed are for the tax year selected in the WP1099 Settings. If you wish to view a different tax year, click the Settings tab, choose a different tax year, and click Save Changes. This will update the amounts shown on the Payouts tab with the correct year's payments.
Different Plugins
If you are using more than one of the compatible plugins along with WP1099, you will see a link at the top of the Payouts tab for each plugin. Click on that link to see the results separated by payouts in that plugin.
| https://docs.amplifyplugins.com/article/6-viewing-payouts | 2020-02-17T02:00:35 | CC-MAIN-2020-10 | 1581875141460.64 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/594bf4a704286305c68d4a6f/images/594c111b2c7d3a0747ce1c7d/file-XHZMiM2Zoc.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/594bf4a704286305c68d4a6f/images/594c27632c7d3a0747ce1d3b/file-pSBfjGzLLu.gif',
None], dtype=object) ] | docs.amplifyplugins.com |
Supported upgrades for domain controllers to Windows 2008 (Melting Pot in CorpNet)
Currently.
- You can have DCs with down-level OS down to Windows 2000 SP4 in the same forest along with WS2008 DCs.
- This means you can have forests with a mix of WS2008, WS2003 SP2, WS2003 R2, WS2003 SP1 and Win2K SP4 (please have in mind that this depends on the forest and domain functional levels).
- If you have a down-level only forest (i.e. no WS2008) and want to introduce a new WS2008 you will need to run ADPrep (ForestPrep and DomainPrep).
- You can run ADPrep having down-level OS down to Win2K SP4, you don’t need to have all of them with WS03 SP2.
- However if you are going to in-place upgrade any of the down-level DCs, these have to be at least WS2003 SP1.
Refs:
Upgrading Active Directory Domains to Windows Server 2008 AD DS Domains
What Service Packs can be upgraded to Windows 2008 | https://docs.microsoft.com/en-us/archive/blogs/brad_rutkowski/supported-upgrades-for-domain-controllers-to-windows-2008-melting-pot-in-corpnet | 2020-02-17T02:34:21 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.microsoft.com |
Windows Server Technical Preview – My Favourite Features – Part 1
Microsoft released the first Technical Preview of Windows 10 to much acclaim back in October. There have been three releases so far and we currently sit on the ‘last release of the Calendar year’ – Build 9879.
The Technical Preview is intended primarily for the enterprise to evaluate the changes and inform the development of new and evolved features of the client operating system. This is a brave and intelligent step. Most followers of Windows in an enterprise will know that Microsoft traditionally release their Client and Server platforms in pairs: XP/2003, Vista/2008, Windows 7/2008 R2, Windows 8/2012 and most recently Windows 8.1/2012 R2.
The dramatic changes inside Microsoft have not led to a change in this pattern, and there is a new server platform being developed alongside Windows 10. This server is as yet un-named but is also in Technical Preview.
If you have an MSDN subscription you can find it there in both ISO and VHD formats (the new Hyper-V Server is there too). If you do not subscribe then you can find it here. The new Remote Server Administration Tools for Windows 10 Technical Preview have also been released to allow you to remotely manage your new server from your new client. The RSAT can be found here. They are available in 32-bit and 64-bit flavours.
For anyone interested in the Server Technical Preview, just about everything you could want to know can be accessed from this blog site. This is Jose Barreto's blog; Jose is a member of the File Server team within Microsoft and has put together this invaluable survival guide. As you might imagine, it is storage focussed but does cover most other areas too.
There is one final way you can have a look at and run the Server Technical Preview, and that is as a virtual machine in Microsoft Azure. If you do not have an Azure subscription, again this is part of your MSDN benefit (MSDN is sounding more and more like good value). Otherwise you can sign up for a cost-free trial here.
Windows Server 2012 was a huge leap in performance and function for the Windows Server family, and despite the familiar look and feel of the new server and most of its tools, there have been significant new features and improvements to old ones. BUT please remember when looking at and playing with this new server operating system:
THIS IS A TECHNICAL PREVIEW – do not use it in production, and do not rely on it for any tasks you cannot afford to lose. Having said that, I have found it stable and reliable (as with the Windows 10 client – the difference being that I use the Windows 10 client on my main work machine and just about every other machine I use, with a couple of exceptions), whereas the server version is very definitely a test-rig setup for me at present.
So, what is new and of those new things, what are my favourite features and why. This is the first post in a series examining major new functionality in the Technical Preview.
In Server 2012 one of the big five features for me was Hyper-V Replica. The first new feature of the Technical Preview I want to describe is called Storage Replica.
To quote the TechNet site:
OK, that sounds (a) like a lot of technical stuff and (b) pretty exciting and revolutionary for an out-of-the-box, no-cost inclusion in a server operating system. So what exactly does it do, and how does it do it?
Well, Server 2012 introduced the next version of SMB (SMB 3.0) this allowed a vast number of performance and reliability improvements with file servers and storage as well as normal communications using the SMB protocol.
In short the feature allows an All-Microsoft DR solution for both planned and unplanned outages of your mission-critical tasks. It also allows you to stretch your clusters to a Metropolitan scale.
What is it NOT?
- Hyper-V Replica
- DFSR
- SQLAlwaysOn
- Backup
Many people use DFSR as a Disaster Recovery solution; it is not well suited to this, although it can be used that way. Storage Replica is true DR replication, in either synchronous or asynchronous fashion.
Microsoft have implemented synchronous replication in a different fashion to most other providers: it does not rely on snapshot technology but continuously replicates instead. This leads to a lower RPO (Recovery Point Objective – meaning less data could be lost), but it also means that SR relies on the applications to provide consistency guarantees rather than on snapshots. SR does guarantee consistency in all of its replication modes.
There is a step-by-step guide available here, but I have included some other notes below for those who don’t want to read it all now (all 38 pages of it). (Images are taken from that guide and live screenshots too)
The Technical Preview does not currently allow cluster to cluster replication.
Storage replica is capable of BOTH synchronous and asynchronous replication as shown below. And anyone who knows anything about replication knows that to do this there must be some significant hardware and networking requirements.
So what are the pre-requisites to be able to use Storage Replica in a stretch cluster?
The diagram below represents such a stretch cluster.
There must be a Windows Active Directory (not necessary to host this on Technical preview)
Four servers running Technical Preview all must be able to run Hyper-V have a minimum of 4 cores and 8GB RAM. (Note Physical servers are needed for this scenario, you can use VM’s to test Server to Server but not a stretch cluster with Hyper-V).
There needs to be two sets of shared storage each one available to one pair of servers.
Each server MUST have at least one 10GbE connection.
Ports open for ICMP, SMB (445) and WS-Man (5985) in both directions between all 4 Servers
The test network MUST have at LEAST 8Gbps throughput and, importantly, round-trip latency of less than or equal to 5ms (this is tested using 1472-byte ICMP packets for at least 5 minutes; you can measure it with the simple ping command below).
Finally Membership in the built-in Administrators group on all server nodes is required.
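For example, something along these lines, run in both directions between the servers (the server name is a placeholder):

ping -f -l 1472 -n 300 SR-SRV02.contoso.com

With the default one-second interval, -n 300 gives roughly the five minutes of samples mentioned above.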
This is no small list of needs.
The step-by-step guide demonstrates the setup in two ways, and is a total of 38 pages long.
All scenarios are achievable using PowerShell 5.0 as available in the Technical Preview. Once the cluster is built it requires just a single command to build the stretch cluster.
You could of course choose to do it in stages using the New-SRGroup and New-SRPartnership CmdLets.
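A hedged sketch of that staged approach (computer names, group names and volumes are placeholders, and parameter names may differ between preview builds):

# Create a replication group on each server, then join them in a partnership.
New-SRGroup -ComputerName "SR-SRV01" -Name "RG01" -VolumeName "D:" -LogVolumeName "E:"
New-SRGroup -ComputerName "SR-SRV02" -Name "RG02" -VolumeName "D:" -LogVolumeName "E:"
New-SRPartnership -SourceComputerName "SR-SRV01" -SourceRGName "RG01" -DestinationComputerName "SR-SRV02" -DestinationRGName "RG02"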
If, like me you do not have the hardware resources lying around to build such a test rig you may want to try and test the server to server replica instead.
This requires,
Windows Server Active Directory domain (does not need to run Windows Server Technical Preview).
Two servers with Windows Server Technical Preview installed. Each server should be capable of running Hyper-V, have at least 4 cores, and have at least 4GB of RAM. (Physical or VM is ok for this scenario)
Two sets of storage. The storage should contain a mix of HDD and SSD media.
(Note USB and System Drives are not eligible for SR and no disk that contains a Windows page file can be used either)
At least one 10GbE connection on each file server.
The test network MUST have at LEAST 8Gbps throughput and, importantly, round-trip latency of less than or equal to 5ms (again measured with 1472-byte ICMP packets for at least 5 minutes, using the same ping test shown earlier).
Ports open for ICMP, SMB (445) and WS-Man (5985) in both directions between both Servers.
Membership in the built-in Administrators group on all server nodes.
NOTE – the PowerShell CmdLets for the Server to Server scenario work remotely and locally, but only for the creation of the Replica, not to remove or amend using Remove or Set CmdLets (make sure you run these CmdLets locally ON the server that you are targeting for the Group and Partnership tasks).
I do urge you to go off and read more about this solution and test it if you can but remember things are not yet fully baked and will change with each release AND do not use them in Production yet. Read the guide for known issues as well, there are a few.
Finally – why do I love this feature? No one likes to think about a disaster, but if you don't plan for it, when it does happen it truly will be a disaster in every respect. This allows a much cheaper but effective way of maintaining a current, accurate replica of data, either on a separate server or on a separate site within a stretch cluster.
Still pricey on hardware and networking, BUT much cheaper than a full hot site DR centre with old style full synchronous replication.
Watch this space for more Server Technical Preview hot features.
The NATS.io team is always working to bring you features to improve your NATS experience. Below you will find feature summaries for new implementations to NATS. Check back often for release highlights and updates.
NATS introduces logfile_size_limit, allowing auto-rotation of log files when the size is greater than the configured limit set in logfile_size_limit as a number of bytes. You can provide the size with units, such as MB, GB, etc. The backup files will have the same name as the original log file with the suffix .yyyy.mm.dd.hh.mm.ss.micros. For more information see Configuring Logging in the NATS Server Configuration section.
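For example, a server configuration might include something like this (the path and limit are illustrative):

# nats-server.conf
log_file: "/var/log/nats/nats-server.log"
logfile_size_limit: 1GB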
Full list of Changes 2.1.2...2.1.4
Queue Permissions allow you to express authorization for queue groups. As queue groups are integral to implementing horizontally scalable microservices, control of who is allowed to join a specific queue group is important to the overall security model. Original PR -
More information on Queue Permissions can be found in the Developing with NATS section.
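As a rough sketch of what this looks like in a server configuration (user names, subjects, and queue names are invented; the example assumes the documented subject-plus-queue-group form):

authorization {
  users = [
    {
      user: svc_worker
      password: s3cret
      permissions: {
        subscribe: {
          # members may only join the v1.prod queue group on this subject
          allow: ["service.requests v1.prod"]
        }
      }
    }
  ]
}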
As services and service mesh functionality has become prominent, we have been looking at ways to make running scalable services on NATS.io a great experience. One area we have been looking at is observability. With publish/subscribe systems, everything is inherently observable, however we realized it was not as simple as it could be. We wanted the ability to transparently add service latency tracking to any given service with no changes to the application. We also realized that global systems, such as those NATS.io can support, needed something more than a single metric. The solution was to allow any sampling rate to be attached to an exported service, with a delivery subject for all collected metrics. We collect metrics that show the requestor’s view of latency, the responder’s view of latency and the NATS subsystem itself, even when requestor and responder are in different parts of the world and connected to different servers in a NATS supercluster.
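In configuration terms, this is attached to an exported service; a hedged sketch (account, subject, and sampling values are made up):

accounts {
  SVC {
    users = [ { user: svc, password: svc } ]
    exports = [
      {
        service: "my.service"
        # sample 100% of requests and deliver latency metrics to this subject
        latency: { sampling: 100%, subject: "latency.my.service" }
      }
    ]
  }
}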
Full list of Changes 2.0.4...2.1.0
For services, the authorization for responding to requests usually included wildcards for _INBOX.> and possibly $GR.> with a supercluster for sending responses. What we really wanted was the ability to allow a service responder to only respond to the reply subject it was sent.
Exported Services were originally tied to a single response. We added the type for the service response and now support singletons (default), streams and chunked. Stream responses represent multiple response messages, chunked represents a single response that may have to be broken up into multiple messages.
Full list of Changes 2.0.2...2.0.4 | https://docs.nats.io/whats_new | 2020-02-17T01:35:19 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.nats.io |
The Shader Generator
As of version 1.5.0, panda supports several new features:
per-pixel lighting
normal mapping
gloss mapping
glow mapping
high-dynamic range rendering
cartoon shading
It’s not that these things weren’t possible before: they were. But previously, you had to write shaders to accomplish these things. This is no longer necessary. As of version 1.5.0, all that has to happen is for the artist to apply a normal map, gloss map, or glow map in the 3D modeling program. Then, the programmer gives permission for shader generation, and Panda3D handles the rest.
A few of these features do require minimal involvement from the programmer: for instance, high-dynamic range rendering requires the programmer to choose a tone-mapping operator from a small set of options. But that’s it: the amount of work required of the programmer is vastly less than before.
Many of these features are complementary with image postprocessing operations, some of which are now nearly-automatic as well. For example, HDR combines very nicely with the bloom filter, and cartoon shading goes very well with cartoon inking.
Individually, these features are not documented in this chapter of the manual. Instead, they’re documented in the portion of the manual where they make the most sense. For example, normal mapping, gloss mapping, and glow mapping are all documented in the section on Texturing. HDR and cartoon shading are documented under Render Attributes in the subsection on Light Ramps.
However, to enable any of these features, you need to tell Panda3D that it’s OK to automatically generate shaders and send them to the video card. The call to do this is:
nodepath.set_shader_auto();
If you don’t do this, none of the features listed above will have any effect. Panda will simply ignore normal maps, HDR, and so forth if shader generation is not enabled. It would be reasonable to enable shader generation for the entire game, using this call:
window->get_render().set_shader_auto();
Sample Programs
Three of the sample programs demonstrate the shader generator in action:
In each case, the sample program provides two versions: Basic and Advanced. The Basic version relies on the shader generator to make everything automatic. The Advanced version involves writing shaders explicitly.
Per-Pixel Lighting
Simply turning on setShaderAuto causes one immediate change: all lighting calculations are done per-pixel instead of per-vertex. This means that models do not have to be highly tesselated in order to get nice-looking spotlights or specular highlights.

Of course, the real magic of setShaderAuto is that it enables you to use powerful features like normal maps and the like.
Known Limitations
The shader generator replaces the fixed-function pipeline with a shader. To make this work, we have to duplicate the functionality of the entire fixed function pipeline. That’s a lot of stuff. We haven’t implemented all of it yet. Here’s what’s supported:
flat colors, vertex colors and color scales
lighting
normal maps
gloss maps
glow maps
materials, but not updates to materials
1D, 2D, 3D, cube textures
most texture stage and combine modes
light ramps (for cartoon shading)
some texgen modes
texmatrix
fog
Here’s what’s known to be missing:
some texgen modes
Note that although vertex colors are supported by the ShaderGenerator, in order to render vertex colors you need to apply a ColorAttrib.makeVertex() attrib to the render state. One easy way to do this is to call NodePath.setColorOff() (that is, turn off scene graph color, and let vertex color be visible). In the fixed-function renderer, vertex colors will render with or without this attrib, so you might not notice if you fail to apply it. Models that come in via the egg loader should have this attribute applied already. However, if you are using your own model loader or generating models procedurally you will need to set it yourself.
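In the C++ API that call looks roughly like this (my_model is assumed to be a NodePath you already have):

// Turn off scene graph color so vertex colors are used by the generated shader.
my_model.set_color_off();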
How the Shader Generator Works
When panda goes to render something marked setShaderAuto, it synthesizes a shader to render that object. In order to generate the shader, it examines all the attributes of the object: the lights, the material, the fog setting, the color, the vertex colors… almost everything. It takes into account all of these factors when generating the shader. For instance, if the object has a material attrib, then material color support is inserted into the shader. If the object has lights, then lighting calculations are inserted into the shader. If the object has vertex colors, then the shader is made to use those.
Caching and the Shader Generator
If two objects are rendered using the same RenderState (ie, the exact same attributes), then the shader is only generated once. But even a single change to the RenderState will cause the shader to be regenerated. This is not entirely cheap. Making changes to the RenderState of an object should be avoided when shader generation is enabled, because this necessitates regeneration of the shader.
A few alterations don’t count as RenderState modifications: in particular, changing the positions and colors of the lights doesn’t count as a change to the RenderState, and therefore, does not require shader regeneration. This can be useful: if you just want to tint an object, apply a light to it then change the color of the light.
There is a second level of caching. If the system generates a shader, it will then compare that shader to the other shaders it has generated previously. If it matches a previously-generated shader, it will not need to compile the shader again.
So, to save the full cost, use the same RenderState. To save most of the cost, use two RenderStates that are similar. By “similar,” I mean having the same general structure: ie, two models that both have a texture and a normal map, and both have no vertex colors and neither has a material applied.
Combining Automatic Shaders with Manual Shaders
Sometimes, you will want to write most of a game using panda’s automatic shader generation abilitites, but you’ll want to use a few of your own shaders. A typical example would be a scene with some houses, trees, and a pond. You can probably do the houses and trees using panda’s built-in abilities. However, Panda doesn’t contain anything that particularly looks like pond-water: for that, you’ll probably need to write your own shader.
When you use render.setShaderAuto(), that propagates down the scene graph just like any other render attribute. If you assign a specific shader to a node using nodepath.setShader(myshader), that overrides any shader assignment propagated down from above, including an Auto shader assignment from above. So that means it is easy, in the example above, to enable auto shader generation for the scene as a whole, and then override that at the pond-nodepath.
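A minimal C++ sketch of that pond example (the node name and shader file are invented for illustration):

// Enable automatic shader generation for the whole scene...
window->get_render().set_shader_auto();
// ...then override it on the pond with a hand-written shader.
NodePath pond = window->get_render().find("**/pond");
pond.set_shader(Shader::load("pond_water.sha"));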
Creating your own Shader Generator
We anticipate that people who are writing full-fledged commercial games using Panda3D might want to write their own shader generators. In this way, you can get any effect you imagine without having to give up the convenience and elegance of being able to simply apply a normal map or a gloss map to a model, and having it “just work.”
To create your own shader generator, you will need to delve into Panda3D’s C++ code. Class ShaderGenerator is meant to be subclassed, and a hook function is provided to enable you to turn on your own generator. | https://docs.panda3d.org/1.10/cpp/programming/shaders/shader-generator | 2020-02-17T01:31:49 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.panda3d.org |
You can send events to multiple DAS/CEP receivers, either by sending the same event to many DAS/CEP receivers or by load balancing events among a set of servers. This also handles failover: when events are load balanced within a set of servers and one receiver cannot be reached, events are automatically sent to the other available and active DAS/CEP receivers.
The following scenarios are covered in this section.
All the scenarios described below are different ways to use Data Agents with multiple receivers using the load balancing functionality.
Load balancing events to sets of servers
In this setup there are two sets of servers, referred to as set-A and set-B. You can send events to both sets, and you can also carry out load balancing within each set as described under load balancing between a set of servers. This scenario is therefore a combination of load balancing between a set of servers and sending an event to several receivers. An event is sent to both set-A and set-B. Within set-A, it will be sent either to DAS/CEP ReceiverA-1 or DAS/CEP ReceiverA-2. Similarly, within set-B, it will be sent either to DAS/CEP ReceiverB-1 or DAS/CEP ReceiverB-2. You can have any number of sets and any number of servers per set, as long as they are specified accurately in the receiver URL.
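As an illustrative sketch only (host names and ports are made up, and the exact URL grammar depends on your DAS/CEP version), a receiver URL that load balances within each of two sets can look like this:

{tcp://receiverA-1:7611|tcp://receiverA-2:7611},{tcp://receiverB-1:7611|tcp://receiverB-2:7611}

Here the comma separates the two sets (both receive every event), while the pipe load balances events among the receivers within a set.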
Sending events to several receivers
This setup involves sending all the events to more than one DAS/CEP receiver. This approach is mainly followed when you use other servers to analyze events together with DAS/CEP servers. For example, you can use the same Data Agents to publish the events to WSO2
Failover configuration
When using the failover configuration to publish events to DAS/CEP, events are sent to multiple DAS/CEP receivers in a sequential order based on priority. You can specify multiple DAS/CEP receivers so that events can be sent to the next server in the sequence when they cannot be successfully sent to the first server.
How-to guides
A collection of guides covering common issues that might be encountered using Deluge.
Deluge as a service
Services are used to start applications on system boot and leave them running in the background. They also stop the applications cleanly on system shutdown and automatically restart them if they crash.
The Deluge daemon deluged and Web UI deluge-web can both be run as services. | https://deluge.readthedocs.io/en/latest/how-to/index.html | 2020-02-17T01:36:33 | CC-MAIN-2020-10 | 1581875141460.64 | [] | deluge.readthedocs.io |
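As a rough sketch (the user, paths and logging options are assumptions to adapt to your own install), a systemd unit for deluged might look like this:

[Unit]
Description=Deluge Bittorrent Client Daemon
After=network-online.target

[Service]
Type=simple
User=deluge
ExecStart=/usr/bin/deluged -d -l /var/log/deluge/daemon.log -L warning
Restart=on-failure

[Install]
WantedBy=multi-user.target

After enabling it with systemctl enable --now deluged.service, the daemon starts at boot and is restarted automatically if it crashes.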
How a recovering member resyncs with the cluster.
View replication status
The monitoring console contains a wealth of information about the status of configuration replication. See Use the monitoring console.
The Tizen Native API is a carefully selected and tightly managed set of APIs from the Tizen native subsystems. The Tizen Native API Specification available in the Tizen SDK shows the full list of the selected native subsystem APIs. The Native API is divided into dozens of API modules; each module represents a logically similar set of submodule APIs, which can be grouped into the same category.
The Tizen Native API Reference provides descriptions for all APIs and follows the basic principles listed below:
To be able to use an API, you need to include the header in which the API is defined. You can find the required headers in the API reference, as illustrated below:
Some of the Tizen native APIs require device features; these are used for application filtering, which prevents your application from being shown in the application list on the Tizen Store for devices that do not support those features. If a related feature is included in the API reference as shown below and your application uses that feature, then you need to declare the feature in the tizen-manifest.xml file. For more information, see Application Filtering.
In the function documentation for each module, the functions are described using a unified structure, illustrated in the below example.
Privileges are essential for being granted access to privacy-related data or sensitive system resources. For more information, see Privileges.
Some Tizen Native API functions require adding appropriate privileges (defined in each API's Privilege section in the specification) to the tizen-manifest.xml file. If the required privileges are not included in the tizen-manifest.xml file, the function returns the TIZEN_ERROR_PERMISSION_DENIED error.
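For instance, a location-related API would need a declaration along these lines in tizen-manifest.xml (the privilege URI here is only an example; use the one listed in the API's Privilege section):

<privileges>
   <privilege>http://tizen.org/privilege/location</privilege>
</privileges>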
For example, see the "Privilege:" section in the following picture: | https://docs.tizen.org/application/native/api/wearable/2.3.2/index.html | 2020-02-17T00:39:08 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.tizen.org |
Retrieves an existing Amplify App by appId.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
--app-id (string)
Unique Id for an Amplify App.
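For example (the app id below is a made-up placeholder):

aws amplify get-app --app-id d2abcdef12345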
Amplify App represents different branches of a repository for building, deploying, and hosting.
appId -> (string)Unique Id for the Amplify App.
appArn -> (string)ARN for the Amplify App.
name -> (string)Name for the Amplify App.
tags -> (map)
Tag for Amplify App.
key -> (string)
value -> (string)
description -> (string)Description for the Amplify App.
repository -> (string)Repository for the Amplify App.
platform -> (string)Platform for the Amplify App.
createTime -> (timestamp)Create date / time for the Amplify App.
updateTime -> (timestamp)Update date / time for the Amplify App.
iamServiceRoleArn -> (string)IAM service role ARN for the Amplify App.
environmentVariables -> (map)
Environment Variables for the Amplify App.
key -> (string)
value -> (string)
defaultDomain -> (string)Default domain for the Amplify App.
enableBranchAutoBuild -> (boolean)Enables auto-building of branches for the Amplify App.
enableBasicAuth -> (boolean)Enables Basic Authorization for branches for the Amplify App.
basicAuthCredentials -> (string)Basic Authorization credentials for branches for the Amplify App.
customRules -> (list)
Custom redirect / rewrite rules for the Amplify App.
(structure)
Custom rewrite / redirect rule.
source -> (string)The source pattern for a URL rewrite or redirect rule.
target -> (string)The target pattern for a URL rewrite or redirect rule.
status -> (string)The status code for a URL rewrite or redirect rule.
condition -> (string)The condition for a URL rewrite or redirect rule, e.g. country code.
productionBranch -> (structure)
Structure with Production Branch information.
lastDeployTime -> (timestamp)Last Deploy Time of Production Branch.
status -> (string)Status of Production Branch.
thumbnailUrl -> (string)Thumbnail URL for Production Branch.
branchName -> (string)Branch Name for Production Branch.
buildSpec -> (string)BuildSpec content for Amplify App.
enableAutoBranchCreation -> (boolean)Enables automated branch creation for the Amplify App.
autoBranchCreationPatterns -> (list)
Automated branch creation glob patterns for the Amplify App.
(string)
autoBranchCreationConfig -> (structure)
Automated branch creation config for the Amplify App.
stage -> (string)Stage for the auto created branch.
framework -> (string)Framework for the auto created branch.
enableAutoBuild -> (boolean)Enables auto building for the auto created branch.
environmentVariables -> (map)
Environment Variables for the auto created branch.
key -> (string)
value -> (string)
basicAuthCredentials -> (string)Basic Authorization credentials for the auto created branch.
enableBasicAuth -> (boolean)Enables Basic Auth for the auto created branch.
buildSpec -> (string)BuildSpec for the auto created branch.
enablePullRequestPreview -> (boolean)Enables Pull Request Preview for auto created branch.
pullRequestEnvironmentName -> (string)The Amplify Environment name for the pull request. | https://docs.aws.amazon.com/ja_jp/cli/latest/reference/amplify/get-app.html | 2020-02-17T00:45:23 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.aws.amazon.com |
Overview
In the Particle Editor, the features Location: Omni and Motion: Collisions work together to create an omnipresent colliding effect. By setting different parameters appropriately, an omnipresent environmental effect like rain or snow can be created, making sure that the particles are generated far from the viewer and collide with objects before coming into view.
Modifying the features mentioned below and setting them to specific values forces a particle effect to stay out of certain areas. It makes sure that physical entities such as roofs, overhangs, walls, etc. kill the particles on collision before they can penetrate the surface and enter indoor or other areas.
Rain outside, but not inside
When assigned to a newly created effect, the following features would create a basic effect and wouldn't include the necessary visual properties such as a texture. To achieve the best result, other Particle Effect features should be assigned.
To learn how to create a new particle effect from scratch, please see Creating a Particle Effect.
For more information about functionalities of the Particle Effect Features, please refer to the Particle Effect Features page.
Open the Particle Editor by choosing Tools → Particle Editor. Then, on the Editor, go to File → Open and choose a particle effect to access its features. Alternatively, you can double-click on the particle asset in the Asset Browser to bring up its properties and features.
On the Effect Graph, or the Effect Tree, select the features to fill the Inspector panel with their parameters and adjust the features as shown below:
- In the Location: Omni feature settings, set Visibility Range to a value based on the desired distance between the viewer and the particle effect's maximum visibility range. Then, set Spawn Outside View to True.
With these modifications, an illusion of omnipresence can be created; in other words, the particles follow the user and appear only in a visible area determined by the Visibility Range value.
- In the Motion: Physics feature settings, set Gravity Scale, Drag, and Local Effectors to their appropriate values based on the desired result. For instance, to replicate an omnipresent weather effect, rain or snow values can be used.
This will determine the characteristics of the particle effect based on the user preferences. You can use the images linked above as references.
- In the Life: Time feature settings, set Life Time to a large value, such as 30 or 60.
This value indicates the maximum life time of each particle. It determines the time the particles spend between their spawn point and the end point. In other words, it calculates the actual distance the particle needs to travel before it reaches its final destination and adjusts the speed accordingly. Life: Time plus the Motion feature parameters determine the offset between the viewer and the point where particles spawn.
- In the Motion: Collisions feature settings, set Collision Limit Mode to Kill and Max Collisions to 1.
Setting Collision Limit Mode to Kill ensures that the particles disappear on collision.
With these features, users can create the illusion of an omnipresent particle effect like rain or snow. Setting the Motion: Collision feature to these values ensures that particles disappear on collision so that they can't penetrate physical entity surfaces end enter indoor or other specific areas. | https://docs.cryengine.com/pages/viewpage.action?pageId=36867992 | 2020-02-17T02:33:08 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.cryengine.com |