content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
AutoVM
AutoVM is an open-source platform to manage virtual machines(VM) on the VMware ESXI virtualization which allows the VPS providers to manage full automation of support and sales process.
With the advanced WISECP module, you can automatically provide server sales and management.
AutoVM Module Features
AutoVM Module Installation
- Follow the path "Admin Area > Services > Hosting Management > Server Settings"
- Click the "Add New Server" button.
- Make definitions as follows on the page that opens.
Hostname : Server IP address or hostname information.
Name servers : It does not need to be defined.
Server Automation Type : Select "AutoVM"
IP Address : IP address of the server where Autovm is installed.
Username : [URL address where AutoVM is installed]/api
(Defining an IP address or a different domain "name/hostname" can cause "Google Chrome Iframe Issues"!)
Password : API key information you created on the AutoVM panel.
SSL : Mark it to establish API connection using SSL.
"Hostname" information must be defined in the "IP Address" field instead of the server IP address.
Upgrade / Downgrade Settings : Mark it as "Do not delete".
Test Connection : Check and test the validity of the information defined.
Using AutoVM "AutoVM" On "Client Area > Order Details", If the buttons are not clicked or a white section appears or redirects to the login page, you may have defined an IP address or a different domain information in the "User Name" section of WISECP shared server settings. Google Chrome requires the same use of "hostname" to run iframe windows. After correcting this situation, delete all folders in the "web/assets" directory where AutoVM is installed and clean your browser cookies and check again.
Important Reminder The admin email address in the AutoVM panel must be the same as the master admin email address in the WISECP admin panel. Otherwise, you will encounter "The admin area could not be accessed" error.. | https://docs.wisecp.com/en/kb/autovm | 2021-09-16T16:34:42 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.wisecp.com |
acd_cli documentation¶
Version 0.3.2
Contents:
Overview¶
acd_cli provides a command line interface to Amazon Drive and allows Unix users to mount their drive using FUSE for read and (sequential) write access. It is currently in beta stage.
Node Cache Features¶
- local caching of node metadata in an SQLite database
- addressing of remote nodes via a pathname (e.g.
/Photos/kitten.jpg)
- file search
CLI Features¶
- tree or flat listing of files and folders
- simultaneous uploads/downloads, retry on error
- basic plugin support
Documentation¶
The full documentation is available at.
Quick Start¶
Have a look at the known issues, then follow the setup guide and authorize. You may then use the program as described in the usage guide.
CLI Usage Example¶
In this example, a two-level folder hierarchy is created in an empty drive.
Then, a relative local path
local/spam is uploaded recursively using two connections.
$ acd_cli sync Getting changes... Inserting nodes.. $ acd_cli ls / [PHwiEv53QOKoGFGqYNl8pw] [A] / $ acd_cli mkdir /egg/ $ acd_cli mkdir /egg/bacon/ $ acd_cli upload -x 2 local/spam/ /egg/bacon/ [################################] 100.0% of 100MiB 12/12 654.4KB/s $ acd_cli tree / egg/ bacon/ spam/ sausage spam [...]
The standard node listing format includes the node ID, the first letter of its status and its full path. Possible statuses are “AVAILABLE” and “TRASH”.
Known Issues¶
It is not possible to upload files using Python 3.2.3, 3.3.0 and 3.3.1 due to a bug in the http.client module.
API Restrictions¶
- the current upload file size limit is 50GiB
- uploads of large files >10 GiB may be successful, yet a timeout error is displayed (please check the upload by syncing manually)
- storage of node names is case-preserving, but not case-sensitive (this should not concern Apple users)
- it is not possible to share or delete files
Contribute¶
Have a look at the contributing guidelines.
Recent Changes¶
0.3.2¶
- added
--remove-source-filesargument to upload action
- added
--timesargument to download action for preservation of modification times
- added streamed overwrite action
- fixed upload of directories containing broken symlinks
- disabled FUSE autosync by default
- added timeout handling for uploads of large files
- fixed exit status >=256
- added config files
- added syncing to/from file
- fixed download of files with failed (incomplete) chunks
0.3.1¶
- general improvements for FUSE
- FUSE write support added
- added automatic logging
- sphinx documentation added
0.2.2¶
- sync speed-up
- node listing format changed
- optional node listing coloring added (for Linux or via LS_COLORS)
- re-added possibility for local OAuth | https://acd-cli.readthedocs.io/en/latest/index.html | 2021-09-16T16:20:58 | CC-MAIN-2021-39 | 1631780053657.29 | [] | acd-cli.readthedocs.io |
ACMP_Init_TypeDef Struct Reference
ACMP initialization structure.
#include <em_acmp.h>
ACMP initialization structure.
Field Documentation
◆ biasProg
Bias current.
See the ACMP chapter about bias and response time in the reference manual for details. Valid values are in the range 0-7.
◆ inputRange
Input range.
Adjust this setting to optimize the performance for a given input voltage range.
◆ accuracy
ACMP accuracy mode.
Select the accuracy mode that matches the required current usage and accuracy requirement. Low accuracy consumes less current while high accuracy consumes more current.
◆ hysteresisLevel
Hysteresis level.
◆ inactiveValue
Inactive value emitted by ACMP during warmup.
◆ vrefDiv
VDD division factor.
VREFOUT = VREFIN * (VREFDIV / 63). Valid values are in the 0-63 range.
◆ enable
If true, ACMP is enabled after configuration. | https://docs.silabs.com/gecko-platform/3.0/emlib/api/efr32xg21/struct-a-c-m-p-init-type-def | 2021-09-16T16:22:03 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.silabs.com |
OMG.
The OmiseGO blockchain facilitates eWallet interchange with a decentralized exchange, cryptocurrency pair matching, order book validation, and compliant clearinghouses without ever taking full-custody of funds.
Mainnet.
Watcher Node (Full Archive).
Mainnet -
In the following subchapters, we will describe how to obtain the End-point from the Ankr platform and how to execute RPC calls. | https://docs.ankr.com/nodes/omisego | 2021-09-16T16:43:48 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.ankr.com |
Getting there
Use this tab to create SQL queries for extracting data from AS/400 files. Type text directly into the text boxes, or click any of the buttons on the right to open a dialog box to build a query.
Note: From the Transfer dialog box, select at least one host file before you build your query. Reflection displays field information from the specified file to help you build the query.
If you need help building your SQL query, consult your SQL documentation.
Start building your SELECT statement by specifying fields (or columns) to transfer.
Where
In this box, add a WHERE clause to your SELECT statement. Specify one or more conditions that must be met for a record to be transferred.
Order by
In this box, add an ORDER BY clause to your SELECT statement to sort the records resulting from the query. You can sort only by fields specified in your SELECT statement.
Group by
In this box, add a GROUP BY clause to your SELECT statement to specify how to group the resulting data after the requested calculation (function) is performed.
This clause is necessary when a function and multiple fields are specified in your SELECT statement.
Having
In this box, add a HAVING clause to apply a condition to a function of the SELECT statement.
To enable the Having box and dialog box, you must first add a GROUP BY clause.
Join by
In this box, add a JOIN clause to your SELECT statement to specify how you want data from multiple files or members combined.
To enable the Join by box and dialog box, you must have selected multiple files or members on the Host side of the Transfer dialog box.
Return records with missing fields
When joining records from more than one file, there may be cases where a record cannot be found to complete the join.
Related Topics
Create an SQL Query
Receive Data from an AS/400
AS/400 Transfer | https://docs.attachmate.com/Reflection/2008/R1SP2/Guide/en/user-html/transfer_setup_sql_cs.htm | 2021-09-16T15:35:03 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.attachmate.com |
Challenges
Our Solutions
Fast Facts
Claims Intake process automation
• Loss Run Report Processing
• Evidence of Insurability (EOI)
• Statement of Value Analysis
System can parse & extract key data points from
• PDF’s
• Excel & word documents
Claims Intake process automation
• First Notice of Loss (FNOL)
• Claims Appeals
• Reward calculation & payment
• Not in Good Order claim
identification process (NIGO)
• Claims subrogation & Recovery
• Property Loss Notices
• Claim Fraud Investigation
• TPA claim filing
Speedup documentation gathering & data analysis
• DocsStream can scan emails &
receive incoming claim
documentation and forms.
• Sort documents according to type.
• Extract key data points such as
incident data, policy data etc
• Set up the claim in internal systems.
• Cross-reference claim or policy
related data against internal systems. | http://docs-stream.com/insurance-solutions/ | 2021-09-16T14:55:01 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs-stream.com |
All About Behavior
My name is Orso and I am 4 years old. Here is my story:
I was found in a dumpster, along with my brother, at a very young age. I fought hard for my resources, especially food. My main objective was to survive. I even fought with my brother. I became an extreme resource guarder, not just with strange dogs, but also within my pack.
Eventually, we were found, fed and received veterinary care. Someone saw our photos on an internet pet adoption site and forwarded my photo/bio to Mom. Mom opened the email and immediately heard me call to her, “You have been chosen to help me. Please come for me!� Within a week, after all adoption polices were met, she came to fetch me up.
At first, I was a little humble to meet her, but it did not take long for me to show her what I was all about. I went right into puppy classes at 10 weeks of age. For the first two weeks, I tolerated the other puppies, as well as the adult dogs in my pack, but then I could not control myself.
I believed that everything should be MINE! Food, toys, beds, Mom’s affection – it should all be mine. I could not control my inner rage. IT SHOULD ALL BE MINE! I felt like I wanted to kill everyone around me who was getting a treat, so Mom pulled me out of puppy classes. My world started to crumble around me, but Mom learned that she had to modify things in order to protect everyone in the pack.
I did not play with the adult dogs of my pack at all. Mom knew that she would have to find a playmate that would play with me, and in turn, that playing would help me to get beyond my issues. That playmate is Cha Cha. She came all the way from Kansas and she has special talents as a healer among both dogs and people.
Changes came very slowly, as everyone learned how to adapt and modify their behavior. They came to recognize the triggers that would set me off.
One day, something really scary happened. Cha Cha and I were playing in the back yard. Mom was diligent about watching all interactions, but natured called and she left for a couple of minutes. We were out in the yard playing as rough as we always did – being terrier mixes. Like we had done many times before, we were grabbing each other collars and flipping each other around. I loved to flip Cha Cha, but this time I flipped her too many times and her jaw got caught in my collar. As I flipped her around, the collar got tighter and tighter. Suddenly, I could not breathe because the collar had become so tight that it cut off my air supply. Cha Cha’s jaw was stuck in my collar and I was choking. My heart stopped and I turned purple.
Mom came running out when she saw both of us lying on the ground, facing each other and seemingly lifeless. She was unable to release Cha Cha’s jaw because the collar had twisted so tightly that she couldn’t get her fingers under it enough to loosen it. She quickly called her brother-in-law, who was there in a flash with leather snips.
He managed to unravel us, took Cha Cha inside and Mom got to work. There was no heartbeat. Nor was there airflow, so Mom, being a veterinarian, knew to perform CPR. It took awhile, but I made it back. I was under and out for quite a long time, but apparently, miracles do happen. I passed through – but then I returned. I guess it was not yet my time to go.
The experience changed my life. I still had far to go, but things after that encounter, things were different. Mom continued to work with me, along with my pack mates, using alternative methods to help make me whole – in an effort to help me to become a dog! Indeed, I have changed. Now I play with the adults and it’s okay if they touch my toys. I don’t even mind it when Mom gives them treats and shows them affection. I love when she tells me how proud she is of me, and when she cries tears of joy when she sees how happy I am with life, in general. Mom says I am a work in progress and that we will continue to deal with my issues of lack of socialization and trust of people.
There are a few friends and family who I do love, but I’m still not quite sure of the rest of the world. My philosophy is to believe, to learn and to deal with whatever comes my way, but most of all to seek out help. Mom says that resource guarding is a hard behavioral issue to break, especially if a dog resource guards with people. (And I believe her because she’s a doctor and she’s really smart!) My life is forever evolving, and behavior modification is a lifetime commitment not just for me, but for my Mom and my pack members, as well.
My name is Orso, I am 4 years old and I am becoming a dog!
I am Orso and I am now 8 years of age and finally have become a dog. Each year of my life, I have evolved, learned, and grown into a dog that is more comfortable with life and is able to handle difficult situations. I have learned actually to modify my behavior to make decisions that will keep me from loosing control by pulling myself away from a situation which may become ugly. I cuddle more with Mom, I play well with my pack members, I even sleep on the bed now with all the others. Mom is very much aware of all these changes and really watches for any triggers that might upset me, but now I can walk away to a safe area to diffuse. I finally feel part of the pack. Thank you for becoming a dog.
Orso is special, I have never seen a dog that can truly process a very difficult situation, he has learned how to control himself, and how to deal with his temper, and has learned how to walk away.
His pictures now show you his transformation through his face, he is Finally Happy !!!!!!!!!!!!!!!!
Orso left me this year 2018, suffering from kidney failure. He left me heart broken because I could not protect him from this. We learned, we suffered, had good and bad times but the one thing we all learned how to do was to LOVE!!! | http://www.docsk9center.com/docs/all-about-behavior/ | 2021-09-16T16:39:16 | CC-MAIN-2021-39 | 1631780053657.29 | [array(['http://www.docsk9center.com/docs/wp-content/uploads/2019/04/Header-All-About-Behavior.jpg',
None], dtype=object)
array(['http://www.docsk9center.com/docs/wp-content/uploads/2019/04/all-about-behavior-01.jpg',
None], dtype=object)
array(['http://www.docsk9center.com/docs/wp-content/uploads/2019/04/all-about-behavior-02.jpg',
None], dtype=object)
array(['http://www.docsk9center.com/docs/wp-content/uploads/2019/04/all-about-behavior-03.jpg',
None], dtype=object)
array(['http://www.docsk9center.com/docs/wp-content/uploads/2019/04/all-about-behavior-04.jpg',
None], dtype=object) ] | www.docsk9center.com |
Date: Mon, 29 Dec 2003 09:55:47 +0000 From: Jez Hancock <[email protected]> To: Lowell Gilbert <[email protected]> Cc: [email protected] Subject: Re: setting login.conf doesn't limit my users Message-ID: <[email protected]> In-Reply-To: <[email protected]> References: <[email protected]> <[email protected]> <[email protected]> <[email protected]>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help
On Sun, Dec 28, 2003 at 12:32:20PM -0500, Lowell Gilbert wrote: > Jez Hancock <[email protected]> writes: > > To the OP - it may help if you paste in the contents of your login caps > > file /etc/login.conf or detail exactly what it is you're trying to > > cap/restrict. > > Indeed. There are some limits that aren't implemented, but if the > users can change a limit, that's not what's happening here. Of > course, users can always *lower* their limits, and they can raise > their soft limits up to a maximum of the hard limit (that's what > the distinction is for). This is it eh - there are some limits that can't be set - I remember having to use 'idled' from the ports to monitor the idle times of users and if they get to 1hr of idle time, auto-log them out as it were via idled (the login.conf setting to do this didn't work!!!). There are a few others but I forget what they are now (password expiry perhaps is one?). :P -- Jez Hancock - System Administrator / PHP Developer - personal weblog - ipfw peruser traffic logging
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=375887+0+archive/2003/freebsd-questions/20031231.freebsd-questions | 2021-09-16T15:58:24 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.freebsd.org |
- Requirements
- Download a GitLab Package
- Install or update a GitLab Package
- Browse to the hostname and login
Manually download and install a GitLab package
If for some reason you don’t use the official repositories, it is possible to download the package and install it manually. The exact same method can be used to manually update GitLab.
Requirements
Before installing GitLab, it is of critical importance to review the system requirements. The system requirements include details on the minimum hardware, software, database, and additional requirements to support GitLab.
Download a GitLab Package
All GitLab packages are posted to the GitLab package server and can be downloaded. Five repositories are maintained:
- GitLab EE: for official Enterprise Edition releases.
- GitLab CE: for official Community Edition releases.
- Unstable: for release candidates and other unstable versions.
- Nighty Builds: for nightly builds.
- Raspberry Pi: for official Community Edition releases built for Raspberry Pi packages.
To download GitLab:
Browse to the repository for the type of package you would like to see the list of packages that are available. There are multiple packages for a single version, one for each supported distribution type. Next to the filename is a label indicating the distribution, as the file names may be the same.
- Find the package version you wish to install and click on it.
- Click the Download button in the upper right corner to download the package.
Install or update a GitLab Package
After the GitLab package is downloaded, install it using the following commands:
For GitLab Community Edition:
# GitLab Community Edition # Debian/Ubuntu dpkg -i gitlab-ce-<version>.deb # CentOS/RHEL rpm -Uvh gitlab-ce-<version>.rpm
For GitLab Enterprise Edition:
# Debian/Ubuntu dpkg -i gitlab-ee-<version>.deb # CentOS/RHEL rpm -Uvh gitlab-ee-<version>.rpm
EXTERNAL_URL="<GitLab URL>"variable to set your preferred domain name. Installation automatically configures and starts GitLab at that URL. Enabling HTTPS requires additional configuration to specify the certificates.
Browse to the hostname and login
On your first visit, you are redirected to a password reset screen. Provide
the password for the initial administrator account and you are redirected
back to the login screen. Use the default account’s username
root to log in.
See our documentation for detailed instructions on installing and configuration. | https://docs.gitlab.com/14.0/omnibus/manual_install.html | 2021-09-16T16:35:13 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.gitlab.com |
Create and use an Internal Load Balancer App Service Environment
Note
This article is about the App Service Environment v2 which is used with Isolated App Service plans
The the name of your ASE. The name of your ASE is used in the domain suffix for the apps in your ASE. The domain suffix for your ILB ASE is <ASE name>.appserviceenvironment.net. Apps that are made in an ILB ASE are not put in the public DNS.
Earlier versions of the ILB ASE required you to provide a domain suffix and a default certificate for HTTPS connections. The domain suffix is no longer collected at ILB ASE creation and a default certificate is also no longer collected. When you create an ILB ASE now, the default certificate is provided by Microsoft and is trusted by the browser. You are still able to set custom domain names on apps in your ASE and set certificates on those custom domain names.
With an ILB ASE, you can do things such as:
- Host intranet applications securely in the cloud, which you access through a site-to-site or ExpressRoute.
- Protect apps with a WAF device
- TLS/SSL binding.
- > App Service Environment.
Select your subscription.
Select or create a resource group.
Enter the name of your App Service Environment.
Select virtual IP type of Internal.
Note
The App Service Environment name must be no more than 37 characters.
Select Networking
Select or create a Virtual Network. If you create a new VNet here, it will be defined with an address range of 192.168.250.0/23. To create a VNet with a different address range or in a different resource group than the ASE, use the Azure Virtual Network creation portal.
Select or create an empty a subnet. If you want to select a subnet, it must be empty and not delegated. The subnet size cannot be changed after the ASE is created. We recommend a size of
/24, which has 256 addresses and can handle a maximum-sized ASE and any scaling needs.
Select Review and Create then select Create.
Create an app in an ILB ASE
You create an app in an ILB ASE in the same way that you create an app in an ASE normally.
In the Azure portal, select Create a resource > Web > Web App.
Enter the name of the app.
Select the subscription.
Select or create a resource group.
Select your Publish, Runtime Stack, and Operating System.
Select a location where the location is an existing ILB ASE. You can also create a new ASE during app creation by selecting an Isolated App Service plan. If you wish to create a new ASE, select the region you want the ASE to be created in.
Select or create an App Service plan.
Select Review and Create then select Create when you are ready.. If your ILB ASE has a domain name that does not end in appserviceenvironment.net, you will need to get your browser to trust the HTTPS certificate being used by your scm site.
DNS configuration
When you use an External ASE, apps made in your ASE are registered with Azure DNS. There are no additional steps then in an External ASE for your apps to be publicly available. With an ILB ASE, you must manage your own DNS. You can do this in your own DNS server or with Azure DNS private zones.
To configure DNS in your own DNS server with your ILB ASE:
- create a zone for <ASE name>.appserviceenvironment.net
- create an A record in that zone that points * to the ILB IP address
- create an A record in that zone that points @ to the ILB IP address
- create a zone in <ASE name>.appserviceenvironment.net named scm
- create an A record in the scm zone that points * to the ILB IP address
To configure DNS in Azure DNS Private zones:
- create an Azure DNS private zone named <ASE name>.appserviceenvironment.net
- create an A record in that zone that points * to the ILB IP address
- create an A record in that zone that points @ to the ILB IP address
- create an A record in that zone that points *.scm to the ILB IP address
The DNS settings for your ASE default domain suffix do not restrict your apps to only being accessible by those names. You can set a custom domain name without any validation on your apps in an ILB ASE. If you then want to create a zone named contoso.net, you could do so and point it to the ILB IP address. The custom domain name works for app requests but doesn't for the scm site. The scm site is only available at <appname>.scm.<asename>.appserviceenvironment.net.
The zone named .<asename>.appserviceenvironment.net is globally unique. Before May 2019, customers were able to specify the domain suffix of the ILB ASE. If you wanted to use .contoso.com for the domain suffix, you were able do so and that would include the scm site. There were challenges with that model including; managing the default TLS/SSL certificate, lack of single sign-on with the scm site, and the requirement to use a wildcard certificate. The ILB ASE default certificate upgrade process was also disruptive and caused application restarts. To solve these problems, the ILB ASE behavior was changed to use a domain suffix based on the name of the ASE and with a Microsoft owned suffix. The change to the ILB ASE behavior only affects ILB ASEs made after May 2019. Pre-existing ILB ASEs must still manage the default certificate of the ASE and their DNS configuration..
Internet-based CI systems, such as GitHub and Azure DevOps, will still work with an ILB ASE if the build agent is internet accessible and on the same network as ILB ASE. So in case of Azure DevOps, if the build agent is created on the same VNET as ILB ASE (different subnet is fine), it will be able to pull code from Azure DevOps domain suffix <ASE name>.appserviceenvironment.net, and an app named mytest, use mytest.<ASE name>.appserviceenvironment.net for FTP and mytest.scm.contoso.net for MSDeploy deployment.
Configure an ILB ASE with a WAF device
You can combine a web application firewall (WAF) device with your ILB ASE to only expose the apps that you want to the internet and keep the rest only accessible from in the VNet. This enables you to build secure multi-tier applications among other things..
ILB ASEs made before May 2019
ILB ASEs that were made before May 2019 required you to set the domain suffix during ASE creation. They also required you to upload a default certificate that was based on that domain suffix. Also, with an older ILB ASE you can't perform single sign-on to the Kudu console with apps in that ILB ASE. When configuring DNS for an older ILB ASE, you need to set the wildcard A record in a zone that matches to your domain suffix.
- To get started with ASEs, see Introduction to App Service environments. | https://docs.microsoft.com/en-us/azure/app-service/environment/create-ilb-ase?WT.mc_id=AZ-MVP-5003408 | 2021-09-16T17:09:55 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.microsoft.com |
Installing PowerShell on Windows
There are multiple ways to install PowerShell in Windows.
Prerequisites
The latest release of PowerShell is supported on Windows 7 SP1, Server 2008 R2, and later versions.
To enable PowerShell remoting over WSMan, the following prerequisites need to be met:
-.
Download the installer package
To install PowerShell on Windows, download the latest install package from GitHub. You can also find the latest preview version. Scroll down to the Assets section of the Release page. The Assets section may be collapsed, so you may need to click to expand it.
Note
The installation commands in this article are for the latest releases of PowerShell. To install a different version of PowerShell, adjust the command to match the version you need. To see all PowerShell releases, visit the releases page in the PowerShell repository on GitHub.
Installing the MSI package
The MSI file looks like
PowerShell-<version>-win-<os-arch>.msi. For example:
PowerShell-7.1.4-win-x64.msi
PowerShell-7.1.4-win-x86
Note
PowerShell 7.1 installs to a new directory and runs side-by-side with Windows PowerShell 5.1. PowerShell 7.1 is an in-place upgrade that replaces PowerShell 6.x. or PowerShell 7.0.
- PowerShell 7.1 is installed to
$env:ProgramFiles\PowerShell\7
- The
$env:ProgramFiles\PowerShell\7folder is added to
$env:PATH
- The
$env:ProgramFiles\PowerShell\6folder is deleted
If you need to run PowerShell 7.1 side-by-side with other versions, use the ZIP install method to install the other version to a different folder.
Administrative install from the command line
MSI packages can be installed from the command line allowing administrators to deploy packages without user interaction. The MSI package includes the following properties to control the installation options:
- ADD_EXPLORER_CONTEXT_MENU_OPENPOWERSHELL - This property controls the option for adding the Open PowerShell item to the context menu in Windows Explorer.
- ADD_FILE_CONTEXT_MENU_RUNPOWERSHELL - This property controls the option for adding the Run with PowerShell item to the context menu in Windows Explorer.
- ENABLE_PSREMOTING - This property controls the option for enabling PowerShell remoting during installation.
- REGISTER_MANIFEST - This property controls the option for registering the Windows Event Logging manifest.
The following example shows how to silently install PowerShell with all the install options enabled.
msiexec.exe /package PowerShell-7.1.4-win-x64.msi /quiet ADD_EXPLORER_CONTEXT_MENU_OPENPOWERSHELL=1 ENABLE_PSREMOTING=1 REGISTER_MANIFEST=1
For a full list of command-line options for
Msiexec.exe, see
Command line options.
Registry keys created during installation
Beginning in PowerShell 7.1, the MSI package creates registry keys that store the installation
location and version of PowerShell. These values are located in
HKLM\Software\Microsoft\PowerShellCore\InstalledVersions\<GUID>. The value of
<GUID> is unique for each build type (release or preview), major version, and architecture.
This can be used by administrators and developers to find the path to PowerShell. The
<GUID>
values are the same for all preview and minor version releases. The
<GUID>
values are changed for each major release.
Installing the ZIP package
PowerShell binary ZIP archives are provided to enable advanced deployment scenarios. Download one of the following ZIP archives from the releases page.
- PowerShell-7.1.4-win-x64.zip
- PowerShell-7.1.4-win-x86.zip
- PowerShell-7.1.4-win-arm64.zip
- PowerShell-7.1.4-win-arm32.zip.
Note
You can use this method to install any version of PowerShell including the latest:
- Stable release:
- Preview release:
- LTS release:
Deploying on Windows 10 IoT Enterprise
Windows 10 IoT Enterprise comes with Windows PowerShell, which we can use to deploy PowerShell 7.
Create
PSSessionto target device
Set-Item -Path WSMan:\localhost\Client\TrustedHosts <deviceip>
Set up remoting to PowerShell 7
Set-Location .\PowerShell-<version>-win-<os-arch> # Be sure to use the -PowerShellHome parameter otherwise it tries to create a new # endpoint with Windows PowerShell 5.1 .\Install-PowerShellRemoting.ps1 -PowerShellHome . # You get an error message and are disconnected from the device because # it has to restart WinRM
Connect to PowerShell 7 endpoint on device
# Be sure to use the -Configuration parameter. If you omit it, you connect to Windows PowerShell 5.1 Enter-PSSession -ComputerName <deviceIp> -Credential Administrator -Configuration powershell.<version>
Deploying on Windows 10 IoT Core
Windows 10 IoT Core adds Windows PowerShell when you include IOT_POWERSHELL feature, which we can use to deploy PowerShell 7. The steps defined above for Windows 10 IoT Enterprise can be followed for IoT Core as well.
For adding the latest PowerShell in the shipping image, use Import-PSCoreRelease command to include the package in the workarea and add OPENSRC_POWERSHELL feature to your image.
Note
For ARM64 architecture, Windows PowerShell is not added when you include IOT_POWERSHELL. So the
zip based install does not work. You need to use
Import-PSCoreRelease command to add it in
the image.
Deploying on Nano Server
These instructions assume that the Nano Server is a "headless" OS that has a version of PowerShell is already running on it. For more information, see the Nano Server Image Builder documentation.
PowerShell binaries can be deployed need the Windows 10 x64 ZIP release package. Run the commands within an "Administrator" instance of PowerShell.
Offline Deployment of PowerShell
- Use your favorite zip utility to unzip the package to a directory within the mounted Nano Server image.
- Unmount the image and boot it.
- Connect to the built-in instance of Windows PowerShell.
- Follow the instructions to create a remoting endpoint using the "another instance technique".
Online Deployment of PowerShell
Deploy PowerShell to Nano Server using the following steps.
Connect to the built-in_<version>"
If you want WSMan-based remoting, follow the instructions to create a remoting endpoint using the "another instance technique".
Install as a .NET Global tool
If you already have the .NET Core SDK installed, it's easy to install PowerShell as a .NET Global tool.
dotnet tool install --global PowerShell
The dotnet tool installer adds
$env:USERPROFILE\.dotnet\tools to your
$env:PATH environment
variable. However, the currently running shell doesn't have the updated
$env:PATH. You can start
PowerShell from a new shell by typing
pwsh.
Install PowerShell via the Windows Package Manager
The
wing --------------------------------------------------------------------------- PowerShell Microsoft.PowerShell 7.1.4 PowerShell-Preview Microsoft.PowerShell-Preview 7.2.0-preview.5
Install a version of PowerShell using the
--exactparameter
winget install --name PowerShell --exact winget install --name PowerShell-Preview --exact
Installing from the Microsoft Store
PowerShell 7.1 has been published to the Microsoft Store. You can find the PowerShell release on the Microsoft Store website or in the Store application in Windows.
Benefits of the Microsoft Store package:
- Automatic updates built right into Windows 10
- Integrates with other software distribution mechanisms like Intune and SCCM
Limitations:
Windows Store packages run in an application sandbox that virtualizes access to some filesystem and registry locations.
- All registry changes under HKEY_CURRENT_USER are copied on write to a private, per-user, per-app location. Therefore, those values are not available to other applications.
- Any system-level configuration settings stored in
$PSHOMEcannot be modified. This includes the WSMAN configuration. This prevents remote sessions from connecting to Store-based installs of PowerShell. User-level configurations and SSH remoting are supported.
For more information, see Understanding how packaged desktop apps run on Windows.
How to create a remoting endpoint
PowerShell supports the PowerShell Remoting Protocol (PSRP) over both WSMan and SSH. For more information, see:.
Installation support
Microsoft supports the installation methods in this document. There may be other third-party methods of installation available from other sources. While those tools and methods may work, Microsoft cannot support those methods. | https://docs.microsoft.com/en-us/powershell/scripting/install/installing-powershell-core-on-windows?view=powershell-7 | 2021-09-16T17:28:53 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.microsoft.com |
Throughput Tester Example (SoC Mode)
Background
This code example has a related User's Guide, here: Throughput with Bluetooth Low Energy Technology
Description
With the throughput tester example application, you can measure the Bluetooth connection bitrate of the EFR32/BGM device running the Silicon Labs Bluetooth Low Energy stack. Both acknowledged and unacknowledged operations can be tested, using indications and notifications respectively. The example can also serve as a basis for testing Coded PHY and/or range.
Throughput tester measures the data throughput between a slave and a master device and reports the
bps and GATT operation count values. The slave device functions as a GATT server and sends data to the client (master). The example application implements both roles in the same firmware, which enables using the same firmware for both the master and slave device and testing throughput between two radio boards, or against another third-party device such as a smart phone.
This example demonstrates the SoC version of the throughput tester. For the NCP version see this link.
How it Works
The throughput is tested by generating an array of circular data and sending it to the master device. The throughput will heavily depend on whether unacknowledged or acknowledged GATT operations are used, which, in this example, means notifications or indications respectively (more information Acknowledged vs Unacknowledged GATT operations).
These are the three operation modes for the throughput testing:
- Free Mode – Sends data while a button (on slave device) is held down (default mode)
- Fixed Time Mode – Sends data for a fixed amount of time (default 5 s)
- Fixed Data Mode – Sends fixed amount of data (default 10 kB)
The default values are hard coded to the firmware and can only be changed at compile time in
app_utils.h. In fixed modes, data transfer is triggered by pressing PB0. However, you don't have to keep holding down the button because the application watches for the time/data threshold after it starts transmitting.
The SoC application consists of 5 main files with the following purposes:
Both the master and slave implementations have similar state machine structures using
switch-statements. The program state machine flow after connection is established is shown in the figure below.
free mode flow
When device is booted, it starts in
ADV_SCAN state. After a connection is established, both participants transition to
CONNECTED. The master in the connection proceeds to change PHY and connection parameters if that is necessary. For example, if the connection was lost on 2M PHY,the master will request a PHY change after initial connection with 1M PHY. In the firmware, default values
#definedfor connection parameters are hard coded on each PHY such as connection interval, slave latency and timeout. On PHY change, master will request the use of these parameters.
After the right parameters are set, the master proceeds to subscribe first to notifications with the
gecko_cmd_gatt_set_characteristic_notification command and transition to
SUBSCRIBED_NOTIFICATIONS. On success, a
gatt_procedure_completed event is received. Use the above command to subscribe to indications and then transition to wait for another
gatt_procedure_completed event in
SUBSCRIBED_INDICATIONS. On success, a transition to
SUBSCRIBED is made. At the same time, the slave checks for changes in the Client Characteristic Configuration for the
Notifications and
Indications characteristics. When notifications/indications are enabled by the master, the slave also transitions to
SUBSCRIBED_NOTIFICATIONS /
SUBSCRIBED_INDICATIONS respectively.
In
SUBSCRIBED, data transmission can be initialized from the slave transitions to
NOTIFY or
INDICATE depending on whether the button pressed was PB0 or PB1.
During transmission, master, the receiving side, handles
gecko_evt_gatt_characteristic_value events and keeps incrementing the
bitsSent and
operationCount variables when data is received. It also checks if the event’s
att_opcode shows that an indication to
Indications characteristic is received, in which case a confirmation is sent back to the slave to acknowledge the successful operation. For indications, the slave must wait for the confirmation from the master before registering the bits as sent. The program stays in the
INDICATE state until the last of the confirmations is received and none are left pending (
waitingForConfirmation flag variable is used for this).
The transmission ends after the push button is released on the slave side. Releasing the button triggers another write without response to
Transmission ON characteristic, at which point the end time is taken and the display is refreshed again, whether. They need handling regardless of the main state of the program. The event that is checked is the same that falls through the main state
switch.
Example Test Results
Typical throughput (between 2 BG13 boards):
- 1M PHY notifications ~740 kbps, indications 20 kbps (50 ms interval)
- 2M PHY notifications ~1.3 Mbps, indications 40 kbps (25 ms interval)
- Coded PHY notifications 85 kbps, indications 5 kbps (200 ms interval)
Setting up
To run this example you need the following:
- 2 Wireless Starter Kits (WSTK) if to test between 2 EFR32/BGM
- Radio boards with an EFR32[B|M]x/[B|M]GMx device e.g., BRD4305C (BGM13S) or BRD4104A (EFR32BG13)
- [Optional] EFR Connect App on your smart phone
- Simplicity Studio
Create a new
SoC-Emptyapplication project with
Bluetooth SDKversion 2.12.x or above and selecting your radio board (not an OPN).
img
Click on the
*.iscfile in the project tree, select the
Custom BLE GATTfield on the upper right side of the GATT configurator, and select
Import GATT from .bgproj filefrom the bottom icon on the right side.
Select the
gatt.xmlprovided here, click
Save, and press
Generate. You should now have a new
Throughput Test Serviceand within it four characteristics.
Copy the following files to your project:
- app.c
- app_slave.c
- app_master.c
- app_utils.h
- app_utils.c
- lcd_support.bat
Run the lcd_support.bat batch file, which copies the necessary files to use the LCD screen from the SDK directories to your project in the workspace. To run the file, double-click from the project tree within Simplicity IDE.
Add the following to the include paths, for example for GCC: right-click on the project -> Properties -> "C/C++ Build" -> Settings -> "GNU ARM C Compiler" -> Includes): "${workspace_loc:/${ProjName}/lcd-graphics}"
Add the following line to
hal-config.h:
#define HAL_SPIDISPLAY_FREQUENCY (1000000)
[OPTIONAL] To use TX Power above +10 dBm on the parts that support it, make the following changes:
- Adjust the
TX_POWERmacro used to set your desired TX power in
app_utils.h
Note).
The setpoint defined with
TX_POWERand the value returned by stack from command system_set_tx_power may differ, especially with lower values, as discussed here. The TX power is shown on screen in dBm and it is what gets returned by the system_set_tx_power command.
Now, the project should build without errors. When you flash the application, you should see a screen similar to this on your kit.
Usage
The same Throughput Tester firmware is used for both master and slave devices. The device boots)
Throughput between two WSTKs / Radio Boards
Two (2) radio boards
- Program both radio boards with the throughput tester firmware as discussed in section How to set up.
- Set one of the devices as Slave and the other as Master (hold PB0 on boot)..
Master Device Functionality use cases the PHY will only be changed between 1M and 2M. If no other PHY than 1M is supported, the change will not be triggered.
Slave Device Functionality
In slave role, the device starts an advertisement set with both 1M and Coded PHYs. It advertises the
"Complete Local Name" AD data type with the name
"Throughput Tester".
Depending on which button is pressed and held down, the slave will generate data and send either notifications or indications. After the connection is established, the buttons have the following functions:
- Press and hold PB0 to send notifications.
- Press and hold PB1 to send indications.
You will see the screen stop refreshing while holding a button and the throughput result and operation count will update (TH and CNT) after releasing to ensure that the device is fully dedicated to data exchange over the Bluetooth link and the throughput doesn't get affected by the screen refreshing operation.
Throughput between Radio Board and Smart Phone
A smart phoneby the name of
Throughput Testerand connect to it.
You should see a service (UUID: bbb99e70-fff7-46cf-abc7-2d32c71820f2) with four characteristics down at the bottom. | Characteristic | Description | UUID | |:-----------|:-------|:------| |
Indications|255B array for the indication data|6109b631-a643-4a51-83d2-2059700ad49f| |
Notifications|255B array for the notification data|47b73dd6-dee3-4da1-9be0-f5c539a9a4be| |
Transmission ON|Used to indicate start (1) and end (0) of the transmission|be6b6be1-cd8a-4106-9181-5ffe2bc67718| |
Throughput result| The throughput test result is written to this characteristic after each calculation to be viewed by the client|adf32227-b00f-400c-9eeb-b903a6cc291b|
Throughput resultand
Indicationsor
Notificationscharacteristics (click the icon). The display on the kit should show
"Yes"for
NOTIFYand/or
INDICATEdepending on what you chose.
- Press the buttons on the slave device to transmit. You should see data coming in the corresponding characteristic value field in the.
Throughput between Radio Board and 3rd Party Device
As you might guess from the mobile app approach, you can use any BLE-capable 3rd party device, such as a USB dongle or laptop Bluetooth adapter to issue the GATT commands to write and read the above characteristics. One example with the NCP host application is discussed in later sections. | https://docs.silabs.com/bluetooth/2.13/code-examples/stack-features/system-and-performance/throughput-tester-soc-mode | 2021-09-16T16:51:40 | CC-MAIN-2021-39 | 1631780053657.29 | [array(['/resources/bluetooth/code-examples/stack-features/system-and-performance/throughput-tester-soc-mode/images/throughput_kit_screen.jpg',
'PHY change diagram'], dtype=object)
array(['/resources/bluetooth/code-examples/stack-features/system-and-performance/throughput-tester-soc-mode/images/throughput_kit_screen_connected.jpg',
'PHY change diagram'], dtype=object)
array(['/resources/bluetooth/code-examples/stack-features/system-and-performance/throughput-tester-soc-mode/images/phy_change_diagram.png',
'PHY change diagram'], dtype=object) ] | docs.silabs.com |
The SIOS Protection Suite PostgreSQL Server Recovery Kit software lets you tie the data integrity of PostgreSQL-based databases to the increased availability provided by SIOS Protection Suite for Windows.
The LifeKeeper GUI allows you to easily create a PostgreSQL resource hierarchy. SIOS Protection Suite can then protect all of the disk resources used by the PostgreSQL Server instance, as well as the LifeKeeper network resources used by clients to access the database.
Feedback
Thanks for your feedback.
Post your comment on this topic. | https://docs.us.sios.com/sps/8.6.3/en/topic/sios-protection-suite-postgresql-server-recovery-kit | 2021-09-16T16:38:25 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.us.sios.com |
Working Group Two's API platform allows both Mobile Operators and Third Party Developers to build products that interact with the core network. Operators can authenticate directly using API keys, while Third Party Developers need to obtain consent from subscribers via our Oauth2 service.
If you're a third party developer (someone who has no relation to us), you can can create an account in our Developer Portal at. From here you can create OAuth clients, which will allow our subscribers to grant you an access token to act on their behalf.
If you work for one of our partner operators, you can log in to our Partner Portal at. From here you can create API keys which will grant you full access to all APIs for all of your subscribers. If you follow the examples you will be up and running within minutes!
Send SMS to or from anyone on your platform.
Send MMS to or from anyone on your platform.
Get events for anything happening in the network
Manage your data connection
Access a subscribers Voicemail inbox, including the audio files.
Set a subscribers call forwarding
Manage your subscribers's profiles. Enable/disable calls, SMS, etc.
Access relevant OpenMetrics time series directly from our systems. | https://docs.wgtwo.com/ | 2021-09-16T15:42:15 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.wgtwo.com |
The following AOVs are available for the Toon shader.
Stylized Highlight AOV
Stylized highlight AOV.
Rim Light AOV
Rim light AOV.
Rendering Edges Separately
To save the beauty render (without edges) and edges separately, you must create an RGBA AOV (beauty) and a Custom AOV (edges). Use a box_filter for the RGBA AOV (because the contour_filter uses box_filter internally) and a contour_filter for the edges Custom AOV. This enables you to render two separate images simultaneously, which also saves the overall rendering time.
RGBA (box_filter) and edges custom AOV (contour_filter)
AOV Prefix
An optional aov_prefix that will be prepended to the toon AOVs' names. For instance, if aov_prefix is
"toon_", the toon diffuse AOV will be written out to
"toon_diffuse". This can be used when you need to access both the toon AOVs and the core's LPE AOVs. | https://docs.arnoldrenderer.com/pages/viewpage.action?pageId=71008401 | 2021-09-16T15:18:56 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.arnoldrenderer.com |
Media Manager
Media Files
Files in de
Sorry, you don't have enough rights to read files.
File
- Date:
- 2019/04/05 13:45
- Filename:
- settings_en.jpeg
- Caption:
- ASCII Screenshot
- Format:
- JPEG
- Size:
- 240KB
- Width:
- 1500
- Height:
- 1314
- References for:
- Client installation | https://docs.gwdg.de/doku.php?id=en:services:storage_services:gwdg_cdstar:start&ns=de&tab_files=files&do=media&tab_details=view&image=en%3Aservices%3Astorage_services%3Aown_cloud%3Asettings_en.jpeg | 2021-09-16T16:42:57 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.gwdg.de |
$ curl -O
roxctlCLI
Red Hat Advanced Cluster Security for Kubernetes installs a set services on your OpenShift Container Platform cluster.
This topic describes the installation procedure for installing Red Hat Advanced Cluster Security for Kubernetes on your OpenShift Container Platform cluster by using the
roxctl CLI.
High-level installation flow:
Install the
roxctl CLI.
Use the
roxctl CLI interactive installer to install the centralized components (Central and Scanner).
Install Sensor to monitor your cluster.
Before you install:
To install Red Hat Advanced Cluster Security for Kubernetes you must install the
roxctl CLI by downloading the binary.
You can install
roxctl on Linux, Windows, or macOS.
You can install the
roxctl CLI binary on Linux by using the following procedure.
Download the latest version of the
roxctl CLI:
$ curl -O macOS by using the following procedure.
Download the latest version of the
roxctl CLI:
$ curl -O
Remove all extended attributes from the binary:
$ xattr -c roxctl Windows by using the following procedure.
Download the latest version of the
roxctl CLI:
$ curl -O
Verify the roxctl version you have installed:
$ roxctl version
The main component of Red Hat Advanced Cluster Security for Kubernetes is called Central. You can install Central on OpenShift Container Platform by using the interactive installer. You deploy Central only once and you can monitor multiple separate clusters by using the same installation.
Use the interactive installer to generate the required secrets, deployment configurations, and deployment scripts for your environment.
Run the interactive install command:
$ roxctl central generate interactive
Press Enter to accept the default value for a prompt or enter custom values as required.
Enter path to the backup bundle from which to restore keys and certificates (optional): Enter PEM cert bundle file (optional): (1) Enter administrator password (default: autogenerated): Enter orchestrator (k8s, openshift): openshift Enter the directory to output the deployment bundle to (default: "central-bundle"): Enter the OpenShift major version (3 or 4) to deploy on (default: "0"): 4 Enter Istio version when deploying into an Istio-enabled cluster (leave empty when not running Istio) (optional): Enter the method of exposing Central (route, lb, np, none) (default: "none"): route (2) Enter main image to use (default: "stackrox.io/main:3.0.61.1"): Enter whether to run StackRox in offline mode, which avoids reaching out to the Internet (default: "false"): Enter whether to enable telemetry (default: "true"): Enter the deployment tool to use (kubectl, helm, helm-values) (default: "kubectl"): Enter Scanner DB image to use (default: "stackrox.io/scanner-db:2.15.2"): Enter Scanner image to use (default: "stackrox.io/scanner:2.15.2"): Enter Central volume type (hostpath, pvc): pvc (3) Enter external volume name (default: "stackrox-db"): Enter external volume size in Gi (default: "100"): Enter storage class name (optional if you have a default StorageClass configured):
On completion, the installer creates a folder named central-bundle, which contains the necessary YAML manifests and scripts to deploy Central. In addition, it shows on-screen instructions for the scripts you need to run to deploy additional trusted certificate authorities, Central and Scanner, and the authentication instructions for logging into the RHACS portal along with the autogenerated password if you did not provide one when answering the prompts.
After you run the interactive installer, you can run the
setup.sh script to install Central.
Run the
setup.sh script to configure image registry access:
$ ./central-bundle/central/scripts/setup.sh
Create the necessary resources:
$ oc create -R -f central-bundle/central
Check the deployment progress:
$ oc get pod -n stackrox -w
After Central is running, find the RHACS portal IP address and open it in your browser. Depending on the exposure method you selected when answering the prompts, use one of the following methods to get the IP address.
You can configure Red Hat Advanced Cluster Security for Kubernetes to obtain image data from a variety of open-source and commercial image scanners.
However, Red Hat Advanced Cluster Security for Kubernetes also provides an image vulnerability scanner component, called Scanner. It enriches deployments with image vulnerability information.
Red Hat recommends deploying Scanner so that it can scan all images, including the images from public registries, for vulnerabilities. You can deploy the Scanner in the same cluster with Central.
You must configure your image registry to allow Scanner to download and scan images. Usually, image registry integrations are created automatically by Red Hat Advanced Cluster Security for Kubernetes.
Run the following command to configure image registry access:
$ ./central-bundle/scanner/scripts/setup.sh
After the script finishes, run the following command to create the scanner service:
$ oc create -R -f central-bundle/scanner
To monitor a cluster, you must deploy Sensor. You must deploy Sensor into each cluster that you want to monitor. The following steps describe adding Sensor by using the RHACS portal.
On the RHACS portal, navigate to Platform Configuration → Clusters.
Select + New Cluster.
Specify a name for the cluster.
Provide appropriate values for the fields based on where you are deploying the Sensor.
If you are deploying Sensor in the same cluster, accept the default values for all the fields.
If you are deploying into a different cluster, replace
central.stackrox.svc:443 with a load balancer, node port, or other address, including the port number, that is accessible from the other cluster.
If you are using a non-gRPC capable load balancer, such as HAProxy, AWS Application Load Balancer (ALB), or AWS Elastic Load Balancing (ELB), use the WebSocket Secure (
wss) protocol. To use
wss:
Prefix the address with
wss://.
Add the port number after the address, for example,
wss://stackrox-central.example.com:443.
Click Next to continue with the Sensor setup.
Click Download YAML File and Keys to download the cluster bundle (zip archive).
From a system that has access to the monitored cluster, unzip and run the
sensor script from the cluster bundle:
$ unzip -d sensor sensor-<cluster_name>.zip
$ ./sensor/sensor.sh
If you get a warning that you do not have the required permissions to deploy Sensor, follow the on-screen instructions, or contact your cluster administrator for assistance.
After Sensor is deployed, it contacts Central and provides cluster information.
Return to the RHACS portal and check if the deployment is successful. If it is successful, a green checkmark appears under section #2. If you do not see a green checkmark, use the following command to check for problems:
On OpenShift Container Platform:
$ oc get pod -n stackrox -w
On Kubernetes:
$ kubectl get pod -n stackrox -w
Click Finish to close the window.
After installation, Sensor starts reporting security information to Red Hat Advanced Cluster Security for Kubernetes and the RHACS portal dashboard begins showing deployments, images, and policy violations from the cluster on which you have installed the Sensor.
After you complete the installation, navigate to the RHACS portal and run a few vulnerable applications to evaluate the results of security assessments and policy violations.
Find the address of the RHACS portal based on your exposure method:
For a route:
$ oc get route central -n stackrox
For a load balancer:
$ oc get service central-loadbalancer -n stackrox
For port forward:
Run the following command:
$ oc port-forward svc/central 18443:443 -n stackrox
Navigate to.
Create a new project:
$ oc new-project test
Start some applications with critical vulnerabilities:
$ oc run shell --labels=app=shellshock,team=test-team \ --image=vulnerables/cve-2014-6271 -n test
$ oc run samba --labels=app=rce \ --image=vulnerables/cve-2017-7494 -n test
Red Hat Advanced Cluster Security for Kubernetes automatically scans these deployments for security risk and policy violations as soon as they are submitted to the cluster.
Navigate to the RHACS portal to view the violations.
You can log in to the RHACS portal by using the default username
admin and the generated password. | https://docs.openshift.com/acs/installing/install-quick-roxctl.html | 2021-09-16T16:35:52 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.openshift.com |
native user functionality or LDAP. You can configure pass-through authentication for one or more users, and for groups of (LDAP) users.
For more information about how pass-through authentication works, see About pass-through authentication.
1. Click Settings > Virtual Indexes.
2. Click the pass-through authentication tab.
3. Select the Provider for which you want to map the.! | https://docs.splunk.com/Documentation/Splunk/8.2.0/HadoopAnalytics/Setupuserimpersonation | 2021-09-16T17:06:59 | CC-MAIN-2021-39 | 1631780053657.29 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
HopsFS consists of the following types of nodes: NameNodes, DataNodes, and Clients. All the configurations parameters are defined in
core-site.xml and
hdfs-site.xml files.
Currently Hops only supports non-secure mode of operations. As Hops is a fork of the Apache Hadoop code base, most of the Apache Hadoop configuration parameters and features are supported in Hops. In the following sections we highlight differences between HDFS and HopsFS and point out new configuration parameters and the parameters that are not supported due to different metadata management scheme . | https://hopsworks.readthedocs.io/en/stable/user_guide/hopsfs.html | 2021-09-16T15:51:26 | CC-MAIN-2021-39 | 1631780053657.29 | [] | hopsworks.readthedocs.io |
© 2020 The original authors.
WildFly bootable JAR application development
This document details the steps to follow in order to develop a WildFly application packaged as a bootable JAR. A bootable JAR can be run both on bare-metal and cloud platforms.
Developing an application packaged as a bootable JAR is not different from developing an application for a traditional WildFly server installation using Maven. The extra steps required to package your application inside a bootable JAR are handled by the org.wildfly.plugins:wildfly-jar-maven-plugin Maven plugin.
This document contains the minimal information set required to build and run a WildFly bootable JAR. Complete information on the Maven plugin usage can be found in the bootable JAR documentation.
1. Adding the bootable JAR Maven plugin to your pom file
This is done by adding an extra build step to your application deployment Maven pom.xml file.
<build> <plugins> <plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-jar-maven-plugin</artifactId> <configuration> ... </configuration> <executions> <execution> <goals> <goal>package</goal> </goals> </execution> </executions> </plugin> </plugins> </build>
The next chapter covers the plugin configuration items that are required to identify the WildFly server version and content.
2. Galleon configuration
The Bootable JAR Maven plugin depends on Galleon to construct the WildFly server contained in the JAR file.
Galleon is configured thanks to the Maven plugin
element.
<configuration>
The first required piece of information that Galleon needs is a reference to the WildFly Galleon feature-pack. The WildFly Galleon feature-pack is a maven artifact that contains everything needed to dynamically provision a server. This feature-pack, as well as the feature-packs on which its depends, are deployed in public maven repositories.
When the bootable JAR Maven plugin builds a JAR, WildFly feature-packs are retrieved and their content is assembled to create the server contained in the JAR.
Once you have identified a WildFly Galleon feature-pack, you need to select a set of WildFly Layers that are used to compose the server.
The set of Galleon layers to include is driven by the needs of the application you are developing. The list of WildFly Layers provides details on the server features that each layer brings. Make sure that the API and server features you are using (eg: Jakarta RESTful Web Services, MicroProfile Config, datasources) are provided by the layers you are choosing.
If you decide not to specify Galleon layers, a server containing all MicroProfile subsystems is
provisioned. (The server configuration is identical to the
configuration
in the traditional WildFly server zip.)
standalone-microprofile.xml
Maven Plugin configuration extract example:
<build> <plugins> <plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-jar-maven-plugin</artifactId> <configuration> <feature-pack-location>wildfly@maven(org.jboss.universe:community-universe)</feature-pack-location> (1) <layers> <layer>jaxrs-server</layer> (2) </layers> </configuration> <executions> <execution> <goals> <goal>package</goal> </goals> </execution> </executions> </plugin> </plugins> </build>
(1) In this plugin configuration extract, we are retrieving the latest WildFly Galleon feature-pack installed in the
Galleon universe. In case you would like to provision a specific version of the server,
you would need to specify the server version, for example
org.jboss.universe:community-universe
wildfly@maven(org.jboss.universe:community-universe)#21.0.0.Final
(2) The jaxrs-server layer and all its dependencies are provisioned.
2.
2.1.
2.1.2. Basic Galleon Layers
3. Additional configuration
The plugin allows you to specify additional configuration items:
A set of WildFly CLI scripts to execute to fine tune the server configuration.
Some extra content to be copied inside the bootable JAR (e.g.: server keystore).
Check this documentation for more details on how to configure execution of CLI scripts and to package extra content.
4. Packaging your application
Call
` mvn package` to package both your application and the bootable JAR in the file
<project build directory>/<project final name>-bootable.jar
In order to speed-up the development of your application (avoid rebuilding the JAR each time your application is re-compiled), the Maven plugin offers a development mode that allows you to build and start the bootable JAR only once.
Check this documentation for more details on the development mode.
5. Running your application
Call
java -jar <path to bootable JAR> <arguments>
In additon, you can use the
and
wildfly-jar:run
plugin goals to launch the bootable JAR.
wildfly-jar:start
5.1. Bootable JAR arguments
The following arguments can be used when starting the bootable JAR: | http://docs.wildfly.org/24/Bootable_Guide.html | 2021-09-16T14:55:14 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.wildfly.org |
Reading Related Data with the Entity Framework in an ASP.NET MVC Application (5 of 10)
by Tom Dykstra
The Contoso University sample web application demonstrates how to create ASP.NET MVC 4 applications using the Entity Framework 5 Code First and Visual Studio 2012. For information about the tutorial series, see the first tutorial in the series.
Note
If you run into a problem you can't resolve, download the completed chapter and try to reproduce your problem. You can generally find the solution to the problem by comparing your code to the completed code. For some common errors and how to solve them, see Errors and Workarounds.
In the previous tutorial you completed the School data model. In this tutorial you'll read and display related data — that is, data that the Entity Framework loads into navigation properties.
The following illustrations show the pages that you'll work with.
.
Eager loading. When the entity is read, related data is retrieved along with it. This typically results in a single join query that retrieves all of the data that's needed. You specify eager loading by using the
Includemethod.
Explicit loading. This is similar to lazy loading, except that you explicitly retrieve the related data in code; it doesn't happen automatically when you access a navigation property. You load related data manually by getting the object state manager entry for an entity and calling the
Collection.Loadmethod for collections or the
Reference.Loadmethod for properties that hold a single entity. (In the following example, if you wanted to load the Administrator navigation property, you'd replace
Collection(x => x.Courses)with
Reference(x => x.Administrator).)
Because they don't immediately retrieve the property values, lazy loading and explicit loading are also both known as deferred loading.
In general, if you know you need related data for every entity retrieved, eager loading..
The database context class performs lazy loading by default. There are two ways to disable lazy loading:
For specific navigation properties, omit the
virtualkeyword when you declare the property.
For all navigation properties, set
LazyLoadingEnabledto
false. For example, you can put the following code in the constructor of your context class:
this.Configuration.LazyLoadingEnabled = false;.
Create a Courses Index that you did earlier for the
Student controller, as shown in the following illustration (except unlike the image, your context class is in the DAL namespace, not the Models namespace):
Open Controllers\CourseController.cs and look at the
Index method:
public ViewResult Index() { var courses = db.Courses.Include(c => c.Department); return View(courses.ToList()); }
The automatic scaffolding has specified eager loading for the
Department navigation property by using the
Include method.
Open Views\Course\Index.cshtml and replace the existing code with the following code. The changes are highlighted:
@model IEnumerable<ContosoUniversity.Models.Course> @{ ViewBag.Title = "Courses"; } <h2>Courses</h2> <p> @Html.ActionLink("Create New", "Create") </p> <table> <tr> <th></th> <th>Number</th> <th>Title</th> <th>Credits</th> <th>Department</th> </tr> @foreach (var item in Model) { <tr> <td> @Html.ActionLink("Edit", "Edit", new { id=item.CourseID }) | @Html.ActionLink("Details", "Details", new { id=item.CourseID }) | @Html.ActionLink("Delete", "Delete", new { id=item.CourseID }) </td> <td> @Html.DisplayFor(modelItem => item.CourseID) </td> <td> @Html.DisplayFor(modelItem => item.Title) </td> <td> @Html.DisplayFor(modelItem => item.Credits) </td> <td> @Html.DisplayFor(modelItem => item.Department.Name) </td> </tr> } </table>
You've made the following changes to the scaffolded code:
- Changed the heading from Index to Courses.
- Moved the row links to the left.
- Added a column under the heading Number that shows the
CourseIDproperty value. (By default, primary keys aren't scaffolded because normally they are meaningless to end users. However, in this case the primary key is meaningful and you want to show it.)
- Changed the last column heading from DepartmentID (the name of the foreign key to the
Departmententity) to Department.
Notice that for the last.
Create an Instructors Index Page That Shows Courses and Enrollments
In this section you'll create a controller and view for the
Instructor entity in order to display the Instructors Index page:
Instructor Index; } } }
Adding a Style for Selected Rows
To mark selected rows you need a different background color. To provide a style for this UI, add the following highlighted code to the section
/* info and errors */ in Content\Site.css, as shown below:
/* info and errors */ .selectedrow { background-color: #a4d4e6; } .message-info { border: 1px solid; clear: both; padding: 10px 20px; }
Creating the Instructor Controller and Views
Create an
InstructorController controller as shown in the following illustration:
Open Controllers\InstructorController.cs and add a
using statement for the
ViewModels namespace:
using ContosoUniversity.ViewModels;
The scaffolded code in the
Index method specifies eager loading only for the
OfficeAssignment navigation property:
public View.InstructorID ==.InstructorID == id.Value)
Instead of:
.Where(I => i.InstructorID ==; }
Modifying the Instructor Index View
In Views\Instructor\Index.cshtml, replace the existing code with the following code. The changes are highlighted:
@model ContosoUniversity.ViewModels.InstructorIndexData @{ ViewBag. <td> @Html.ActionLink("Select", "Index", new { id = item.InstructorID }) | @Html.ActionLink("Edit", "Edit", new { id = item.InstructorID }) | @Html.ActionLink("Details", "Details", new { id = item.InstructorID }) | @Html.ActionLink("Delete", "Delete", new { id = item.InstructorID }) </td> <td> @item.LastName </td> <td> @item.FirstMidName </td> <td> @Html.DisplayFor(modelItem => item.HireDate) </td> <td> @if (item.OfficeAssignment != null) { @item.OfficeAssignment.Location } </td> </tr> } </table>
You've made the following changes to the existing code:
Changed the model class to
InstructorIndexData.
Changed the page title from Index to Instructors.
Moved the row link columns to the left.
Removed the FullName column.="selectedrow"to the
trelement of the selected instructor. This sets a background color for the selected row using the CSS class that you created earlier. (The
valignattribute will be useful in the following tutorial when you add a multi-row column to the table.)
string selectedRow = ""; if (item.InstructorID == ViewBag.InstructorID) { selectedRow = "selectedrow"; } .
In the Views\Instructor\Index.cshtml file, after the closing
table element (at the end of the file), add the following highlighted code. This displays a list of courses related to an instructor when an instructor is selected.
<td> @if (item.OfficeAssignment != null) { @item.OfficeAssignment.Location } </td> </tr> } </table> @if (Model.Courses != null) { <h3>Courses Taught by Selected Instructor</h3> <table> <tr> <th></th> <th>ID</th> <th>Title</th> <th>Department</th> </tr> @foreach (var item in Model.Courses) { string selectedRow = ""; if (item.CourseID == ViewBag.CourseID) { selectedRow = "selectedrow"; } .
Note
The .css file is cached by browsers. If you don't see the changes when you run the application, do a hard refresh (hold down the CTRL key while clicking the Refresh button, or press CTRL+F5).> .
. You can.
Links to other Entity Framework resources can be found in the ASP.NET Data Access Content Map. | https://docs.microsoft.com/en-us/aspnet/mvc/overview/older-versions/getting-started-with-ef-5-using-mvc-4/reading-related-data-with-the-entity-framework-in-an-asp-net-mvc-application | 2021-09-16T15:17:06 | CC-MAIN-2021-39 | 1631780053657.29 | [array(['reading-related-data-with-the-entity-framework-in-an-asp-net-mvc-application/_static/image1.png',
None], dtype=object)
array(['reading-related-data-with-the-entity-framework-in-an-asp-net-mvc-application/_static/image2.png',
'Instructors_index_page_with_instructor_and_course_selected'],
dtype=object)
array(['https://asp.net/media/2577868/Windows-Live-Writer_Reading-Re.NET-MVC-Application-5-of-10h1_ADC3_Add_Controller_dialog_box_for_Course_controller_c167c11e-2d3e-4b64-a2b9-a0b368b8041d.png',
'Add_Controller_dialog_box_for_Course_controller'], dtype=object)
array(['reading-related-data-with-the-entity-framework-in-an-asp-net-mvc-application/_static/image3.png',
'Courses_index_page_with_department_names'], dtype=object)
array(['reading-related-data-with-the-entity-framework-in-an-asp-net-mvc-application/_static/image4.png',
'Instructors_index_page_with_instructor_and_course_selected'],
dtype=object)
array(['https://asp.net/media/2577909/Windows-Live-Writer_Reading-Re.NET-MVC-Application-5-of-10h1_ADC3_Add_Controller_dialog_box_for_Instructor_controller_f99c10aa-1efd-49d6-af1d-b00461616107.png',
'Add_Controller_dialog_box_for_Instructor_controller'],
dtype=object)
array(['reading-related-data-with-the-entity-framework-in-an-asp-net-mvc-application/_static/image5.png',
'Instructors_index_page_with_nothing_selected'], dtype=object)
array(['reading-related-data-with-the-entity-framework-in-an-asp-net-mvc-application/_static/image6.png',
'Instructors_index_page_with_instructor_selected'], dtype=object)
array(['reading-related-data-with-the-entity-framework-in-an-asp-net-mvc-application/_static/image7.png',
'Instructors_index_page_with_instructor_and_course_selected'],
dtype=object) ] | docs.microsoft.com |
Tranzman Shares
Tranzman Appliance has flexible and effortless approach towards sharing of data.It uses SMB (Server Message Block) for providing shared access to files. Depending upon the scenarios, a need may arise to read or write data on to directories in Tranzman Appliance. Tranzman Shares can be used in such cases.
Please follow the below steps for accessing the Tranzman Shares over SMB.
Step 1
Login to the Tranzman CLISH and navigate to srl_support -> shares
Step 2
Press the ? on the keyboard and it should list you all the options in there.
Step 3
Collect the IP of the machine where the Tranzman Shares will be mounted and open the share using the below command.
Step 4
Do a show command after that to list the shares.
Step 5
Go the client machine and mount the share.
So this is how easily Tranzman Shares can be used for data sharing. | https://docs.stoneram.com/index.php/Tranzman_Shares | 2021-09-16T16:38:59 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.stoneram.com |
content
Since
v0.14.0
Generated content can be customized further away with
content in configuration.
If the
content is empty the default order of sections is used.
asciidocand
markdown.
contentwill be ignored for other formatters.
content is a Go template with following additional variables:
{{ .Header }}
{{ .Footer }}
{{ .Inputs }}
{{ .Modules }}
{{ .Outputs }}
{{ .Providers }}
{{ .Requirements }}
{{ .Resources }}
and following functions:
{{ include "relative/path/to/file" }}
These variables are the generated output of individual sections in the selected
formatter. For example
{{ .Inputs }} is Markdown Table representation of inputs
when formatter is set to
markdown table and so on.
sections.showand
sections.hide) takes precedence over the
content.
Options
Available options with their default values.
content: ""
Examples
Content can be customized, rearranged. It can have arbitrary text in between sections:
content: |- Any arbitrary text can be placed anywhere in the content {{ .Header }} and even in between sections {{ .Providers }} and they don't even need to be in the default order {{ .Outputs }} {{ .Inputs }}
Relative files can be included in the
content:
content: |- include any relative files {{ include "relative/path/to/file" }}
include can be used to add example snippet code in the
content:
content: |- # Examples ```hcl {{ include "examples/foo/main.tf" }} ```
In the following example, although
{{ .Providers }} is defined it won’t be
rendered because
providers is not set to be shown in
sections.show.
sections: show: - header - inputs - outputs content: |- {{ .Header }} Some more information can go here. {{ .Providers }} {{ .Inputs }} {{ .Outputs }} | https://terraform-docs.io/user-guide/configuration/content/ | 2021-09-16T16:48:02 | CC-MAIN-2021-39 | 1631780053657.29 | [] | terraform-docs.io |
Vumi Go’s HTTP API¶
The API allows for sending & receiving Vumi messages via HTTP. These messages are plain JSON strings. Three types of messages are available:
- Inbound and outbound user messages (e.g. SMSes, USSD responses, Twitter messages)
- Events (e.g. delivery reports, acknowledgements)
- Metrics (values recorded at a specific time)
Inbound user messages and events can be received via streaming HTTP or can be pushed to a third party URL via HTTP POST. Outbound messages and metrics can be pushed to Vumi Go via HTTP PUT.
Each HTTP api is bound to a conversation which stores all of the messages sent and received. HTTP Basic auth is used for authentication, the username is the Vumi Go account key and the password is an access token that is stored in the conversation. In order to connect three keys are required:
- The account key
- The accesss token
- The conversation key
Inbound and Outbound User Messages¶
This is the format for messages being sent to, or received from, a person.
User messages are JSON objects of the following format:
{ "message_id": "59b37288d8d94e42ab804158bdbf53e5", "in_reply_to": null, "session_event": null, "to_addr": "1234", "to_addr_type": "msisdn", "from_addr": "+27761234567", "from_addr_type": "msisdn", "content": "This is an incoming SMS!", "transport_name": "smpp_transport", "transport_type": "sms", "transport_metadata": { // this is a dictionary containing // transport specific data }, "helper_metadata": { // this is a dictionary containing // application specific data } }
A reply to this message would put the value of the “message_id” in the “in_reply_to” field so as to link the two.
The from_addr_type and to_addr_type fields describe the type of address declared in from_addr and to_addr. The default for to_addr_type is msisdn, and the default for from_addr_type is null, which is used to mark that the type is unspecified. The other valid values are irc_nickname, twitter_handle, gtalk_id, jabber_id, mxit_id, and wechat_id.
The “session_event” field is used for transports that are session oriented, primarily USSD. This field will be either “null”, “new”, “resume” or “close”. There are no guarantees that these will be set for USSD as it depends on the networks whether or not these values are available. If replying to a message in USSD session then set the “session_event” to “resume” if you are expecting a reply back from the user or to “close” if the message you are sending is the last message and the session is to be closed.
The go-heroku application is an example app that uses the HTTP API to receive and send messages.
A Python client for the HTTP API is available at. It can be installed with
pip install go-http.
Sending Messages¶
$ curl -X PUT \ --user '<account-key>:<access-token>' \ --data '{"in_reply_to": "59b37288d8d94e42ab804158bdbf53e5", \ "to_addr": "+27761234567", \ "to_addr_type": "msisdn", \ "content": "This is an outgoing SMS!"}' \<conversation-key>/messages.json \ -vvv
The UI expects you to specify an access token. All requests to the API require you to use your account key as the username and the token as the password.
The response to the PUT request is the complete Vumi Go user message
and includes the generated Vumi
message_id which should be stored
if you wish to be able to associate events with the message later.
If a message is sent to a recipient that has opted out, the response will be an HTTP 400 error, with the body detailing that the recipient has opted out. Messages sent as a reply will still go through to an opted out recipient. The following is an example response of the error returned by the API:
{ "success": false, "reason": "Recipient with msisdn +12345 has opted out" }
This behaviour can be overridden by setting the disable_optout flag in the account to True. Ask a Vumi Go admin if you need to have optouts disabled.
Receiving User Messages¶
Vumi Go will forward any inbound messages to your application via an HTTP POST. Please specify the URL in the Go UI. You can include a username and password in the URL and use HTTPS if you require authentication.
There is a separate URL for receiving events.
Events¶
This is the format for events. Each event is associated with an outbound user message.
Events are JSON messages with the following format:
{ "message_type": "event", "event_id": "b04ec322fc1c4819bc3f28e6e0c69de6", "event_type": "ack", "user_message_id": "60c48289d8d94e42ab804159acce42d4", "helper_metadata": { // this is a dictionary containing // application specific data }, "timestamp": "2014-10-28 16:19:37.485612", "sent_message_id": "external-id", }
The
event_id unique id for this event.
The
user_message_id is the id of the outbound message the event is
for (this should be returned to you when you post the message to the
HTTP API).
The
event_type is the type of event and can be either
ack,
nack or
delivery_report.
An
ack indicates that the outbound message was succesfully sent to
a third party (e.g. a cellphone network operator) for sending. A
nack indicates that the message was not successfully sent to a
third party and should be resent. The reason the message could not be
sent will be given in the
nack_reason field. Every outbound
message should receive either an
ack or a
nack event.
A
delivery_report indicates whether a message has successfully
reached it’s final destination (e.g. a cellphone). Delivery reports
are only available for some SMS channels. The delivery status will be
given in the
delivery_status field and can be one of
pending
(SMS is still waiting to be delivered to the cellphone),
failed
(the cellphone operator has given up attempting to deliver the SMS) or
delivered (the SMS was successfully delivered to the cellphone).
Note
The meaning of delivery statuses can vary subtly between cellphone operators and should not be relied upon without careful testing of your specific use case.
Receiving Events¶
Vumi Go will forward any events to your application via an HTTP POST. Please specify the URL in the Go UI. You can include a username and password in the URL and use HTTPS if you require authentication.
This is a separate URL to the one for receiving user messages.
Publishing Metrics¶
You are able to publish metrics to Vumi Go via the HTTP APIs metrics endpoint. These metrics are able to be displayed in the Vumi GO UI using the dashboards.
How these dashboards are configured is explained in Vumi Go Dashboards.
PUT<conversation-key>/metrics.json
An example using curl from the commandline:
$ curl -X PUT \ --user '<account-key>:<access-token>' \ --data '[["total_pings", 1200, "MAX"]]' \<conversation-key>/metrics.json \ -vvv | https://vumi-go.readthedocs.io/en/feature-issue-1352-event-downloads/http_api.html | 2021-09-16T15:56:10 | CC-MAIN-2021-39 | 1631780053657.29 | [] | vumi-go.readthedocs.io |
Now that a project has been created and a repository location has been specified, the project can be made available to other team members.
In one of the navigation views select the project JanesTeamProject.
From the project's context menu choose Team > Share Project. If more than one repository provider is installed, select CVS and select Next.
In the sharing wizard page, select the location that was previously created. | http://docs.streambase.com/latest/topic/org.eclipse.platform.doc.user/gettingStarted/qs-61f_syncproject.htm | 2018-11-12T22:53:21 | CC-MAIN-2018-47 | 1542039741151.56 | [] | docs.streambase.com |
Level 1: Mission critical. Tends to impact all users. (i.e. production stuck in an infinite heroku deployment)
Level 2: Not mission critical but needs to be taken care of as quickly as possible. Tends to impact a subset of users. (i.e. form submission broken on mentorship settings)
Level 3: Issue that can wait until the next sprint planning, but definitely needs to be discussed. Often an edge case that sparks discussion -- is this a patch or do we need to re-architect something? (i.e. running out of memory where a few pages get 500 errors)
Level 4: Issue is not affecting users. Likely an internal tool that needs to be fixed. (i.e. slack notifications not pinging)
Level 5: Tech that stopped working but also isn't related to users/the site. (i.e. the gumball machine we use to generate random winners is broken) | https://docs.dev.to/vocabulary/ | 2018-11-12T22:45:08 | CC-MAIN-2018-47 | 1542039741151.56 | [] | docs.dev.to |
YITH Deals for WooCommerce is a plugin that allows you to create offers, that will be shown in the checkout.
You can choose different types of offers based on restrictions, adding products with a discount or fixed value to the cart. The plugin will allow you to increase the order average of your store with an irresistible offer. | https://docs.yithemes.com/yith-deals-for-woocommerce/ | 2018-11-12T22:18:34 | CC-MAIN-2018-47 | 1542039741151.56 | [] | docs.yithemes.com |
You've covered a lot of ground. Let's step back and see how all the pieces fit together.
To start with, this is a script that takes its arguments on the command line, using the getopt module.
def main(argv): ... try: opts, args = getopt.getopt(argv, "hg:d", ["help", "grammar="]) except getopt.GetoptError: ... for opt, arg in opts: ...
You create a new instance of the KantGenerator class, and pass it the grammar file and source that may or may not have been specified on the command line.
k = KantGenerator(grammar, source)
The KantGenerator instance automatically loads the grammar, which is an XML file. You use your custom openAnything function to open the file (which could be stored in a local file or a remote web server), then use the built-in minidom parsing functions to parse the XML into a tree of Python objects.
def _load(self, source): sock = toolbox.openAnything(source) xmldoc = minidom.parse(sock).documentElement sock.close()
Oh, and along the way, you take advantage of your knowledge of the structure of the XML document to set up a little cache of references, which are just elements in the XML document.
def loadGrammar(self, grammar): for ref in self.grammar.getElementsByTagName("ref"): self.refs[ref.attributes["id"].value] = ref
If you specified some source material on the command line, you use that; otherwise you rip through the grammar looking for the "top-level" reference (that isn't referenced by anything else) and use that as a starting point.
def getDefaultSource(self):] return '<xref id="%s"/>' % random.choice(standaloneXrefs)
Now you rip through the source material. The source material is also XML, and you parse it one node at a time. To keep the code separated and more maintainable, you use separate handlers for each node type.
def parse_Element(self, node): handlerMethod = getattr(self, "do_%s" % node.tagName) handlerMethod(node)
You bounce through the grammar, parsing all the children of each p element,
def do_p(self, node): ... if doit: for child in node.childNodes: self.parse(child)
replacing choice elements with a random child,
def do_choice(self, node): self.parse(self.randomChildElement(node))
and replacing xref elements with a random child of the corresponding ref element, which() | http://docs.activestate.com/activepython/2.7/dip/scripts_and_streams/all_together.html | 2018-11-12T22:25:08 | CC-MAIN-2018-47 | 1542039741151.56 | [] | docs.activestate.com |
bathtub strips bathtubs organize your bath toys with the corner bathroom caulk.
Related Post
Backyard Discovery Playsets Kids Microphone Stand Sunny Health And Fitness Rowing Machine Nespresso Capsule Dispensers Keyboard And Mouse Tray Kids Sleeping Cots Girl Ride Toy Floating Rafts For Lakes Humminbird Depth Finders Dixon Ticonderoga Black Remote Controlled Toy Truck Plushie Shark Marcy Olympic Strength Cage Golf Bag Organizer Exerpeutic Folding Magnetic Upright Bike | http://top-docs.co/bathtub-strips/bathtub-strips-bathtubs-organize-your-bath-toys-with-the-corner-bathroom-caulk/ | 2018-11-12T22:47:49 | CC-MAIN-2018-47 | 1542039741151.56 | [array(['http://top-docs.co/wp-content/uploads/2018/03/bathtub-strips-bathtubs-organize-your-bath-toys-with-the-corner-bathroom-caulk.jpg',
'bathtub strips bathtubs organize your bath toys with the corner bathroom caulk bathtub strips bathtubs organize your bath toys with the corner bathroom caulk'],
dtype=object) ] | top-docs.co |
This reference guide provides architectural configuration steps and best practices for deploying SAP HANA in the Amazon Web Services cloud such as Amazon Elastic Compute Cloud (Amazon CE2) and Amazon Virtual Private Cloud (Amazon VPC). Following steps are covered in this document in detail:
- Prepare an AWS Account to deploy HANA
- Launch the Virtual Network and Configure AWS services for SAP HANA Deployment
- Download SAP HANA media from SAP Service Download Center
- Launch and Configure the SAP HANA Servers | http://www.erp-docs.com/1544/sap-hana-amazon-web-services-cloud-quick-reference-manual-beginners/ | 2018-11-12T23:05:24 | CC-MAIN-2018-47 | 1542039741151.56 | [] | www.erp-docs.com |
Many repositories are used (at least in part) to manage files and other artifacts, including service definitions, policy files, images, media, documents, presentations, application components, reusable libraries, configuration files, application installations, databases schemas, management scripts, and so on. Most JCR repository implementations will store those files and maybe index them for searching.
But ModeShape does more. ModeShape sequencers can automatically unlock the structured information buried within all of those files, and this useful content derived from your files is then stored back in the repository where your client applications can search, access, and analyze it using the JCR API. Sequencing is performed in the background, so the client application does not have to wait for (or even know about) the sequencing operations.
The following diagram shows conceptually how these automatic sequencers do this.
As of ModeShape 3.6.0.Final, your applications can use a session to explicitly invoke a sequencer on a specified property. We call these manual sequencers. Any generated output is included in the session's transient state, so nothing is persisted until the application calls session.save().
Sequencers
Sequencers are just POJOs that implement a specific interface, and when they are called they simply process the supplied input, extract meaningful information, and produce an output structure of nodes that somehow represents that meaningful information. This derived information can take almost any form, and it typically varies for each sequencer.. A third example is a sequencer that works on XML Schema Documents might parse the XSD content and generate nodes that mirror the various elements, and attributes, and types defined within the schema document.
Sequencers allow a ModeShape repository to help you extract more meaning from the artifacts you already are managing, and makes it much easier for applications to find and use all that valuable information. All without your applications doing anything extra.
Each repository can be configured with any number of sequencers. Each one includes a name, the POJO class name, an optional classpath (for environments with multiple named classloaders), and any number of POJO-specific fields. Upon startup, ModeShape creates each sequencer by instantiating the POJO and setting all of the fields, then initializing the sequencer so it can register any namespaces or node type definitions.
There are two kinds of sequencers, automatic and manual.
Automatic Sequencers
An automatic sequencer has a path expression that dictates which content in the repository the sequencer is to operate upon. These path expressions are really patterns and look somewhat like simple regular expressions. When persisted content in the repository changes, ModeShape automatically looks to see which (if any) sequencers might be able to run on the changed content. If any of the sequencers do match, ModeShape automatically calls them by supplying the changed content. At that point, the sequencer then processes the supplied content and generates the output, and ModeShape then saves that generated output to the repository.
To use an automatic sequencer, simply add or change content in the repository that matches the sequencers' path expression. For example, if an XSD sequencer is configured for nodes with paths like "/files//*.xsd", then just simply upload a file into that location and save it. ModeShape will detect that the XSD sequencer should be called, and will do the rest. The generated content will magically appear in the repository.
Manual Sequencers
A manual sequencer is simply a sequencer that is configured without path expressions. Because no path expressions are provided, ModeShape cannot determine when/where these sequencers should be applied. Instead, manual sequencers are intended to be called by client applications.
For example, consider that a session just uploaded following code shows how an XSD sequencer configured with name "XSD Sequencer" is manually invoked to place the generated content directly under the "/files/schemas/Customers.xsd" node (and adjacent to the "jcr:content" node):
The sequence(...) method returns true if the sequencer generated output, or "false" if the sequencer couldn't use the input and instead did nothing.
Remember that when the sequence(...) does return, any generated output is only in the session's transient state and "session.save()" must be called to persist this state.
Built-in sequencers
ModeShape comes with sequencer implementations for a variety of file types:
Please see the Built-in sequencers section of the documentation for more detail on all of these sequencers, including how to configure them and the structure of the output they generate.
Custom sequencers
As mentioned earlier, a sequencer is actually just a plain old Java object (POJO). Creating a sequencer is pretty straightforward: create a Java class that extends a single abstract class, package it up for use, and then configure your repository to use it. We walk you through all these steps in the Custom sequencers section of the documentation.
Configuring a automatic sequencer.
A path expression consist of two parts: a selection criteria (or an input path) and an output path:
Input path, similar to regular expressions. Thus, the first input path in the previous table would match node "/a/b", and "b" would be captured and could be used within the output path using "$1", where the number used in the output path identifies the parentheses. Here are some examples of what's captured by the parenthesis and available for use in the output path:
Square brackets can also be used to specify criteria on a node's properties or children. Whatever appears in between the square brackets does not appear in the selected node. This distinction between the selected path and the changed path becomes important when writing custom sequencers.
Output paths
The outputPath part of a path expression defines where the content derived by the sequencer should be stored.
Typically, this points to a location in a different part of the repository, but it can actually be left off if the sequenced output is to be placed directly under the selected node. The output path can also use any of the capture groups used in the input path.
Workspaces in input and output paths
So far, we've talked about how input paths and output paths are independent of the workspace. However, there are times when it's desirable to configure sequencers to only work against content in a specific workspace. In these cases, it is possible to specify the workspace names before the path. For example:
Again, the rules are pretty straightforward. You can leave off the workspace name, or you can prepend the path with "workspaceNamePattern:", where "workspaceNamePattern" is a regular-expression pattern used to match the applicable workspace names. A blank pattern implies any match, and is a shorthand notation for the ".*" regular expression. Note that the repository names may not include forward slashes (e.g., '/') or colons (e.g., ':').
Example path expression
Let's look at an example sequencer path expression:
This matches a changed "jcr:data" property on a node named "jcr:content[1]" that is a child of a node whose name ends with ".jpg", ".jpeg", ".gif", ".bmp", ".pcx", or ".png" ( that may have any same-name-sibling index) appearing at any level in the "default" workspace. Note how the selected path capture the filename (the segment containing the file extension), including any same-name-sibling index. This filename is then used in the output path, which is where the sequenced content is placed under the "/images" node in the "meta" workspace.
So, consider a PNG image file is stored in the "default" workspace in a repository configured with an image sequencer and the aforementioned path expression, and the file is stored at "/jsmith/photos/2011/08/09/reunion.png" using the standard "nt:file" pattern. This means that an "nt:file" node named "reunion.png" is created at the designated path, and a child node named "jcr:content" will be created with primary type of "nt:resource" and a "jcr:data" binary property (at which the image file's content is store).
When the session is saved with these changes, ModeShape discovers that the
property satisfies the criteria of the sequencer, and calls the sequencer's execute(...) method with the selected node, input node, input property and output node of "/images" in the "meta" workspace. When the execute() method completes successfully, the session with the change in the "meta" workspace are saved and the content is immediately available to all other sessions using that workspace.
Waiting for automatic sequencing
When your application creates or uploads content that will kick off a sequencing operation, the sequencing is actually done asynchronously. If you want to be notified when the sequencing is complete, you can use ModeShape's observation feature to register a listener for the sequencing event.
The first step is to create a class that implements "javax.jcr.observation.EventListener". Normally this is pretty easy, but in our case we want to block until the listener is notified via a separate thread. An easy way to do this is to use a java.util.concurrent.CountDownLatch, and to count down the latch as soon as we get our event. (If we carefully register the listener using criteria for only the sequencing output we're interested in, we'll know we'll only receive one event.)
Here's our implementation that captures from the first event whether the sequencing was successful and the path of the output node, and then counts down the latch:
We could then register this using the public API: | https://docs.jboss.org/author/display/MODE/Sequencing | 2017-02-19T16:34:32 | CC-MAIN-2017-09 | 1487501170186.50 | [array(['/author/download/attachments/14352807/Sequencing+workflow.png?version=1&modificationDate=1323270993000',
None], dtype=object)
array(['/author/download/attachments/14352807/SequencingUploadedFile.png?version=1&modificationDate=1323279934000',
None], dtype=object) ] | docs.jboss.org |
Users User Note Category Edit
(Redirected from Help36:Users User Note Category): Title of the Category.
- Alias: The Alias will be used in the SEF URL. Leave this blank and Joomla will fill in a default value from the title. This value will depend on the SEO settings (System → Global Configuration →: Enter an optional note to display in the category list.
-.
- Alt Text: Alternative text used for visitors without access to images..
User Notes: Edit Category: | https://docs.joomla.org/Help36:Users_User_Note_Category | 2017-02-19T16:34:00 | CC-MAIN-2017-09 | 1487501170186.50 | [] | docs.joomla.org |
This Sync (talk| contribs).
Below is a list of the language extension and available tools used for translations on Joomla Documentation
We currently have installed and are using the Translate extension, which uses the subpage convention to mark and translate pages on Joomla! ドキュメンテーション.
The translation system is recommended at least for most visited and stable pages. Further policies have not been established and are being considered.
You can request a page to be added to translation by preparing it for translation, then a translation administrator will have to enable it (see the tutorial How to prepare a page for translation); otherwise, ask directly to one of the Translation administrators to do it.
Below is a list of links to pages, localised, explaining common translation wording and do's and do not's when translating from English pages to your language. | http://docs.joomla.jp/JDOC:Language_policy | 2017-02-19T16:36:13 | CC-MAIN-2017-09 | 1487501170186.50 | [] | docs.joomla.jp |
Effective January 8, 2018, PBS is allowing producers to insert a 15-second card/video to recognize non-corporate funders into their streaming full-length video assets.
If you would like to use a 15-second funder pod to recognize the non-corporate funders, please contact PBS at [email protected] six weeks prior to air/streaming date to get started. We will provide you with access to a Box folder for file delivery and a form to submit along with your files.
You will be required to complete the Funder Request Form and deliver the funder media files to PBS at least four weeks before each episode airs or streams (whichever comes first).
The card/video needs to meet PBS streaming media specifications:
VIDEO FUNDER FILE
Container
- MPEG-4 (.mp4)
Video Stream
- H.264 codec
- High Profile
- 15 Mbps bitrate
- 1920 x 1080 frame size (16:9)
- 29.97 fps (or original)
- Progressive scan (no interlacing)
- Must Be Title Safe
*Must include at least 20 extra frames at the start and end for editing purposes
Audio Stream
- AAC codec (AAC-LC or HE-AAC)
- 192 Kbps bitrate
- 48 KHz sampling rate
- Stereo (2 channels)
STATIC IMAGE FUNDER CARD
- Hi-Res 1920 x 1080
- Must be Title Safe
NOTE: This is being offered only for new programs and episodes. If you’d like these to be added to past videos, PBS will charge producers for the time to edit and process the new files. Edits will also be scheduled after new episodes of current programs are completed.
PBS will make every effort to include all submitted funder pods but cannot guarantee that all videos will include funders in the event of any unforeseen circumstances. | https://docs.pbs.org/pages/viewpage.action?pageId=15008590 | 2017-12-11T05:54:19 | CC-MAIN-2017-51 | 1512948512208.1 | [] | docs.pbs.org |
user: Users
The
user utility is used for managing additional Control Panel users.
Usage
user <command> [<login_name>] [ <option_1> [<param>] [<option_2> [<param>]] ... [<option_N> [<param>]]
Example
The following command creates an additional
Options
Leave your feedback on this topic here
If you have questions or need support, please visit the Plesk forum or contact your hosting provider.
The comments below are for feedback on the documentation only. No timely answers or help will be provided. | https://docs.plesk.com/en-US/onyx/cli-linux/using-command-line-utilities/user-users.66536/ | 2017-12-11T05:49:47 | CC-MAIN-2017-51 | 1512948512208.1 | [] | docs.plesk.com |
- Contact Form
- One Page Menu
- Overlay/Transparent Menu
- Modal Buttons / Popups
- How to use custom fonts
- I can't change the logo
- SVG logo doesn't show up
- Column Width
- Google Maps doesn't work
- How to change header/title wrapper image
- Header Breakpoint
- I can't find Liquid/Ave Events/Calendar/Schedule
- Adding Shape Dividers/Row Separators
- How to Remove Duplicate Demo Content After Import
- Hiding Columns on Mobile/Desktop
- Desktop/Mobile menu doesn't work properly
- How to change media element width
- Demo Import Fails
- How to duplicate portfolio item
- Performance Testing Using GTMetrix | https://docs.liquid-themes.com/category/58-faq | 2019-05-19T15:16:57 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.liquid-themes.com |
Subscriptions 2.0 changed the way data was stored and instantiated. Because of this, it needed to deprecate a large number of existing hooks.
Some of these hooks had dynamic names, i.e. actions/filters where we don’t know what the hook actually is at run-time because it includes a payment gateway ID or another piece of dynamic data we don’t know in advance. Because of this, there was no choice but to either break backwards compatibility completely for those hooks (which would be severe in the case of scheduled subscription payment hooks) or add a small function to the
'all' hook to check if it’s a hook we have deprecated and maintain backwards compatibility.
The latter option was chosen as far better, but it may cause performance issues on some sites. We have, however, attempted to mitigate performance issues and also made sure there is a way to remove this code if your site is negatively affected and you know you have no deprecated code running on your site.
Mitigating performance issues ↑ Back to top
Functions attached to
'all', while running many times each request, are minor. They simply check the name of the hook to see if they start with 13 known hook prefixes.
This was done to mitigate any major performance impact on your site. However, depending on the number of plugins running on your site, you may find that running this tiny piece of logic on all hooks markedly reduces site performance. If this is the case, consider removing deprecated handling as outlined below.
Removing deprecated hook handling ↑ Back to top
If you know your site is not running outdated code, you can avoid having to load both this and other depreciation handling code to improve performance with Subscriptions 2.0.
The snippet below tells Subscriptions not to load deprecated handlers, including classes that attach to the
'all' hook to handle dynamic hooks:
This snippet can also be downloaded and installed as a plugin.
Warning ↑ Back to top
You must be certain that no third-party or custom code is running on your site that is dependent on deprecated hooks before disabling this deprecated hook handling. Disabling will break backwards compatibility with all old hooks and can, among other things:
- Prevent recurring payments being processed
- Break synchronization of subscription status between the payment gateway and store
- Break updating of a failing payment method
List of Deprecated Hooks ↑ Back to top
This gist provides a complete list of deprecated hooks, and their new counterpart, formatted as PHP arrays. The new hook is listed as the array key, and the deprecated hook as the value or values. | https://docs.woocommerce.com/document/subscriptions-query-monitor-warning/ | 2019-05-19T15:06:53 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.woocommerce.com |
Examples¶
The
examples directory in the JupyterLab repo contains:
several stand-alone examples (
console,
filebrowser,
notebook,
terminal)
a more complex example (
lab).
Installation instructions for the examples are found in the project’s README.
After installing the jupyter notebook server 4.2+, follow the steps for
installing the development version of JupyterLab. To build the examples,
enter from the
jupyterlab repo root directory:
jlpm run build:examples
To run a particular example, navigate to the example’s subdirectory in
the
examples directory and enter:
python main.py
Dissecting the ‘filebrowser’ example¶
The filebrowser example provides a stand-alone implementation of a filebrowser. Here’s what the filebrowser’s user interface looks like:
Let’s take a closer look at the source code in
examples/filebrowser.
Directory structure of ‘filebrowser’ example¶
The filebrowser in
examples/filebrowser is comprised by a handful of
files and the
src directory:
The filebrowser example has two key source files:
src/index.ts: the TypeScript file that defines the functionality
main.py: the Python file that enables the example to be run
Reviewing the source code of each file will help you see the role that each file plays in the stand-alone filebrowser example. | https://jupyterlab.readthedocs.io/en/stable/developer/examples.html | 2019-05-19T14:39:36 | CC-MAIN-2019-22 | 1558232254889.43 | [array(['../_images/filebrowser_example.png', 'filebrowser user interface'],
dtype=object)
array(['../_images/filebrowser_source.png', 'filebrowser source code'],
dtype=object) ] | jupyterlab.readthedocs.io |
GemFire XD does not validate the constraints for all affected rows before applying a bulk update (a single DML statement that updates or inserts multiple rows). The design is optimized for applications where such violations are rare.
A constraint violation exception (or any other exception) that is thrown during a bulk update operation does not indicate which row of the bulk update caused a violation. Applications that receive any such exception cannot determine whether any rows in the bulk operation updated successfully.
To address the possibility of constraint violations or exceptions occurring during a bulk update, an application should always apply a bulk update within the scope of a transaction. Using a transaction is the only way to ensure that all rows are either updated or rolled back as a unit. As an alternative, the application should select rows for updating based on primary keys, and apply those updates one at a time.
In addition, without using a transaction, concurrent DML operations (bulk updates or otherwise) can modify qualified rows, giving incorrect results to both the operations. Use transactions where necessary to ensure that a bulk update operation completes or rolls back as a whole. | http://gemfirexd.docs.pivotal.io/docs/1.0.0/userguide/developers_guide/consistency-atomicity.html | 2019-05-19T15:50:26 | CC-MAIN-2019-22 | 1558232254889.43 | [] | gemfirexd.docs.pivotal.io |
All content with label amazon+dist+import+infinispan+listener+rest.
Related Labels:
json, expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, release, query, deadlock, rest_security, archetype, jbossas, nexus, guide, cache, s3,
grid, test, jcache, api, xsd, ehcache, maven, documentation, write_behind, ec2, 缓存, hibernate, jwt, aws, interface, setup, clustering, eviction, gridfs, concurrency, out_of_memory, jboss_cache, index, events, hash_function, configuration, buddy_replication, loader, write_through, cloud, mvcc, tutorial, notification, jbosscache3x, read_committed, xml, distribution, jose, cachestore, data_grid, cacheloader, resteasy, hibernate_search, cluster, development, permission, transaction, async, interactive, xaresource, build, searchable, demo, installation, scala, client, non-blocking, migration, jpa, filesystem, tx, json_encryption, gui_demo, eventing, client_server, testng, infinispan_user_guide, hotrod, snapshot, repeatable_read, webdav, docs, consistent_hash, batching, store, jta, faq, 2lcache, docbook, jgroups, lucene, locking, json_signature, hot_rod
more »
( - amazon, - dist, - import, - infinispan, - listener, - rest )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/amazon+dist+import+infinispan+listener+rest | 2019-05-19T15:15:25 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.jboss.org |
What is Performance Optimizer? The Performance Optimizer analyzes the performance of your McAfee® ePolicy Orchestrator® (McAfee® ePO™) environment with a score and recommendations for improved performance. Dashboards display the results of the collected data, allowing you to drill down for more detail and to view recommendations. Assessments allow you to view details about your environment. For example, you can view information about unmanaged systems, systems with an inactive McAfee® Agent or Agent Handler, and timestamps of user logons. You can also configure Automatic Responses to send text messages or email notifications when a specific performance area requires examination. | https://docs.mcafee.com/bundle/performance-optimizer-2.2.0-product-guide-epolicy-orchestrator/page/GUID-9E62B415-00CE-4F61-81D0-F6B95C33EACA.html | 2019-05-19T14:34:05 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.mcafee.com |
Now having just been introduced to loops within the Miva Merchant template language, both foreach loops and while loops. There’s one more programming construct that will make your life easier. It’s call pos1 or position 1. Pos1 is a built in loop counter. It’s native to all foreach loops and while loops. It’s a special local variable that will give you the current iteration of the loop that you’re on. It’s typically just written or referred to as pos1 but l.pos1 also works and you’ll see them written both ways.
One other feature of pos1 is that when you have nested loops, so say you have one foreach loop inside of another foreach loop, the pos1 variable changes from pos1 to pos2, pos3, etc. However many loops you have, that number increments. So the 1 just refers to the loop you’re in, the two the 2nd and so on. Most the time when using the position variable you’ll use just pos1. However, it’s important to know that the other ones exist. Let’s take a look at some examples. So here I’m back at my text editor. I grabbed a foreach loop from the category page. So this is the loop that outputs products that are assigned to a category. It’s part of the category listing item and it’s the products array. Here my iterator is “product.” So this loop will loop through every product assigned to this category and output some value. For this example I got rid of all the code and I’m just outputting pos1. I’m using mvt:eval and I’m putting l.pos1. I could get rid of the l. and just do the pos1 and both should output the same value. So let’s bring this code over to Miva and let’s see what happens when it runs. In Miva I’m on the CTGY page and I’m just going to put this in the Header and Footer section of this page, click “Update.” When I come back to the category page on the front end and hit refresh you’ll see it outputs Pos1 and it has the Values 1-10. That’s exactly what we’d expect because there’s ten products on this page.
Let’s take a look at another example. So here I added a new “if” statement. If pos1 EQ 5 and again, I could put l.pos1 here or just use pos1. So if it does equal five, it will output this comment, “This is the 5th iteration of the loop.” So we should see this comment one time after the fifth iteration of the loop. Let’s run it and see what happens. Now when I come back to the store and refresh, you’ll see there’s my comment and it’s right after the fifth iteration of the loop. So this is a great way to target a specific iteration within a foreach loop.
Now what happens if you want to target multiple within the foreach loop? Say every fourth, eighth, twelfth, sixteenth product within that array. Well, there’s a great math trick that allows us to do this. We can use pos1 with the MOD operator, which is the modulus operator. This will return the remainder after it divides two numbers together. So what it will do is it will take pos1 the value, divide it by 4 and if the remainder equals 0, which means if it evenly divides by that number, then it will return true. So here, if pos1 equals 4, 4 divided by 4 is one with the remainder of 0 so that would be true. The same is true with 8. 8 divided by 4 is 2 with the remainder of 0 and that would return true as well. Let’s run this and see what happens. I’ll paste this in here and if I refresh this on the frontend, so you’ll see after the fourth iteration of the loop, it prints this comment and then after the eighth loop it prints it again. If we had twelve or sixteen it would also do it. This is a great little tool to allow you to target different multiples within the loop.
So the last thing I want to show you has to do with the nested foreach loops. When to use pos1 versus pos2, pos3, etc. I’m going to get rid of this code that we were working with earlier and I’m going to paste in the same foreach loop again so we have nested foreach loops. So it’s the same category_listing_products, and I changed the iterator to product2 just so it’s unique. Here, I have Pos2 Value: and I’m outputting l.pos2 and it’s pos2 because it’s the second loop. This loop is contained within the previous loop. Say I had another loop within this foreach loop then the pos counter would get incremented into pos3. So let’s run this and see what outputs. So here we see it has the pos1 value of 1 and then it gets into the second foreach loop and within that foreach loop it loops through it ten times and it outputs one through ten again. Then it comes back to pos1 as a value of 2 and it loops through the pos2 value ten times again. This is exactly what we would expect.
So the pos1 counter is a great built in feature and it will save you from having to create and maintain your own counters when working with loops. | https://docs.miva.com/videos/post-loop-counter | 2019-05-19T14:33:14 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.miva.com |
Virtualenv¶
virtualenv is a tool to create isolated Python environments. virtualenv creates a folder which contains all the necessary executables to use the packages that a Python project would need.
Install virtualenv via pip:
$ pip install virtualenv
Basic Usage¶
-
- To begin using the virtual environment, it needs to be activated:
$.
Other Notes¶ venv
This creates the
venv folder inside
~/Envs.
- Work on a virtual environment:
$ workon venv
Alternatively, you can make a project, which creates the virtual environment,
and also a project directory inside
$PROJECT. | https://python-guide-kr.readthedocs.io/ko/latest/dev/virtualenvs.html | 2019-05-19T15:45:57 | CC-MAIN-2019-22 | 1558232254889.43 | [] | python-guide-kr.readthedocs.io |
RF Combine allows combining XML files from multiple experiments into a single profile.
For example, this can be useful when performing CIRS-seq experiments, to combine into a single profile both the reactivity of A/C residues probed with DMS, and of G/U residues probed with CMCT.
Alternatively, RF Combine is able to combine into a single profile multiple replicates of the same probing experiment. In these cases, the resulting XML files may contain optional “-error” tags, in which the per-base standard deviation of the measure from each experiment is reported.
When combining datasets containing NaN values, only non-NaN positions in all experiments will be combined, while the others will be reported as NaNs as well.
There is no limit to the number of experiments that RF Combine can handle. It can be used both on individual XML files, or on whole XML folders generated by either RF Norm, RF Silico, or RF ModCall.
Important
RF Combine does not allow combining RF Norm XML files generated using different scoring/normalization methods, since this will produce inconsistent data.
Note
In XML files generated using RF Combine, the
combined attribute of the
transcript tag is set to
TRUE.
Usage
To list the required parameters, simply type:
$ rf-combine -h | https://rnaframework.readthedocs.io/en/latest/rf-combine/ | 2019-05-19T14:34:54 | CC-MAIN-2019-22 | 1558232254889.43 | [] | rnaframework.readthedocs.io |
EditorProperty¶
Inherits: Container < Control < CanvasItem < Node < Object
Category: Core
Signals¶
- multiple_properties_changed ( PoolStringArray properties, Array value )
Emit yourself if you want multiple properties modified at the same time. Do not use if added via EditorInspectorPlugin.parse_property
Used by sub-inspectors. Emit if what was selected was an Object ID.
Do not emit this manually, use the emit_changed method instead.
Used internally, when a property was checked.
Emit if you want to add this value as an animation key (check keying being enabled first).
Emit if you want to key a property with a single value.
If you want a sub-resource to be edited, emit this signal with the resource.
Internal, used when selected.
Description¶
This control allows property editing for one or multiple properties into EditorInspector. It is added via EditorInspectorPlugin.
Property Descriptions¶
Used by the inspector, set when property is checkable.
Used by the inspector, when the property is checked.
Used by the inspector, when the property must draw with error color.
Used by the inspector, when the property can add keys for animation/
Set this property to change the label (if you want to show one)
Used by the inspector, when the property is read-only.
Method Descriptions¶
If any of the controls added can gain keyboard focus, add it here. This ensures that focus will be restored if the inspector is refreshed.
If one (or many properties) changed, this must be called. “Field” is used in case your editor can modify fields separately (as an example, Vector3.x). The “changing” argument avoids the editor requesting this property to be refreshed (leave as false if unsure).
Get the edited object.
Get the edited property. If your editor is for a single property (added via EditorInspectorPlugin.parse_property), then this will return it..
Override if you want to allow a custom tooltip over your property.
Add controls with this function if you want them on the bottom (below the label).
- void update_property ( ) virtual
When this virtual function is called, you must update your editor. | https://docs.godotengine.org/en/latest/classes/class_editorproperty.html | 2019-05-19T14:20:50 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.godotengine.org |
SearchResultSelectionGeneratable¶
Leveraging the SearchResultSelectionGeneratable annotation, you can create a complex
Content type with a search result action. For example, you can have a
MultiMediaGallery type that consists of
Image and
Video slides. A query for
Image or
Video types displays an associated action in the Search Panel. Invoking the action creates a
MultiMediaGallery object with any
Image or
Video objects selected in the search results.
The following steps show how to construct a search result action that creates a
MultiMediaGallery content type.
Step 1: Create the Content Type
In the previous snippet—
- Line 1 is the class annotation that enables
MultiMediaGalleryobjects to be created from
Imageand
Videoobjects returned in search results.
- Lines 4 to 6 define a list for a
Slidetype and associated getter and setter methods.
- Lines 8 to 9 embed an abstract
Slideclass.
- Lines 11 to 21 implement an
ImageSlideinner class for slides consisting of
Imageobjects.
- Lines 23 to 33 implement a
VideoSlideinner class for slides consisting of
Videoobjects.
- Lines 35 to 52 implement the
fromSelectionmethod from the
SearchResultSelectionGeneratableinterface. The method creates
Imageor
Videoslides from the SearchResultSelection object. After usage of the
SearchResultSelectionto create a new
Contentinstance, the
SearchResultSelectionis destroyed.
Step 2: Implement SearchResultAction
The
SearchResultAction implementation displays the applicable action button in the Search Panel.
In the previous snippet—
- Lines 20–22 check for search results that are selected in the UI. If there are no selections, then the implementation does not display the action button.
- Lines 29–31 construct the URL to the page servlet, specified in the servlet routing path as
toolUserMultiMedia. Only one parameter is passed to the servlet,
selectionId, returned by the
SearchResultSelection#getId()method.
- Line 32 specifies the label on the action button. The label is retrieved from a localization resource file.
When results are selected in the Search Panel, the “Create MultiMediaGallery” button appears.
Step 3: Implement Page Servlet
The page servlet invoked from the search result action creates the
MultiMediaGallery objects from the search result selections.
- Line 1 specifies the routing filter of the servlet as “toolUserMultiMedia”.
- Line 12 gets the value of the
selectionIdparameter passed from
MultiMediaGalleryAction.
- Line 13 performs a query to get the
SearchResultSelectionobject identified by
selectionId.
- Lines 15–17 create a
MultiMediaGalleryobject. Note that the
SearchResultSelectionobject can represent selections of various content types. However,
MultiMediaGalleryis limited to
Imageand
Videoitem types, so the implemented
fromSelectionmethod creates gallery slides from only those types. | http://docs.brightspot.com/cms/developers-guide/search/generatable.html | 2018-04-19T11:30:18 | CC-MAIN-2018-17 | 1524125936914.5 | [array(['../../../_images/actions2.png', '../../../_images/actions2.png'],
dtype=object) ] | docs.brightspot.com |
Class: SC.Scanner
A Scanner reads a string and interprets the characters into numbers. You assign the scanner's string on initialization and the scanner progresses through the characters of that string from beginning to end as you request items.
Scanners are used by
DateTime to convert strings into
DateTime objects.
Defined in: datetime.js
- Since:
- SproutCore 1.0
Field Summary
- Fields borrowed from SC.Object:
- concatenatedProperties, isDestroyed, isObject, nextProperty, object, property, target, toInvalidate
- Fields borrowed from SC.Observable:
- isObservable
Instance Methods
Field DetailscanLocation Integer
The current scan location. It is incremented by the scanner as the characters are processed. The default is 0: the beginning of the string.
The string to scan. You usually pass it to the create method:
SC.Scanner.create({string: 'May, 8th'});
Instance Method Detail
scan(len)
Reads some characters from the string, and increments the scan location accordingly.
- Throws:
- SC.SCANNER_OUT_OF_BOUNDS_ERROR
- If asked to read too many characters
scanArray(ary)
Attempts to scan any string in a given array.
- Parameters:
- ary Array
- the array of strings to scan
- Returns:
- Integer
- The index of the scanned string of the given array
- Throws:
- SC.SCANNER_SCAN_ARRAY_ERROR
- If no string of the given array is found
scanInt(min_len, max_len)
Reads some characters from the string and interprets it as an integer.
- Parameters:
- min_len Integer
- The minimum amount of characters to read
- max_len Integer Optional
- The maximum amount of characters to read (defaults to the minimum)
- Returns:
- Integer
- The scanned integer
- Throws:
- SC.SCANNER_INT_ERROR
- If asked to read non numeric characters
skipString(str)
Attempts to skip a given string.
- Parameters:
- str String
- The string to skip
- Returns:
- Boolean
- YES if the given string was successfully scanned, NO otherwise
- Throws:
- SC.SCANNER_SKIP_ERROR
- If the given string could not be scanned
Documentation generated by JsDoc Toolkit 2.4.0 on Wed Apr 08 2015 10:02:21 GMT-0600 (CST) | http://docs.sproutcore.com/symbols/SC.Scanner.html | 2018-04-19T11:55:09 | CC-MAIN-2018-17 | 1524125936914.5 | [] | docs.sproutcore.com |
Preview Key Features
PRODUCTIVE FUNCTIONALITY
- Full document and web page preview
- Preview and print 500 file formats including DWG.
- in-document plugins for mobile support
- Optimized for preview on public web sites
- No-coding – 100% configurable in administration UI
Supported Systems
- SharePoint Server 2016
- SharePoint Server 2013
- SharePoint Server 2010
- SharePoint Server 2007
More:
Feedback
Thanks for your feedback.
Post your comment on this topic. | http://docs.surfray.com/ontolica-search-preview/1/en/topic/preview-tech-spec | 2018-04-19T11:28:24 | CC-MAIN-2018-17 | 1524125936914.5 | [array(['http://manula.r.sizr.io/large/user/760/img/3d-team-work-201wallpapers.jpg',
None], dtype=object) ] | docs.surfray.com |
Browse Data Permission
By default, Browse Data permission is given to all users when an account is first created in Wavefront. Browse Data permission allows you to:
- View the Dashboards, Alerts, Metrics, Sources, Events, Maintenance Windows, and Webhooks pages
- Add dashboards to your list of favorites
- View existing dashboards and charts
- Create and interact with charts without the ability to save
- Share dashboards and charts with other users
- Access the Wavefront Community and your user profile
Direct Data Ingestion Permission
Users with Direct Data Ingestion permission have the ability to directly ingest metrics using the Wavefront API. Direct Data Ingestion permission should only be granted to users who have a deep understanding of APIs and the Wavefront ingestion path.
Embed Charts Permission
While every Wavefront user can access charts and make temporary changes to chart parameters, Embed Charts permission gives you the ability to embed an interactive chart outside of Wavefront. Embedded chart URLs are associated with a specific user account, so if a user embeds a chart and later has their Wavefront account removed, that embedded chart will no longer work. For instructions, see Embedding a Chart. | https://docs.wavefront.com/permissions_misc.html | 2018-04-19T11:52:01 | CC-MAIN-2018-17 | 1524125936914.5 | [] | docs.wavefront.com |
Forefront Identity Manager Password Management
Managing passwords for multiple user accounts is one of the complexities of managing an enterprise environment with multiple data sources. Microsoft® Forefront Identity Manager (FIM) 2010 R2 provides two password management solutions:
Password synchronization – Utilizes the password change notification service (PCNS) to capture password changes from Active Directory and propagate them to other connected data sources.
User-based password change management – Utilizes the Windows Management Instrumentation (WMI) through Web-based Help Desk and self-service password reset applications.
By using password synchronization and user-based password change management, you can:
Reduce the number of different passwords users have to remember.
Simultaneously set or change passwords in a user's multiple accounts to the same password.
Allow users to change their own passwords in Active Directory and push the password change out to other systems.
Eliminate the risk of building an additional password or credential store.
Synchronize passwords across multiple data sources by using Active Directory as the authoritative source.
Perform password management operations in real time, independent of FIM operations.
Password extensions
Management agents for directory servers support password change and set operations by default. For file-based, database, and extensible connectivity management agents, which do not support password change and set operations by default, you can create a .NET password extension dynamic-link library (DLL). The .NET password extension DLL is called whenever a password change or set call is invoked for any of these management agents. Password extension settings are configured for these management agents in Synchronization Service Manager. For more information about configuring password extensions, see the FIM Developer Reference.
Password synchronization
Password synchronization works with the password change notification service (PCNS) on an Active Directory domain, and allows password changes that originate from Active Directory to be automatically propagated to other connected data sources. FIM accomplishes this by running as a Remote Procedure Call (RPC) server that listens for a password change notification from an Active Directory domain controller. When the password change request is received and authenticated, it is processed by FIM and propagated to the appropriate management agents.
Important
Bi-directional password synchronization is not supported by FIM. Configuring bi-directional password synchronization can create a loop, which will consume server resources and have a potentially negative effect on both Active Directory and FIM.
The PCNS runs on each Active Directory domain controller. The systems that receive the password notifications are known as targets. Your FIM must be configured as a PCNS target in Active Directory before password notifications are sent. The PCNS configuration must define an inclusion group and, optionally, an exclusion group. These groups are used to restrict the flow of sensitive passwords from the domain. For example, to send passwords for all users, but not send administrative passwords, you might choose to use Domain Users as the inclusion group, and Domain Admins as the exclusion group. For more information about configuring the password change notification service, see Using Password Synchronization.
The components involved in the password synchronization process are:
Password change notification service (Pcnssvc.exe)–The password change notification service runs on a domain controller and is responsible for receiving password change notifications from the local password filter, queuing them for the target server running FIM, and using RPC to deliver the notifications. The service encrypts the password and ensures that the password remains secure until it is successfully delivered to the target server running FIM.
Service principal name (SPN) – The SPN is a property on the account object in Active Directory that is used by the Kerberos protocol to mutually authenticate the PCNS and the target. The SPN ensures that the PCNS authenticates to the correct server running FIM, and that no other service can receive the password change notifications. The SPN is created and assigned by using the setspn.exe tool. For more information about configuring the SPN, see Using Password Synchronization.
Password change notification filter (Pcnsflt.dll) – The password filter is used to obtain plaintext passwords from Active Directory. This filter is loaded by the Local Security Authority (LSA) on each Windows Server domain controller participating in password distribution to a target server running FIM. Once the filter has been installed and the domain controller has been restarted, the filter begins to receive password change notifications for password changes that originate on that domain controller. The password notification filter runs simultaneously with other filters that are running on the domain controller.
Password change notification service configuration utility (Pcnscfg.exe) – The pcnscfg.exe utility is used to manage and maintain the password change notification service configuration parameters stored within Active Directory. These configuration parameters, such as defining the target servers, the password queue retry interval, and enabling or disabling a target server, are used when authenticating and sending password notifications to the target server running FIM.
The service configuration is stored in Active Directory, so it is only necessary to update the configuration on one domain controller. Active Directory replicates the change to all other domain controllers.
Remote Procedure Call (RPC) server on the server running FIM – When password synchronization is enabled, the RPC server on the server running FIM is started, enabling it to receive notifications from the password change notification service. RPC dynamically selects a range of ports to use. If you require FIM to communicate with the Active Directory forest through a firewall, you must open a range of ports.
Password extension DLL – The password extension DLL provides a way to implement password set or change operations by means of a rules extension for any database, extensible connectivity, or file-based management agent. This is accomplished by creating an export-only, encrypted attribute named "export_password" that does not actually exist in the connected directory but can be accessed and set in provisioning rules extensions or can be used during export attribute flow. For more information about configuring password extensions, see the FIM Developer Reference.
Preparing for password synchronization
Before you set up password synchronization for your FIM and Active Directory environment, verify the following:
FIM is installed according to installation instructions.
Management agents for the connected data sources to be managed for password synchronization are already created and the objects are being successfully joined and synchronized.
To set up password synchronization:
Extend the Active Directory schema to add the classes and attributes necessary for installing and running the password change notification service (PCNS).
Install the PCNS on each domain controller.
Configure the service principal name (SPN) in Active Directory for the FIM service account.
Configure the PCNS to communicate with the target FIM service.
Configure the management agents for the connected data sources to be managed for password synchronization.
Enable password synchronization on FIM.
For more information about setting up password synchronization, see Using Password Synchronization.
Password synchronization process
The process of synchronizing a password change request from an Active Directory domain controller to other connected data sources is shown in the following diagram:
.gif)
The user initiates the password change request by pressing Ctrl+Alt+Del. The password change request, including the new password, is sent to the nearest domain controller.
The domain controller records the password change request and notifies the password change notification filter (Pcnsflt.dll).
The password change notification filter passes the request to the password change notification service (PCNS).
The PCNS verifies the password change request, then authenticates the service principal name (SPN) by using Kerberos, and forwards the password change request in encrypted RPC to the FIM target server.
FIM validates the source domain controller, then uses the domain name to locate the management agent that services that domain, and uses the user account information in the password change request to locate the corresponding object in the connector space.
By using the join table information, FIM determines the management agents that receive the password change, and pushes the password change out to them.
Password synchronization security
The following password synchronization security concerns have been addressed:
Authentication from the password source – When the password change notification is received, Kerberos authentication is done by FIM as well as the source domain controller to ensure both the recipient and sender are valid. Upon receiving a password change notification, FIM ensures that the caller has an account in the Domain Controllers container of the domain it belongs to.
Failed password synchronization to a target data source due to an insecure connection – If the management agent has been configured to require a secure connection but one is not detected, the synchronization fails. Synchronization still occurs if the management agent has been configured to allow non-secure connections. Allowing non-secure connections should be enabled only after examining and understanding the risks involved.
Secure storage of passwords – FIM only stores encrypted passwords temporarily. All passwords received by FIM during a password change notification operation are encrypted as soon as they enter the FIM process. The moment they are successfully sent out to the target connected data source, they are decrypted, and the memory storing the password is immediately cleared. If the operation fails to write to the target connected data source, the encrypted password is stored until all retry attempts have been attempted, and then is cleared from memory.
Secure password queues – Passwords stored in PCNS password queues are encrypted until they are delivered.
Password synchronization error recovery scenarios
Ideally, whenever a user changes a password, the change is synchronized with no errors. The following scenarios describe how FIM recovers from common synchronization errors:
Failed password notification from Active Directory to FIM – This can occur if the network is down, or if the server running FIM is unavailable. The password change notification remains queued locally on the domain controller by the PCNS. The PCNS reattempts the notification according to its retry interval configuration.
Failed password synchronization to a target data source – This can also occur if the network is down, or if the target data source is unavailable. The password change notification is queued and retried according to the management agent's configuration for retry attempt and retry interval. All passwords are encrypted while they are stored for retry, and deleted when the operation succeeds or the retry limits are hit.
Activating a warm standby server running FIM after a failure – In the case of the primary server running FIM failing, you can configure a warm standby server for password synchronization, and activate it with no loss of password changes. For more information, see MIISactivate: Server Activation Tool.
Some failures are serious enough that no amount of retries is likely to result in a successful operation. In these cases, an error event is logged and the process is stopped. The following events are not retried:
User-based password change management
FIM provides two web applications that use Windows Management Instrumentation (WMI) for resetting passwords. As with password synchronization, you activate password management when you configure the management agent in Management Agent Designer. For information about password management and WMI, see the FIM Developer Reference.
FIM creates two security groups during installation that specifically support password management operations:
FIMSyncBrowse—Members of this group have permission to gather information about a user's accounts when doing search operations with WMI queries.
FIMSyncPasswordSet—Members of this group have permission to perform account search, password set, and password change operations using the password management interfaces with WMI.
For more information about FIM security groups, see Using Security Groups. | https://docs.microsoft.com/en-us/previous-versions/mim/jj590203(v=ws.10) | 2018-04-19T12:26:10 | CC-MAIN-2018-17 | 1524125936914.5 | [array(['images%5cjj590203.42c140eb-344a-489a-ae6e-3a0152de5c61(ws.10',
'How Password Synchronization Works How Password Synchronization Works'],
dtype=object) ] | docs.microsoft.com |
ETL
Page summary display
Extract, Transform, and Load. This is the process of getting your data from your data source (Extract), transforming it into a format that can be read by your analysis tool (Transform), and then loading it into a data store that can be accessed by the analysis tool (Load). | https://docs.interana.com/lexicon/ETL | 2018-04-19T12:01:10 | CC-MAIN-2018-17 | 1524125936914.5 | [] | docs.interana.com |
Indexes
Important.
The computed_column_expression.
Important
When you refer to string literals of the date data type in indexed computed columns in SQL Server, we recommend that you explicitly convert the literal to the date type that you want by using a deterministic date format style. For a list of the date format styles that are deterministic, see CAST and CONVERT.
Note
Expressions that involve implicit conversion of character strings to date data types are considered nondeterministic, unless the database compatibility level is set to 80 or earlier. This is because the results depend on the LANGUAGE and DATEFORMAT settings of the server session.
For example, the results of the expression
CONVERT (datetime, '30 listopad 1996', 113) depend on the LANGUAGE setting because the string '
30 listopad 1996' means different months in different languages.
Similarly, in the expression
DATEADD(mm,3,'2000-12-01'), the Database Engine interprets the string
'2000-12-01' based on the DATEFORMAT setting.
Implicit conversion of non-Unicode character data between collations is also considered nondeterministic, unless the compatibility level is set to 80 or earlier.
When the database compatibility level setting is 90, you cannot create indexes on computed columns that contain these expressions. However, existing computed columns that contain these expressions from an upgraded database are maintainable. If you use indexed computed columns that contain implicit string to date conversions, to avoid possible index corruption, make sure that the LANGUAGE and DATEFORMAT settings are consistent in your databases and applications.is int and deterministic but not precise.
CREATE TABLE t2 (a int, b int, c int, x float, y AS CASE x WHEN 0 THEN a WHEN 1 THEN b ELSE c END);).
Note
Setting ANSI_WARNINGS to ON implicitly sets ARITHABORT to ON when the database compatibility level is set to 90 or higher.
Creating Indexes on Persisted Computed Columns.
Related Content
COLUMNPROPERTY (Transact-SQL)
CREATE TABLE (Transact-SQL)
ALTER TABLE (Transact-SQL) | https://docs.microsoft.com/en-us/sql/relational-databases/indexes/indexes-on-computed-columns?view=sql-server-2017 | 2018-04-19T12:48:27 | CC-MAIN-2018-17 | 1524125936914.5 | [array(['../../includes/media/yes.png?view=sql-server-2017', 'yes'],
dtype=object)
array(['../../includes/media/yes.png?view=sql-server-2017', 'yes'],
dtype=object)
array(['../../includes/media/no.png?view=sql-server-2017', 'no'],
dtype=object)
array(['../../includes/media/no.png?view=sql-server-2017', 'no'],
dtype=object) ] | docs.microsoft.com |
Mixin: SC.Array
Extends SC.Enumerable.
This module implements Observer-friendly Array-like behavior. This mixin is picked up by the Array class as well as other controllers, etc. that want to appear to be arrays.
Unlike
SC.Enumerable, if an array changes by changing the syntax of the property to .observes('*myProperty.[]') .
To support
SC.Array in your own class, you must override two
primitives to use it: replace() and
objectAt().
Note that the
SC.Array mixin also incorporates the
SC.Enumerable mixin. All
SC.Array-like objects are also enumerable.
- Since:
- SproutCore 0.9.0
Field Summary
- Fields borrowed from SC.Enumerable:
- isEnumerable
Instance Methods
- addArrayObservers(options)
- addRangeObserver(indexes, target, method, context)
- arrayContentDidChange(start, removedCount, addedCount)
- arrayContentWillChange(start, removedCount, addedCount)
- compact()
- contains(object)
- flatten()
- indexOf(object, startAt)
- insertAt(idx, object)
- isEqual(ary)
- lastIndexOf(object, startAt)
- max()
- min()
- objectAt(idx)
- popObject()
- pushObject(object)
- pushObjects(objects)
- registerDependentKeyWithChain(property, chain)
- removeArrayObservers(options)
- removeAt(start, length)
- removeDependentKeyWithChain(property, chain)
- removeObject(obj)
- removeObjects(objects)
- removeRangeObserver(rangeObserver)
- replace(idx, amt, objects)
- setupEnumerablePropertyChains(addedObjects, removedObjects)
- shiftObject()
- slice(beginIndex, endIndex)
- teardownEnumerablePropertyChains(removedObjects)
- uniq()
- unshiftObject(obj)
- unshiftObjects(objects)
- updateRangeObserver(rangeObserver, indexes)
- without(value)
Field DetailisSCArray Boolean
Instance Method Detail
- Parameters:
- options
Creates a new range observer on the receiver. The target/method callback you provide will be invoked anytime any property on the objects in the specified range changes. It will also be invoked if the objects in the range itself changes also.
The callback for a range observer should have the signature:
function rangePropertyDidChange(array, objects, key, indexes, context)
If the passed key is '[]' it means that the object itself changed.
The return value from this method is an opaque reference to the range observer object. You can use this reference to destroy the range observer when you are done with it or to update its range.
- Parameters:
- indexes SC.IndexSet
- indexes to observe
- target Object
- object to invoke on change
- method String|Function
- the method to invoke
- context Object
- optional context
- Returns:
- SC.RangeObserver
- range observer
- Parameters:
- start
- removedCount
- addedCount
- Parameters:
- start
- removedCount
- addedCount
Generates a new array with the contents of the old array, sans any null values.
Returns a new array that is a one-dimensional flattening of this array, i.e. for every element of this array extract that and it's elements into a new array.
This will use the primitive replace() method to insert an object at the specified index.
Returns the largest Number in an array of Numbers. Make sure the array only contains values of type Number to get expected result.
Note: This only works for dense arrays.
- Returns:
- Number
Returns the smallest Number in an array of Numbers. Make sure the array only contains values of type Number to get expected result.
Note: This only works for dense arrays.
- Returns:
- Number
This is one of the primitives you must implement to support
SC.Array.
Returns the object at the named index. If your object supports retrieving
the value of an array item using get() (i.e.
myArray.get(0)), then you do
not need to implement this method yourself.
- Parameters:
- idx Number
- The index of the item to return. If idx exceeds the current length, return null.
Pop object from array or nil if none are left. Works just like pop() but it is KVO-compliant.
Push the object onto the end of the array. Works just like push() but it is KVO-compliant.
Add the objects in the passed numerable to the end of the array. Defers notifying observers of the change until all objects are added.
- Parameters:
- objects SC.Enumerable
- the objects to add
- Returns:
- SC.Array
- receiver
Register a property chain to propagate to enumerable content.
This will clone the property chain to each item in the enumerable, then save it so that it is automatically set up and torn down when the enumerable content changes.
- Parameters:
- options
Remove an object at the specified index using the replace() primitive method. You can pass either a single index, a start and a length or an index set.
If you pass a single index or a start and length that is beyond the
length this method will throw an
SC.OUT_OF_RANGE_EXCEPTION
- Parameters:
- start Number|SC.IndexSet
- index, start of range, or index set
- length Number
- length of passing range
- Returns:
- Object
- receiver
Removes a dependent key from the enumerable, and tears it down on all content objects.
- Parameters:
- obj object
- object to remove
Search the array for the passed set of objects and remove any occurrences of the.
- Parameters:
- objects SC.Enumerable
- the objects to remove
- Returns:
- SC.Array
- receiver
Removes a range observer from the receiver. The range observer must already be active on the array.
The return value should replace the old range observer object. It will usually be null.
- Parameters:
- rangeObserver SC.RangeObserver
- the range observer
- Returns:
- SC.RangeObserver
- updated range observer or null
This is one of the primitives you must implement to support
SC.Array. You
should replace amt objects started at idx with the objects in the passed
array.
Before mutating the underlying data structure, you must call
this.arrayContentWillChange(). After the mutation is complete, you must
call
arrayContentDidChange().
NOTE: JavaScript arrays already implement
SC.Array and automatically call
the correct callbacks.
For all registered property chains on this object, removed them from objects being removed from the enumerable, and clone them onto newly added objects.
Shift an object from start of array or nil if none are left. Works just like shift() but it is KVO-compliant.
Returns a new array that is a slice of the receiver. This implementation uses the observable array methods to retrieve the objects for the new slice.
If you don't pass in
beginIndex and
endIndex, it will act as a copy of the
array.
- Parameters:
- removedObjects
Generates a new array with only unique values from the contents of the old array.
Unshift an object to start of array. Works just like unshift() but it is KVO-compliant.
Adds the named objects to the beginning of the array. Defers notifying observers until all objects have been added.
- Parameters:
- objects SC.Enumerable
- the objects to add
- Returns:
- SC.Array
- receiver
Moves a range observer so that it observes a new range of objects on the
array. You must have an existing range observer object from a call to
addRangeObserver().
The return value should replace the old range observer object that you pass in.
- Parameters:
- rangeObserver SC.RangeObserver
- the range observer
- indexes SC.IndexSet
- new indexes to observe
- Returns:
- SC.RangeObserver
- the range observer (or a new one) | http://docs.sproutcore.com/symbols/SC.Array.html | 2018-04-19T12:02:43 | CC-MAIN-2018-17 | 1524125936914.5 | [] | docs.sproutcore.com |
Released on 24 January 2014
Yes, we did miss our 6 month release cycle! Many changes have gone into Pootle 2.5.1 which follows on from 2.5.0 released in May.
Pootle 2.5.1 has been in production for a number of users, so although it is a new official release, we’ve had many people running their production Pootle server off this code. This includes Mozilla and Evernote. So you are in good company.
For those who can’t wait you might be interested to know what we’ve got planned on our roadmap for Pootle 2.5.2.
These are by no means exhaustive, check the git log for more details.
pootle.core.auth.ldap_backend.LdapBackendand received various fixes.
…and lots of refactoring, upgrades of upstream code, cleanups to remove Django 1.3 specifics, missing documentation and of course, loads of bugs were fixed
The following people have made Pootle 2.5.1 possible:
Julen Ruiz Aizpuru, Leandro Regueiro, Dwayne Bailey, Alexander Dupuy, Khaled Hosny, Arky, Fabio Pirola, Christian Hitz, Taras Semenenko, Chris Oelmueller, Peter Bengtsson, Yasunori Mahata, Denis Parchenko, Henrik Saari, Hakan Bayindir, Edmund Huber, Dmitry Rozhkov & Darío Hereñú | http://docs.translatehouse.org/projects/pootle/en/stable-2.8.x/releases/2.5.1.html | 2018-04-19T11:26:46 | CC-MAIN-2018-17 | 1524125936914.5 | [] | docs.translatehouse.org |
Poking my head out of the sand
Wow, it's been a long time. I kept trying to come back and write something, but between working on new projects and trying to figure out what I could actually write about, well…
So, anyway, just to catch up. I left the CRM team at the beginning of the year to work in an incubation / greenhouse team within MBS. We were initially focused (well, "focused" might be too strong of a word) on hybrid application models. That is, we were looking at ways to create applications that spanned the firewall either directly or indirectly. One of our motivating factors was to introduce a collaboration element to MBS assets. We tossed around a handful of interesting scenarios, and in true Microsoft style, we went way overboard in developing the initial scenario (a building contractor doing collaborative bidding and design on home remodel projects).
One of the good things that came out of all that scenario work was a prototype for a "data projector" that could take internal line of business data and project it onto a shared, hosted workspace. I can't go into a ton of detail around this yet because the concept itself was useful enough that we're going to pursue it as a product. That means I need to be hush-hush about it until the official product team makes an announcement.
In the meantime I'm looking at the CRM platform through the eyes of an ISV (again) to see what we might be able to do with it in terms of building non-CRM products. Watch this space for more information as we learn things (like the callout implementation from CRM for notes and documents is just plain broken).
PS - Caitlyn is doing great, she sleeps through the night and has since we brought her home. Talk about a blast watching her learn about all the cool things in her new world. | https://docs.microsoft.com/en-us/archive/blogs/mikemill/poking-my-head-out-of-the-sand | 2021-10-16T04:11:55 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.microsoft.com |
Anomalous System Uptime
This report provides a list of servers that have not had been rebooted in 30 days or more. Use this report to identify systems that might be vulnerable to attack.
Systems often need to be rebooted after patches are applied. Systems that have not been rebooted might still be vulnerable to compromise. PCI DSS requires that high and/or critical patches be applied within 30 days.
Relevant data sources
Relevant data sources for this report include uptime data extracted through scripts from Windows, Unix, or other hosts.
How to configure this report
- Index uptime information captured through scripts from relevant hosts.
- Map the uptime data to the following Common Information Model fields:
dest, uptime. CIM-compliant add-ons for these data sources perform this step for you.
- Tag the uptime data with "uptime", "performance", and "os".
- Set the
should_timesynccolumn to true for assets in the asset table that should synchronize their clocks.
Report description
The Anomalous System Update report is populated by the Performance data model and the asset table.! | https://docs.splunk.com/Documentation/PCI/3.8.1/Install/AnomalousSystemUpdate | 2021-10-16T03:18:25 | CC-MAIN-2021-43 | 1634323583408.93 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
What is an AVA file?
The AVA files are actually belonging to a Persian eBook reading software known as AvaaPlayer. It seems a discontinued software; as we don’t have any copy or information available on the internet. Adobe Systems Incorporated the .ava file extension in their eBook reading software, but later on, they stopped providing service to Iranians. The data analysis shows that the AvaaPlayer users mostly use Windows operating system, and they mostly opened the software in the Google Chrome browser.
Possible problems while opening the file
If you are not being able to open and run the AVA file; it doesn’t mean that you do not have a suitable software installed on your device. There might be some other issues which prevent the file to work properly. The possible problems might be one of the following:
- Corruption of a AVA file
- Wrong links to the AVA file in registry entries
- Deleted description of the AVA from the Windows registry
- An infected AVA file with an undesirable malware
- The computer does not have sufficient hardware resources to operate the AVA file
- Drivers used by the computer to open a AVA file are outdated | https://docs.fileformat.com/ebook/ava/ | 2021-10-16T03:31:43 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.fileformat.com |
Ways of Avoiding Common Business Tax Errors
To be at peace with the authorities, you should always file your returns on time. If you want to avoid brushing shoulders with the tax authorities, you need to file your tax returns on time.
It is thus important that as a licensed business, make sure that you file the returns without any underpayment for peaceful business operations.
In most circumstances, there can be errors during this process and a business owner should make sure that they correct them as quickly as possible, or else they’ll attract penalties. For instance, filing late returns can attract consequences.
As a business owner, you should be careful when making tax payments to avoid possible business tax mistakes..
Another common business tax mistake you should avoid is misclassification of your employees. Always make sure that when you hire a contractor, you classify them accordingly.
How much you control a contractor will tell whether they are employees or independent contractors.
You will always have limited control over an independent contractor so make sure you do not misclassify them or else you will be caught on the hook for a lot of money. You should also make sure their salaries are classified separately when filing tax returns.
It is also advisable that you avoid mixing your personal and businesses expenses..
You should, therefore, keep every expense separateread more now to avoid an audit from the tax authorities.
You should have a separate business account from your personal account to make it easier when filing the returnsmore. It would be best to keep records of all your expenses, especially when you choose to use a similar account for all your expenses. | http://docs-prints.com/2020/11/24/a-10-point-plan-for-without-being-overwhelmed-21/ | 2021-10-16T01:58:42 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs-prints.com |
Action: Branching Modifier
This action gives control over a Branching modifier.
Interface
The Action's interface looks like this:
Parameters
Branching Modifier
This field accepts a Branching).
Force Branch
If checked, then when the action is triggered, it will force the modifier to branch immediately from the particle, rather than waiting until the time until the next branch has elapsed.
Important: for this to work, you must ensure that the Branching Modifier is already activated BEFORE a 'Force Branch' action is triggered. In other words, you must have turned the modifier on (as it must be in action-controlled mode) before triggering a 'Force Branch' action. You cannot use the same action to turn on the modifier and force a branch simultaneously.
Groups Affected
Drag any particle group objects into this list. If there is one or more groups in the list, only those particles which are in those groups will be affected by the action. But if there are no groups, all particles will be affected by the action. | http://docs.x-particles.net/html/action_branchmod.php | 2021-10-16T02:42:47 | CC-MAIN-2021-43 | 1634323583408.93 | [array(['../images/actions_branchmod1.jpg', None], dtype=object)] | docs.x-particles.net |
Gwave (PSU)¶
Login hosts¶
For details on how to connect to these machines, please see Access to the LIGO Data Grid.
Execute hosts¶
Functional Architecture¶
Provided Services and Dashboards¶
User Information¶
Important Directories¶
/ligo/home/ligo.org/albert.einstein/: regular files, analysis output, etc.
/ligo/home/ligo.org/albert.einstein/public_html/: files viewable by LIGO-authtenticated users. See below for setting up file sharing
/ligo/software/ligo.org/albert.einstein/: for software builds
/cvmfs: CVMFS data files served from a local NFS Aliencache and remote Stashcaches
Data Access¶
Frame files are provided by CVMFS via repositories in the
/cvmfs directory. To access proprietary data, you must first create an active proxy via
ligo-proxy-init user.name. You must do this prior to sourcing environments or running analyses that require proprietary data.
Sharing LIGO Files¶
Files in your home directory may be shared by using the
/ligo/home/ligo.org/albert.einstein/public_html/ folder. To do this, you must set up permissions for both
public_html and your home folder:
chmod 775 /ligo/home/ligo.org/albert.einstein/public_html chmod 711 /ligo/home/ligo.org/albert.einstein/
This allows your files to be viewable by LIGO-authenticated users at (replacing
albert.einstein with your LIGO credentials).
Building Software¶
When compiling software, use the
/ligo/software/ directory instead of the
/ligo/home/ directory.
Conda Environments
By default, user conda environments are stored in
/ligo/home/, so this must be changed via:
ENV_DIR='conda-envs' # this is the folder name where you want to keep your environments mkdir -p /ligo/software/ligo.org/${USER}/${ENV_DIR} conda config --add envs_dirs /ligo/software/ligo.org/${USER}/${ENV_DIR}
Use
conda activate to load environments as usual. Verify your setup with
conda info --env; output should be similar to this:
$ conda info --env # conda environments: # test-env * /ligo/software/ligo.org/albert.einstein/<conda-envs>/envs/test-env
To shorten the long path that prefixes the terminal prompt, enter:
conda config --set env_prompt '({name})'
After building software or moving environments to the
/ligo/software directory, reference that path in your HTCondor submit and DAG files in place of your
/ligo/home directory.
Other Tips¶
SSH keys¶
To SSH to GWave without entering your password each time, copy your public SSH key using:
ssh-copy-id -i ~/.ssh/<your_pubkey.pub> [email protected]
You will be prompted for your password to copy the file. Afterwards, you will be able to SSH without re-entering your password. | https://computing.docs.ligo.org/guide/computing-centres/psu/ | 2021-10-16T02:05:48 | CC-MAIN-2021-43 | 1634323583408.93 | [] | computing.docs.ligo.org |
Manage storage configurations using the account console (E2)
This article describes how to:
- Create and configure an S3 bucket to store a limited set of Databricks workspace information such as libraries, some logs, and notebook version history.
- Create a storage configuration in Databricks that references that bucket, using the account console for accounts on the E2 version of the platform.
Note
This article describes the process for accounts on the E2 version of the Databricks platform, using the account console. To learn how to create storage configurations using the Account API, see Create a new workspace using the Account API. For other account types, see Configure AWS storage (Legacy). All new Databricks accounts and most existing accounts are now E2. If you are unsure which account type you have, contact your Databricks representative.
The bucket that you include in your storage configuration is referred to as your workspace’s root storage. Do not use your root storage to store production customer data. Instead, create additional S3 buckets or other data sources for production data and optionally create DBFS mount points for them.
Define a storage configuration and generate a bucket policy
Note
These instructions show you how to create the storage configuration from the Account Settings page in the account console before you create a new workspace. You can also create the storage configuration in a similar way as part of the flow of creating a new workspace. See Create and manage workspaces using the account console.
- Go to the account console, click Account Settings, and click Storage configurations.
- Click Add Storage Configuration.
- In the Storage Configuration Name field, enter a human-readable name for your new storage configuration.
- In the Bucket Name field, enter the exact name of the S3 bucket you will create.
- Click Generate Policy and copy the policy that is generated. You will add this to your S3 bucket configuration in AWS in the next task.
- Click Add.
Create the S3 bucket
Log into your AWS Console as a user with administrator privileges and go to the S3 service.
Create an S3 bucket, using the name that you entered in the Databricks storage configuration.
See Create a Bucket in the AWS documentation.
Important
- The S3 bucket must be in the same AWS region as the Databricks workspace deployment.
- Databricks recommends as a best practice that you use an S3 bucket that is dedicated to Databricks, unshared with other resources or services.
Click the Permissions tab.
Click the Bucket Policy button.
Paste the bucket policy that you generated and copied from the Add Storage Configuration dialog in Databricks.
Save the bucket.
Enable object-level logging (recommended)
Databricks strongly recommends that you enable S3 object-level logging, which enables faster investigation of issues. See Step 4: Enable S3 object-level logging (recommended).
View storage configurations
Go to the account console, click Account Settings, and click Storage configurations.
All storage configurations are listed, with Bucket Name and Created date displayed for each.
Click the storage configuration name to view more details.
Delete a storage configuration
Storage configurations cannot be edited after creation. If the configuration has incorrect data or if you no longer need it, delete the storage configuration:
Go to the account console, click Account Settings, and click Storage configurations.
On the storage configuration row, click the Actions menu icon, and select Delete.
You can also click the storage configuration name and click Delete on the pop-up dialog.
In the confirmation dialog, click Confirm Delete.
Encrypt your root S3 bucket using customer-managed keys (optional)
Preview
This feature is in Public Preview.
You can encrypt your root S3 bucket using customer-managed keys, which requires using the Account API 2.0. You can either add an encryption key when you create a new workspace using the Account API or add the key later. For more information, see Customer-managed keys for workspace storage. | https://docs.databricks.com/administration-guide/account-settings-e2/storage.html | 2021-10-16T02:43:49 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.databricks.com |
gp_resgroup_status"} | https://docs.greenplum.org/6-16/ref_guide/system_catalogs/gp_resgroup_status.html | 2021-10-16T03:18:10 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.greenplum.org |
Firefox Migration from self-hosted to Add-ons StoreFirefox Migration from self-hosted to Add-ons Store
We’ve recently migrated from hosting the Sourcegraph Firefox extension ourselves to hosting it in the Firefox add-ons store, to improve the update process.
If you previously installed the Firefox extension from our website, the first time you update it you’ll need to reinstall the extension from the Firefox store, which can be done in two quick steps:
First, remove your current Sourcegraph extension by right-clicking the Sourcegraph logo and selecting “Remove Extension” from the dropdown. Second, navigate to the Sourcegraph extension page on the Firefox add-ons store and install the latest Sourcegraph extension.
Future updates will be distributed through the Firefox store and won’t require a re-install. | https://docs.sourcegraph.com/integration/migrating_firefox_extension | 2021-10-16T02:55:38 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.sourcegraph.com |
public class GroovyPageWritable extends java.lang.Object
Writes itself to the specified writer, typically the response writer.
This sets any additional variables that need to be placed in the Binding of the GSP page.
binding- The additional variables
Writes the Groovy source code attached to the given info object to the response, prefixing each line with its line number. The line numbers make it easier to match line numbers in exceptions to the generated source.
info- The meta info for the GSP page that we want to write the generated source for.
out- The writer to send the source to.
Copy all of input to output.
in- The input stream to writeInputStreamToResponse from
out- The output to write to
Writes the template to the specified Writer
out- The Writer to write to, normally the HttpServletResponse | http://docs.grails.org/3.2.13/api/org/grails/gsp/GroovyPageWritable.html | 2019-10-14T04:21:04 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.grails.org |
Usage¶
Proportional Free Space Calculation¶
To use Alternative Cinder Scheduler Classes in a Cinder deployment the package will need to be first installed in all scheduler nodes as instructed in the installation guide.
Then configuration files will need to be updated to use the classes:
scheduler_host_manager = alt_cinder_sch.host_managers.HostManagerThin scheduler_default_filters = AvailabilityZoneFilter,AltCapacityFilter,CapabilitiesFilter scheduler_driver = alt_cinder_sch.scheduler_drivers.FilterScheduler
Scheduler’s default filters could vary depending on your configuration, but the only filter provided by this package at the moment is the AltCapacityFilter.
In above example we were defaulting to thin provisioning calculations for any backend that supported thin provisioning, but we can also default to thick provisioning is we use HostManagerThick instead as the scheduler_default_filters.
Default Volume Types¶
To support Default Volume Types based on users or projects the package needs to be installed in all API nodes as instructed in the installation guide.
Then configuration files on the API nodes will need to be updated to use our custome API class:
volume_api_class = alt_cinder_sch.api.DefaultVolumeTypeAPI
Since we are only changing the configuration for the API service only these will need to be restarted, leaving Scheduler, Backup, and Volume services as they were.
And the default volume types will need to be added to the users and/or projects in Keystone directly in the DB (there’s no REST API).
Data must be added to extra DB field as JSON with key default_vol_type.
If there is no data in the user’s extra field we can run:
UPDATE user SET extra='{"default_vol_type": "iscsi"}' WHERE id=$USER_UUID;
If the project’s extra field already had info, like an email, we could do:
UPDATE project SET extra=CONCAT(SUBSTRING(extra, 1, LENGTH(extra) - 1), ', "default_vol_type": 'iscsi"}') WHERE name='admin'; | https://alt-cinder-sch.readthedocs.io/en/latest/usage.html | 2019-10-14T04:31:44 | CC-MAIN-2019-43 | 1570986649035.4 | [] | alt-cinder-sch.readthedocs.io |
-
Model
model - ⊞ - Select the model of device to use, F200, R200, or SR300.
- F200
f200-
- R200
r200-
- SR300
sr300-
Sensor
sensor - Select the device to use.
Mode
mode - ⊞ - Choose from Finger/Face or Marker Tracking.
- Finger/Face Tracking
fingerface-
- Maker Tracking
maker- -
Face Rotation
facerotation -
Face Bounds
facebounds -
Face Expressions
faceexpressions -
Persons Center-Mass World Position
personsworldcenterpos -
Persons Center-Mass Color Position
personscolorcenterpos -
Persons Color Bounds
personscolorbounds -
Persons Skeleton World Position
personsskelworldpos -
Persons Skeleton Color Position
personsskelcolorpos -
Max Persons
maxperson -
Marker Image TOP
markertop -
Parameters - Gestures Page
Description of the gestures can be found in the RealSense SDK documentation.
Separate Hands
separatehands - -
Number of Weights
weights - The number of weighted samples to use for weighted smoothing.'. | https://docs.derivative.ca/index.php?title=RealSense_CHOP&printable=yes | 2019-10-14T05:00:37 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.derivative.ca |
What is Maintenance Mode?
We'll put ZapWorks into Maintenance Mode when we make significant changes to how it works behind the scenes.
Whilst in Maintenance Mode ZapWorks will be temporarily unavailable. You will not be able to modify your codes, or otherwise use your account.
This only affects the ZapWorks tool itself - existing published codes can still be scanned as normal, and Zapalytics will still be collected.
Maintenance mode will be announced in advance on the site. | https://docs.zap.works/general/what-maintenance-mode/ | 2019-10-14T04:16:13 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.zap.works |
Target Manipulator
Download the Target Manipulator zpp
This example project was created to demonstrate how simple touch functionality can be implemented in ZapWorks Studio, by going through the Target Manipulator symbol's implementation.
Feel free to test out the subsymbol with one of the models from our 3D model library.
For more information on how to use the template, please refer to the Target Manipulator subsymbol documentation. | https://docs.zap.works/studio/projects/project-breakdowns/target-manipulator/ | 2019-10-14T04:20:03 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.zap.works |
Use the Boolean to Enum Refactoring to convert two-state structures (method return types or method parameter types) into multiple-state structures. Enums have the following advantages.
Available when the caret is on a Boolean (bool) member, variable or method parameter.
Place the caret on a boolean member, variable or method parameter as shown in the code below.
The blinking cursor shows the caret's position at which the Refactoring is available.
The Boolean To Enum Refactoring makes the following changes.
The result of the Refactoring execution is shown in the code below.
class TestClass { private int TestMethod(TestMethodParam a) { if (a == TestMethodParam.Success) return 1000; else return 1024; } } public enum TestMethodParam { Success, Failure } | https://docs.devexpress.com/CodeRushForRoslyn/115367/refactoring-assistance/boolean-to-enum | 2019-10-14T04:31:47 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.devexpress.com |
Document Type
Document
Abstract
Objectives : teach members to navigate through applications such as Siri, iOS, and the App Store; give useful tips of how to organize and use their applications through App Folders; to show how to use utilities on iphones or ipads like calendars and alerts.
Recommended Citation
Evan Beck, Alana Peoples, Gabby Reardon and Robinson, Arnold, "Technology Resource Guide and Classes for Seniors: Barrington Senior Center" (2014). Community Development. 1.
Included in
Civic and Community Engagement Commons, Communication Commons, Community-Based Learning Commons
Members of the Project SOAR student leadership organization put on a public presentation at the Barrington Senior Center, to teach its members the fundamentals of how to use technologies like the search engines, emails, smart phones, social media and Skype. | https://docs.rwu.edu/cpc_comdev/1/ | 2019-10-14T04:20:52 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.rwu.edu |
Contents IT Operations Management Previous Topic Next Topic Create entry point types for Service Mapping Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Create entry point types for Service Mapping An entry point is how clients access an application service. If necessary, you can create a new entry point type in addition to preconfigured entry point types in Service Mapping. Before you beginIf your ServiceNow instance uses domain separation and you have access to the global domain, select the domain to which the application service belongs from the domain picker (). The selected domain must be a domain without any child domains.Role required: sm_admin or admin About this task Service Mapping starts the discovery and mapping process for every application service from the entry point you define for it. In addition to this, Service Mapping patterns use entry points to discover CI outbound connections. Service Mapping includes a wide range of preconfigured entry point types that cover most commonly used applications. If your organization uses a less known or proprietary application that does not have a corresponding entry point type in Service Mapping, you must create it. Entry points are modeled in the ServiceNow CMDB as CIs of endpoint type. Entry points are stored as records in the Endpoint [cmdb_ci_endpoint] tables. Like any other CI type, an entry point contains several important definitions that apply to all CIs belonging to it: CI attributes are added as fields to the CMDB tables. Identifiers help Service Mapping and Discovery to differentiate between new and existing CIs. For example, if there is an Active Directory Forest endpoint CI type defined in the CMDB, and Service Mapping discovers an Active Directory Forest CI, it processes it using identifiers and recognizes it as an updated version of the Active Directory Forest CI that exists in the system, not a new Active Directory Forest CI. Unlike with regular CI types, identifiers for new endpoint CI types are created automatically. CI type hierarchy.. Create standard entry points as child CIs for the endpoint CI type, which creates an extension for the cmdb_ci_endpoint table. For entry points of inclusion type create child CIs for the inclusion endpoint CI type extending the cmdb_ci_endpoint_inclusion table. In an inclusion, a server hosts applications that are treated as independent objects. Procedure Navigate to Configuration > CI Class Manager. To create a standard entry point, right-click Endpoint from the Class Hierarchy pane and select Add Child Class. To create an entry point of the inclusion type, right-click Inclusion Endpoint from the Class Hierarchy pane and select Extend. Create an entry point type using the following parameters. See Create a table. Table 1. New table form Field Description Label Entry point type name. For example, HTTP entry point. Name The table name. For example, cmdb_ci_endpoint_http. Extends table The table name of the parent CI type is automatically filled by the system: cmdb_ci_endpoint - for entry points cmdb_ci_endpoint_inclusion - for entry points of the inclusion type Add entry point attributes on the Columns tab at the bottom of the page. By default the new entry point derives attributes from its parent CI, but you can modify the attributes as necessary. Click Submit. Previous topicCreate CI types for Service Mapping and DiscoveryNext topicCreate or modify patternsRelated tasksMap a single application serviceCreate CI types for Service Mapping and Discovery On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/london-it-operations-management/page/product/service-mapping/task/t_CreateEntryPoint.html | 2019-10-14T03:54:34 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.servicenow.com |
Get started by adding some pages to this space. Create page.
You are viewing an old version of this page. View the current version.
Compare with Current
View Page History
Version 158
Use Cases
Sun Oct 13 21:29:31 CDT 2019
Ted Mahsun
ServiceRocket apps compatibility with Confluence 7.0
Support Resources
Fri Oct 11 01:59:08 CDT 2019
Kai Yeow Pua
An alternative to the Incoming Link macro using Reporting
Thu Oct 10 01:14:52 CDT 2019
ServiceRocket Team
Visualizing your project with an overview dashboard
Thu Oct 10 01:08:58 CDT 2019
Macro Reference
Composition
Thu Oct 10 00:13:44 CDT 2019
Azwandi Mohd Aris
Tabs
Thu Oct 10 00:12:55 CDT 2019
Tab (Deprecated)
Thu Oct 10 00:12:26 CDT 2019
Tab Group (Deprecated)
Thu Oct 10 00:11:54 CDT 2019
Switching to the new Tabs macro
Wed Oct 09 22:55:30 CDT 2019
Salesforce & Jira Cloud Connector (Jira Add-on) Release Notes
Connector for Salesforce & Jira
Tue Oct 08 04:35:16 CDT 2019
Connector Create REST API - one-click or automatic Jira Issue creation
Classic Connector for Salesforce & Jira
Mon Oct 07 01:22:06 CDT 2019
How do I know if I am affected by GDPR compliance changes?
Sun Oct 06 23:08:00 CDT 2019
How do I synchronize users between Jira and Salesforce?
Sun Oct 06 22:56:53 CDT 2019
Part 2 - Creating the workaround
Fri Oct 04 04:13:50 CDT 2019
Part 1 - Creating the Scaffolding macro structure
Fri Oct 04 04:06:13 CDT 2019 | https://docs.servicerocket.com/pages/viewpage.action?pageId=12189796 | 2019-10-14T02:57:08 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.servicerocket.com |
Use the font picker to choose your own fonts - we’ve started with some essential baseline features but we will be adding to these in later releases.
If you look at the Text Properties menu you’ll see the Font section which shows the currently selected font (or the one in which the text cursor is placed) as well as the size of the font - use the slider to adjust or enter a value. The value you enter will be converted to either Points (pt) or Pixels (px) depending on the type of document. So a web document like a website or a Facebook post will display pixels and a print document like a flyer or a letter will display points.
Tap on the arrow and the full menu slides in…
Where a font has a bold variant then you should consider the bold character function in the Text editor as a kind of extra bold - thats to say it's application will add boldness to an already bold variant.
| https://docs.xara.com/en/articles/1694544-font-picker | 2019-10-14T04:49:17 | CC-MAIN-2019-43 | 1570986649035.4 | [array(['https://downloads.intercomcdn.com/i/o/52127191/50fca50b1b670391e60faabe/Picker_01.jpg',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/52128350/448f2af30c27fc9ad16265d6/Picker_02.jpg',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/52047582/b0a34144c4c5abc34c199ed8/854.png',
None], dtype=object) ] | docs.xara.com |
Instagram Stories
Are you ready to publish the best pictures of your summer in Instagram? Insert them in our ready to use templates, add a little text and play with the colors: you won't have to wait long to get new followers :)
Presentations
One last presentation before the summer holidays, so let's make it fun! Choose the layout that makes you feel happy, apply your brand and play around with colors and fonts, insert your text, keep it simple and your pitch will be just amazing!
Proposals
Your Real Estate agency needs more customers, so now's the moment to impress your targets with a modern and fresh layout. It won't take you long to customize. Just apply your brand and enter your text. Writing a proposal has never been so exciting!
Facebook Covers
And refresh your Facebook page with a new cover. Check out our new templates for your Real Estate business: impress your clients! | https://docs.xara.com/en/articles/3151300-content-update-17th-july-2019 | 2019-10-14T04:50:30 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.xara.com |
Managing Your Websites¶
Content:
-
- Contributing content
- Introduction to the Edit mode, procedures to do actions in InContext Editing, Inline Editing, CKEditor and information about the publication process.
-
- Publication process
- Introduction to the publication process of content and how to manage the publication.
-
- Managing content list viewer byquery
- Introduction to the Content By Query portlet and detailed steps to add this portlet to a specific page.
-
- Managing categories
- Introduction to how to work with categories in Sites Explorer via the Add category and Manage Categories actions added to the Action bar.
-
-.
-
- Using WebDAV
-.
-
- Managing SEO
- Introduction to Search Engine Optimization (SEO), steps to manage SEO data of web pages, web content and optimize your website for search engine.
-
- Searching content in a site
- Steps to search for content and to configure the Search portlet.
-
- Printing content
- Steps to print any content in a site.
Contributing content¶
This function allows web-contributors to edit content, quickly access content list folders from the homepage of the current site, publish content without using the Manage Publication function in Sites Explorer.
This section consists of the following topics:
- Edit mode Introduction to the Edit mode in eXo Platform, how to enable and use this mode.
- CKEditor Introduction to the additional features of CKEditor in eXo Platform.
Edit mode¶
When you access the Agital site, by default, the site content is in the published mode and you cannot edit them.
However, eXo Platform provides you with the Edit mode which enables you to edit all content of the Ag..
InContext Editing¶
By <TurningOnTheEditMode>.
Adding content¶
Note
Adding new content by using InContext Editing is enabled for the content list viewer (CLV) only.
- Turn on the Edit Mode, then hover your cursor over the CLV to which you want to add new content.
- Click
on the CLV.
You will be redirected to the Sites Explorer in the creation form of the content having the same type as other contents in the CLV.
Details:
- Fill all the fields in the form. The field name is required.
4.Click Save or Save & Close to save the content.
After closing the content form, you can view the content and do some actions listed on the Action bar for the content. See the Working with basic actions for more details.
Note
The folder, where a document is saved, is the path you have selected in the Managing preferences section.
Editing content¶
You can edit any content on the homepage for SCV and CLV with InContext Editing.
- Turn on the Edit Mode, then hover your cursor over the content you want to edit, and click
at the right corner. You will be directed to Sites Explorer with the document form for you to edit.
Make changes on the content, then click Save or Save & Close to accept your changes.
After closing the Edit form, the content is in the Document View.
- Click
to return to the site. In the Edit mode, your new content will be in the “Draft” state with its visible modifications.
- Click
to publish your edited content. Your content is now in the “Published” state.
Note
You cannot see the edited content in the draft state when you turn off the Edit mode.
Managing content¶
With InContext Editing, you can easily manage a content list viewer on the page. You can add new content, edit, delete an existing content or copy/cut/paste in the CLV and take more actions in the right-click menu.
Adding content in the CLV
- Turn on the Edit Mode.
- Hover your cursor over the CLV to which you want to add new content on the homepage, and clicck
.
You will be directed to the Sites Explorer page.
on the Action bar.
- Do the same steps as in the Adding Content section.
Do other actions
You can do many different actions for specific content in the CLV. See the Working with basic actions section.
Managing preferences¶
Preferences enable you to edit content in the single content viewer (SCV) and the content list viewer (CLV), reset the display of the content in SCV and CLV and publish content.
Editing the single content viewer
- Turn on the Edit Mode.
- Hover your cursor over a single content viewer and select
of a single content viewer.
The Content Detail Preferences dialog appears.
Details:
Note
Hover your cursor over
to see a quick help for each section.
- Click
next to the Content Path to select another content. The Select Content dialog appears.
- Select a folder in the left pane, and its content in the right pane. The selected content will be displayed in the Content Path field.
- Tick the checkboxes, including Show Title, Show Date and Show Option Bar, if you want to display the content title, the publication date and the print button like the illustration below.
i. In the Print Setting part, click
to open the
UIPageSelector dialog.
ii. Click
, then click a folder on the left and
select a page which will show the content on the right by clicking
.
- Click Save to save all your changes.
Editing the content list viewer
The Content List Preferences dialog appears.
- Select the Content Selection tab:
- Select content you want to show on the content list viewer by clicking
next to the Folder Path field.
- If you select the By Folder mode, select an available site on the left, then select a folder that contains content (documents and/or web content) on the right by clicking the folder.
- If you select the By Content mode, select an available folder from the left pane, all content in this folder will be listed in the right pane. Click content on the right that you want to add to the content list. There will be a message, informing that you have successfully added it to the Content List. The selected content will be listed in the Content List.
- Click the Order by field and select one criterion to sort the content list in the ascending or descending order.
- Select the Display settings tab:
- Enter a header for the content list in the Header field if you want.
- Select a template to display the content list in the template list.
- Tick/Untick your desired options.
- Select the Advanced tab to activate the dynamic navigation and select the content visibility.
- Click Save to accept your changes.
Inline Editing¶
The.
CKEditor¶
When using CKEditor to write/edit a document in eXo Platform, you can also:
- Insert a site link to the document
- Insert a content link to the document
- Upload an image to the document
Inserting a site link
- Click
to open the Insert link to a site page form.
- Enter the site title of the link in the Title field.
- Enter the site URL manually, or you can also click Get portal link to open a page containing all the sites in the same server, then select one that you want.
- Click Preview to view the site.
- Click Save to accept inserting the site to the document.
Inserting a content link
- Click
to open a page.
- Click the plus before the document name, or click directly the document name in the left pane to show the content in the right pane, or click
to upload a file from your local device.
- Click content that you want to insert to the document.
Image Upload through CKEditor
- Click
to open the upload image form.
- Click on Browse server to open the WCM Content selector allowing to upload from desktop or to select an existing attached image.
By default, the WCM content selector opens the folder where the webcontent/Illustrated webcontent will be saved.
In this case, the webcontent is added under
sites/intranet/web contents.
If the WCM Content selector has already been opened and a file has been selected then this last location will be displayed.
As an example of this case:
- Go to file Explorer under /sites/intranet/web contents and create a new webcontent.
- Click
to insert an image and then Browse server.
- The WCM content selector opens the folder
/sites/intranet/web contents(the first case). Browse to get, for example, under the path
sites/intranet/medias, upload an image and insert it to the webcontent.
- Reclick
and then on Browse server, the WCM contents selector will open the last location which is
sites/intranet/mediasand not the default one
/sites/intranet/web contents.
- Select an image from the existing ones or click on
to upload an image from your desktop then select it.
- The image will be first previewed in the Image properties form.
- Click OK, the image will be inserted in the webcontent.
- To finalize the webcontent/illustrated webcontent creation, click on Save or Save and close.
Publication process¶
After new content has been created, it is saved as draft and must be approved before publishing by the web-contributors or administrator. The publication process consists of three steps:
Request for Approval –> Approval –> Publish.
Sending approval request¶
If you want to publish your content without having the “Approve” or
“Publish” right, you first need to send a request for approval by
clicking
on the Action bar.
Approving content¶
If you have the right to approve or publish content, you will see a list of content waiting for your approval at the bottom of the Sites Explorer.
To approve the content, do as follows:
- Click the content to review.
- Click
on the Action bar to approve the content.
Note
If you have the right to publish content, you can publish it immediately without the Approval step. After being approved/published, the content is removed from the list of Waiting For My Approval at the bottom of the Sites Explorer.
Publishing content¶
You can an quickly publish content by opening your desired content, then
clicking
.
Managing publication¶ Save to accept publishing the content as the schedule.
Note
To publish your content forever, you should not set time in the To field.
- Published: The content is published immediately and permanently.
- Click Close to quit the form.
Managing content list viewer by query¶
The Page Editor –> Applications –> Content Save to complete adding the Content By Query portlet.
- Click
to quit the Page Editor page and see the displayed data.
Managing categories¶
As a web-contributors, you can easily work with categories in Sites Explorer via the Add category and Manage Categories actions added to the Action bar.
By default, these buttons are available in the Categories and Web views. To know which drives have these views, see here for more details.
Creating a new category¶
This function enables you to quickly create a new category in Sites Explorer.
- Select a folder in which you want to create a new category.
on the Action bar to open the Add Category form.
- Enter a name for the category in the Category Name field.
- Click Save to accept creating the new category.
Assigning a category to content¶
You can assign available categories to content/document folders only.
- Select a content/document folder to which you want to assign a category.
- Click
on the Action bar.
The Add Category form appears.
- Select the Select Category tab to show the available categories.
Select a category tree for the content/folder.
Click
next to Root Tree to add the category tree to the content/folder.
Or/And click a category on the left, then click
corresponding to the child category on the right to add it to the content/folder.
The categories added to the content/folder will be listed in the Referenced Categories tab.
Note
You can add many categories to content.
Viewing a category¶
Viewing a category allows you know which content is added to the category and you can view it by double-clicking its name or do many different actions in the right-click menu.
- Go to the drive which contains the category you have added. There will be a list of categories available.
- Select your desired category. The content added to that category will be listed.
Note
To know which drives contain categories, see Categories in Content Administration. When copying and pasting content in the category tree, a reference to the original content will be created. This reference is a symlink rather than a copy. This feature is used to preserve the disk space.
Creating content inside a category¶
In eXo Platform, you can create new content in any folders or directly in a CLV with Incontext Editing. However, to facilitate the content management, categories which are usually used to sort and organize documents make your desired searches more quickly. Also, creating content inside a category helps you manage and publish them effectively.
After creating a document, you should categorize it by adding it to a category. Otherwise, documents should be created right in a category and links to those documents will be automatically created in the category. In eXo Platform, categories are stored in JCR.
Creating content in a category¶
- Click
–> Content –> Sites Explorer on the top navigation bar.
- Open the drives list, and select a drive that has categories, for example, Collaboration.
- Select a category where you want to add new content.
4. Click
on the Action bar to create the new content. See the
Creating new web content
section to know how to add new content. The new content is stored in the
category as a symlink and also stored in also stored in another folder
depending on the target path configured while creating a category tree
by Administrator.
To view the content, simply click the Symlink.
Managing content in a specific site¶
Web content is a key resource which is used for a site. Other resources make a site more dynamic and animated by using layout, color, font, and more. This section focuses on how to manage web content in a specific site via the Sites Management drive which allows you to manage content of all sites in the portal.
This section consists of the following topics:
- Creating new web content
- Instructions on how to create new web content in a specific site.
- Editing/Publishing/Deleting web content
- Instructions on how to edit/publish/delete web content.
Note
Only users who have the right to access the Sites Management drive can do it.
Creating new web content¶
- Go to the Sites Management drive, then select a site to which you want to add web content.
- Select the web content folder on the left.
Note
In this step, you also can add new web content into another folders (documents and media folders) of a site but you are recommended to select the web content folder because: - Managing web content of a site becomes more easily. - You do not have to select many web content types in the list of document types. It makes adding new web content more flexibly.
- Click
on the Action bar to open a list of content templates, including Illustrated Web Content and Web content.
- Select a template to present the web content by clicking one.
- Enter values in fields of the form.
- Click Save or Save & Close to save the content or Close to quit the Add New Document form.
Tabs in the Add New Document form¶
The Main Content tab
The Illustration tab allows you to upload an illustration that makes the site’s content more attractive.
Details:
Uploading an image¶
- Browse a list of images on your local device by clicking the Select File button, then select a specific location.
- Select an image in the list to upload.
The Advanced tab includes two parts: CSS Data and JS Data.
Details:
When you create new content which is in draft, a new activity will be
created on your activity stream and on the Social Intranet homepage.
This activity shows the title
, summary (if any), type
, version
and current status
of the
content, and the icon corresponding to the content type
.
From the activity stream, you can:
- Click
to view the content in a larger window.
- Click
to edit the content directly into the Sites Explorer.
- Click
to give your idea.
- Click
to show your liking to the uploaded document.
- New comments will be automatically added to the activity when your content has the following changes:
- The main content is edited
- A file is attached/removed to/from the content
- A tag is added/removed to/from the content
- A category is assigned/removed to/from the content
- Your comment is added to the content from the Sites Explorer
Besides, the content of the activity will be updated with comments when there are the following changes: - The title and/or summary of the content
- The status of the content
- The number of version of the content is updated without a comment
When the content is deleted, the activity is also removed from the activity stream without any comment or notification.
Editing/Publishing/Deleting web content¶
Editing web content¶
This function is used to edit web content in a specific drive of an existing site.
- Access the folder of a site which contains the web content that you want to edit.
- Select the web content by double-clicking it in the left tree or in the right pane. The detailed information of web content will be viewed in the right pane.
- Click
on the Action bar to show the form to edit the selected web content. This form is similar to that of creating a new document.
- Make changes on current values in the fields of this form.
- Complete editing the selected web content by clicking Save or Save & Close.
Note
When you click
, the web content will be auto-locked for your editing. After finishing, the content is back to the unlock status. You can manage “Locks” in the Unlocking a node section.
Publishing web content¶
This function helps you publish web content that you have added to the web contents folder in Sites Explorer.
See the Publication process section to know how to publish web content.
Adding translations to content¶
This function enables you to add multiple languages for content. This action is similar to adding a language.
- Select a document to which you want to add the translation. For example, select a web content in English.
- Click
on the Action bar to open the Add Translation form.
- Click Select Document to browse to the target content that has a different language with the first content. For example, the Web Content version in French.
- Click Save on the Add Translation form.
- Select the document to which you have added the translation, then click the
button on the Filter bar.
You will see the available languages for the selected document. Click the language on this pane to view the document in the corresponding language version.
Using WebDAV¶
In eXo Platform, you can use WebDAV to perform actions on a website easily, quickly and efficiently without accessing it directly on web browsers. Each website managed by WebDAV will be displayed as a folder.
To manage site content using WebDAV, follow either of two ways:
The first way
You need to connect to your WebDAV clients. See WebDAV for more details.
It is assumed that you want to access the ACME site using WebDAV, simply use the URL: into the address bar. After successul login, the ACME site appears as a folder.
The second way
This way can be done through Sites Management.
- Click
on the top navigation bar, then select Content –> Sites Explorer from the drop-down menu.
- Click the Show Drives button, then select Sites Management.
You will see all sites listed in the left sidebar.
- Right-click your desired site to view with WebDAV, and select Download and Allow Edition from the menu.
The selected site will be shown in WebDAV.
In this view, you can access documents in the directories that are linked to the web server.
Adding new content to a specific site¶
This function enables you to copy web content, such as an .html file, from your local device to a web content folder of a site.
- Access a site via WebDAV, then go to a web content folder of the site.
- Copy the web content on your local system into this folder.
The copied file will be converted to web content that is viewable by WebDAV automatically. The content is converted to a directory containing CSS, documents, js and media.
After the new content is added, it can be viewed as a folder in WebDAV or as a page using a web browser.
Deleting web content¶
This function enables site administrators to delete web content files separately or in batches.
- Navigate to the folder that contains the content you want to remove.
- Right-click the content files or directories (hold the Ctrl key to select multiple files at once), and select Delete from the drop-down menu.
The selected files will be removed from the site.
Managing content with Fast Content Creator¶
The Fast Content Creator portlet in PRODUCT enables you to quickly create and save a new document with only one template in a specific location without accessing Sites Explorer. This helps you save a lot of time when creating a new document.
To use the Fast Content Creator portlet, you need to add it to a specific page first by dragging and dropping the Fast Content Creator portlet from Page Editor –> Applications –> Forms to the main pane. This can be done when creating a new page or editing an existing page or editing the layout of a site.
Configuring Fast Content Creator¶
- Hover your cursor over the portlet, then click
to edit the portlet.
The form with the Edit Mode tab appears.
Details:
- Select a specific location to save documents.
i. Click
next to the Location to Save field to
open the Select Location form.
- ii. Select the parent node in the left pane, then click
in the Add column to select the child node in the right pane. After being selected, this location will be displayed on the Location to Save field. Created documents will be saved in this location.
- Select a template which is used to create a new document.
The fast content creator portlet will be shown and allows you to create content quickly. Here is the added page containing a fast content creator for the Accessible Media template.
Creating/Viewing content¶
Creating new content
- Go to the page which has the fast content creator portlet.
- Fill values in all the fields in the page.
- Click a button in the page to accept creating the new document. A message appears to let you know that the document is created successfully at the location selected in the Location to Save field.
Note
The button name is different, basing on the Custom Save Button field.
Viewing content
After creating a new document by Fast Content Creator, you can view it as follows:
- Go to Sites Explorer.
- Select the drive and the path that you established in the configuration of Fast Content Creator. You will see this document.
Managing SEO¶
SEO (Search Engine Optimization) allows you to improve the visibility of your web pages and web content in the major search engines (such as Google, Yahoo, Ask, Bing, and more) via the search results. Therefore, it is very important for the user to maximize their web pages and content’s position in the search engines. In eXo Platform, the SEO Management feature is provided to meet this target. By using SEO Management, you can easily manage the SEO data of web pages and web content.
Managing the SEO data¶
- Open a page or content that you want to edit the SEO metadata.
- Open the SEO Management form by clicking Edit –> Page –> SEO on the top navigation bar.
Depending on your SEO management for a page or content, the content of the SEO Management form will be different.
The SEO Management form for content is as follows:
The SEO Management form for a page is as follows:
Details:
- Fill out all fields in this form.
- Click Save to finish creating SEO metadata.
Note
- If no language has been selected, the default portal language will be used after saving.
means that the SEO information is empty.
means that the SEO information has been updated but some information are not filled out yet.
means that the SEO Management form is filled out with the full SEO information.
means that the SEO Management feature is disabled.
Searching for content in a site¶
This section consists of the following topics:
- Searching for content
-.
Searching for content¶
- Enter a keyword into the search box and press Enter.
The search results matching with your keyword are displayed in the search page:
In case of no search results matching the keyword, the search page is displayed as below:
Details:
- In the Search form, you can enter another keyword and set the search scale.
- Press Enter, or click Search to start searching.
Editing the Search portlet¶
Editing the Search portlet allows you to change the display of search results.
- Open the Search page as in the Searching for content section.
- Open the Edit Mode of the Search portlet by following one of two ways:
The first way
Click Edit –> Content on the top navigation bar, then click
.
The second way
Click Edit –> Page –> Layout on the top navigation bar. The Page Editor will be displayed.
- Hover your cursor over the Search Result portlet and click
to edit the portlet.
The Edit Mode of the Search portlet appears.
Details:
- Edit your desired portlet and click Save to accept your changes.
Printing content¶
Users can easily print any content in a site by following these steps:
- Click the name of the content which you want to print to view all the content.
- Click the Print button. The Print Preview page will be displayed on another tab.
- Click Print to print the content of this page, or Close to close this tab without printing. | https://exo-documentation.readthedocs.io/en/latest/Manage-Sites.html | 2019-10-14T04:36:32 | CC-MAIN-2019-43 | 1570986649035.4 | [array(['_images/turn_on_edit_mode.png', 'image1'], dtype=object)
array(['_images/single_content_viewer.png', 'image2'], dtype=object)
array(['_images/content_list_viewer.png', 'image3'], dtype=object)
array(['_images/content_forms.png', 'image5'], dtype=object)
array(['_images/new_content.png', 'image7'], dtype=object)
array(['_images/edit_form_in_sites_explorer.png', 'image8'], dtype=object)
array(['_images/edited_content_in_document_view.png', 'image9'],
dtype=object)
array(['_images/draft_content.png', 'image11'], dtype=object)
array(['_images/add_new_content_to_CLV.png', 'image14'], dtype=object)
array(['_images/content_detail_preferences.png', 'image16'], dtype=object)
array(['_images/scv_show_options.png', 'image19'], dtype=object)
array(['_images/clv_preferences.png', 'image24'], dtype=object)
array(['_images/content_list_preferences.png', 'image25'], dtype=object)
array(['_images/inline_editing_form.png', 'image27'], dtype=object)
array(['_images/CKEditor_Inline.png', 'image28'], dtype=object)
array(['_images/insert_link_to_a_site_page.png', 'image33'], dtype=object)
array(['_images/content_selector_form.png', 'image35'], dtype=object)
array(['_images/imageProperties.png', 'image38'], dtype=object)
array(['_images/preview.png', 'image44'], dtype=object)
array(['_images/save.png', 'image45'], dtype=object)
array(['_images/content_waiting_approval.png', 'image47'], dtype=object)
array(['_images/manage_publication_form.png', 'image50'], dtype=object)
array(['_images/drap_drop_content_by_query.png', 'image52'], dtype=object)
array(['_images/edit_content_by_query.png', 'image53'], dtype=object)
array(['_images/edit_mode_content_by_query.png', 'image54'], dtype=object)
array(['_images/add_category_form1.png', 'image56'], dtype=object)
array(['_images/add_category_form2.png', 'image57'], dtype=object)
array(['_images/referenced_categories_tab.png', 'image58'], dtype=object)
array(['_images/documents_added_to_category.png', 'image59'], dtype=object)
array(['_images/select_category_to_add_content.png', 'image61'],
dtype=object)
array(['_images/new_content_inside_category.png', 'image63'], dtype=object)
array(['_images/new_content_on_activity_stream.png', 'image71'],
dtype=object)
array(['_images/comments_on_activity_stream_content.png', 'image81'],
dtype=object)
array(['_images/select_web_content_language.png', 'image85'], dtype=object)
array(['_images/add_translation_form.png', 'image86'], dtype=object)
array(['_images/web_content_in_fr.png', 'image87'], dtype=object)
array(['_images/content_in_different_language.png', 'image89'],
dtype=object)
array(['_images/webdav_acme_folder.png', 'image90'], dtype=object)
array(['_images/sites_management_drive.png', 'image92'], dtype=object)
array(['_images/sites_list.png', 'image93'], dtype=object)
array(['_images/webdav_site_view.png', 'image94'], dtype=object)
array(['_images/drap_drop_fast_content_creator.png', 'image95'],
dtype=object)
array(['_images/configure_fcc.png', 'image96'], dtype=object)
array(['_images/edit_mode_fcc.png', 'image97'], dtype=object)
array(['_images/select_location_form.png', 'image98'], dtype=object)
array(['_images/fast_content_creator_page.png', 'image99'], dtype=object)
array(['_images/open_seo_management.png', 'image107'], dtype=object)
array(['_images/search_page.png', 'image117'], dtype=object)
array(['_images/no_search_result.png', 'image118'], dtype=object)
array(['_images/search_edit_mode.png', 'image120'], dtype=object)] | exo-documentation.readthedocs.io |
13. Client Server¶
Scientific simulations are almost always run on a powerful supercomputer and accessed using desktop workstations. This means that the databases usually reside on remote computers. In the past, the practice was to copy the databases to a visualization server, a powerful computer with very fast computer graphics hardware. With ever increasing database sizes, it no longer makes sense to copy databases from the computer on which they were generated. Instead, it makes more sense to examine the data on the powerful supercomputer and use local graphics hardware to draw the visualization. VisIt can run in a client-server mode that allows this exact use case. The GUI and viewer run locally (client) while the database server and parallel compute engine run on the remote supercomputer (server). Running VisIt in client-server mode is almost as easy as running all components locally. This chapter explains the differences between running locally and remotely and describes how to run VisIt in client-server mode.
- 13.1. Client-Server Mode
- 13.2. Host Profiles | https://visit-sphinx-github-user-manual.readthedocs.io/en/develop/gui_manual/ClientServer/index.html | 2019-10-14T03:01:12 | CC-MAIN-2019-43 | 1570986649035.4 | [] | visit-sphinx-github-user-manual.readthedocs.io |
A fork of the Spring 2.5.6 GenericBeanFactoryAccess class that was removed from Spring 3.0.
Constructs a
GenericBeanFactoryAccessor that wraps the supplied org.springframework.beans.factory.ListableBeanFactory.
Find a java.lang.annotation.Annotation of
annotationType on the specified
bean, traversing its interfaces and super classes if no annotation can be
found on the given class itself, as well as checking its raw bean class
if not found on the exposed bean reference (e.g. in case of a proxy).
beanName- the name of the bean to look for annotations on
annotationType- the annotation class to look for
null
Return the wrapped org.springframework.beans.factory.ListableBeanFactory.
Find all beans whose
Class has the supplied java.lang.annotation.Annotation type.
annotationType- the type of annotation to look for | http://docs.grails.org/3.2.13/api/org/grails/spring/beans/factory/GenericBeanFactoryAccessor.html | 2019-10-14T04:24:09 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.grails.org |
Try it now and let us know what you think. Switch to the new look >>
You can return to the original look by selecting English in the language selector above.
EventBus
An event bus receives events from a source and routes them to rules associated with that event bus. Your account's default event bus receives rules from AWS services. A custom event bus can receive events from your custom applications and services. It can also receive events from AWS services, but only if those events are forwarded by a rule on a default event bus.
A partner event bus receives events from an event source created by an SaaS partner. These events come from the partners services or applications.
Contents
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_EventBus.html | 2019-10-14T04:04:18 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.aws.amazon.com |
You are viewing documentation for version 2 of the AWS SDK for Ruby. Version 3 documentation can be found here.
Class: Aws::SageMakerRuntime::Client
- Inherits:
- Seahorse::Client::Base
- Object
- Seahorse::Client::Base
- Aws::SageMakerRuntime::Client
- Defined in:
- (unknown)
Overview
An API client for Amazon SageMaker Runtime. To construct a client, you need to configure a
:region and
:credentials.
sagemakerruntime = Aws::SageMakerRuntime:::SageMakerRuntime:::SageMakerRuntime::Client constructor
Constructs an API client.
API Operations collapse
- #invoke_endpoint(options = {}) ⇒ Types::InvokeEndpointOutput
After you deploy a model into production using Amazon SageMaker hosting services, your client applications use this API to get inferences from the model hosted at the specified endpoint.::SageMakerRuntime::Client
Constructs an API client.
Instance Method Details
#invoke_endpoint(options = {}) ⇒ Types::InvokeEndpointOutput.
: | https://docs.aws.amazon.com/sdkforruby/api/Aws/SageMakerRuntime/Client.html | 2019-10-14T04:27:14 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.aws.amazon.com |
CLIENT
Changes
- New Instance Type: Invite
- Very private. Owner can accept invite requests and send invites. Occupants get notifications that others want into the instance.
- New Instance Type: Invite+
- Somewhat private. Owner and any occupants can accept invite requests.
- Friends list is now sorted alphabetically
- User lists on the social page are now expandable
- The small Red Arrows that allow expanding of rows in menus have been replaced with Expand and - Collapse buttons
- You can now join friends in Friend instances if the owner is also your friend
Fixes
- Object owner disagreement between clients has been fixed. This was causing pickups to fly around worlds
- Fixed notifications showing up blank in some cases
- The forced audio source falloff curve in the SDK has been fixed | https://docs.vrchat.com/docs/vrchat-0120p12 | 2019-10-14T03:07:39 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.vrchat.com |
JMX/REST Management¶
In this chapter, the following topics are included:
- Introduction to eXo Platform management Overall information about managing resources of eXo Platform, JMX and REST interfaces.
- Management views of eXo Platform Introduction to the following set of management view types of eXo Platform and their Object Names:
- Jobs and Job Scheduler List of the Cron Jobs and the Job Scheduler MBean.
- eXo Platform notifications monitoring A step by step to monitor notifications.
Introduction to eXo Platform management¶
Managing resources of eXo Platform is critical for IT operators and system administrators to monitor and supervise the production system. eXo Platform can be managed using JMX (Java Management Extension) tools or REST service.
To use JMX, some settings are required. To use REST service, you just need a browser. As you will see later in this chapter, all operations are available in JMX and some of them are available in REST. So use JMX if you need all operations and use REST in some cases, for example, you are on a machine that JMX is not installed, or at remote region where JMX is inaccessible because of security setup.
How to manage eXo Platform with JMX¶
JMX and JMX Client
Note
See Oracle’s Documentation to learn about JMX (Java Management Extension).
To manage eXo Platform with JMX, you need a JMX Client, or more exactly an MBean Browser. JConsole is a built-in tool, and it features an MBean browser, so it does not require any installation. Another suggestion is VisualVM, which requires some steps to install its MBean plugin.
The tools are graphical and you may just try and use to explore MBean. In this chapter, the following terms will be used to describe an individual or a group of similar MBeans:
Object Name is used to identify and indicate an MBean. All MBeans introduced in this chapter can be found under a group “exo”, however their organization may make it difficult to find an MBean. For example, you will see three groups with the same name “portal”, so this document will not indicate an MBean by its position in the MBeans tree, but by its Object Name.
If you are using VisualVM, you can see Object Name in the “Metadata” tab.
Attribute is a pair of “Name” and “Value”. A list of Attributes shows the state of an MBean object.
Operation is a function that you can invoke. Each MBean provides some (or no) Operations. Some Operations are “Set” and some Operations are “Get”. An Operation may require data inputs.
Configuring eXo Platform to allow JMX access
The JMX configurations are JVM options and thus basically not specific to eXo Platform. Such configurations are explained at Oracle’s Documentation.
In eXo Platform, by default JMX is not configured. Thus, local access is enabled and remote access is disabled. Authentication is disabled as well, this means username and password are not required. If you want to enable remote access or authorization, you need to start customizing eXo Platform, as instructed in the Customizing environment variables section.
After the start, put your JMX configurations in the form described in Advanced Customization <AdvancedCustomization> section.
Although the two sections are written for Tomcat bundle, it is very
similar for JBoss, except the customized configuration file. In JBoss,
the file is
$PLATFORM_JBOSS_HOME/bin/standalone-customize.conf for
Linux,
$PLATFORM_JBOSS_HOME/bin/standalone-customize.conf.bat for
Windows. You can create it by using the sample file
$PLATFORM_JBOSS_HOME/bin/standalone-customize.sample.conf for Linux
or
$PLATFORM_JBOSS_HOME/bin/standalone-customize.sample.conf.bat for
Windows.
Securing JMX connection
It is recommended to enable security for production system. You may:
- Enable SSL. See Using SSL.
- Enable Password Authentication. See Using Password Authentication and Using Password and Access Files.
How to manage eXo Platform with REST service¶
Using REST service, you can do some operations with a browser. It requires no setup.
You need to be member of /platform/administrators to access REST services.
You also need to know the URL of a service (or its attributes and operations) to access it. You can get the URLs as follows:
- Enter the base URL: http://[your_server]:[your_port]/rest/private/management, which is to access all management REST services, in your browser, then log in. The page returns a list of available REST services in plain text.
- Select a service name and append it to the base URL. You will have the service’s URL, for example: http://[your_server]:[your_port]/rest/private/management/skinservice. Entering this URL, you will get a list of attributes (as “properties”) and operations (as “method”).
- Continue appending an attribute of Step 2 to have URL of a method or property. Let’s see the “skinservice” as an example:
- Its property “SkinList” can be accessed by the URL: http://[your_server]:[your_port]/rest/private/management/skinservice/SkinList.
- Its method “reloadSkins” can be invoked by the URL: http://[your_server]:[your_port]/rest/private/management/skinservice/reloadSkins.
- The URL of the method “reloadSkin” is a bit complex because the method requires parameter “skinId” (to know which Skin will be reloaded): http://[your_server]:[your_port]/rest/private/management/skinservice/reloadSkin?skinId=Default.
Management views of eXo Platform¶
- PortalContainer management view The management view of all objects and configurations of a given portal.
- Cache management view The management view of eXo Platform caches at several levels that provides the critical performance information, especially useful for tuning the server.
- Content management view The management view of WCMService.
- JCR management view The management view of SessionRegistry, LockManager, Repository, and Workspace that allow you to monitor sessions, locks, repository configurations, and workspace configurations respectively.
- Portal management view A set of the Portal management views, including Template statistics, Template service, Skin service, TokenStore, Portal statistics, and Application statistics.
- Forum management view A set of the Forum management views, including Forum, Job, Plugin, Storage that allows you to control rules, statistics, information of data storage.
PortalContainer management view¶
PortalContainer manages all objects and configurations of a given portal.
- The Object Name of PortalContainer MBeans: exo:container=portal,name=portal.
Note
PortalContainer can be controlled through the following path: -.
Cache management view¶
eXo Platform uses caches at several levels. Monitoring them can provide the critical performance information, especially useful for tuning the server. Each cache is exposed with statistics and management operations.
CacheService¶
- There are many Cache MBeans of which the Class Name is common: org.exoplatform.services.cache.concurrent.ConcurrentFIFOExoCache and the Object Names are: exo:service=cache,name={CacheName} where CacheName is specified for each MBean.
CacheManager¶
The CacheManager MBean has no attribute and only one method to clear all the Caches.
- The Object Name of CacheManager Mbeans: exo:service=cachemanager.
PicketLinkIDMCacheService¶.
Note
PicketLinkIDMCacheService can be controlled through the following path:
However, the REST View managements of CacheService and CacheManager are not currently exposed in this version.
Content management view¶
WCMService¶
- The Object Name of WCMService MBean: exo:portal=portal,service=wcm,view=portal,type=content.
Note
WCMService can be controlled through the following paths respectively: -.
JCR management view¶
Java Content Repository (JCR) provides a management view to monitor sessions, locks, repository configurations, and workspace configurations.
SessionRegistry¶
- The Object Name of SessionRegistry MBean: exo:portal=portal,repository=repository,service=SessionRegistry.
Workspace¶
- There are several default workspaces listed below, each of them corresponds to a Workspace MBean:
- The Object Name of Workspace MBeans: exo:portal=portal,repository=repository,workspace={WorkspaceName} where WorkspaceName is the name of each workspace.
LockManager¶
Each Workspace has an MBean to manage locks.
The Object Name of LockManager MBeans: exo:portal=portal,repository=repository,workspace={WorkspaceName},service=lockmanager where WorkspaceName is the name of each workspace.
Note
- Currently, the REST View managements of SessionRegistry,
- LockManager, Repository and Workspace are not exposed in this
version.
Portal management view¶
Template statistics¶
Template statistics exposes various templates used by the portal and its components to render markups. Various statistics are available for individual templates, and aggregated statistics, such as the list of the slowest templates. Most management operations are performed on a single template; those operations take the template identifier as an argument.
- The Object Name of Template statistics MBean: exo:portal=portal,service=statistic,view=portal,type=template.
Template management¶
Template management provides the capability to force the reload of a specified template.
- The Object Name of Template management MBean: exo:portal=portal,service=management,view=portal,type=template.
Skin management¶
- The Object Name of Skin management MBean: exo:portal=portal,service=management,view=portal,type=skin.
TokenStore¶
- The Object Name of TokenStore MBeans: exo:portal=portal,service=TokenStore,name={Name} where Name is the name of each specific token.
eXo Platform provides the following TokenStore instances:
Portal statistics¶
- The Object Name of Portal statistics MBean: exo:portal=portal,service=statistic,view=portal,type=portal.
Application statistics¶
Various applications are exposed to provide relevant statistics.
- The Object Name of Application statistics MBean: exo:portal=portal,service=statistic,view=portal,type=application.
Note
Template statistics, Template management, Skin management, Portal statistics and Application statistics can be controlled through the following paths respectively:
However, the REST View management of TokenStore is currently not exposed in this version.
Forum management view¶
Some MBeans are provided to manage Forum application.
Jobs¶
- The Object Name of Forum Job MBeans: exo:portal=portal,service=forum,view=jobs,name={Name} where Name is specified for each job (listed later).
The list of Forum Jobs:
RoleRulesPlugin¶
The Object Name of RoleRulesPlugin MBean: exo:portal=portal,service=forum,view=plugins,name=”add.role.rules.plugin”.
Storage¶
This MBean enables you to get storage information (data path, repository, workspace) of Forum application.
- The Object Name of Forum Storage MBean: exo:portal=portal,service=forum,view=storage.
Note
Currently, the REST View managements of Forum, Job, Plugin, Storage are not exposed in this version.
Jobs and Job Scheduler¶
Jobs are components that run in background and perform scheduled tasks, such as sending notification emails every day.
In eXo Platform, jobs are managed by Quartz Scheduler. This framework allows to schedule jobs using simple patterns (daily, weekly) and Cron expressions.
The following tables are the jobs and their default configuration:
You can suspend or resume the jobs via JMX. Find the MBean
exo:portal=portal,service=JobSchedulerService like in the
screenshot, it gives you the two operations.
eXo Platform notifications monitoring¶
Monitoring is a means to be aware about your system’s state. You can monitor different parts of eXo Platform through JConsole.
To monitor and observe notification settings in eXo Platform, you should follow these steps:
- In the file exo.properties, add this property
exo.social.notification.statistics.activeand set it to true.
- Start your server and then open a new terminal to start JConsole using the command jconsole.
- Go to MBeans tab.
- Navigate in the tree to exo –> portal –> notification –> statistic to get statistics about eXo Platform notifications.
| https://exo-documentation.readthedocs.io/en/latest/Management.html | 2019-10-14T03:07:40 | CC-MAIN-2019-43 | 1570986649035.4 | [array(['_images/job_scheduler_mbean.png', 'image0'], dtype=object)
array(['_images/jconsole.png', 'image1'], dtype=object)] | exo-documentation.readthedocs.io |
WHAT'S IN A NAME?
Originally purchased for a music festival in 2014, these multifaceted shoes—flirty yet tough, eye-catching yet versatile—became my signature piece and an extension of myself.
I still remember the first time I put on my pink Dr. Martens lace up boots. Brightly colored, distinctively shaped, and heavier than any shoe I had ever worn—they would certainly stand up to the elements at the upcoming music festival for which they were purchased. After the first newborn fawn-like steps, I found my bearings and felt something I hadn’t felt in months: bold.
My pink Docs came to me at a pivotal time. It was 2014: I was in my third year of undergrad and feeling out of control in nearly every aspect of my life. After months of pity partying, I decided to attend a music festival with a large group of friends and knew I’d need some statement pieces to stand out in the crowd (in more ways than one, because I’m also very short and tend to get lost easily in large groups of people).
The shoes were just meant to be worn to an unfamiliar place where I could dress however I wanted without judgement—but the confidence boost I received from those conversation starters followed me back home. I began incorporating my pink Docs into every outfit imaginable to continue creating that bold feeling. This empowered me to make bold moves in fashion, in meeting new people, and in exploring the city I called home. Nothing was too scary as long as I was wearing my pink Docs.
Once I realized I had that feeling of empowerment all of the time, I knew I had the Docs to thank (yes, it was a very Wizard of Oz “ruby slippers moment”.) These multifaceted shoes—flirty yet tough, eye-catching yet versatile—became my signature piece and an extension of myself.
It’s no surprise that I came into my feminism around this same time. In wearing the pink version of a traditionally masculine shoe, I redefined my own ideas of gendered fashion and color psychology: combat boots were no longer “boyish”; pink wasn’t “just for girls” anymore. And that is so freakin’ freeing.
I started My Pink Docs (the blog) to channel that feeling of empowerment through my lifelong passion, writing. At my lowest points, I wrote to escape and to heal. Now, I am using this platform to help multifaceted millennial women initiate empowering conversations that enable them to confidently explore feminism and fashion as tools for self-expression.
...Whew, I know that’s a lot to take in!
What I’m trying to say is that my pink Docs (the shoes) made life less scary for me. Now, it is my hope that My Pink Docs (the blog) will make talking about real issues—quarter life conundrums, feminism today, and breaking out of your fashion bubble—less scary for both of us. | http://mypinkdocs.com/why-its-called-my-pink-docs | 2019-10-14T04:00:14 | CC-MAIN-2019-43 | 1570986649035.4 | [array(['https://images.squarespace-cdn.com/content/v1/567acfb39cadb6b997971062/1497033438674-CNU3TR7VTRLSJNJE2OQS/ke17ZwdGBToddI8pDm48kBUDAxm-FLUF-OJf9moK1kV7gQa3H78H3Y0txjaiv_0fDoOvxcdMmMKkDsyUqMSsMWxHk725yiiHCCLfrh8O1z5QPOohDIaIeljMHgDF5CVlOqpeNLcJ80NK65_fV7S1UT_TXfTUFcrrnRvtinoH4JYxq5g0UB9t65pVePltZrd1IKYY7Qu0iTZQJ-GJ4dsqLQ/2014-03-21+15.55.56.jpg',
'2014-03-21 15.55.56.jpg'], dtype=object) ] | mypinkdocs.com |
Climate change activists spray red paint at UK Treasury from fire engine
Reuters
Oct 03, 2019 09:25 UTC
Nancy Pelosi applauds Modi's commitment to tackle climate change
Economic Times
Oct 03, 2019 03:48 UTC
The Trump circus was a distraction from Africa's climate change fight at the UN summit
Quartz Africa
Oct 02, 2019 22:41
Could a New ‘Bretton Woods’ Conference’ Prompt Global Climate Change Policy?
Fortune
Oct 02, 2019 16:52 UTC
Pittsburgh researchers investigate some of climate change's most critical questions
Great Lakes Now
Oct 02, 2019 16:25 UTC
Candidates, show us innovation on climate change, not just rhetoric
Des Moines Register
Oct 02, 2019 16:11 UTC
Are There Climate Change Clues In Texas Hill Country Cave Stalagmites?
Texas A&M University
Oct 02, 2019 14:57 UTC
Brexit and climate change - London police plan for big protests this month
Reuters
Oct 02, 2019 13:43 UTC
To Avoid Conflict, Responses to Climate Change in Oceania Must Heed Customary Actors and Institutions
New Security Beat
Oct 02, 2019 13:06 UTC
Rewild 25% of the UK for less climate change, more wildlife and a life lived closer to nature
Phys.Org
Oct 02, 2019 12:54 UTC
Gates Foundation Commits $310 Million to Help Farmers Prepare for Climate Change (Grants Roundup)
The Chronicle of Philanthropy
Oct 02, 2019 12:53 UTC
Climate change threatens health in Northwest
Peninsula Daily News
Oct 02, 2019 08:29 UTC
Scientists say planting trees way to fight climate change
Vashon-Maury Island Beachcomber
Oct 02, 2019 08:29 UTC
Climate change is already forcing entire communities to migrate
Quartz
Oct 01, 2019 09:59 UTC
West Africa's Sahel vulnerable to climate change, bad governance
Quartz Africa
Oct 01, 2019 06:40 UTC
This is how climate change could hurt VA hospitals and programs
Military Times
Oct 01, 2019 04:02 UTC
How Does Climate Change Affect Mountainous Watersheds That Give Us Our Water?
Lawrence Berkeley National Laboratory
Sep 30, 2019 14:32 UTC
Greta Thunberg got the world's attention. But are leaders really listening?
CNN
Sep 29, 2019 13:54 UTC
Why Vladimir Putin Suddenly Believes in Global Warming
Bloomberg
Sep 29, 2019 05:59 UTC
Climate Change Memes for Angry and Terrified Teens
Gizmodo
Sep 28, 2019 14:00 UTC
Preaching Climate Change To Commuters: Richard McLachlan's Subway Sermons
NPR
Sep 27, 2019 08. | https://search-docs.net/climate-change-news:2pW0SjGu2a-YxbFZBaFZUT | 2019-10-14T03:19:20 | CC-MAIN-2019-43 | 1570986649035.4 | [] | search-docs.net |
To incorporate Blueworx Voice Response successfully into your telecommunications network and to develop applications that maximize its potential, you need a variety of skills. Blueworx Voice Response is packaged and presented as a fully interactive, window-based system to make the tasks you need to perform as easy as possible. The functions provided can be used in any of the state table, Java, or VoiceXML programming environments (except where specified).
This section introduces: | http://docs.blueworx.com/BVR/InfoCenter/V6.1/help/topic/com.ibm.wvraix.geninf.doc/usingdt.html | 2019-10-14T03:28:30 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.blueworx.com |
To use Single Sign-on feature, navigate to WCAP > Advanced Settings > SSO Settings (tab)
This feature comes in handy, if you want your WordPress users to able to login to WHMCS and make purchases or if you need your WHMCS users to comment on WordPress blog posts or for example, use support forum bbpress without additional login to WordPress.
This is achieved by synchronizing the user profiles & passwords between WHMCS and WordPress.
SSO Settings
Enable WHMCS-WP SSO: Check to let users log in to both WordPress and WordPress.
Hide WP Admin bar: WordPress shows admin bar at the top for logged in users, check this option if you want to hide it. Since most of the time users will be using a logout link in menu page provided with “WHMCS Client Area”, you can check this option to hide admin bar.
Exclude WP roles from SSO: Check the roles to exclude from SSO process.
Sync Address / Profile Fields: By default, WHMCS requires address fields for user creation, while WordPress doesn’t. WCAP create these fields in WP. Check to create WHMCS profile fields (address, phone number etc), in WordPress.
WHMCS-WP profile fields mapping: Users who are already using address/profile fields from another plugin can map those fields with WHMCS fields
SSO Sync Settings
SSO works by synchronizing user profiles and passwords in WHMCS and WP. Following options are related to how new users are created and existing are synced in between WHMCS and WordPress when SSO is enabled
Settings to create users in WordPress
These settings come into play, when a new user signup in WHMCS, and it is synced to WordPress.
Role for the new user: A WordPress role for the new user.
Username for the new user: In WHMCS email is used as user-id, while in WordPress uses the username. You have two options here to make usernames.
- First Name + Last Name
- Use email as username (recommended)
Settings to create users in WHMCS
These settings come into play, when a new user signup in WordPress, and it is synced to WHMCS.
WordPress has fewer profile fields than WHMCS, settings in this section handle how to fill WHMCS fields.
By default, WHMCS requires Client Address and Phone Number for user creation, while by default WordPress does not collect this information.
You can handle this situation in two ways.
- Allow Sync to add dummy data to the WHMCS Client Area Address & Phone numbers fields.
- Leave empty fields as its
One Time Sync
When SSO has enabled, new users and profiles changes are tracked and syncs users between WHMCS and WordPress. For the existing users (those are created before SSO is activated), you need to run sync process below. This is a one time task and is needed for a smooth SSO experience.
Sync.Direction: You can run sync users from WHMCS to WP or from WP to WHMCS, or both ways to suit your needs. | http://docs.whmpress.com/docs/wcap-whmcs-client-area-api/administration-settings/single-sign-on-with-wordpress-sso/ | 2019-10-14T04:48:39 | CC-MAIN-2019-43 | 1570986649035.4 | [] | docs.whmpress.com |
Gradient Filter
This topic documents a feature of Visual Filters and Transitions, which is deprecated as of Windows Internet Explorer 9.
Displays a color gradient between the object's background and content.
Syntax
Possible Values
Members Table
The following table lists the members exposed by the Gradient object.
Remarks
When revealed by a transition, any text that covers a Gradient procedural surface is initially exposed as transparent. After the transition has finished, the text is updated to the applicable color. the effects of this filter when its properties are modified.
Code example:
This example shows how the text is unaffected by the gradient behind it.
<SCRIPT> <!-- Toggle the Enabled property to toggle the gradient. --> function fnToggle(oObj) { if (oDiv.filters(0).enabled){ oDiv.filters(0).enabled='false'; oObj.innerText='Add Gradient';} else { oDiv.filters(0).enabled='true'; oObj.innerText='Make Normal';} } </SCRIPT> <font size="+5"> <DIV ID="oDiv" STYLE="height:120px; color:green; filter: progid:DXImageTransform.Microsoft.gradient(enabled='false', startColorstr=#550000FF, endColorstr=#55FFFF00)" > A simple gradient </DIV> </font> <P> <BUTTON onclick="fnToggle(this)">Add Gradient</BUTTON><BR/>
Code example:
Applies To
See Also
Scripting Filters, Filter Design Considerations | https://docs.microsoft.com/en-us/previous-versions/windows/internet-explorer/ie-developer/platform-apis/ms532997(v=vs.85) | 2018-06-18T04:05:07 | CC-MAIN-2018-26 | 1529267860041.64 | [] | docs.microsoft.com |
You can use the. balloon (vSphere only) Displays the amount of memory that is currently reclaimed from the virtual machine through ballooning, in megabytes. swap (vSphere only) Displays the current amount of memory swapped out to the virtual machine's swap file, in megabytes. memlimit (vSphere only) Displays memory limit information, in megabytes. memres (vSphere only) Displays memory reservation information, in megabytes. cpures (vSphere only) Displays CPU reservation information, in MHz. cpulimit (vSphere only) Displays CPU limit information, in MHz. sessionid (vSphere only) Displays the current session ID. Parent topic: Retrieve Status Information About the Virtual Machine Related tasks Retrieve Status Information About the Virtual Machine | https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vmtools.install.doc/GUID-B4AFCC7C-AFFC-4A4A-B565-2F56A123B7BB.html | 2018-06-18T04:16:30 | CC-MAIN-2018-26 | 1529267860041.64 | [] | docs.vmware.com |
Adding Admins to Your Account
In the developer portal, admins are people that you have allowed to edit your documentation (oppose to developers who you give permission to read the documentation). The amount of admins you can add depends on the account you have, so if you need to add more spots you can send an email to [email protected]
To add admins to your account, first you'll need to click your name at the upper right hand side of the screen and select Account & Team
If you already have admins added, you'll see them listed here. To add someone new, click the "Invite Someone New" button directly above the company information section.
Next, simply fill in their name and email address and click the "Send Invite" button.
This will send an email to your new Admin. Once they click the activate link in the email they are ready to start contributing to your documentation.
If you have any questions, please send them to [email protected] | https://docs.gelato.io/guides/adding-admins-to-your-account | 2018-06-18T03:59:43 | CC-MAIN-2018-26 | 1529267860041.64 | [] | docs.gelato.io |
Introduction
The Getting Started exercise creates a Hello World web page. This is a simple single web page which demonstrates how to program simple functionality and explore programming using events, properties and methods.
Objectives
To achieve these objectives you will complete the following:
Step 2. Create a Web Page Component
Step 3. Add a Table Layout to the Web Page
Step 4. Add Components to the Web Page
Step 5. Change the Push Button Properties
Step 6. Add a Field to the Web Page and Set its Properties
Step 7. Add Logic to the Hello Button Click Event
Step 8. Add Logic to the other Click events
Step 9. Compile the Web Page
Step 10. Execute the Web Page
Before You Begin
You may wish to review the following topics: | https://docs.lansa.com/14/en/lansa095/content/lansa/wbfeng01_0015.htm | 2018-06-18T03:53:22 | CC-MAIN-2018-26 | 1529267860041.64 | [] | docs.lansa.com |
You can enable or disable one of the services listed in the Security Profile from the vSphere Web Client.
About this task
After installation, certain services are running by default, while others are stopped. In some cases, additional setup is necessary before a service becomes available in the vSphere Web Client UI. For example, the NTP service is a way of getting accurate time information, but this service only works when required ports are opened in the firewall.
Prerequisites
Connect to vCenter Server with the vSphere Web Client.
Procedure
- Browse to a host in the vSphere Web Client inventory, and select a host.
- Click the Manage tab and click Settings.
- Under System, select Security Profile and click Edit.
- Scroll to the service that you wish to change.
- In the Service Details pane, select Start, Stop, or Restart for a one-time change to the host's status, or select from the Startup Policy menu to change the status of the host across reboots.
Note:
Start automatically if any ports are open, and stop when all ports are closed: The default setting for these services. If any port is open, the client attempts to contact the network resources for the service. If some ports are open, but the port for a particular service is closed, the attempt fails. If and when the applicable outgoing port is opened, the service begins completing its startup.
Start and stop with host: The service starts shortly after the host starts, and closes shortly before the host shuts down. Much like Start automatically if any ports are open, and stop when all ports are closed, this option means that the service regularly attempts to complete its tasks, such as contacting the specified NTP server. If the port was closed but is subsequently opened, the client begins completing its tasks shortly thereafter.
Start and stop manually: The host preserves the user-determined service settings, regardless of whether ports are open or not. When a user starts the NTP service, that service is kept running as long as the host is powered on. If the service is started and the host is powered off, the service is stopped as part of the shutdown process, but as soon as the host is powered on, the service is started again, preserving the user-determined state.
These settings apply only to service settings that are configured through the vSphere Web Client or to applications that are created with the vSphere Web Services SDK. Configurations made through other means, such as from the ESXi Shell or with configuration files, are not affected by these settings. | https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.security.doc/GUID-5E240D6E-0A2C-4A3C-9B88-58E14EAF051A.html | 2018-06-18T04:15:15 | CC-MAIN-2018-26 | 1529267860041.64 | [] | docs.vmware.com |
<glossdef>
The
<glossdef> element specifies
the definition of one sense of a term. If a term has multiple senses,
create a separate
<glossentry> topic to define
each sense.
- topic/abstract concept/abstract glossentry/glossdef
See the example in
<glossentry>.
The following attributes are available on
this element: Universal attribute group
(without the Metadata attribute group),
@base from
the Metadata attribute group,
and outputclass. | http://docs.oasis-open.org/dita/dita/v1.3/errata01/os/complete/part3-all-inclusive/langRef/technicalContent/glossdef.html | 2018-06-18T03:39:41 | CC-MAIN-2018-26 | 1529267860041.64 | [] | docs.oasis-open.org |
Upgrading 2.1.x or 2.2.x to 3.1.x
This documentation does not apply to the most recent version of Splunk. Click here for the latest version.
Contents
- Obtain your new license(s)
- Back up your old instance(s)
- Plan your update
- Have questions?
- XML Configuration Updates
- Bundle Configuration Updates
- Other Configuration Updates
Upgrading 2.1.x or 2.2.x to 3.1.x
Note: Do not attempt to migrate to 3.1.x or higher from 2.0.x. You must first upgrade and migrate your data to 2.2.3 format. Read the 2.0.x to 2.2.x migration instructions. You will also have to migrate from 3.x to 3.1 after migrating from 2.x.
Step 1: Administrative preparation
Obtain your new license(s)
Splunk 3.1 requires an entirely new form of license key (same as 3.x). If you are a Splunk Professional customer (from 2.x), you must obtain a new Enterprise license (Professional is now known as Enterprise) even if your 2.x license has not expired. New licenses can be obtained through customer support or via your store account. If you have a current Enterprise support agreement, it is likely that customer support has already re-issued your license and attached it to your store account. Just log into, and go to store -> my orders to view all of your licenses. Please contact support if this is not the case.
If you are using a Free license then there will be no need to install a 3.x license. The default 3.x Free license will be installed automatically if no license is detected on startup.
Back up your old instance(s)
Please backup your 2.x instances before attempting to update them to 3.x or higher. At a minimum make sure the
$SPLUNK_HOME/etc and
$SPLUNK_HOME/var directories in all your instances are backed up to a separate location before proceeding. If you have Splunk instances in production we recommended piloting the update in a staging environment before attempting it in production. It is possible to recover a corrupt instance or one in an indeterminate state but you will require assistance from customer support to do so.
Plan your update
Please read through both the upgrade overview and this entire page before attempting your first update. The update process involves overlaying Splunk 3.x over your 2.x instance (then a simple upgrade to 3.1.x). If you're using packages native to your platform you'll use their update mechanisms. If you have a tar installation you'll simply extract 3.x over your 2.x instance after backing up your configuration. In either case you'll update your database to function with 3.x and move your configuration back in place after the 3.x package is in place. The time required to complete the update depends on the complexity of your configuration, not the size of your database(s).
Have questions?
Please do not hesitate to contact support if you have questions or experience problems updating to 3.x.
Step 2: Install a 3.x package and update your database
NOTE: If you have a Splunk Enterprise license (formerly known as Splunk Professional) be sure you have obtained a new 3.x license before proceeding.
This process requires the installed configuration to be moved out of the way and then be restored after installation and database migration. Until configuration is restored the default ports will be used. Those ports are 8000 and 8089. It would be best to ensure there are no conflicts on ports 8000 and 8089 before executing the data migration step. Note that Splunk 3.x only listens on one HTTP port and one management port, unlike 2.x which listened on one HTTP port, one HTTPS port, and one management port. That is why only 8000 and 8089 are important for this step. The HTTP port can be configured to be HTTPS (see Step 3).
To install Splunk 3.x over 2.1.x or 2.2.x and fashion the 2.1x or 2.2.x database for use in 3.x:
- Stop the 2.1.x or 2.2.x Splunk server.
- Event types and tags will be lost when the database is migrated. If you wish to preserve your tags execute
$SPLUNK_HOME/bin/splunk export globaldatabefore proceeding. (Note: Splunk must be running to execute this command.) Host tags and sourcetype aliases can be recovered with a procedure outlined below. We will release a procedure to convert event types at a later time.
- Move
$SPLUNK_HOME/etcto
$SPLUNK_HOME/etc.bak. Directory must be named
etc.bak. The native packages will move the configuration to
$SPLUNK_HOME/etc.bakautomatically.
- Ensure your database, by default everything in
$SPLUNK_HOME/var/lib, has been backed up to a location outside of
$SPLUNK_HOME. The native packages will not do this automatically.
- Update the native package or overlay the 3.x tar over the 2.x installation.
- If you are upgrading from version 2.2.x or higher and are not using LDAP and wish to migrate your user accounts, copy
$SPLUNK_HOME/etc.bak/auth/splunk.secretto
$SPLUNK_HOME/etc/auth. Copy
$SPLUNK_HOME/etc.bak/passwdto
$SPLUNK_HOME/etc. Your 2.x accounts should then work as normal in 3.x..
- If you are upgrading from version 2.1.x, and are not using LDAP, you will not already have the passwd file; the users are still stored in the authentication database. Follow these instructions to run the provided script to pull your users into a passwd file for you to use:
# Download the migrate_users.py.gz file
# Uncompress the
migrate_users.pyscript to the Splunk 2.2 machine's $SPLUNK_HOME directory.
# Source $SPLUNK_HOME/bin/setSplunkEnv into your shell's environment.
# Execute
python migrate_users.py $SPLUNK_HOME.
- If using tar archives, reset the
SPLUNK_HOMEvalue in
$SPLUNK_HOME/bin/setSplunkEnvto the correct value.
- If you moved your datastore from the default $SPLUNK_HOME/var/lib/splunk location, edit the
SPLUNK_DBvalue in
$SPLUNK_HOME/bin/setSplunkEnvto the correct path.
- If you have Splunk Professional 2.2.x or 2.1.x , copy a 3.x Enterprise license into
$SPLUNK_HOME/etc.
- Source
$SPLUNK_HOME/bin/setSplunkEnvinto your shell. (If you are already in the directory, the command is "source setSplunkEnv".)
- Important! The following step will start and restart your instance. Absent configuration Splunk 3.x will attempt to bind to its default ports, 8000 and 8089. If you or your environment cannot tolerate this, please skip to the "Port information in search.user.xml and splunkd.xml" section below and migrate your port configuration before proceeding. It is enough to have your port information in
$SPLUNK_HOME/etc/bundles/local/web.confin place. Do not start Splunk to confirm your configuration before proceeding to the next step as you haven't migrated your database.
- Execute
python $SPLUNK_HOME/bin/migrate_2x_data_to_3x.py. This will migrate the data, start, and restart Splunk. This is necessary for Splunk to correctly re-scan the database files. This is a one way process. Tags will be lost (see above for tag preservation information).
- Splunk 3.x is now active but your configuration is empty. No inputs will be active until step 3 is completed. If your instance consumes live inputs that cannot tolerate downtime, make provisions to redirect those inputs to a file for later consumption. Configuration in
$SPLUNK_HOME/etc.bakis ready for migration. Note that certain elements of the UI may appear corrupt or incorrect immediately after migration. It is necessary to clear your browser's cache after the update to remedy this.
Step 3: Update and restore your configuration files
Most configuration options previously controlled by XML files have now been moved to configuration files. This change makes it easier to
administer a Splunk server and easier to deploy configuration changes to other Splunk servers via bundles and the Splunk deployment server.
The purpose of this section is to provide a map between 2.x and 3.x configuration. It is not to provide exhaustive documentation of all
- x configuration. Please review the 3.x administration guide for full details of how 3.x works and 3.x administration.
The process of updating and restoring your configuration involves moving some configuration information out of XML files and into bundle
files and updating your bundle files to work with 3.x. You'll need to migrate configuration parameters from
$SPLUNK_HOME/etc.bak to
$SPLUNK_HOME/etc.
At a high level the necessary configuration changes are:
- Port information in a 2.x
$SPLUNK_HOME/etc/myinstall/search.user.xmlneeds to be moved to 3.x
$SPLUNK_HOME/etc/bundles/local/web.conf. This file has moved from XML to Splunk bundle format.
- All configuration in a 2.x
$SPLUNK_HOME/etc/myinstall/pluginConfs/multiIndexer.xmlneeds to be moved to 3.x
$SPLUNK_HOME/etc/bundles/local/indexes.conf. This file has moved from XML to Splunk bundle format.
- If your 2.x instance is a lightweight forwarder then it's best to just set that configuration via the 3.x Splunk Web interface or CLI. In 3.x configuration of a forwarder with local indexing disabled automatically configures Splunk in a minimal mode.
- The 2.x bundle file
regexes.confis called
transforms.confin 3.x and the 3.x bundle file
props.confnow requires the attribute prefix
REGEXES-to be changed to
TRANSFORMS-.
- The 2.x bundle files
savedsplunks.confand
livesplunks.confhave been combined into the 3.x bundle file
savedsearches.conf.
- The 2.x bunde file
auth.confis a subset of configuration available in the 3.x bundle file
auth.conf. A new required parameter must be added to your 2.x configuration. (pageSize = 0)
- The 2.x XML file
$SPLUNK_HOME/etc/myinstall/pluginConfs/cleaners.xmlis now easier to administrate in the 3.x bundle file
segmenters.conf.
XML Configuration Updates
Port information in search.user.xml and splunkd.xml
By default Splunk 2.x opened three ports. An HTTP port, and HTTPs port, and a management port. Splunk 3.x only opens two ports. A web port and a management port. The web port can be configured to be either HTTP or HTTPS. The default 3.x configuration can be observed at
$SPLUNK_HOME/etc/bundles/default/web.conf. Documentation and an example of the 3.x
web.conf file can be found at
$SPLUNK_HOME/etc/bundles/README/web.conf.[spec|example] or here.
To migrate your settings from 2.x to 3.x it should just be a matter of configuring the same ports used in the 2.x search.user.xml file in an override 3.x stanza in
$SPLUNK_HOME/etc/bundles/local/web.conf. If you're using SSL and are also using your own certificates then you'll want to place those certificates from your 2.x instance in the location where the default 3.x web.conf file expects them, or override those configuration parameters in your local web.conf file as well.
Multiple indexes in 3.x
In 2.x all indexes were specified in
$SPLUNK_HOME/etc/myinstall/pluginConfs/multiIndexer.xml. This file has been converted into a bundle in 3.x. In 3.x the set of default indexes are configured in
$SPLUNK_HOME/etc/bundles/default/indexes.conf. Additional custom indexes may be added in
$SPLUNK_HOME/etc/bundles/local/indexes.conf. Be sure to place the 3.x configuration information after migrating your database. If you don't also configure your custom indexes in Splunk 3.x then you won't see them. It should be readily apparent how the parameters in the 2.x XML file maps to the 3.x bundle file. Be aware that the index dropdown in 2.x is not present in 3.x. Search in your custom indexes with the
index::yourcustomindexname search operator.
Documentation and an example of the 3.x indexes.conf file can be found at
$SPLUNK_HOME/etc/bundles/README/indexes.conf.[spec|example] or here.
Bundle Configuration Updates
2.x regexes.conf is now 3.x transforms.conf
The file
regexes.conf is ignored in 3.x. The file was renamed and extended to better serve its purpose - transforming data inputs and events based on requirements to modify and extend Splunk's automated processing. Like
regexes.conf,
transforms.conf is referenced by
props.conf and may contain regular expressions to extract the target of the transformation. The format and actions of the individual attributes within the file has not changed. Rename your regexes.conf to transforms.conf. Then modify
props.conf to refer to the
transform.
+ Change regexes to transforms
Props.conf used to refer to regexes in the regexes.conf file. Now it refers to transforms in the transforms.conf file. In addition, the attribute prefix in
props.conf has changed from
REGEXES- to
TRANSFORMS-. See the 3.x admin manual reference pages on props.conf and transforms.conf for full details.
An example of Splunk 2.x to Splunk 3.x regex to transforms changes:
In
props.conf change from the following 2.x style:
[cisco_syslog] MAX_TIMESTAMP_LOOKAHEAD = 32 SHOULD_LINEMERGE = False REGEXES = syslog-host ...
to the following 3.x
props.conf style:
[cisco_syslog] MAX_TIMESTAMP_LOOKAHEAD = 32 SHOULD_LINEMERGE = False TRANSFORMS = syslog-host ...
A stanza in a 3.x
transforms.conf looks like stanzas in 2.x
regexes.conf:
[syslog-host] DEST_KEY = MetaData:Host REGEX = :\d\d\s+(?:\d+\s+|(?:user|daemon|local.?)\.\w+\s+)*\[?(\w[\w\.\-]+)\]?\s FORMAT = host::$1
Note that if you are using a
regexes.conf stanza in 2.x in order to extract fields at search time for use with the
report: search modifier, you will want to read about how to define extracted fields in 3.x as well as read about the new search language which has many powerful native statistical and structured search commands including
where,
fields,
stats,
top and
rare which have replaced and improved upon the 2.x
report: search modifier.
2.x savedsplunks.conf and livesplunks.conf is now 3.x savedsearches.conf
The 2.x savedsplunks.conf and livesplunks.conf files have been combined into one overall savedsearches.conf file. In 3.x you can add scheduling and alert information directly to the saved search. The old live splunk subsystem in 2.x has been completely replaced with the new scheduling and alerting subsystem in 3.x.
The name/value pairs in
savedsplunks.conf should map directly to
savedsearches.conf with one exception. Use just the raw search string in 3.x, not the entire XML value of the query parameter in 2.x. Be sure a user exists in the 3.x instance with the same userid you're bringing over from 2.x.
The name/value pairs from livesplunks.conf will not map cleanly into savedsearches.conf. You do not need to bring over the savedsplunkid parameter as alert information is now stored directly with the saved search. The next change is that the 3.x saved searches.conf file uses a cron-like scheduling parameter in replacement of several run and range parameters in livesplunks.conf. It should be readily apparent how to map the relation and action configuration if one compares your livesplunks.conf stanza to the 3.x spec and example files. (
$SPLUNK_HOME/etc/bundles/README/savedsearches.conf.[spec|example])
+ Special note about saved report:: searches
The only search syntax element that is not backwards compatible between 3.x and prior versions is
report:. If you have saved searches that use
report:, you should update them to take advantage of the new search language which has many powerful native statistical and structured search commands including
where,
fields,
stats,
top and
rare which have replaced and improved upon the 2.x
report: search modifier. These new commands are both more flexible and faster than the old
report: modifier.
2.x auth.conf in LDAP mode to 3.x auth.conf in LDAP mode
If you're using LDAP authentication in 2.x then you can copy your auth.conf file into 3.x and use it if you make the following changes:
- Add name/value pairs userBaseFilter and groupBaseFilter. If you're unsure how to use LDAP filters you may safely supply the value '(objectclass=*)' without the quotes for these parameters. This filter will match every entry below your specified base DNs. Restrict the filter if you do not want this.
- Add the name/value pair pageSize. A pageSize of 0 means to use LDAPv2. A nonzero page size will use LDAPv3 with an additional control for paging. A nonzero page size is necessary if your using Microsoft Active Directory.
2.x cleaners.xml is now 3.x segmenters.conf
If you've modified your segmenters in your 2.x instance you should add them to your local segmenters.conf file.
$SPLUNK_HOME/etc/bundles/local/segmenters.conf. See
$SPLUNK_HOME/etc/bundles/README/segmenters.conf.[spec|example] for detailed information.
Data Inputs
In general nearly all data input parameters should map cleanly from 2.x to 3.x with the exception of the regexes/transforms transition described above. If you have a concern about a particular parameter you should browse
$SPLUNK_HOME/etc/bundles/README for the parameter in question and see if it's usage has changed from 2.x to 3.x. If something about the data input configuration gets lost in translation there should be clear error messages in splunkd.log. Be aware that 3.x is capable of eating archive files directly without needing them to be uncompressed first.
Other Configuration Updates
Splunk Users
The procedure for migrating non-LDAP users is covered in Step 2. The procedure for migrating LDAP configuration is covered in Step 3. Select the method that is appropriate for you.
Lightweight Forwarders
Splunk 2.x required extensive configuration changes to run in a minimal mode for a forwarding-only instance. In Splunk 3.x this configuration is done for you automatically if you enable forwarding and disable local indexing in the GUI. Splunk should only consume about 100 MB RAM in the 3.x configuration, usually less. It is possible for 2.x forwarders to forward to a 3.x instance.
Custom C++ Processors
If your 2.x instance contains a custom C++ module, that module should work with 3.x. Be aware, however, that Splunk 3.x ships with fewer shared objects than Splunk 2.x. In particular, libstdc++ is no longer included with the distribution. If you use your platform's libstdc++ and other libraries, your module should work.
Distributed Search
It is easiest to simply re-configure your 2.x distributed search hosts via Splunk Web in 3.x. Be aware that it is not possible to mix 2.x and 3.x servers in a distributed search cluster. Enhancements to the search language in the 3.x product prevent this from working. Note that the "Splunk-2-Splunk" tab in the 2.x admin section has been renamed "Distributed" in the 3.x admin section.
If in Step 2 you exported your global data to an XML file you can convert and re-import the host tag and sourcetype aliases into your Splunk 3.x instance. With
$SPLUNK_HOME/bin/setSplunkEnv sourced from a 3.x instance, execute
python $SPLUNK_HOME/bin/migrate_2x_exported_data_to_3x.py $YOUREXPORTEDFILENAME splunk import globaldata $YOUREXPORTEDFILENAME.readyfor30import -auth admin:$YOURPASSWORD
To confirm the procedure you should be able to see your host tags in type ahead and also see the correct data exported with a splunk export globaldata command.
This documentation applies to the following versions of Splunk: 3.1 , 3.1.1 , 3.1.2 , 3.1.3 , 3.1.4 View the Article History for its revisions. | http://docs.splunk.com/Documentation/Splunk/3.1.1/Installation/Upgrading21xOr22xTo31x | 2012-05-24T09:55:56 | crawl-003 | crawl-003-008 | [] | docs.splunk.com |
This device report displays disk utilization percentages collected during the selected time period from the device displayed at the top of the report. You can configure the data collection for this device through Device Properties - Performance Monitors > Configure Disk Utilization.
Note: To ensure that your data collection is uninterrupted in the occurrence of a re-index, be sure to change the Determine uniqueness by option in the Advanced Data Collection settings for this performance monitor to description.
Below the date/time picker is a graph showing the disk utilization for the selected time period. Each point on the graph corresponds with an entry in the graph data table below.
Split Second Graph - Real-Time Disk Utilization
Under the main report graph is a Split Second Graph that displays real-time utilization data for the disk.
Note: Split Second Graphs are not available in WhatsUp Gold Standard Edition.
Note: When viewing information for devices running Microsoft Windows, information gathered via WMI is displayed in real time. Information gathered by SNMP, however, may reflect a delay of one minute or more. This delay is caused by a limitation in how often Microsoft Windows updates SNMP values.
Below the Split Second Graph, the report displays the average disk utilization percentages collected during the time period:
Click the chart properties button to change how the report chart is displayed.
Use the date/time picker at the top of the report to select a date range. The date and time format for the date on this report matches the format specified in Program Options > Regional.
You can change the device you are viewing by clicking the device name in the application bar at the top of the page.
You can change to another device report by selecting it from the More Device Reports pull-down menu.
To view the properties on the current device, click the Device Properties button in the application at the top of the page.
You can print a fully formatted report through your browser by clicking the print icon in the browser's toolbar, or selecting File > Print from the browser's menu. | http://docs.ipswitch.com/NM/92_WhatsUp%20Gold%20v12.3/03_Help/disk_utilization_report.htm | 2012-05-24T09:55:52 | crawl-003 | crawl-003-008 | [] | docs.ipswitch.com |
Control user access to SplunkBase
This documentation does not apply to the most recent version of Splunk. Click here for the latest version.
Control user access to SplunkBase
We are still working on this page. Please check back here later. If you need help on this topic now, contact Splunk Support.
This documentation applies to the following versions of Splunk: 3.0 , 3.0.1 , 3.0.2 , 3.1 , 3.1.1 , 3.1.2 , 3.1.3 , 3.1.4 View the Article History for its revisions. | http://docs.splunk.com/Documentation/Splunk/3.1.2/Admin/ControlUserAccessToSplunkBase | 2012-05-24T09:45:05 | crawl-003 | crawl-003-008 | [] | docs.splunk.com |
A MultiFile instance has the following methods:
It is possible to push more than one boundary. Encountering the most-recently-pushed boundary will return EOF; encountering any other boundary will raise an error.
Note that this test is used intended as a fast guard for the real boundary tests; if it always returns false it will merely slow processing, not cause it to fail.
Finally, MultiFile instances have two public instance variables: | http://docs.python.org/release/1.5.1p1/lib/MultiFile-objects.html#l2h-18770 | 2012-05-24T07:48:19 | crawl-003 | crawl-003-008 | [] | docs.python.org |
Image only menu items with rollover effect
From Joomla! Documentation
Here's how to get a real rollover image menu using on the Cascading Style Sheet (CSS).
Displaying images for menu items is standard in newer Joomla releases. No code changes to PHP files are needed.
In this example we'll edit the horizontal menu on top of the rhuk_milkyway template. Here are the general style rules for this menu:
#pillmenu { white-space: nowrap; height: 32px; float: left; } #pillmenu ul { margin: 0; padding: 0; list-style:none; } #pillmenu li { float: left; background: url(../images/mw_menu_separator.png) top right no-repeat; margin: 0; padding: 0; } #pillmenu a { font-family: Arial, Helvetica, sans-serif; font-size: 12px; font-weight: bold; float:left; display:block; height: 24px; line-height: 24px; padding: 0 20px; color: #000; text-decoration: none; }
Most important rules are:
-. | http://docs.joomla.org/Image_only_menu_items_with_rollover_effect | 2012-05-24T07:17:39 | crawl-003 | crawl-003-008 | [] | docs.joomla.org |
The expressions..
To launch the Scrapy shell you can use the shell command like this:
scrapy shell <url>
Where the <url> is the URL you want to scrape.
The Scrapy shell is just a regular Python console (or IPython console if you have it available) which provides some additional shortcut functions for convenience.
- shelp() - print a help with the list of available objects and shortcuts
- fetch(request_or_url) - fetch a new response from the given request or URL.
The Scrapy shell automatically creates some convenient objects from the downloaded page, like the Response object and the XPathSelector objects (for both HTML and XML content).
Those objects are:
- spider - the Spider which is known to handle the URL, or a BaseSpider object if there is no spider found for the current URL
- request - a Request object of the last fetched page. You can modify this request using replace() or fetch a new request (without leaving the shell) using the fetch shortcut.
- response - a Response object containing the last fetched page
- hxs - a HtmlXPathSelector object constructed with the last response fetched
- xxs - a XmlXPathSelector object constructed with the last response fetched
- settings - the current Scrapy settings. | http://readthedocs.org/docs/scrapy/en/latest/topics/shell.html | 2012-05-24T05:18:30 | crawl-003 | crawl-003-008 | [] | readthedocs.org |
Appeon security features are set in Appeon Enterprise Manager (AEM), the Web application that manages the Appeon system and deployed Web or mobile applications. Appeon security is at the application level and is "either or": the user either has or does not have access to the application. By default, Appeon security is turned off for each deployed application.
When the security for a Web or mobile application is turned on, the Appeon Login dialog box pops up at the beginning of the application startup and prompts the user to enter the user name and password. The user name and password are verified by Appeon Server against the authentication schema that can be set in an LDAP server or in Appeon system database. If the user name or password is not correct, the user is not allowed to access the Appeon application.
For more information on using Appeon security features for Appeon Web or Appeon mobile applications, refer to the Server Security section.
If your PowerBuilder application has not coded user name/password verification at application startup that restricts access to the application, you can utilize Appeon's built-in user group management. When the application runs, the user is prompted to enter the Appeon user name and password in the Appeon Login dialog box.
The Appeon user name can be passed to the application so that it can be utilized to implement script coded security features for the application. You can use the of_getappeonusername function in the Appeon Workarounds PBL to get the Appeon user name. For detailed information, refer to the section called “AppeonExtFuncs object” in Workarounds & APIs Guide.
In Client/Server architecture, the database can easily keep track of every logged-in user if you enable the AUDITING option in the database.
Appeon Web applications and Appeon mobile applications run in a three-tier architecture. Each time the Client wants to connect with the database, the call reaches Appeon Server first. Appeon Server will validate the user ID and password of the call. If the validation passes, Appeon Server connects with the Database Server using a unified user ID and password. The user ID and password that the database keeps track of is not the user ID and password that makes the call at the Client.
If you are using a SAP ASE database, you can use the SSA data source property. This property changes the ID at the database to whatever user ID/Password is used by end users for accessing the server. If you are using an SAP database, you can set this property in your data source props file. This cannot be used if you are using a different database type.
The following information is taken from the EAServer Administrator Guide Appendix B - Data Source Properties; please refer to the EAServer documentation for more detailed instructions.
The data source property, com.sybase.jaguar.conncache.ssa, enables set-proxy support for connections to databases that support this feature. By default, the property is set to false, which disables set-proxy support.
This feature can be used with any database that recognizes this command:
set session authorization "login-name"
When proxy support is enabled, connections retrieved from the cache are set to act as a proxy for the user name associated with the EAServer client. To set the proxy to another user name, use the Java JCMCache.getProxyConnection() method or the C JagCmGetProxyConnection() routine in your component.
The user name specified in the cache properties (com.sybase.jaguar.conncache.username) must have set-proxy privileges in the database and/or server used by the cache.
In EAServer Manager, set this property using the All Properties tab in the Data Source Properties dialog box.
To work around the database auditing functionality, you can also re-configure the auditing information that is saved on the database by adding a new field to it: user ID.
With the Client/Server application, make sure that a combination of user ID and password cannot hold multiple connections with the database at one time.
Add in the necessary code in the Client Server application so that every time the user wants to connect with the database, the call sent to the Database Server includes user ID information. For example, when sending the user ID as a column in the DataWindow or to the Stored Procedure, the user ID information in the call from the client-side will be saved in the user ID field on the Database Server. | https://docs.appeon.com/2015/server_configuration_guide_for_j2ee/ch03s05s02.html | 2020-10-20T01:03:59 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.appeon.com |
The core N-body code used to run Caterpillar was a combination of
P-Gadget3 and
Gadget-4. The output of the simulation runs is the typical Gadget HDF5 output and should be compatible with all other downstream post-processing tools available in the community. These outputs are available at all 320 snapshots of the simulation from z = 127 to z = 0 with < 50 Myr resolution from z = 6 to z = 0.
For details, please see either our flagship paper or our Technical pages.
Owing to the extreme size of the runs, particle data (z > 0) is only available upon request.
We used a modified version of
ROCKSTAR included full iterative unbinding but the outputs are consistent with the nominal outputs from standard
ROCKSTAR catalogues. Please see the documentation at the Rockstar repository for details (See Section 4. Outputs). There are however a few important caveats relating to our outputs which differ from the nominal outputs.
e.g. M_grav vs. Mvir vs npart*m_p
These can be made available upon request.
The chosen merger tree system used is consistent-trees. Please see the
consistent-trees README for direct information pertaining to its running and output. | https://docs.caterpillarproject.org/data | 2020-10-19T23:39:45 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.caterpillarproject.org |
Authentication and Authorization for SignalR Persistent Connections (SignalR 1.x)
by Patrick Fletcher, Tom FitzMacken
Warning
This documentation isn't for the latest version of SignalR. Take a look at ASP.NET Core SignalR.
This topic describes how to enforce authorization on a persistent connection. For general information about integrating security into a SignalR application, see Introduction to Security.. | https://docs.microsoft.com/en-us/aspnet/signalr/overview/older-versions/persistent-connection-authorization | 2020-10-19T23:48:34 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.microsoft.com |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.