Terrain simulates a land mass in your game which can be occupied, traversed, or flown over by objects in your game world. Terrain is represented in a game level by a TerrainBlock.
In order to follow along with this article using World Builder, you must either create a new game project or open an existing project.
There are three methods to add a TerrainBlock to a level: (1) Create a
blank terrain; (2) Add an existing .ter file; and (3) Import a heightmap.
To create a new blank terrain start from the menu by selecting File>Create Blank Terrain.
After you click the menu entry, a Create New Terrain Dialog will appear.
The Name field allows you to specify a name for your TerrainBlock. This name will appear in the Scene Tree and can be used to reselect your terrain later for editing. Enter a name for the terrain in the text box, in this example theterrain.
The Material for the terrain, that is the texture that will be displayed to depict the ground cover, is selected using a drop-down list. This list is populated by the World builder with all the existing materials created specifically for terrains.
The Resolution that you select from that drop-down list determines the size of the terrain that will be created. The size of the terrain that you choose is largely dependent on the design of your game. You will have to experiment to find the right size that works for each game you create and some combinations of options are not very practical. For example, selecting a terrain size of 256 and using the Noise option will result in a terrain that is so drastically contoured that it will not be of much use.
The radio buttons to the right of the Resolution dropdown determine the smoothness of the terrain that is generated. Selecting Flat will create a relatively smooth terrain and selecting Noise will generate a bumpy terrain.
To create a flat terrain: from the main menu select File > Create Blank Terrain; enter a name; select a material; select a size such as 256; and select the Flat radio button, then click the Create New button. A flat TerrainBlock will be generated and automatically loaded into the scene.
A Flat terrain is a great place to start, but is more suitable for terrains that will remain relatively flat. Using a flat terrain requires you to create all the terrain details yourself using the Terrain Editor.
To create a bumpy terrain: from the main menu select File>Create Blank Terrain; enter a name; select a material; select a larger size such as 1024; and select the Noise radio button, then click the Create New button.
An extremely contoured, mountainous TerrainBlock will be generated and automatically loaded into your scene. The noise algorithm randomly generates the hills and valleys for you.
Starting with a contoured terrain is a decent way to start from scratch with a little randomness thrown in.
To add an existing terrain file to a level start by selecting the Object Editor tool. Locate your Library panel and click it. Click on the Level tab then select the Environment folder. Once that is open, locate the TerrainBlock entry, and double-click it.
The new terrain dialog will open.
The Object Name field allows you to specify a name for your TerrainBlock. This name will appear in the Scene Tree and can be used to reselect your terrain later for editing. Enter a name for the terrain in the text box, in this example theterrain.
The Terrain file box indicates the information file which holds the data describing the terrain to be loaded. Clicking the box loads the OS file browser.
Terrain files are named with a .ter extension. The .ter file type is a proprietary format that contains terrain data understood by Torque 3D. Locating a .ter file then clicking Open/OK will cause it to be selected as the Terrain file to be loaded.
The most recommended and effective method to add a TerrainBlock to a level is to import the terrain from external data files. However, this method requires the skill and the third-party tools to create those data files. Very high-quality and professional-looking terrain can be created with tools such as L3DT and GeoControl. These tools allow you to generate extremely detailed heightmaps that Torque 3D can import to generate terrain data.
There are several types of asset required to import and use a terrain in Torque 3D using this method: (1) a heightmap, (2) an opacity map and layers, and (3) texture files.
The primary asset required is a heightmap. A heightmap is a standard image file which is used to store elevation data rather than a visible picture. This elevation data is then rendered in 3D by the Torque engine to depict the terrain. The heightmap itself needs to be in a 16-bit greyscale image format, the size of which is a power of two, and must be square. The lighter an area of a heightmap is, the higher the elevation will be in that terrain location.
Example Heightmap
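If you generate heightmaps programmatically instead of in a tool such as L3DT, a short Python sketch along these lines can produce an image that meets the requirements above. NumPy and Pillow are assumed to be installed, and 16-bit PNG handling varies a little between Pillow versions, so verify the output in an image editor before importing it:
import numpy as np
from PIL import Image

size = 512  # must be square and a power of two
y, x = np.mgrid[0:size, 0:size]

# Simple radial hill: brighter (higher) in the middle, darker (lower) at the edges
center = size / 2.0
distance = np.sqrt((x - center) ** 2 + (y - center) ** 2)
elevation = np.clip(1.0 - distance / center, 0.0, 1.0)

# Scale to the full 16-bit range; lighter areas become higher terrain
data = (elevation * 65535).astype(np.uint16)

# Depending on your Pillow version you may need a different 16-bit save path
Image.fromarray(data).save("heightmap.png")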
An opacity map acts as a mask, which is designed to assign opacity layers. Opacity layers need to match the dimensions of the heightmap. For example, a 512x512 heightmap can only use a 512x512 opacity map.
If the opacity map is an RGBA image, four opacity layers will be used for the detailing (one for each channel). If you use an 8-bit greyscale image, only a single channel will be used. You can then assign materials to the layers. This allows us to have up to 255 layers with a single ID texture map, saving memory which we can apply to more painting resolution.
Notice that the following example Opacity Map resembles the original heightmap.
Example Opacity Map
Texture files "paint" the terrain giving it the appearance of real ground materials. When creating a terrain from scratch textures can be manually applied to it using the Terrain Painter, which is built into the World Editor, but that is a time and effort intensive method. Instead of hand painting them, the opacity layer will automatically assign textures to the terrain based upon what channel they are loaded into.
For each type of terrain to be rendered you will want to have three textures: (1) a base texture, also referred to as a diffuse texture, (2) a normal map, and (3) a detail mask.
Diffuse
Normal
Detail
The base represents the color and flat detail of the texture. The normal map is used to render the bumpiness or depth of the texture, even though the image itself is physically flat. Finally, the detail map provides up-close detail, but it absorbs most of the colors of the base map.
To import a heightmap for terrain start the World Editor, then from the menu select File > Import Terrain Heightmap:
The Import Terrain heightmap dialog will appear.
Name: If you specify the name of an existing TerrainBlock in the dialog it will update that existing TerrainBlock and its associated .ter file. Otherwise, a new TerrainBlock will be created.
Meters Per Pixel: The TerrainBlock SquareSize (meters per pixel of the heightmap), which is a floating point value. It does not require power of two values.
Height Scale: The height in meters that you want pure white areas of the heightmap to
represent.
Height Map Image: File path and name of a .png or .bmp file which is the heightmap itself. Remember, this needs to be a 16-bit greyscale image, the size of which is a power of two, and it must be square.
Texture Map: This list specifies the opacity layers, which need to match the dimensions of the heightmap image. If you add an RGBA image it will add 4 opacity layers to the list, one for each channel. If you add an 8-bit greyscale image, it will be added as a single channel. You can then assign materials to the layers. If you do not add any layers
or do not add materials to the layers, the terrain will be created with just the Warning Material texture.
Click the browse button to the right of the Height Map Image box to open a file browser dialog. Navigate to where your terrain files are located, select the desired heightmap PNG file, then click Open. The selected heightmap file will be entered in the Height Map Image box.
Click on the + button next to Texture Map to open
another file browser. This is where you add opacity
layers. Start by locating the masks. If you have the right assets, it should resemble something like this:
Do not worry if you do not see the detail. The mask is
supposed to be solid white. Repeat the process until you have imported all your opacity layers.
Now that our opacity layers have been added, you should assign a material to each one. You can do so by clicking on one of the layers, then clicking
the edit button in the bottom right. You will now see the Terrain
Materials Editor.
Click the New button, found at the top next to the garbage bin, to add a new material. Type in a name then click the Edit button next to the Diffuse preview box. Again, a file browser will pop up allowing you to open the base texture file for the material. Alternatively, you can click the preview box itself, which is a checkerboard image until you add a texture.
Once you have added the base texture, the preview box will update to show you what you opened. Set the Diffuse size which controls the physical size in meters of the base texture.
Click on the Edit button next to the Detail Preview box. Using the file browser, load the detail map.
Next, click on the Edit button next to the Normal Preview box. Use the file browser to open the normal map.
Your final material properties should look like the following:
Repeat this process until each opacity layer has a material assigned to it. Back in the Import Terrain Height Map dialog, click on the import button. It will take a few moments for Torque 3D to generate the terrain data from our various assets. When the import process is complete, the new TerrainBlock will be added to your scene (you might need to move your camera back to see it).
If you zoom in close to where materials overlap, you can notice the high quality detail and smooth blending that occurs.
A TerrainBlock has properties which can be set like any other object using the Object Editor. Clicking a TerrainBlock in the scene or selecting it from the Scene Tree will update the Inspector pane with information about it. TerrainBlocks have their own unique set of properties. Hover over a section of the image below for a description of the properties in that section:
This article showed you the three methods available to add a TerrainBlock to your level: creating a blank terrain, adding an existing .ter file, and importing a heightmap. Regardless of which method you use to add the TerrainBlock to a level, you can continue to adjust it using the Terrain Editor and Terrain Painter tools.
Additionally, the system allows you to add multiple TerrainBlocks to the same level. This can provide you with a number of opportunities to create massive levels while retaining rendering quality and details. If you wish to learn more about terrains, you can read the Building Terrains Tutorial, which contains
more information on importing a terrain.
Introduction¶
Birdhouse is a collection of Web Processing Service (WPS) related Python components to support data processing in the climate science community. The aim of birdhouse is to make the usage of WPS easy. Most of the OGC/WPS related software comes from the GeoPython project.
Birdhouse is the home of Web Processing Services used in climate science and components to support them (the birds).
WPS client side:
WPS supporting services and libraries:
- Twitcher: a simple OWS Security Proxy
- Malleefowl: access to climate data (ESGF, ...) as a service and library
WPS services and libraries with algorithms used in climate science analysis:
- Flyingpigeon: services for the climate impact community
- Hummingbird: provides cdo and cfchecker as a service
- Dodrio: WPS for KIT
- Emu: some example WPS processes
You can find the source code of all birdhouse components on our GitHub page. Conda packages for birdhouse are available on the birdhouse channel on Binstar. Docker images with birdhouse components are on Docker Hub
Getting started¶
- Overview
- Installation
- Tutorials
- Administrator Guide
- Developer Guide
- Contributing
- Frequently Asked Questions
- Glossary
- Release Notes
- Roadmap
- Useful Links
Presentations & Blog Posts¶
- UNCCC Subgroup 2017 at Kigali
- AGU 2016 at San Francisco
- ESGF F2F 2016 at Washington
- FOSS4G 2016 at Bonn
- EGU 2016 at Vienna
- ICRC-CORDEX 2016
- Model Animation LSCE
- Talk on USGS WebEx 2016/02/18
- Paris Coding Spring 2015 at IPSL
- EGI Community Forum 2014 at Helsinki
- Prag
- CSC 2.0 Hamburg
- Vienna
- LSDMA
License Agreement¶
Birdhouse components are distributed under the Apache License, Version 2.0.
Warning! This page documents an old version of InfluxDB, which is no longer actively developed. InfluxDB v1.2 is the most recent stable version of InfluxDB.
InfluxDB Shell (CLI)
The Influx shell is an interactive shell for InfluxDB, and is part of all InfluxDB distributions starting with InfluxDB 0.9.0.
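A minimal interactive session looks something like the following sketch; the database name mydb and the measurement name cpu are placeholders and assume data has already been written:
$ influx
> SHOW DATABASES
> USE mydb
> SELECT * FROM cpu LIMIT 5
> exit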
Telegraf
Telegraf is an open source tool for metrics collection (e.g. CollectD) built and maintained by the InfluxDB team.
Turn on or turn off battery saving mode
When you turn on battery saving mode, your BlackBerry smartphone automatically changes options to conserve battery power. When you turn off battery saving mode or the smartphone is charged, your normal smartphone settings are restored.
- On the home screen or in a folder, click the Options icon.
- Click Device > Battery Saving Mode.
- Select or clear the Turn on Battery Saving Mode when battery power level is low checkbox.
- To change the power level that turns on battery saving mode, change the Start when battery power is at or drops below field.
- Press the Menu key > Save.
BlackBerry 10 Pop Quiz
How much do you know about your BlackBerry 10 device? Take this short quiz to test your skills. Prove that you know your stuff and pick up a few tricks along the way. Plus, you can check out additional resources after you finish the quiz.
Note: This quiz is available in English only.
ArcLink is a distributed data request protocol usable to access archived waveform data in the MiniSEED or SEED format and associated meta information as Dataless SEED files. It has been originally founded within the German WebDC initiative of GEOFON (Geoforschungsnetz) and BGR (Bundesanstalt für Geowissenschaften und Rohstoffe). ArcLink has been designed as a “straight consequent continuation” of the NetDC concept originally developed by the IRIS DMC. Instead of requiring waveform data via E-mail or FTP requests, ArcLink offers a direct TCP/IP communication approach. A prototypic web-based request tool is available via the WebDC homepage at.
Recent development efforts within the NERIES (Network of Excellence of Research and Infrastructures for European Seismology) project focus on extending the ArcLink network to all major seismic data centers within Europe in order to create a European Integrated Data Center (EIDAC). Currently (September 2009) there are four European data centers contributing to this network: ORFEUS, GFZ (GeoForschungsZentrum), INGV (Istituto Nazionale di Geofisica e Vulcanologia), and IPGP (Institut de Physique du Globe de Paris).
Note
The default client needs to open port 18002 to the host webdc.eu via TCP/IP in order to download the requested data. Please make sure that no firewall is blocking access to this server/port combination.
Note
The user keyword in the following examples is used for identification with the ArcLink server as well as for usage statistics within the data center, so please provide a meaningful user id such as an email address.
getWaveform(): The following example illustrates how to request and plot 18 seconds of all three single band channels ("EH*") of station Jochberg/Hochstaufen ("RJOB") of the Bavarian network ("BW") for an seismic event around 2009-08-20 04:03:12 (UTC).
>>> from obspy import UTCDateTime
>>> from obspy.arclink.client import Client
>>> client = Client(user='[email protected]')
>>> t = UTCDateTime("2009-08-20 04:03:12")
>>> st = client.getWaveform("BW", "RJOB", "", "EH*", t - 3, t + 15)
>>> st.plot()
Waveform data fetched from an ArcLink node is converted into an ObsPy Stream object. The seismogram is truncated by the ObsPy client to the actual requested time span, as ArcLink internally cuts SEED files for performance reasons on record base in order to avoid uncompressing the waveform data. The output of the script above is shown in the next picture.
getPAZ(): Requests poles, zeros, gain and sensitivity of a single channel at a given time.
>>> from obspy import UTCDateTime
>>> from obspy.arclink.client import Client
>>> client = Client(user='[email protected]')
>>> dt = UTCDateTime(2009, 1, 1)
>>> paz = client.getPAZ('BW', 'MANZ', '', 'EHZ', dt)
>>> paz
AttribDict({'poles': [(-0.037004+0.037016j), (-0.037004-0.037016j), (-251.33+0j), (-131.04-467.29j), (-131.04+467.29j)], 'sensitivity': 2516778600.0, 'zeros': [0j, 0j], 'name': 'LMU:STS-2/N/g=1500', 'gain': 60077000.0})
saveResponse(): Writes a response information into a file.
>>> from obspy import UTCDateTime
>>> from obspy.arclink.client import Client
>>> client = Client(user='[email protected]')
>>> t = UTCDateTime(2009, 1, 1)
>>> client.saveResponse('BW.MANZ..EHZ.dataless', 'BW', 'MANZ', '', '*',
...                     t, t + 1, format="SEED")
saveWaveform(): Writes the requested waveform unmodified into your local file system. Here we request a Full SEED volume.
>>> from obspy import UTCDateTime
>>> from obspy.arclink.client import Client
>>> client = Client(user='[email protected]')
>>> t = UTCDateTime(2009, 1, 1, 12, 0)
>>> client.saveWaveform('BW.MANZ..EHZ.seed', 'BW', 'MANZ', '', '*',
...                     t, t + 20, format='FSEED')
getInventory(): Request inventory data.
>>> from obspy import UTCDateTime
>>> from obspy.arclink.client import Client
>>> client = Client(user='[email protected]')
>>> inv = client.getInventory('BW', 'M*', '*', 'EHZ', restricted=False,
...                           permanent=True, min_longitude=12,
...                           max_longitude=12.2)
>>> inv.keys()
['BW.MROB', 'BW.MANZ..EHZ', 'BW', 'BW.MANZ', 'BW.MROB..EHZ']
>>> inv['BW']
AttribDict({'description': 'BayernNetz', 'region': 'Germany', ...
>>> inv['BW.MROB']
AttribDict({'code': 'MROB', 'description': 'Rosenbuehl, Bavaria', ...
Authentication (internal)¶
This documents how to use authentication in your API requests when you are working on a web application that lives on AMO domain or subdomain. If you are looking for how to authenticate with the API from an external client, using your API keys, read the documentation for external authentication instead.
When using this authentication mechanism, the server is responsible for
creating an API Token when the user logs in, and sends it back in
the response. The clients must then include that token as an
Authorization
header on requests that need authentication. The clients never generate JWTs
themselves.
Fetching the token¶
A fresh token, valid for 30 days, is automatically generated and added to the responses of the following endpoint:
-
/api/v3/accounts/authenticate/
The token is available in two forms:
- For the endpoint mentioned above, as a property called
token.
- For all endpoints, as a cookie called
frontend_auth_token. This cookie expires after 30 days and is set as
HttpOnly.
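As an illustrative sketch only: once the login flow has completed and a token has been obtained (from the token property of the authenticate response or from the frontend_auth_token cookie), it can be attached to later requests roughly like this. The profile endpoint and the Bearer prefix are assumptions, so confirm the exact endpoint and Authorization scheme for your addons-server version.
import requests

API_ROOT = "https://addons.example.com/api/v3"

# Token previously returned by /accounts/authenticate/ or read from the
# "frontend_auth_token" cookie (placeholders to adapt to your setup).
token = "<your API token>"

response = requests.get(
    API_ROOT + "/accounts/profile/",
    headers={"Authorization": "Bearer " + token},
)
print(response.status_code, response.json())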
Permissions
Revoke space administrators access to ScriptRunner built-in script features in the space they administer.
Browse Permissions Functionality
After you select Create Permissions, you can use the Search ScriptRunner Functionality search bar to search the available permissions.
For example, if you’re looking for a permission that works with scripts you could type "Scripts" and press Enter. Then, the list of permissions is narrowed down to only those containing the word "scripts" in their title or description. | https://docs.adaptavist.com/sr4c/latest/get-started/permissions | 2021-02-25T01:29:47 | CC-MAIN-2021-10 | 1614178350706.6 | [] | docs.adaptavist.com |
In order to use the http tunnel feature, you will first need to download it's source code. The following command takes care of that.
stew tunnel:install
Steward allows you to open a tunnel to your local http server. You will get a random public url, which will be accessible from outside your local network. This way you can temporarily have a client or co-worker visit your local site without having to upload or deploy any code.
Make sure to download the required source code using
stew tunnel:install
stew http:expose example.test
You can pass the
--jail-redirects option to have steward jail all redirects within the chosen host. This can come in very handy when your application keeps sending 301 responses containing the
location header to its original hostname.
To specify a specific local port, e.g. 8080, you can add the option like
--port 8080. The
--handle option allows you to use any specific subdomain you would like, as long as it is available at that moment.
stew http:expose example.test --handle example --port 8080 --jail-redirects
The command above will start listening for requests to the generated public url and forward these requests to your local webserver with host header "example.test", on port 8080.
Use
--tmux to open the webserver in a tmux session and/or add
--daemonize to send the process to the background.
To remove the http tunnel source code, use the following command.
stew tunnel:uninstall
You can create a self-signed certificate for a specific domain and add it to your keychain's trusted SSL certificates using the following command.
stew http:secure example.test
You can also remove or revoke a local site's ssl certificates by using the following command.
stew http:unsecure example.test
You can quickly start php's built-in webserver using the following command. Use
--document-root= and
--port= to specify another document root (default current working directory) or port (default 8080). Use the
--expose option to immediately open a tunnel as well and get a public url.
You can use
--tmux to open the webserver in a tmux session and/or add
--daemonize to send the process to the background.
stew http:serve
Steward uses tmux to manage background processes and allow easy access to their output. The same goes for the built-in webserver. You can kill the background process for the built-in web server as follows.
tmux kill-session -t serve
The background process for the http tunnel can be killed as follows.
tmux kill-session -t expose
Steward allows you to launch and expose a webserver publicly, probably faster than you can imagine. Try the following steps and see for yourself.
# Clone any (dummy) repo, below is an example.
git clone https://github.com/balay-sricharan/dummysite.git /tmp/dummysite
# Serve and expose the site in /tmp/dummysite
stew http:serve -d /tmp/dummysite --expose
Although each programmer has his preferred IDE/text editor, here are some recommendations for setting up popular IDEs for writing and debugging QGIS Python plugins.
On Linux there is no additional configuration needed to develop plug-ins. But on Windows you need to make sure that you have the same environment settings and use the same libraries and interpreter as QGIS. The fastest way to do this is to modify the startup batch file of QGIS.
If you used the OSGeo4W installer, you can find it in the bin folder of your OSGeo4W installation. Look for something like C:\OSGeo4W\bin\qgis-unstable.bat.
For using the PyScripter IDE, here's what you have to do:
Make a copy of the qgis-unstable.bat file and rename it pyscripter.bat.
Open it in an editor and remove the last line, the one that starts QGIS.
Add a line that points to your PyScripter executable. Apart from PyScripter, Eclipse is a common choice among developers. In the following sections, we will be explaining how to configure it for developing and testing plugins. To prepare your environment for using Eclipse on Windows, you should also create a batch file and use it to start Eclipse.
To create this batch file, follow these steps.
call "C:\OSGeo4W\bin\o4w_env.bat"
set PATH=%PATH%;C:\path\to\your\qgis_core.dll\parent\folder
C:\path\to\your\eclipse.exe
Afin d’utiliser Eclipse, assurez-vous d’avoir installé
l’extension Aptana d’Eclipse ou PyDev
There is some preparation to be done on QGIS itself. Two plugins are of interest: Remote Debug and Plugin reloader.
In Eclipse, create a new project. You can select General Project and link your real sources later on, so it does not really matter where you place this project.
Eclipse project
Now right click your new project and choose New => Folder to link your real source folders. Then switch to the Debug perspective in Eclipse (Window => Open Perspective => Other => Debug).
Now start the PyDev debug server by choosing PyDev => Start Debug Server.
PyDev Debug Console
You now have an interactive console which lets you test any commands from within the current context. You can manipulate variables or make API calls or whatever you like.
A little bit annoying is that this part has to be configured by hand: to make Eclipse aware of the QGIS API, go to Window => Preferences => PyDev => Interpreter - Python.
You will see your configured python interpreter in the upper part of the window (at the moment python2.7 for QGIS) and some tabs in the lower part. The interesting tabs for us are Libraries and Forced Builtins.
PyDev Debug Console
In the Libraries tab, add the python folder of your QGIS installation; to be able to debug third party plugins as well, also add your local ~/.qgis python plugins folder.
Click OK and you are done.
Note: every time the QGIS API changes (e.g. if you're compiling QGIS master and the sip file changed), you should go back to this page and simply click Apply. This will let Eclipse parse all the libraries again.
For another Eclipse configuration for working with QGIS Python plugins, see this link
If you do not use an IDE such as Eclipse, you can debug using PDB, following these steps.
First add this code in the spot where you would like to debug:
# Use pdb for debugging
import pdb
from PyQt4.QtCore import pyqtRemoveInputHook
# These lines allow you to set a breakpoint in the app
pyqtRemoveInputHook()
pdb.set_trace()
Then run QGIS from the command line.
On Linux, do:
$ ./Qgis
On Mac OS X, do:
$ /Applications/Qgis.app/Contents/MacOS/Qgis
And when the application hits your breakpoint you can type in the console!
Set host values based on event data
You can configure the Splunk platform to assign host names to your events based on the data in those events. You can use event data to override the default host assignment; if you use Splunk Cloud, you must perform the override on a heavy forwarder, then forward that data onward to your Splunk Cloud instance. This is because you cannot edit configuration files on a Splunk Cloud instance directly. On Splunk Enterprise, you can edit configuration files, either on an indexer or a heavy forwarder. You cannot use a universal forwarder in any case, because universal forwarders cannot transform data except in certain limited cases.
For a primer on regular expression syntax and usage, see Regular-Expressions.info. The Splunk community wiki also has a list of useful third-party tools for writing and testing regular expressions. You can test regular expressions by using them in searches with the rex search command.
Use configuration files to override the host name default field in events
The Splunk platform tags event data with default fields during ingestion. Creating host name overrides for events that the Splunk platform indexes involves editing two configuration files on the Splunk platform instance that collects the data, based on some of those default fields.
The first file, transforms.conf, configures the host name override by using a regular expression to determine when the instance should overwrite, or transform, the host name default field. You supply the regular expression by determining what exactly in your event data is to trigger the transformation, and then providing that regular expression to the transforms.conf file. This appears as a stanza within the file, and the Splunk platform triggers the override when incoming event data matches the regular expression that you specify.
The second file, props.conf, determines the default fields to which the host name override can apply. This also appears as a stanza within the file and references the transform, which must run before the Splunk platform indexes the incoming data.
The general procedure for creating a host name override follows:
- Review your event data to determine a string that represents when you want the Splunk platform to perform the host name override. This string becomes the regular expression you supply later in the procedure. See the example later in this topic.
- Review "Configure a transforms.conf stanza with a host name override transform" and "Configure a props.conf stanza to reference a host name override transform" later in this topic to understand how stanza syntax for host name overrides works.
- On a heavy forwarder where you want to do the host name overrides, open a text editor.
- With that editor, open the
$SPLUNK_HOME/etc/system/local/transforms.conf file for editing.
- Add a stanza to this file that represents when the Splunk platform is to do the host name override.
- Save the transforms.conf file and close it.
- Open the
$SPLUNK_HOME/etc/system/local/props.conf file, add a stanza that references the transform, then save the file and restart the instance so that the changes take effect.
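The transforms.conf stanza you add takes the following general shape; the placeholder values are not literal and are inferred from the worked example later in this topic:
[<unique_stanza_name>]
DEST_KEY = MetaData:Host
REGEX = <your_regular_expression>
FORMAT = host::$1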
In this stanza:
<unique_stanza_name> can be anything, and is what you will use to refer to the transform from the props.conf configuration file. Best practice is to use a name that describes what the transform does.
The props.conf stanza that references the transform identifies the source, source type, or host of the events to transform, and uses a TRANSFORMS-<class_name> setting to point at the transforms.conf stanza by its unique name. It takes the following general shape:
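(The spec and class names below are placeholders; the worked example that follows uses a source-based spec.)
[<spec>]
TRANSFORMS-<class_name> = <unique_stanza_name>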
For example, consider the following events from a log file. You want the Splunk platform to set the host default field for each event to the host name found within the event. The host is in the third position of each line in the log file, for example, "fflanda".
41602046:53 accepted fflanda
41602050:29 accepted rhallen
41602052:17 accepted fflanda
First, create a new stanza in the
transforms.conf configuration file and provide a regular expression that extracts the host value:
[houseness]
DEST_KEY = MetaData:Host
REGEX = \s(\w*)$
FORMAT = host::$1
Next, reference the
transforms.conf stanza in a stanza in the
props.conf configuration file. For example:
[source::.../houseness.log]
TRANSFORMS-rhallen=houseness
SHOULD_LINEMERGE = false
This example stanza has the additional setting/value pair
SHOULD_LINEMERGE = false, to break events at each newline. This is not a requirement, but is a best practice.
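If you want to sanity-check the extraction regular expression outside of the Splunk platform before restarting anything, a short Python snippet works well. This is illustrative only; minor differences between Python's regex engine and the PCRE engine used by the Splunk platform are possible.
import re

# The same regular expression used in the transforms.conf stanza above
host_pattern = re.compile(r"\s(\w*)$")

events = [
    "41602046:53 accepted fflanda",
    "41602050:29 accepted rhallen",
    "41602052:17 accepted fflanda",
]

for event in events:
    match = host_pattern.search(event)
    if match:
        # Group 1 is the value that FORMAT = host::$1 writes into the host field
        print(match.group(1))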
The events then appear in search results with the overridden host values.
Symptom
If a recovery kit is uninstalled while there are resource hierarchies of that kit in service, the Remove hangs.
To avoid this situation, it is recommended to always take a recovery kit’s resource hierarchies Out of Service and delete them before uninstalling the recovery kit software.
Solution
If you encounter this situation, you will most likely need to re-boot your system since there are many related processes that hang, and clearing them all can be difficult.
Provides attached properties for items pushed onto a StackView. More...
The Stack type provides attached properties for items pushed onto a StackView. It gives specific information about the item, such as its status and index in the stack view the item is in.
[read-only] Stack.index : int
This property holds the index of the item inside StackView, so that StackView.get(index) will return the item itself. If view is
null, index will be
-1.
[read-only] Stack.status : enumeration
[read-only] Stack.view : StackView
This property holds the StackView the item is in. If the item is not inside a StackView, view will be
null.
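As an illustration only — the imports and property access below assume Qt Quick Controls 1.x and may need adjusting for your Qt version — the attached properties are typically read from an item that has been pushed onto a StackView like this:
import QtQuick 2.4
import QtQuick.Controls 1.4

StackView {
    id: stackView
    anchors.fill: parent
    initialItem: Rectangle {
        id: page
        Text {
            anchors.centerIn: parent
            // Stack.index, Stack.status and Stack.view attach to the pushed item ("page")
            text: "index: " + page.Stack.index +
                  (page.Stack.status === Stack.Active ? " (active)" : "")
        }
    }
}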
© The Qt Company Ltd
Licensed under the GNU Free Documentation License, Version 1.3.
SagePay has rebranded to Opayo, you can read more about the change HERE
Opayo Reporting is now available in WooCommerce.
Read about the benefits of Opayo Reporting here
Our Opayo Form/Direct plugin is two payment gateways in one, allowing you to use one or both.
Option 1: Opayo Form
- SSL certificate not required
Option 2: Opayo Direct
- Opayo Fraud Scoring information available in WooCommerce (see here for more information on Opayo Reporting)
Settings for each gateway are entered on the SagePay Form or SagePay Direct Settings page(s).
Configure the Settings – SagePay Form
Configure the settings page to suit your business. At a minimum, you must:
- Tick the Enable SagePay Form box
- Enter your Vendor Name (supplied by SagePay)
- Enter your Encryption Password (supplied by SagePay)
- Save.
Configure the Settings – Opayo Direct
Configure the settings page to suit your business. At a minimum, you must:
- Tick the Enable Opayo Direct box
- Enter your Vendor Name (supplied by Opayo)
- Save.
How to setup PayPal for Opayo Direct
Creating a PayPal test account
Enabling Sage Pay on your PayPal test account
Linking PayPal to your Live account
Once Sage have enabled PayPal on your account you will need to add PayPal as a card type in your WooCommerce Opayo settings.
Now your customers will see the PayPal option in the card type dropdown
IMPORTANT : PayPal will not show if there is a subscription product in the cart.
Opayo Reporting
There is separate documentation for Opayo Reporting HERE
Testing
Place several test transactions to confirm that everything is working correctly. Once you have completed testing, contact Opayo about making your account live. Opayo will notify you when ready, and then you set the status to Live.
Opayo has a list of test cards you can use to carry out test transactions at: Test Card Details for Test Transactions.
Frequently Asked Questions
I'm getting a message of: MALFORMED 3045 : The Currency field is missing.
This is because you are using the wrong password in the Encryption Password field. Opayo sends you at least two passwords, one for your account and one encryption password. You need to use the second one.
I’m seeing a 5080 error when I get to Sage. ↑ Back to top
Normally this is a password issue, make sure you have the encryption passwords set correctly – the live and testing passwords should be different. If it’s not a password issue then check the PHP error logs.
My customers are seeing "Sage Request Failure Check the WooCommerce Opayo Settings for error messages" after paying with SagePay Form
This is usually due to a server extension called SUHOSIN. You will need to edit php.ini on your server and raise the SUHOSIN request length limits so that the long SagePay crypt value is not truncated.
My customers are returned to a blank screen after paying with SagePay Form
Are you using iThemes Security? Make sure to uncheck the “Long URL Strings” option.
Do I need to use Opayo Form and Opayo Direct?
No, you can use whichever method(s) you set up with SagePay.
Why do transactions that fail 3D Secure still show as approved?
Log into MySagePay () and check your 3D Secure rules. For example:
4020 : Information received from an Invalid IP address
You must add the IP address of your hosting to MySagePay. If you don’t know the IP address, you can obtain it from here
Surcharges and Opayo Form
The surcharge settings have been removed from the Opayo Form settings. To bring them back you will need to add the following function to your custom functions:
There are two filters available to allow for conditional application of the surcharges and conditional modification of the surcharges.
To set when the surcharges should be applied use :
apply_filters( 'woocommerce_sagepayform_apply_surcharges', true, $order, $sage_pay_args_array );
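For example, a snippet along these lines in your theme's functions.php could make surcharges conditional. This is illustrative only — the threshold and logic are placeholders for your own rules:
add_filter( 'woocommerce_sagepayform_apply_surcharges', 'my_sagepay_apply_surcharges', 10, 3 );
function my_sagepay_apply_surcharges( $apply, $order, $sage_pay_args_array ) {
    // Only apply surcharges to orders over 100 in the shop currency
    return $order->get_total() > 100;
}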
To modify the surcharge XML use :
apply_filters( 'woocommerce_sagepayform_modify_surcharges', $surchargexml, $order, $sage_pay_args_array, $cardtypes );
Card Type Drop Down
With Version 3.2.1 the Opayo Direct checkout form was changed to include a drop down for card type. Opayo requires that the card type is included in the transaction information. Previously this was done by checking the first 6 digits of the card number using a 3rd party service BIN List () Unfortunately this service has proved to be occasionally unreliable and so has been replaced by the drop down.
Tokens
As of version 3.3.0 tokens are supported with Opayo Direct. Your site will need to be running WooCommerce 2.6.0 or higher.
Tokens must be enabled on your Opayo account before your site will be able to use them.
The card details are not stored on your site, only the token from Opayo, the last four digits of the card number and the expiry date. You will not be able to store the CV2 number so this is not used during transactions that use a token, it will be checked when the token is created.
3D Secure will only be checked when the token is created, not for subsequent transactions using the token.
Tokens can also be used for Subscription payments making it easier for your customers to change their card details on your site.
Fraud Screening in Opayo Direct
Opayo provide some fraud screening during the payment process. If they flag a transaction then the order status will be changed during the checkout process to alert you. You will need to login to MySagePay to confirm that you are prepared to ship the order or that you need to cancel it. Once you have reviewed the reasons for the fraud notification you can go back to WooCommerce and update the order as necessary.
You can read about the way transactions are scored by Opayo here
"Checks" column
This section displays the status of checks done by Sage; previously this information was only included in the order notes. You will see a set of icons which allow you to quickly check that the address, postcode, CV2 and 3D Secure information were all provided correctly. Green indicates correct, yellow indicates not checked and red indicates the information provided by the customer was incorrect. It is up to you to decide how to proceed if the icons are not green. Please note, renewal orders for subscription payments may not be all green as the checks are not re-done.
Note: This information may not be available or may be incomplete for orders placed before version 3.4.0 was installed. It has always been included in the transaction information in the order notes.
3D Secure 2 setup and testing
Setup
Opayo has enabled 3D Secure 2 on their live servers.
- In the WooCommerce Opayo Direct settings, make sure you have the VPS Protocol option set to “4.00”
- Make sure you have set up 3D Secure rules in the LIVE and TEST MySagePay. You can read more about setting up the rules on the SagePay website
Testing
To place test orders using 3D Secure 2.0 you will need to be in "testing" mode.
Then you can choose the “Magic Value” in the drop down
Each value in the drop down will give a different result for a test transaction.
Test Cards
You will always receive an OK response and an Authorisation Code from the test server if you are using one of the test cards listed below. All other valid card numbers will be declined, allowing you to test your failure pages.
If you do not use the Address, Postcode and Security Code listed below, the transaction will still authorise, but you will receive NOTMATCHED messages in the AVS/CV2 checks, allowing you to test your rulebases and fraud specific code.
There are different cards for Visa and MasterCard to simulate the possible 3D-Secure responses.
Billing Address 1: 88 The Street
Billing Post Code: ST41 2PQ
Security Code: 123
Valid From: Any date in the past
Expiry Date: Any date in the future
Feedback and feature requests
For feedback and feature requests for the SagePay Form/Direct plugin, please contact support.
Global Secondary Indexes for N1QL
Global Secondary Index (GSI) supports a variety of OLTP-like use cases for N1QL including basic, ad-hoc, and short-running reporting queries that require filtering. For example, if you have a WHERE clause in a N1QL statement that selects a subset of your data on which the secondary index is defined, you can see a significant speedup in the N1QL query performance by using GSI.
Global secondary indexes are deployed independently into a separate index service within the Couchbase Server cluster, away from the data service that processes key based operations. GSIs provide the following advantages:
Predictable Performance: Core key based operations maintain predictable low latency even in the presence of a large number of indexes. Index maintenance does not compete with key based operations even under heavy mutations to data.
Low Latency Queries: GSIs independently partition into the index service nodes and don’t have to follow hash partitioning of data into vBuckets. Queries using GSIs can achieve low latency response times even when the cluster scales out because GSIs don’t require a wide fan-out to all data service nodes.
Advanced Scaling Model: GSI can be placed onto independent set of nodes. Administrators can add new indexes and evolve the application performance without stealing cycles from the incoming workload.
Creating Global Secondary Indexes
You can define a primary or secondary index using GSIs in N1QL using the CREATE INDEX statement and the USING GSI clause. For more information on the syntax and examples, see CREATE INDEX statement.
Placement of Global Secondary Indexes
GSIs reside on the index nodes in the cluster. Each index service node can host multiple indexes. Every index has an index key(s) (used for lookup). When the index type is not primary index, index can have an index filter with the WHERE clause.
CREATE INDEX index_name ON keyspace_name ( index_key1, ..., index_keyN )
   WHERE index_filter
   WITH { "nodes": [ "node1:8091" ] }
   USING GSI | VIEW;
Based on the index filter, the index can be partitioned across multiple index service nodes and placed on a given node using the WITH clause and "nodes" argument in CREATE INDEX statement.
Administrators can place each index partition in a separate node to distribute index maintenance and index scan load. Index metadata stored on the index node knows about the distribution of the index. GSI does not use scatter-gather. Instead, based on the index metadata, it only touches the nodes that have the relevant data.
GSI comes with two storage settings:
Memory optimized global secondary indexes
Standard global secondary indexes. Standard GSIs come with two write modes:
Append-only write mode
Circular write mode
The default storage setting for GSI is standard GSI. The memory optimized GSI setting can be selected at the time of the initial cluster setup. Write mode can be selected when the storage setting for GSI is standard. You can change the write mode at any time while the cluster is running; however, the setting change requires a restart of the index service, which can cause a short period of unavailability.
You can also create multiple copies of the same index to allow scans to be routed to multiple nodes, by placing the copies of the same index in separate index service nodes.
Some of your indexes may become hot as they are utilized more often by N1QL queries required for your application logic. You can create multiple copies of the same index across multiple index service nodes under different index names. N1QL automatically load balances across multiple copies of the same index both for load balancing and availability of queries.
To load-balance GSIs, you must manually specify the nodes on which indexes should be built.
For example, create two indexes, productName_index1 and productName_index2, using the following commands:
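The exact statements depend on your data model; a representative pair — with default as a placeholder bucket name and productName as a placeholder index key — looks like this:
CREATE INDEX productName_index1 ON default(productName) USING GSI WITH {"nodes": ["node1:8091"]};
CREATE INDEX productName_index2 ON default(productName) USING GSI WITH {"nodes": ["node2:8091"]};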
The indexing load will be distributed equally between the indexes productName_index1 and productName_index2.
- Partitioning with GSI
Some of your indexes may no longer fit a single node as the number of mutations, index size, and item count increases. You can partition a single index across multiple index service nodes using a WHERE clause with the CREATE INDEX statement.
- Index Scan Consistency
Queries can request different levels of index scan consistency. With the at_plus setting, the scan logical timestamp of the update is retrieved with the mutation ACK response and is passed to the query request. This behavior achieves consistency at least at, or later than, the moment of the logical timestamp. If the index maintenance is running behind the logical timestamp, the query waits for the index to catch up to the last update's logical timestamp. The at_plus scan consistency flag automatically degrades to the same characteristics as stale=false in the view API. The at_plus scan consistency flag can yield faster response times if the application can relax its consistency requirements to read-your-own-write, as opposed to a stricter request time consistency. The request_plus scan consistency value has the same characteristics as stale=false in the view API.
For N1QL, the default consistency setting is
not_bounded.
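For example, when sending a query directly to the query service REST API, the desired consistency can be passed with the request; the host, credentials, and keyspace below are placeholders:
curl http://localhost:8093/query/service \
  -u Administrator:password \
  --data-urlencode 'statement=SELECT productName FROM default WHERE type = "product"' \
  -d 'scan_consistency=request_plus'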
Index Replication and High Availability with N1QL
GSIs are not automatically replicated however you can create "replicas" yourself and achieve full high availability.
To create a replica of a GSI, you can create an identical index definition with unique index names under 2 or more nodes. Queries will load balance across the indexes, and if one of the indexes becomes unavailable, all requests are automatically rerouted to the available remaining index without application or admin intervention. You can create more than 2 copies of the index for better redundancy and load balancing.
Standard Global Secondary Indexes
Standard global secondary indexes is the default storage setting for Couchbase Server clusters. Standard global secondary indexes (also called global secondary indexes, indexes, or GSI) can index larger data sets as long as there is disk space available for storing the index.
Standard Global Secondary Indexes uses ForestDB for indexes that can utilize both memory and persistent storage for index maintenance and index scans. ForestDB is Couchbase’s state-of-the-art storage engine with a modified data structure to increase read and write performance from storage.
Enabling Standard Global Secondary Indexes
By default, Couchbase Server uses standard global secondary indexes storage setting with the circular write mode for all indexes.
Standard and memory optimized storage settings apply to all indexes in the cluster and cannot be mixed within a single cluster.
At the time of the cluster’s initial setup, storage setting can be switched between standard and memory optimized GSI storage settings. Changing the storage setting for GSI requires removing all index service nodes.
Standard Global Secondary Index Performance
Different from the memory optimized storage setting for GSI, the performance of standard GSIs depend heavily on the IO subsystem performance. Standard GSIs come with 2 write modes:
Append-only Write Mode: Append only write mode is similar to the writes to storage in the data service. In append-only write mode, all changes are written to the end of the index file (or appended to the index file). Append only writes invalidate existing pages within the index file and require frequent full compaction.
Circular Write Mode: Circular write mode optimizes the IO throughput (IOPS and MB/sec) required to maintain the index on storage by reusing stale blocks in the file. Stale blocks are areas of the file that contain data which is no longer relevant to the index, as a more recent version of the same data has been written in another block. Compaction needs to run less frequently under circular write mode as the storage engine avoids appending new data to the end of the file.
In circular write mode, data is appended to the end of the file until the relative index fragmentation (stale data size / total file size) exceeds 65%. Block reuse is then triggered, which means that new data is written into stale blocks where possible, rather than appended to the end of the file.
In addition to reusing stale blocks, full compaction is run once a day on each of the days specified as part of the circular mode time interval setting. This full compaction does not make use of the fragmentation percent setting unlike append-only write mode. Between full compaction runs, the index fragmentation displayed in the UI will not decrease and will likely display 65% most of the time, this particular metric is not relevant for indexes using circular write mode.
By default, Couchbase Server uses the circular write mode for standards GSIs. Append only write mode is provided for backward compatibility with previous versions.
When placing indexes, it is important to note the disk IO "bandwidth" remaining on the node as well as CPU, RAM and other resources. You can monitor the resource usage for the index nodes through Web Console and pick the nodes that can house your next index.
There are also per-index statistics that can help identify the item counts, disk and data size, and more individual statistics for an index.
Aside from the performance characteristics, the mechanics of creating, placing, load balancing, partitioning, and HA behavior are identical in both standard and memory optimized global secondary indexes.
Memory-Optimized Global Indexes
Memory optimized global secondary indexes is an additional storage setting for Couchbase Server clusters. Memory optimized global secondary indexes (also called memory optimized indexes or MOI) can perform index maintenance and index scan faster at in-memory speeds.
Enabling Memory-Optimized Global Indexes
By default, Couchbase Server uses standard global secondary indexes storage setting with the circular write mode for all indexes. In this release, standard vs memory optimized storage settings apply to all indexes in the cluster and cannot be mixed within a single cluster. At the time of the cluster’s initial setup, storage setting can be switched to memory optimized GSI setting.
Memory Optimized Global Secondary Index Performance
There are several performance advantages to using memory optimized global secondary indexes: MOIs use a skiplist, a memory efficient index structure for a lock-free index maintenance and index scans. Lock-free architecture increases the concurrency of index maintenance and index scans. This enhances the index maintenance rate by more than an order of magnitude under high mutation rates. Skiplist based indexes take up less space in memory. Memory optimized indexes can also provide a much more predictable latency with queries as they never reach into disk for index scans.
MOIs use ForestDB for storing a snapshot of the index on disk; however writes to storage are done purely for crash recovery and are not in the critical path of latency of index maintenance or index scans. The snapshots on disk is used to avoid rebuilding the index if a node experiences failure. Building the index from the snapshot on disk minimizes the impact of index node failures on the data service nodes.
In short, MOIs are maintained entirely in memory, so it is important to watch how much of the configured Index RAM Quota remains available on the node. There are two important metrics you need to monitor to detect the issues:
MAX Index RAM Used %: Reports the max ram quota used in percent (%) through the cluster and on each node both realtime and with a history over minutes, hours, days, weeks and more.
Remaining Index RAM: Reports the free index RAM quota for the cluster as a total and on each node both realtime and with a history over minutes, hours, days weeks and more.
If a node is approaching high percent usage of Index RAM Quota, it is time to take action:
You can either increase the RAM quota for the index service on the node to give indexes more RAM.
You can also place some of the indexes on the node in other nodes with more RAM available.
It is also important to understand other resource utilization beyond RAM. You can monitor the resource usage for the index nodes through Web Console and pick the nodes that can house your next index.
There are also per-index statistics that can help identify the item counts, disk/snapshot and data size and more individual statistics for an index for memory optimized indexes.
Aside from the performance characteristics, the mechanics of creating, placing, load balancing, partitioning, and HA behavior are identical in both standard and memory optimized global secondary indexes.
Handling Out-of-Memory Conditions
Memory-optimized global indexes reside in memory. When a node running the index service runs out of configured Index RAM Quota on the node, indexes on the node can no longer process additional changes.
Queries with stale=false or RYOW semantics fail if the timestamp specified exceeds the last timestamp processed by the specific index on the node.
However, queries with
stale=ok continue to execute normally.
To recover from an out-of-memory situation, use one or more of the following fixes: increase the index RAM quota on the node, or move some of the indexes to other index service nodes that have more memory available.
Changing the Global Secondary Index Storage Mode (Standard vs Memory Optimized) push to replicate the data to the new cluster. If you don’t have a space cluster, you can also create all the indexes using the View indexer. See the CREATE INDEX statement and the USING VIEW clause for details). However, the View indexer for N1QL provides different performance characteristics as it is a local index and not a global index like GSI. For better availability when changing the storage mode from MOI to GSI, we recommended that you use the XDCR approach as opposed to views. | https://docs.couchbase.com/server/4.5/indexes/gsi-for-n1ql.html | 2019-06-16T04:45:38 | CC-MAIN-2019-26 | 1560627997731.69 | [array(['_images/query-exe-with-global-indexes.png',
'query exe with global indexes'], dtype=object)
array(['_images/monitor-index-resource-usage.png',
'monitor index resource usage'], dtype=object)
array(['_images/per-index-stats.png', 'per index stats'], dtype=object)
array(['_images/moi-index-resource-usage.png', 'moi index resource usage'],
dtype=object)
array(['_images/moi-per-index-stats.png', 'moi per index stats'],
dtype=object) ] | docs.couchbase.com |
Finding Missing Indexes
The missing indexes feature is a lightweight, always-on way to identify missing indexes on database tables and indexed views that, if implemented, might enhance query performance.
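As a rough, hedged illustration of the kind of information these components expose, the following query joins the missing-index dynamic management views; the objects it returns depend entirely on your own workload, and the column choices here are just one reasonable starting point.
-- List missing-index suggestions recorded since the last SQL Server restart,
-- roughly ordered by their estimated benefit.
SELECT TOP (25)
    d.statement AS table_or_view,
    d.equality_columns,
    d.inequality_columns,
    d.included_columns,
    s.user_seeks,
    s.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
    ON d.index_handle = g.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s
    ON g.index_group_handle = s.group_handle
ORDER BY s.user_seeks * s.avg_user_impact DESC;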
In This Section
About the Missing Indexes Feature
Describes the components of the missing indexes feature and how to enable or disable this feature.
Using Missing Index Information to Write CREATE INDEX Statements
Provides guidelines for and examples of using the information returned by the missing index feature components to write CREATE INDEX DDL statements.
Limitations of the Missing Indexes Feature
Describes limitations and restrictions for using the missing indexes feature.
Related Query Tuning Features
Lists other SQL Server features that can be used with the missing indexes feature to tune query performance.
Start Monitoring with the Android Agent
The Maven Central build is currently supported only for Gradle and Maven-based builds. For builds using Ant, please see Instrument an Android Application - Manual > Instrumentation > End User Monitoring.
- Click the Mobile Apps tab.
2. Record the Application Key
Record the application key generated for this application, displayed in step .
You will need this key when you modify the source code.
3. Set Up Your Environment
Follow the instructions based on your build environment.
Setup for Gradle
If you are using Gradle, you need to modify both your
build.gradle files.
Modify your top-level build.gradle
Make the following changes to your top-level
build.gradle file:
Edit or create the "buildscript" section and add the AppDynamics plugin as a dependency as follows:
buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:1.1.0'
        classpath 'com.appdynamics:appdynamics-gradle-plugin:4.+' // this line added for AppDynamics
        // NOTE: Do not place your application dependencies here; they belong
        // in the individual module build.gradle files
    }
}
allprojects {
    repositories {
        jcenter()
    }
}
To stay on a particular version of the agent, replace "4.+" with the desired version number.
Modify your module-level build.gradle
Apply the "adeum" plugin immediately after the "com.android.application" plugin.
- Add "com.appdynamics:appdynamics-runtime:4.+" as a compile-time dependency.
- Optional: to automate uploading Proguard mapping files so that they are updated every time you build, add the Proguard snippet. You can use this instead of the manual options discussed in Step 8 below.
After you have added all the AppDynamics Android Agent requirements, your module-level
build.gradle resembles this:
apply plugin: 'com.android.application'
apply plugin: 'adeum' // this line added for AppDynamics

android {
    compileSdkVersion 21
    buildToolsVersion "21.1.2"
    defaultConfig {
        applicationId "com.appdynamics.example"
        minSdkVersion 15
        targetSdkVersion 21
        versionCode 1
        versionName "1.0"
    }
    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }
    }
}

dependencies {
    compile 'com.appdynamics:appdynamics-runtime:4.+' // this line added for AppDynamics
}

adeum { // this section added for AppDynamics
    // Optional. Uploads Proguard mapping files with each build
    url "" // Use this to point to the Collector of an on-prem instance of the EUM Server
    account {
        name "The EUM Account Name from the License screen"
        licenseKey "The EUM License Key from the License screen"
    }
    proguardMappingFileUpload {
        failBuildOnUploadFailure true // should build fail if upload fails? Defaults to false.
        enabled true                  // enables automatic uploads. Defaults to true.
    }
}
Optional: Enable/Disable Instrumentation with Build Types
In certain cases you might want to be able to select whether or not a particular build is instrumented, for example, if you are tracking down an issue in your code. You can modify your module-level "build types" section to support two options, one of which is instrumented and the other not. You need to add information to two sections of the module-level
build.gradle: the
android "build-types" section and the
adeum section, if you already have one.
android {
    // usual stuff
    buildTypes {
        // usual stuff
        release {
            // these lines added for AppDynamics
            // release based configuration
            buildConfigField "boolean", "CHECK_ENABLED", "true"
        }
        debug {
            buildConfigField "boolean", "CHECK_ENABLED", "false"
        }
    }
}

adeum {
    // other stuff, if it exists
    // Optional. By default, instrumentation is enabled for both debug & release builds
    enabledForDebugBuilds = false
    enabledForReleaseBuilds = true
}
If you use this option, you need to do a bit more work when you set up instrumentation in your code. See Optional: Modify the source if you are using Build Types
Setup for Maven
If you are using Maven you must:
These instructions assume you are building your application using the android-maven-plugin with Maven 3.1.1+.
Add the Maven runtime dependency
Add the following code to the <dependencies> section:
<dependency>
  <groupId>com.appdynamics</groupId>
  <artifactId>appdynamics-runtime</artifactId>
  <!-- this version changes with each new release -->
  <version>4.1.2.0</version>
</dependency>
Add the Maven plugin
Add the following code to the <plugins> section:
<plugin>
  <groupId>com.appdynamics</groupId>
  <artifactId>appdynamics-maven-plugin</artifactId>
  <!-- this version changes with each new release -->
  <version>4.1.2.0</version>
  <executions>
    <execution>
      <phase>compile</phase>
      <goals>
        <goal>adinject</goal>
      </goals>
    </execution>
  </executions>
</plugin>
4. Integrate ProGuard, if Necessary
If you use ProGuard to verify or optimize your code, add the following lines to your ProGuard configuration file, by default
proguard.cfg.
Depending on your environment, this file may be renamed by your build.
-keep class com.appdynamics.eumagent.runtime.DontObfuscate
-keep @com.appdynamics.eumagent.runtime.DontObfuscate class * { *; }
If you use Proguard to obfuscate your code, note the name and location of the mapping file that ProGuard produces, because AppDynamics needs this file to create human-readable stack traces for crash snapshots. For details about why you should do this, see Get Human-Readable Crash Snapshots. For more information on the process, see Upload the ProGuard Mapping File.
Every time the application is changed and recompiled the ProGuard mapping file changes, so you need to upload the new mapping file to AppDynamics every time you modify your app, either manually or as a part of your build process.
5. Modify the Source
In the source file that defines your application's primary Activity, add the following import:
import com.appdynamics.eumagent.runtime.Instrumentation;
In your primary Activity's onCreate() method, add the following lines, passing in the App Key from step 2 above:
Instrumentation.start("$CURRENT_APP_KEY", getApplicationContext());
Save the file.
Your code should look something like this.
import com.appdynamics.eumagent.runtime.Instrumentation;
...
@Override
public void onCreate(Bundle savedInstanceState) {
    ...
    Instrumentation.start("$CURRENT_APP_KEY", getApplicationContext());
}
Optional: Point to an on-prem EUM Server
If you are using an on-prem EUM Server instead of the EUM Cloud, you need to tell the Agent where to send its beacons. To do that, you need to create an Agent Configuration object and pass it into the
Instrumentation.start call. The AgentConfiguration object allows you to customize the configuration of the Agent in various ways. To set a URL for an on-prem Server, use the
withCollectorUrl method. You also need to pass in the App Key, as in the simpler process above.
In your primary Activity's onCreate() method, use this form of the Instrumentation.start call:
Instrumentation.start(AgentConfiguration.builder()
    .withContext(this)
    .withAppKey("$CURRENT_APP_KEY")
    .withCollectorURL("$URL_OF_YOUR_EUM_SERVER")
    .build());
The full JavaDocs that describe the available options using this method are available here.
Optional: Modify the source if you are using Build Types (Gradle only)
If you are using Gradle and have used the "build types" option in your
build.gradle file to enable/disable instrumentation, you will need to do a little additional work. Instead of the above, in your primary Activity's onCreate() method, add the following lines:
AgentConfiguration config = AgentConfiguration.builder()
    .withAppKey("$CURRENT_APP_KEY")
    .withContext(this)
    .withCompileTimeInstrumentationCheck(BuildConfig.CHECK_ENABLED)
    .build();
Instrumentation.start(config);
Save the file.
Your code should then look something like this.
import com.appdynamics.eumagent.runtime.Instrumentation;
...
@Override
public void onCreate(Bundle savedInstanceState) {
    ...
    AgentConfiguration config = AgentConfiguration.builder()
        .withAppKey("$CURRENT_APP_KEY")
        .withContext(this)
        .withCompileTimeInstrumentationCheck(BuildConfig.CHECK_ENABLED)
        .build();
    Instrumentation.start(config);
}
6. Add the Required Permissions
Open your application's
AndroidManifest.xml file and verify that it has these permissions:
<uses-permission android:name="android.permission.INTERNET"></uses-permission>
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"></uses-permission>
If both of these permissions are not present, add them.
7. Rebuild the Application
Rebuild your application. If you are using Gradle, add the -i flag (gradle -i).
In the console, you should see something like this:
[injector] /=========================================\
[injector] | AppDynamics BCI Instrumentation summary |
[injector] \=========================================/
[injector]
[injector]
[injector] - Total number of classes visited (#720 classes)
[injector] - Total number of classes instrumented (#1 classes)
[injector] - Total number of classes failed to instrument (#2 classes)
[injector] - Total number of features discovered (#3)
[injector]
Verify your instrumentation - Gradle
If you didn't use the -i flag, check to make sure there is a line in your console output that contains "inject". If you don't see this information printed in your console, either your project is incorrectly configured or the injector failed to run completely. There is a very detailed log of this process at <project>/build/appdynamics_eum_android_bci.log
Verify your instrumentation - Maven
If you don't see this information printed in your console, either your project is incorrectly configured or the injector failed to run completely. There is a very detailed log of this process at <project>/target/appdynamics_eum_android_bci.log
8. Upload the ProGuard Mapping File
If you did not obfuscate your application source code, you can skip this step. If you have set up automatic uploading of this file using your build.gradle file, you can skip this step.
This step is optional but highly recommended if you obfuscated your code and plan to monitor crashes. AppDynamics needs the mapping file for the application to produce human-readable stack traces for crash snapshots. By default, the mapping file is named mapping.txt, unless you have renamed it in your build.
If you update your application, you need to upload the new version of the mapping file.
To associate the mapping file with the correct version of the application, you need to provide:
- the package name of the Android package for the application
- the version code for that application from the AndroidManifest.xml file
You can either upload the mapping file using the instrumentation screen in the Controller UI or use a special REST API. Perform the upload separately for each ProGuard mapping file that you are providing.
Upload the ProGuard Mapping File using the UI
- In the instrumentation window in the controller UI, click the Upload ProGuard mapping file for Android crashes button.
- In the ProGuard mapping file upload window, either
- select an existing package from the dropdown list
or
- enter a new package name for the mobile application.
If the application is already registered with the controller, you can select its package which is listed in the dropdown list.
If the application is not yet registered, enter the package name in the New Package field.
- Enter the version code (a number) for the package. This is the versionCode property in the AndroidManifest.xml of the application for which this mapping file was generated.
- Click Select ProGuard mapping file.
The uploader expects a file with a .txt extension. The file is named mapping.txt.
- In the file browser locate and select the mapping file and click Open.
- Click Upload.
Upload the ProGuard Mapping File using the API
The API uses HTTP basic authentication to send a PUT request to AppDynamics. The username is your AppDynamics EUM account name and the password is your EUM license key.
Set up your HTTP basic authentication credentials
- In the upper right section of the Controller UI, click the gear icon > License.
- Scroll down to the End User Monitoring section to find your EUM account name and license key.
Upload the mapping file
Send the ProGuard mapping file as a text file in the body of the PUT request to the upload URI, which ends in <androidPackageName>/<versionString>.
These parameters are required:
androidPackageName: the name of the Android package for which this mapping file was generated
versionString: the string representation of the versionCode property in the AndroidManifest.xml of the application for which this mapping file was generated
The request body contains the mapping file. The content type of the body is either text/plain, or gzip if the body was encoded with gzip.
Sample Request and Response Using the REST API
This is a sample request and response using the REST API.
Upload Request
The following example uses curl to send a mapping file. The account name is "Example account" and the license key/password is "Example-License-Key-4e8ec2ae6cfe". The plus signs replace spaces in the account name when the account name is URL-encoded. The package name for the Android application is "com.example.networklogger". The mapping file corresponds to the version with versionCode 1.
curl -v --upload-file mapping.txt --user Example+account:Example-License-Key-4e8ec2ae6cfe
Upload Response
The successful output of the example upload (to the URI ending in proguardMappingFile/com.example.networklogger/1 on app.eum-appdynamics.com) resembles the following:
> Accept: */*
> Content-Length: 4
> Expect: 100-continue
>
< HTTP/1.1 100 Continue
* We are completely uploaded and fine
< HTTP/1.1 200 OK
< Content-Length: 0
< Server: Jetty(8.1.4.v20120524)
<
* Connection #0 to host app.eum-appdynamics.com left intact
* Closing connection #0
9. Customize Your Instrumentation (Optional)
The Instrumentation class has additional methods to allow you to extend the kinds of application data you can collect and aggregate using Mobile RUM. There are five basic kinds of extensions that you can create:
- custom timers
- custom metrics
- information points
- breadcrumbs
- user data
In addition, if you are using an environment with an on-premise EUM Server you need to update your code to indicate the URL to which your agent should send its beacons.
For more information, see Use the APIs of the Android SDK to Customize Your Instrumentation.
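As a rough sketch of what such customizations look like in code (the method names below are assumed from the 4.x agent's Instrumentation class; double-check them against the JavaDocs), the calls are one-liners you add around your own application logic:
import com.appdynamics.eumagent.runtime.Instrumentation;
...
// Time a custom operation; it appears as a custom timer in the Controller.
Instrumentation.startTimer("checkout-flow");
runCheckout(); // your own application code
Instrumentation.stopTimer("checkout-flow");

// Report a custom metric and leave a breadcrumb for crash context.
Instrumentation.reportMetric("Cart Size", cartItems.size());
Instrumentation.leaveBreadcrumb("User tapped Pay");

// Attach user data that is sent along with sessions and crash reports.
Instrumentation.setUserData("subscription-tier", "gold");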
Upgrade the Android Mobile Agent
As new features are added to the agent, you will need to rebuild to upgrade the Android SDK.
Instructions for Upgrading to Crafter CMS 3.0.12 from a previous 3.0.x version¶
After upgrading your Crafter CMS install, you will need to update the dependency-resolver configuration file in your existing site.
To update your existing site's dependency resolver configuration, open the file resolver-config.xml found in the following directory: {REPOSITORY_ROOT}/sites/SITENAME/config/studio/dependency/ and overwrite the content of the file with the following:
This article covers:
- Preferred Exporter
- Expense Report Status
- Reimbursable and Non-reimbursable Expenses
- Expense Report Mapping
Preferred Exporter
You can select any admin on the policy to receive notifications in their Inbox when reports are ready to be exported.
Expense report status
Here you select if you want your reports to be exported as Approved or Submitted.
Reimbursable and Non-Reimbursable Expenses
Both reimbursable and non-reimbursable expenses will export as expense reports.
Expense Report Mapping
Summary
Expense Detail
Still looking for answers? Search our Community for more content on this topic!
1 Introduction
This document describes the prerequisites for the create-custom-action how-to’s. Each section gives a description of what is expected in that prerequisite.
2 Mendix Rapid App Developer
General knowledge of the Mendix Platform is a must when creating custom actions. The HTML of a Mendix app is a direct representation of your app as you see it in the Modeler.
In the Become a Rapid Developer Learning Path, you learn all the fundamentals of the Mendix Platform, including how the different widgets work and how to adjust them. This is a great advantage when looking at a widget through the HTML.
3 Guidelines for Building a Custom Action
Guidelines for Building a Custom Action is a best practice document that describes the guidelines you must follow when building custom actions. These guidelines deliver the best results.
4 Google Chrome and Chrome DevTools Basics
The how-to’s make use of the Google Chrome web browser and the Chrome DevTools. You must have a basic understanding of the Chrome DevTools to get the full experience from each how-to.
When you use the Chrome browser, you can follow the how-to’s in detail. You may use other browsers and their DevTools, but those might not be compatible with the how-to.
Google provides how-to’s for learning more about the DevTools.
5 Understanding the HTML Structure
The how-to’s use the debugger to see the HTML of the page. An understanding of HTML helps a lot when creating custom actions. An understanding of the following points is sufficient:
- HTML hierarchy
- Basic structure (different types of elements, etc.)
For more information, see W3Cschools.
6 ATS Selectors
The how-to’s use jQuery selectors to find elements in the browser. Next to the standard jQuery selectors, ATS also uses pseudo-selectors. You can find these pseudo-selectors in the ATS Selector documentation.
For more information on selectors, see Helpful Resources.
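As a small, hypothetical illustration (the widget name mybutton is made up), a jQuery selector for a Mendix widget usually hooks onto the mx-name-<widgetName> class that the platform adds to each widget's DOM node:
// Find the widget named "mybutton" in the Modeler and click the
// actual <button> element rendered inside it.
var button = $(".mx-name-mybutton button:visible").first();
button.click();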
7 General ATS Knowledge
The how-to’s describe how to create custom actions, but they do not describe every step. If the how-to says to add an action, you must add the action, but the how-to may not describe the whole process. Make sure you know the basics of ATS (for example, how to search for actions).
8 Custom Action Basics
The how-to’s don’t explain everything in detail. Simple things like adding an input parameter are not explained. Make sure you have completed the Custom Action Basics how-to. | https://docs.mendix.com/ats/howtos/ht-version-1/custom-action-prerequisites | 2019-06-16T05:17:55 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.mendix.com |
The ‘Change Variable’ action can be used to change the value of an existing variable.
See Microflow Element Common Properties for properties that all activities share (e.g. caption). This page only describes the properties specific to the action.
Input Properties
Variable name
The variable of which you want to change the value.
Action Properties
Value
The new value for the variable. The value can be entered using microflow expressions. The type of the microflow expression should be the same as the type of the selected variable.
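For example (the variable names here are hypothetical), if the selected variable Counter is of type Integer, a valid value expression could be the first line below; for a String variable CustomerName, it could be the second:
$Counter + 1
toUpperCase($CustomerName)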
What is Azure IoT Hub?
IoT Hub is a managed service, hosted in the cloud, that acts as a central message hub for bi-directional communication between your IoT application and the devices it manages. You can connect virtually any device to IoT Hub.
IoT Hub supports communications both from the device to the cloud and from the cloud to the device.
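For example, a device-to-cloud telemetry message can be sent with one of the Azure IoT device SDKs. The minimal Python sketch below assumes the azure-iot-device package is installed and that the device's connection string is available in an environment variable.
import os
from azure.iot.device import IoTHubDeviceClient, Message

# The connection string comes from the device's registration in IoT Hub.
conn_str = os.environ["IOTHUB_DEVICE_CONNECTION_STRING"]

client = IoTHubDeviceClient.create_from_connection_string(conn_str)
client.connect()

# Send a simple device-to-cloud telemetry message.
client.send_message(Message('{"temperature": 21.5}'))

client.disconnect()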
Scale your solution
IoT Hub scales to millions of simultaneously connected devices and millions of events per second to support your IoT workloads. IoT Hub offers several tiers of service to best fit your scalability needs. Learn more by checking details on quota limits:
Next steps
To try out an end-to-end IoT solution, check out the IoT Hub quickstarts:
Prerequisites¶
Warning
We recommend running your masternode on Ubuntu 18.04 LTS. This version has Python 3.6 and has been reported as working out of the box.
Installation of Python¶
To install Python under debian based distribution, run the following commands.
apt update
apt install python3-pip
To check if you have installed the right Python version (must be greater than 3.5).
python3 --version
Installation of Docker CE¶
To install Docker, first update the apt package index.
sudo apt update
Then install packages to allow apt to use a repository over HTTPS.
sudo apt install apt-transport-https ca-certificates curl software-properties-common
Set up the stable Docker repository.
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
Update the apt package index. Then install the latest version of Docker CE.
sudo apt update sudo apt install docker-ce
Once installed, add your current user to the Docker group.
usermod -aG docker $your_user_name
Warning
You need to relog into your account for this to take effect.
Verify that Docker CE is installed correctly by running the hello-world image:
docker run hello-world
This command downloads a test image and runs it in a container. When the container runs, it prints an informational message and exits.
tmn¶
Installation¶
Simply install it from pip.
pip3 install --user tmn
Update¶
Update it from pip.
pip3 install -U tmn
First start
--api: Expose RPC and websocket on ports 8545 and 8546.
Important note:
Those ports should not be accessible directly from the internet. Please setup firewalling accordingly if you need to access them localy. Use a reverse proxy if you want to expose them to the outside.
It could look like this:
tmn start --name [YOUR_NODE_NAME] --net testnet --pkey [YOUR_COINBASE_PRIVATE_KEY] --api
Once started, you should see your node on the stats page!
Note: it can take up to one hour to properly sync the entire blockchain.
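If you do decide to expose the RPC endpoint behind a reverse proxy, as suggested in the note above, a minimal nginx server block could look like the sketch below; the domain name is a placeholder, and you should add TLS and access restrictions appropriate to your setup.
server {
    listen 80;
    server_name rpc.example.com;

    location / {
        # Forward traffic to the local RPC port exposed by --api
        proxy_pass http://127.0.0.1:8545;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}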
If the tmn command is not found after installation, add pip's user install directory to your PATH. On Linux:
echo 'export PATH=$PATH:$HOME/.local/bin' >> $HOME/.bashrc
On MacOS:
Replace
[VERSION] by your version of python (3.5, 3.6, 3.7)
echo 'export PATH=$PATH:$HOME/Library/Python/[VERSION]/bin' >> $HOME/.bashrc
Then reload your environment:
source ~/.bashrc
error: could not access the docker daemon¶
If you have installed Docker, you probably forgot to add your user to the docker group. Please run this, close your session and open it again.
usermod -aG docker $your_user_name
pip3 install fails due to not being able to build some package¶
Your OS might not come with build tools preinstalled.
For ubuntu, you can solve that by running:
sudo apt install build-essential python3-dev python3-wheel
pip3 install fails due to "No Module named Setuptools"¶
Your OS might not come with setuptools preinstalled.
For ubuntu, you can solve that by running:
sudo apt install python3-setuptools
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
A group can also have managed policies attached to it. To retrieve a managed policy document that is attached to a group, use GetPolicy to determine the policy's default version, then use GetPolicyVersion to retrieve the policy document.
For more information about policies, refer to Managed Policies and Inline Policies in the IAM User Guide.
Namespace: Amazon.IdentityManagement
Assembly: AWSSDK.dll
Version: (assembly version)
Container for the necessary parameters to execute the GetGroupPolicy service method.
.NET Framework:
Supported in: 4.5, 4.0, 3.5
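A minimal usage sketch with the synchronous .NET client might look like the following; the group and policy names are hypothetical, and property names can differ slightly between SDK versions.
using System;
using Amazon.IdentityManagement;
using Amazon.IdentityManagement.Model;

class GetGroupPolicyExample
{
    static void Main()
    {
        var client = new AmazonIdentityManagementServiceClient();

        var response = client.GetGroupPolicy(new GetGroupPolicyRequest
        {
            GroupName = "Developers",            // hypothetical group name
            PolicyName = "DeveloperAccessPolicy" // hypothetical inline policy name
        });

        // The inline policy document is returned as URL-encoded JSON.
        Console.WriteLine(response.PolicyDocument);
    }
}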
Crying Time uses DAZ Studio's ability to layer images, so the presets will behave differently to normal MAT poses. They will layer on top of your existing textures and, if you apply more than one, they will layer on top of each other. If you try one and want to try another, to restore your original texture you can reapply the texture, use the "undo" function as needed, or select your original texture from the drop-down material lists under the Diffuse, Specular and Displacement channels.

As with all layered images within DAZ Studio, you can apply as many as your computer memory will allow. If you start to receive "failure to load" messages, your textures come up white, or the presets don't seem to apply, it means you have reached the limit of your computer's memory. Save your scene, close DAZ Studio, open it again and reload your scene. This allows you to continue where you left off with fresh memory.

The sclera reddening presets are also layered, so they will stack on top of each other if more than one is applied. There may be a short delay between clicking on any of the layered presets and their application. The tear presets use specularity as well as displacement, so they will be affected by the lighting in your scene, particularly by specular lights. You can adjust both the specularity and the displacement via the percentage sliders on the Surfaces Tab.
List Relationships (Hub)
[This topic is pre-release documentation and is subject to change.]
Lists the Relationship type definitions for the current Hub.
Request
The request is constructed as follows:
URI Parameters
Response
The response includes a standard HTTP status code, a set of standard response headers, and a response body.
Response Body
A linked collection of Relationship identifiers of the form:
{
  "value": [
    {
      "id": "string",
      "name": "string",
      "type": "string"
    }
  ],
  "nextLink": "url"
}
Performance metrics overview
View performance metrics in graph views from the dashboard. Clone, rename, or delete graph metric presets. Export and import presets into other clusters or OpsCenter instances.
- Cluster Performance Metrics
- Pending Task Metrics
- Table (Column Family) Metrics
You can delete, clone, rename, choose the default view of graphs, and share graphs with other users.
Export dashboard configurations to import into other clusters or OpsCenter instances.
For automated guidance with setting up performance monitoring metrics for DataStax Enterprise, see the Performance Service.
New Features
Google GCE Support Added to Instance Provisioner - The Instance Provisioner in the RightScale Dashboard now supports Google GCE. You can access the Instance Provisioner by navigating to Clouds > Google > Instances > New or Manage > Instances & Servers > New Instance > Google. For additional information on using the Instance Provisioner in the RightScale Dashboard see New Instance on the RightScale support site.
Support Added for New AWS D2 Instance Types - The new Dense Storage instances for EC2 are suitable for situations where large amounts of data storage are needed. See the official AWS blog on D2 Instance Types for additional information.
Cloud Management API v1.6 - Reference documentation is now publicly available for Cloud Management API v1.6. View the new API v1.6 docs here.
Note: Archived Release Notes are Available Here
Splunkbase file standards reference table
The Splunkbase team evaluates your submitted content to ensure it meets the following file standards.
Packaging and naming standards
Splunk configuration file standards
Splunk XML file standards (if applicable)
Source code standards
Binary content standards
Splunk version support and installation standards
All submitted apps must run on the versions of Splunk that your app claims to support.
Operating system standards
Malware/viruses, malicious content, user security standards
External data sources
The application's publisher must document if the app calls an external source for data or other info.
Documentation standards
Support standards
Contact information (email, link to a ticket system, etc.) for application support is provided in the documentation or app's web content.
Intellectual property standards
If you are using the Splunk logo, its usage must meet Splunk branding guidelines.
This documentation applies to the following versions of Splunk® Answers and Splunkbase: splunkbase
Character restrictions apply to some passwords.
The vRealize Automation administrator password that you define during installation must not contain special characters. As of this version of vRealize Automation, the following special characters are known to cause errors:
Double quote marks (")
Commas (,)
A trailing equal sign (=)
Blank spaces
Non-ASCII or extended ASCII characters
Passwords that contain special characters might be accepted when you assign them, but cause failures when you perform operations such as saving endpoints or when the machine attempts to join the vRealize Automation cluster.
Getting Started¶
This section is primarily written for Windows users. There are extra sections about installing Git Extensions on Linux and Mac OS X.
Installation¶
There is a single click installer GitExtensions-X.XX.XX-SetupComplete.msi that installs Git for Windows 32bit, Kdiff3 32bit and Git Extensions. The installer can be found here.
Git Extensions depends heavily on Git for Windows. When Git for Windows is not installed, ensure the “Install Git for Windows” checkbox is checked. Kdiff3 is optional, but is advised as a merge tool.
Choose the options to install.
Choose the SSH client to use. PuTTY is the default because it has better Windows integration.
Installation (Linux)¶
You can watch this video as a starting point: Install Git Extensions on Ubuntu 11.04
For further help go to
This section only covers mono installation, you should have git installed in your Linux at this point. Please refer to
First, make sure you have the latest mono version on your Linux. This section will cover installation of Mono 4.6 on a Linux.
Install mono latest version. You can always check for this here:
If everything went okay, you should open your terminal and check mono version:
$ mono --version
Mono JIT compiler version 4.6.1 (Stable 4.6.1.5/ef43c15 Wed Oct 12 09:10:37 UTC 2016)
Copyright (C) 2002-2014 Novell, Inc, Xamarin Inc and Contributors.
    TLS:           __thread
    SIGSEGV:       altstack
    Notifications: epoll
    Architecture:  amd64
    Disabled:      none
    Misc:          softdebug
    LLVM:          supported, not enabled.
    GC:            sgen
Now download Git Extensions latest version from. Remember to select the appropriate package otherwise you could have problems.
Browse into the folder where you extracted the package and just run mono command, like the example below:
$ mono GitExtensions.exe
Installation (macOS)¶
This section only covers mono installation, you should have git installed in your Mac at this point. Please refer to
First, make sure you have the latest mono version on your Mac. This section will cover installation of Mono 4.6 on a Mac.
Download mono latest version. You can always check for this here:
After you have completed the download, you will see a .dmg file. Double click it to open the package.
Inside the .dmg file you will have MonoFramework-{version}.pkg. Double click to start the installation process.
Follow the wizard until it’s completion.
If everything went okay, you should open your terminal and check mono version:
$ mono --version
Mono JIT compiler version 4.6.1 (mono-4.6.0-branch-c8sr0/abb06f1 Fri Sep 23 19:24:23 EDT 2016)
Now download Git Extensions latest version from. Remember to select the appropriate package otherwise you could have problems.
Browse into the folder where you extracted the package and just run mono command, like the example below:
$ mono GitExtensions.exe
This is the minimal setup you need in order to run Git Extensions.
Troubleshooting Mac Installation¶
- If your Git Extensions crashes with an exception that a font is missing (generic sans serif), you probably can fix this by installing Xquartz. This is a version of the X.Org X Windows System that runs on OS X. I am not sure what the side effects are. This can be installed from here:
- If Git Extensions still crashes because it is unable to load a plugin, empty the plugins folder.
Settings¶
All settings will be verified when Git Extensions is started for the first time. If Git Extensions requires any settings to be changed, the Settings dialog will be shown. All incorrect settings will be marked in red. You can ask Git Extensions to try to fix the setting for you by clicking on it. When installing Git Extensions for the first time (and you do not have Git already installed on your system), you will normally be required to configure your username and email address.
The settings dialog can be invoked at any time by selecting Settings from the Tools menu option.
For further information see Settings.
Start Page¶
The start page contains the most common tasks, recently opened repositories and favourites. The left side of the start page (Common Actions and Recent Repositories) is static. The right side of the page is where favourite repositories can be added, grouped under Category headings.
Recent Repositories can be moved to favourites using the repository context menu. Choose Move to category / New category to create a new category and add the repository to it, or you can add the repository to an existing category (e.g. ‘Currents’ as shown below).
A context menu is available for both the category and the repositories listed underneath it.
Entries on Category context menu
Entries on repository context menu
To open an existing repository, simply click the link to the repository under Recent Repositories or within the Categories that you have set up, or select Open repository (from where you can select a repository to open from your local file system).
To create a new repository, one of the following options under Common Actions can be selected.
Clone repository¶
You can clone an existing repository using this option. It displays the following dialog.
The repository you want to clone could be on a network share or could be a repository that is accessed through an internet or intranet connection. Depending on the protocol (http or ssh) you might need to load a SSH key into PuTTY. You also need to specify where the cloned repository will be created and the initial branch that is checked out. If the cloned repository contains submodules, then these can be initialised using their default settings if required.
There are two different types of repositories you can create when making a clone. A personal repository contains the complete history and also contains a working copy of the source tree. A central repository is used as a public repository where developers push the changes they want to share with others to. A central repository contains the complete history but does not have a working directory like personal repositories.
Clone SVN repository¶
You can clone an existing SVN repository using this option, which creates a Git repository from the SVN repository you specify. For further information refer to the Pro Git book.
Clone Github repository¶
This option allows you to
- Fork a repository on GitHub so it is created in your personal space on GitHub.
- Clone any repositories on your personal space on GitHub so that it becomes a local repository on your machine.
You can see your own personal repositories on GitHub, and also search for repositories using the
Create new repository¶
When you do not want to work on an existing project, you can create your own repository using this option.
Select a directory where the repository is to be created. You can choose to create a Personal repository or a Central repository.
A personal repository looks the same as a normal working directory but has a directory named .git at the root level containing the version history. This is the most common repository.
Central repositories only contain the version history. Because a central repository has no working directory you cannot checkout a revision in a central repository. It is also impossible to merge or pull changes in a central repository. This repository type can be used as a public repository where developers can push changes to or pull changes from.
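On the command line, the equivalent of a central repository is a bare repository. As a rough sketch (the paths and URL below are placeholders):
# Create a central ("bare") repository that holds only the version history.
git init --bare /srv/git/myproject.git

# Developers clone it to get a personal repository with a working directory,
# then push the changes they want to share back to the central repository.
git clone ssh://user@server/srv/git/myproject.git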
array(['_images/install2.png', '_images/install2.png'], dtype=object)
array(['_images/install3.png', '_images/install3.png'], dtype=object)
array(['_images/install4.png', '_images/install4.png'], dtype=object)
array(['_images/install5.png', '_images/install5.png'], dtype=object)
array(['_images/install6.png', '_images/install6.png'], dtype=object)
array(['_images/settings.png', '_images/settings.png'], dtype=object)
array(['_images/start_page.png', '_images/start_page.png'], dtype=object)
array(['_images/move_to_category.png', '_images/move_to_category.png'],
dtype=object)
array(['_images/clone.png', '_images/clone.png'], dtype=object)
array(['_images/github_clone.png', '_images/github_clone.png'],
dtype=object)
array(['_images/new_repository.png', '_images/new_repository.png'],
dtype=object) ] | git-extensions-documentation.readthedocs.io |
LemonStand offers a built-in AJAX framework that makes sending requests to the server and updating page elements asynchronously a breeze.
Let's have a look at an example of AJAX and data attributes in action in the default install of your LemonStand store.
Start by logging in to the backend of your store, and click on the Store Design link in the sidebar. Then open the Partials section.
In the Partials section find the partial named shop-cart-content and click on it to edit.
First a little explanation on what the shop-cart-content partial does. The shop-cart-content partial is basically the template for the shopping cart page. It contains a call to render the shop-cart-items partial as well as the shipping rates and order total calculator. The shop-cart-content partial also contains a block of buttons at the bottom, where you will see the continue shopping, coupon code, update cart and checkout buttons.
This button block is the area that we will be referencing for our example.
In the Content area of the shop-cart-content partial, find the following line of code, which should be located on line 60.
<a href="#" data-ajax-handler="shop:cart" data-ajax-update="#cart-content: shop-cart-content, #mini-cart: shop-minicart">Update cart</a>
This snippet of code is for the update cart button. The update cart button will update the cart page based on any changes you have made to the cart since the page was originally loaded. Such as changing the quantity amount of an item in your cart.
There are two attributes that make up our AJAX functionality, data-ajax-handler and data-ajax-update.
The data-ajax-handler will refer to the CMS AJAX handler that will process the request on the server; in this case it would be the shop:cart AJAX handler.
The data-ajax-update attribute refers to the page element that will be updated after the server processes the request. This is broken up into two sections: the CSS selector for the element that wraps the content that you want to be updated, and the partial that renders the actual content to be updated.
In this case, for example, you can see the following for data-ajax-update:
<a href="#" data-ajax-handler="shop:cart" data-ajax-update="#cart-content: shop-cart-content, #mini-cart: shop-minicart">Update cart</a>
From this snippet we can see that there are two areas being updated when the update cart button is clicked. First we update the shop-cart-content partial which is rendered within a DIV with an ID of cart-content. Then we update the mini-cart which is rendered within the shop-minicart partial that is contained within a DIV with the ID of mini-cart.
Writing Your Own AJAX Call
If you would like to write your own AJAX call instead of using the built-in framework, follow these steps. There are a couple of reasons why you might want to do this; the most common is to have form field values copied to each other before form submission. You can't do this by using the built-in framework, because if you put an on-click function on the form submission button like so:
<a class="btn" onclick="processBlankFields();" href="#" data-ajax-handler="shop:checkout" data-ajax-update="#checkout-page: shop-checkout, #breadcrumbs-area: shop-checkout-progress, #mini-cart: shop-minicart"> Submit </a>
It will submit the form via AJAX before your function is called. So the solution is to just make your AJAX call happen in your custom function after it does what you needed. So step one, change your form submission button to:
<a class="btn" href="#" onclick="processBlankFields();"> Submit </a>
Step two, add a script tag with your function:
<script type="text/javascript">
function processBlankFields () {
    // Your custom code

    // Your ajax call
    $.ajax({
        data : $('#free-checkout').serialize(),
        type: 'post',
        url: window.location.href,
        headers: {
            'X-Event-Handler': 'shop:checkout',
            'X-Partials' : 'shop-checkout,shop-checkout-progress,shop-minicart',
            'X-Requested-With': 'XMLHttpRequest'
        },
        success: function(data) {
            $('#checkout-page').html(data['shop-checkout']);
            $('#breadcrumbs-area').html(data['shop-checkout-progress']);
            $('#mini-cart').html(data['shop-minicart']);
        }
    });
};
</script>
Some things to note: one, this uses jQuery to help keep it shorter. Two, the data uses the ID of your form. To add an ID to our custom form tag, do this:
{{ open_form({'id': 'free-checkout','class': 'custom', 'data-validation-message' : ''}) }}
Now let's look at the headers. X-Event-Handler refers to the action set on the page. X-Partials refers to the partials you wish to reload after the AJAX call. The names after X-Partials are the names of the partials. Then under success, we use the IDs of the HTML tags that contain those partials to reload them as shown. That is pretty much all there is to it. This will do the exact same AJAX call as:
<a class="btn" href="#" data-ajax-handler="shop:checkout" data-ajax-update="#checkout-page: shop-checkout, #breadcrumbs-area: shop-checkout-progress, #mini-cart: shop-minicart"> Submit </a>
A system administrator downloads the IaaS installer from the vRealize Automation appliance to a Windows 2008 or Windows 2012 physical or virtual machine.
About this task
If you see certificate warnings during this process, continue past them to finish the installation.
Prerequisites
Configure the Primary vRealize Automation Appliance and, optionally, Join a vRealize Automation Appliance to a Cluster.
- Log in to the Windows machine where you are about to perform the installation.
- Open a Web browser.
- Enter the URL of the VMware vRealize Automation IaaS Installation download page.
For example,, where vra-va-hostname.domain.name is the name of your vRealize Automation appliance host.
- Download the installer by clicking on the IaaS Installer link.
- When prompted, save the installer file, setup__vra-va-hostname.domain.name@5480.exe, to the desktop.
Do not change the file name. It is used to connect the installation to the vRealize Automation appliance.
- Download the installer file to each machine on which you are installing components.
What to do next
Install an IaaS database, see Choosing an IaaS Database Scenario.
You can add an agent group that was defined as part of a content pack to your active groups and apply an agent configuration to the group. Select an agent template from the Available Templates list.
- Click Copy Template to copy the content pack agent group to your active groups.
- Click Copy.
- Select the required filters and click Save new group.
Results
The content pack agent group is added to the active groups and the agents are configured according to the filters that you specified.
Administrators can configure the ability to use USB devices, such as thumb flash drives, cameras, VoIP (voice-over-IP) devices, and printers, from a remote desktop. This feature is called USB redirection, and it supports using either the RDP or the PCoIP display protocol. A remote desktop can accommodate up to 32 USB devices.
When you use this feature on OS X computers, the USB devices are listed in a menu in Horizon Client. You use the menu to connect and disconnect the devices.
In most cases, you cannot use a USB device in your client system and in your remote desktop at the same time. Only a few types of USB devices can be shared between the client system and the remote desktop. This feature is available only in desktop pools that are deployed on single-user machines; it is not available in RDS desktop pools.
You can write a load balancing script to generate a load value based on any RDS host metric that you want to use for load balancing. You can also write a simple load balancing script that returns a fixed load value.
Your load balancing script must return a single number from 0 to 3. For descriptions of the valid load values, see Load Values and Mapped Load Preferences.
If at least one RDS host in the farm returns a valid load value, View Connection Server assumes a load value of 2 (mapped load preference of MED) for the other RDS hosts in the farm until their load balancing scripts return valid values. If no RDS host in the farm returns a valid load value, the load balancing feature is disabled for the farm.
If your load balancing script returns an invalid load value or does not finish running within 10 seconds, Horizon Agent sets the load preference to BLOCK and the RDS host state to configuration error. These values effectively remove the RDS host from the list of RDS hosts available for new sessions.
Copy your load balancing script to the Horizon Agent scripts directory (C:\Program Files\VMware\VMware View\Agent\scripts) on each RDS host in the farm. You must copy the same script to every RDS host in the farm.
For an example of how to write a load balancing script, see the sample scripts in the Horizon Agent scripts directory. For more information, see Sample Load Balancing Scripts for RDS Hosts.
You define the global properties of a schema element in the element's Info tab.
Before you begin
Verify that the Schema tab of the workflow editor contains elements.
Procedure
- Click the Schema tab in the workflow editor.
- Select an element to edit by clicking the Edit icon.
A dialog box that lists the properties of the element appears.
- Click the Info tab.
- Provide a name for the schema element in the Name text box.
This is the name that appears in the schema element in the workflow schema diagram.
- From the Interaction drop-down menu, select a description.
The Interaction property allows you to select between standard descriptions of how this element interacts with objects outside of the workflow. This property is for information only.
- (Optional) : Provide a business status description in the Business Status text box.
The Business Status property is a brief description of what this element does. When a workflow is running, the workflow token shows the Business Status of each element as it runs. This feature is useful for tracking workflow status.
- (Optional) : In the Description text box, type a description of the schema element.
The removal of the legacy modules included the loss of all the modules which depended on the legacy georeferencing system. Most notably, this cleanup included the loss of all the classes providing simple GUI components, such as the popular MapPane Swing class.
For this release, users must implement all of the GUI objects on their own without any examples. This is unfortunate and the GeoTools team understands the importance of this code for new users.
The user list is working on bringing this functionality back into the library, this time based on GUI objects from the GeoWidgets project. In later versions of GeoTools you can find:
This section provides information on the various components of the Apache Hadoop ecosystem and setting them up for high availability.
HDP's Full-Stack HA Architecture
Hortonworks Data Platform (HDP) is an open source distribution powered by Apache Hadoop. HDP provides you with the actual Apache-released versions of the stack with all the necessary bug fixes to make all the components in the stack interoperable in your production environments. This stack uses multiple ‘master’ services whose failure would cause functionality outage in your Hadoop cluster. Hortonworks’ Full-Stack High Availability architecture provides a common framework to make all the master services resilient to failures.
HDP uses industry proven HA technologies in order to provide a reliable HA solution.
The Hadoop stack contains multiple services (HDFS, MapReduce, HBase, etc.) and each of these services have their own co-dependencies. A client application, that interacts with Hadoop, can depend on one or more of these services. A highly available Hadoop platform must ensure that the NameNode master service as well as client applications are resilient to critical failure services. Hortonworks’ Full-Stack HA architecture considers this global view.
Also see the Hortonworks blog on NameNode HA with Hadoop 1.0. The HDP HA architecture has the following key properties:
It provides high availability for the NameNode master daemon service.
When the NameNode master daemon fails over, the HA solution initiates the following actions:
Dependent services (like JobTracker) automatically detect the failure or fail over of the co-dependent component (NameNode) and these dependent services pause, retry, and recover the failed service. (For example, the JobTracker does not launch new jobs or kill jobs that have been waiting for the NameNode.)
Applications running inside and outside the Hadoop cluster also automatically pause and retry their connection to the failed service.
The above actions are highlighted in the following illustration. This illustration shows how HDFS clients and MapReduce services (Jobtracker daemon) handle the NameNode fail over.
To configure High Availability for your Hadoop cluster see:
Warning: The Ops Manager API is an experimental feature that is not fully implemented and could change without notice. Pivotal is developing an officially supported Ops Manager API that will replace many of these endpoints in a subsequent release.
Using Operations Manager and Installed Products
- Understanding the Ops Manager Interface
- Adding and Deleting Products
- Configuring Ops Manager Director for VMware vSphere
- Configuring Ops Manager Director for vCloud Air and vCloud
- Creating New Elastic Runtime User Accounts
- Logging into the Apps Manager
- Controlling Apps Manager User Activity with Environment Variables
Backing Up and Upgrading
Monitoring, Logging, and Troubleshooting
- Monitoring VMs in Pivotal CF
- Deploying Pivotal Ops Metrics
- Using SSL with a Self-Signed Certificate in Pivotal Ops Metrics
- Using Pivotal Ops Metrics
- Pivotal CF Troubleshooting Guide
- Troubleshooting Ops Manager for VMware vSphere
- Advanced Troubleshooting with the BOSH CLI
- Pivotal CF Security Overview and Policy
Document Type
Document
Abstract
This document summarizes the project that created a historical preservation plan for the Town of Warren, RI by the students in Professor Arnold Robinson’s graduate-level course: Historical Preservation Planning.
Recommended Citation
Community Partnerships Center, Roger Williams University, "Project Summary: Preservation Plan for Warren, RI" (2011). Preservation Project for Warren, Rhode Island. Paper 2.
Included in
Historic Preservation and Conservation Commons
Difference between revisions of "JDatabaseSQLAzure::queryBatch"
From Joomla! Documentation
Description
Execute a query batch.
public function queryBatch ( $abortOnError = true, $transactionSafe = false )
- Returns mixed A database resource if successful, false if not.
- Defined on line 729 of libraries/joomla/database/database/sqlazure.php
- Since
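A hypothetical usage sketch (the table and column names are invented) that assumes the batch of semicolon-separated queries has been set on the driver first:
<?php
// Get the database driver and set a batch of queries separated by semicolons.
$db = JFactory::getDbo();
$db->setQuery(
    'UPDATE #__example SET hits = hits + 1 WHERE id = 1;' .
    'UPDATE #__example SET hits = hits + 1 WHERE id = 2;'
);

// Execute the batch: abort on the first error, without transaction safety.
if ($db->queryBatch(true, false) === false)
{
    // Handle the failure, for example by logging $db->getErrorMsg().
}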
See also
JDatabaseSQLAzure::queryBatch source code on BitBucket
Class JDatabaseSQLAzure
Subpackage Database
- Other versions of JDatabaseSQLAzure::queryBatch
SeeAlso:JDatabaseSQLAzure::queryBatch [Edit See Also]
User contributed notes
<CodeExamplesForm /> | https://docs.joomla.org/index.php?title=API17:JDatabaseSQLAzure::queryBatch&diff=next&oldid=56542 | 2016-02-06T01:27:04 | CC-MAIN-2016-07 | 1454701145578.23 | [] | docs.joomla.org |
Difference between revisions of "Development Team" From Joomla! Documentation Revision as of 07:18, 15 March 2008 (view source)Willebil (Talk | contribs)← Older edit Latest revision as of 07:39, 29 November 2012 (view source) Tom Hutchison (Talk | contribs) m (using new category inclusion) (6 intermediate revisions by 4 users not shown)Line 1: Line 1: −[[Image:workgroups_development.jpg|right]]+{{archived|30952|Production Working Groups|cat=Working Groups,Development}}:+ −# Does architectural design for major and minor releases. + −# Does final code review before code gets submitted to the trunk. + −# Guards the general integrity of the Joomla! framework in terms of architectural concept implementation, coding standards and documentation.+ −# Does mentoring of development work group members. + −# Does development of the Joomla! framework code. + −# Does testing. + −# Manages the creation of proper technical documentation of the framework (concepts, APIs, etc.). +).+ − + −[[Category:Development]]+ | https://docs.joomla.org/index.php?title=Development_Team&diff=77943&oldid=3215 | 2016-02-06T01:26:58 | CC-MAIN-2016-07 | 1454701145578.23 | [] | docs.joomla.org |
How to Submit a Bug ReportHow to Submit a Bug Report
Bugs happen, and we’re here to support you. If you find a bug in Altis, we'll handle fixing it.
To help our product team understand your issue, there are some requirements for a bug report that must be met. This allows our team to know how to both identify the problem, and whether it has been resolved.
When filing a bug report please provide all of the following:
Test ResultsTest Results
Please write a brief description of the bug, including what you expect to happen and what is currently happening.
Reduced Test CaseReduced Test Case
Bugs should be isolated to a reduced test case, as this helps us to ensure that the bug exists in Altis, rather than your project. Reduced test cases should be created against a standard Altis local environment with just the minimal changes needed to reproduce the bug.
Please ensure you provide the reduced test case as the basis of the bug report. If we cannot confirm that the test case is within Altis, we may ask you to reduce the test case further to ensure it isn't caused by your custom code.
The reduced test case can be provided as a code snippet in the report itself or as a link to a GitHub gist if there are multiple files involved.
Customers on Enterprise support plans may wish to use their dedicated technical engineering support to handle bug isolation.
Steps to ReproduceSteps to Reproduce
If possible, please also provide detailed step-by-step instructions to reproduce the issue. This will ensure that we can replicate the problem, and avoids back-and-forth conversations.
Environment Setup and ConfigurationEnvironment Setup and Configuration
Please let us know what environments the bug is occurring in, what configuration you have in
composer.json under the
extra.altis property (making sure to redact any sensitive information like API keys), and whether you were able to reproduce the bug locally or not. | https://docs.altis-dxp.com/v12/guides/how-to-submit-a-bug-report/ | 2022-09-24T18:53:21 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.altis-dxp.com |
Spline IK
The Spline IK Constraint is not strictly an Inverse Kinematics method (i.e. IK Constraint), but rather a Forward.
For the precise list of options, see Spline IK constraint. This section is intended to introduce the workflow.
Roll Control.
Note
There are a couple of limitations to consider:
Bones do not inherit a curve’s tilt value to control their roll.
There is no way of automatically creating a twisting effect where a dampened rotation is inherited up the chain. Consider using Bendy Bones instead.
Offset Controls
The thickness of the bones in the chain is controlled using the constraint’s X. | https://docs.blender.org/manual/en/dev/animation/armatures/posing/bone_constraints/inverse_kinematics/spline_ik.html | 2022-09-24T20:22:23 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.blender.org |
React Native SDK
This reference guide describes how to use the Optimizely React SDK.
Version
For the current version number of this SDK, see SDK Compatibility Matrix.
Quickstart
To get up and running quickly, see the Quickstart.
Reference
For reference docs, see the left-hand navigation, or start off with Install SDK.
Source files
Updated over 1 year ago
Did this page help you? | https://docs.developers.optimizely.com/full-stack/docs/javascript-react-native-sdk | 2022-09-24T19:07:24 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.developers.optimizely.com |
Last updated: April 2018
Tablein (the “Company”) provides this notice to describe and explain its information practices and the choices you can make about the way your information is collected and used on the Company’s website, (the “Website”).
The Website acts as an easy, efficient, affordable online service for restaurant owners who have registered with and paid a fee to the Company (each, a “Restaurant”) to manage reservations, table inventory, and guest contact information from any web-enabled device and to provide restaurant consumers (each, a “Guest”) with a fast, friendly way to search and reserve tables from any web-enabled device.
Account Terms
To access certain functions of the Website, you must register as a unique User by providing certain current, complete, and accurate information about yourself. You have right to complete and correct your information or delete your account in 14 days by email – [email protected] As part of the registration process, you will select a password and provide a valid email address as a User ID. You also have to give us certain registration information,.
This policy applies to:
– visitors and users of Tablein’s websites, including tablein.com (“Site”) and app.tablein.com (“System”) - any other online properties of Tablein or third parties explained within the Consumer Terms of Use.
This policy explains:
– when and why we collect personal information from people who visit our website and make reservations via our System
– how we use the information
– the conditions under which we may disclose the information to others
– how we keep it secure
By using the Tablein website, you consent to the data practices described in this statement.
Any questions regarding this privacy policy should be sent by email to [email protected]
Who are we?
Tablein is a table reservation system that processes reservations on behalf of thousands of Venues across the world. Venues can take bookings through:
The restaurant’s own website and social media using Tablein’s booking technology
Third-party sites that use our reservations software Our registered address is: Internet Marketing Solutions
Registration code 301840551
K. Donelaičio g. 62, Kaunas, Lithuania
How do we collect information from you?
We collect information related to how you use our Services, including how you interact with our software (including, but not limited to, clicks, page views, searches and steps are taken to complete actions).
What type of information is collected from you?
In connection with your registration, booking, or making a payment to a restaurant on our website, Tablein collects personal information.
When you make a booking:
Tablein collects information such as:
telephone number
name
billing information taken for deposits, ticketing or holding credit card information for use in the case of no-shows (where applicable)
special requests
When you make a booking, we do not proactively collect personal information considered as sensitive personal information such as health-related information. However, our Sites include text boxes which are designed for you to provide information you wish on dining preferences.
When you sign up to create a profile:
This refers to when you click Sign Up on Tablein Site. Tablein collects the above information in the “When you make a booking” section and additional information such as:
- ZIP code or postcode
- account settings
- demographics
- current and past restaurant reservation details
- dining activity (e.g. frequency, cancellations, no-shows)
When you access our sites:
There is “Device Information” about your computer hardware that is automatically collected by Table System, we may receive your generic location (such as city or neighbourhood).
How is your information used?
This information is used by Tablein for the operation of services, to maintain quality of the service, and to provide general statistics regarding use of the Tablein Sites.
Please keep in mind that if you directly disclose personally identifiable information or personally sensitive data through Tablein public message boards, this information may be collected and used by others.
Note: Tablein does not read any of your private online communications.
We may use your information to:
- process reservations
- notify you of your restaurant reservations
- pay deposits
- provide you with new and improved features
- personalise your experiences on our Sites
- seek your feedback on the services we provide
- let you know of changes in our services or terms and conditions
- send you marketing communications that you have opted into and you may be interested in
- collect data, including without limitation from you, with the purpose of improving the booking service and to provide feedback to the restaurant
present a quality index for the restaurant industry
- collate and share aggregated or de-identified information at its absolute discretion, including but not limited to aggregate statistical data.
-, for example:
- send you information relating to our products and services, including reservation confirmations and updates, receipts, technical notices, updates, security alerts, and support and administrative messages.
- and/or, subject to the Your Choices section, below, and/or applicable law, communicate with you about contests, offers, promotions, rewards, upcoming events, and other news about products and services offered by Tablein, our parent companies, our subsidiaries, our affiliates, restaurants, and other business partners.
- Tablein system on any other digital platforms, then the restaurant, or partner, is responsible for tracking marketing preferences.?
Tablein does not sell, rent or lease its customer lists to third parties.
Tablein System, such as a restaurant reservation or making a payment to a restaurant through our System, all details pertaining to the reservation is delivered to the restaurant’s Tablein system.
The notifications sent by email or SMS via Tablein) (“customised service”) and to improve the restaurant’s table and shift planning.
In addition to providing you with more customised Tablein Tablein for them to use for their own direct marketing purposes unless you have requested us to do so.
Tablein websites will disclose your personal information, without notice, only if required to do so by law or in the good faith belief that such action is necessary to:
(a) comply with the law or comply with legal process served on Tablein or the site
(b) protect and defend the rights or property of Tablein
(c) act under exigent circumstances to protect the personal safety of users of Tablein, or the public..
Payment Card Information
To use certain services on our System we may require credit or debit card account information in order to:
– make reservations at certain restaurants
– make payments to certain restaurants
– pay a deposit to certain restaurants
By submitting your credit or debit card account information through our Sites, to the extent permitted by applicable law, you expressly consent to the sharing of your information with restaurants, third-party payment processors (e.g Stripe, Paypal, Paysera), and other third-party service providers, Tablein [email protected]
Your choices
You can choose whether or not you wish to receive information from the Restaurant. If you do not want to receive direct marketing communications from the Restaurant, Tablein in the event of a data breach?
The point of contact from Tablein is the Chief Operating Officer, Paulius Suksteris, who is also our Data Protection Officer. He will invoke the data control procedure as required. Then we will report the breach to the relevant supervisory authority within 72 hours of the organisation becoming aware of it.
Where is data stored?
Data is stored securely in data centres in Lithuania.
Tablein encourages you to review the privacy statements of websites you choose to link to from Tablein so that you can understand how those websites collect, use and share your information. Tablein is not responsible for the privacy statements or other content on websites outside of those websites owned and/or controlled by Tablein or its affiliated companies.
Changes to this Statement
Tablein will occasionally update this Privacy Policy to reflect company and customer feedback. Tablein encourages you to periodically review this Statement to be informed of how Tablein is protecting your information. This policy was last updated in April 2018
Contact Information
Tablein welcomes your comments regarding this Privacy Policy. If you believe that Tablein has not adhered to this Privacy Policy, please contact Tablein at [email protected] We will aim to use commercially reasonable efforts to promptly determine and remedy the problem.
Right to Erasure
If you wish to delete your data, please notify us at [email protected] | https://docs.tablein.com/privacy/ | 2022-09-24T20:36:56 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.tablein.com |
You're viewing Apigee Edge documentation.
View Apigee X documentation.
To participate in OAuth 2.0 flows on Apigee Edge, client apps must be registered.
What is registration?
Registration allows Apigee Edge (the authorization server) to uniquely identify your app. When you register your app, you receive back two keys: a client ID and client secret. The app needs these keys when negotiating for access tokens with the authorization server.
Quick steps
For development and testing, you can use one of the pre-registered developer apps to obtain keys. See Obtaining client credentials for details.
If you want to register a new app:
- Access the Developer Apps page, as described below.
Edge
To access the Developer Apps page using the Edge UI:
- Select Publish > Apps in the left navigation bar.
- Click + App
Classic Edge (Private Cloud)
To access the Developer Apps page using the Classic Edge UI:
- Sign in to, where ms-ip is the IP address or DNS name of the Management Server node.
- Select Publish > Developer Apps in the top navigation bar.
- Click + App.
- Fill out the form:
- Enter a name and display name for the app.
- Select a developer (you can choose one of the default developers or create your own).
- (Optional) Enter a callback URL. This is used for "three-legged" OAuth grant type flows. This is where Apigee Edge redirects the user after they complete authentication (login) with the resource server. It has to be a complete URL, so you might enter something like. For more about three-legged OAuth, see Implementing the authorization code grant type.
- Add an API product. You can select an existing product or create your own.
- Skip the custom attributes section for now.
- Click Save.
- Find your new app in the list of developer apps and select it.
- Click Show to see the Consumer ID (client ID) and Consumer Secret (client secret) values.
Deeper dive
For a more detailed discussion of app registration, see Register apps and manage API keys. If you'd like to know more about the role of API products, see What is an API product?. | https://docs.apigee.com/api-platform/security/registering-client-apps | 2022-09-24T19:53:47 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.apigee.com |
tags for the specified resource.
See also: AWS API Documentation
list>]
--arn (string)
The ARN of the specified resource for which to list tags.
-.
Tags -> (list)
The tags requested for the specified resource.
(structure)
A key-value pair that can be associated with a resource.
Key -> (string)The key of a tag. Tag keys are case-sensitive.
Value -> (string)The value of a tag. Tag values are case sensitive and can be null.
NextToken -> (string)
The token returned to indicate that there is more data available. | https://docs.aws.amazon.com/zh_cn/cli/latest/reference/alexaforbusiness/list-tags.html | 2022-09-24T21:07:34 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.aws.amazon.com |
Web
Apps Operations Extensions. Delete Process(IWebAppsOperations, String, String, String) void DeleteProcess (this Microsoft.Azure.Management.WebSites.IWebAppsOperations operations, string resourceGroupName, string name, string processId);
static member DeleteProcess : Microsoft.Azure.Management.WebSites.IWebAppsOperations * string * string * string -> unit
<Extension()> Public Sub DeleteProcess (operations As IWebAppsOperations, resourceGroupName As String, name As String, processId As String)
Parameters
- operations
- IWebAppsOperations
The operations group for this extension method.
- resourceGroupName
- System.String
Name of the resource group to which the resource belongs.
- name
- System.String
Site name.
- processId
- System.String
PID.
Remarks
Terminate a process by its ID for a web site, or a deployment slot, or specific scaled-out instance in a web site. | https://docs.azure.cn/zh-cn/dotnet/api/microsoft.azure.management.websites.webappsoperationsextensions.deleteprocess?view=azure-dotnet | 2022-09-24T20:30:00 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.azure.cn |
ReadyFlow overview: ADLS to ADLS Avro
You can use the ADLS to ADLS Avro ReadyFlow to move data between source and destination ADLS locations while converting the files into Avro format.
This ReadyFlow consumes JSON, CSV or Avro files from a source Azure Data Lake Service (ADLS) location, converts the files into Avro and writes them to the destination ADLS location. You can specify the source format, the source and target location as well as the schema to use for reading the source data. The ReadyFlow polls the source container for new files (it performs a listing periodically).
Moving data with an ADLS to ADLS Avro flow
You can use an ADLS to ADLS Avro data flow when you want to move data from a location in ADLS to another ADLS location, and at the same time convert the data to Avro format. You need to specify the source format, the source and target location as well as the schema to use for handling the data. Your flow can consume JSON, CSV or Avro files from the source ADLS location. It converts the files into Avro format and writes them to the destination ADLS location. You define and store the data processing schema in the Schema Registry on a Streams Messaging Data Hub cluster. The data flow parses the schema by looking up the schema name in the Schema Registry. | https://docs.cloudera.com/dataflow/cloud/readyflow-overview-adls-adls-avro/topics/cdf-readyflow-adls-adls-avro.html | 2022-09-24T20:35:36 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.cloudera.com |
Streams Replication Manager security overview
Configuring Streams Replication Manager (SRM) security involves enabling and setting
security-related features and properties for the SRM service (Driver and Service roles) and the
srm-control command line tool. This permits SRM to access source and target
clusters and replicate data between them. In addition, it also enables the subcomponents of SRM
that act as servers to function in a secure way.
Streams Replications Manager functions both as a client and a server. When SRM replicates data and connects to Kafka clusters it functions as a client. In addition however, some processes and components of SRM act as servers. For example the SRM Service role spins up a REST server. Similarly, the SRM Driver role has replication specific Connect REST servers which provide background functionality.
As a result of this, configuring security for SRM has two distinct aspects as you can configure security for SRM not only when it acts as client, but also when it acts as a server.
Server configuration
Security for SRM processes and components that act as servers can be configured by enabling the TLS/SSL and/or Kerberos feature toggles as well as configuring key and truststore related properties available in Cloudera Manager. For more information see, Enable TLS/SSL for the SRM serviceor Enable Kerberos authentication for the SRM service
Client configuration
Configuring security for SRM when it functions as a client involves enabling and setting
security-related features and properties for the SRM service and the
srm-control command line tool. This permits both the service and the tool
to access your source and target clusters and replicate data between them. The configuration
workflow and the steps you need to take differ for the service and tool.
- SRM service
Before you can start replicating data with SRM, you must define the clusters that take part in the replication process and add them to SRM’s configuration. When you define a cluster, you also specify the required keys, certificates, and credentials needed to access the clusters that you define. This means that when you define a cluster for replication, at the same time, you also configure the necessary security-related properties needed for SRM to connect as a client.
The clusters and their security properties can be defined in multiple ways. What method you use depends on the clusters in your deployment and the type of security these clusters use. For more information regarding the workflow and the available options for configuration, see Defining and adding clusters for replication.
- srm-control tool
- In addition to configuring security for the SRM service, you also need to configure security related properties for the
srm-controltool. This is because the
srm-controltool also functions as a client and must have access to the clusters in the deployment. The exact configuration steps you need to take depends on how your deployment is set up, how you configured the service, and the type of security used on the clusters. For more information regarding the workflow and the available options for configuration, see Configuring srm-control. | https://docs.cloudera.com/runtime/7.2.15/srm-security/topics/srm-security-overview.html | 2022-09-24T20:31:53 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.cloudera.com |
Set server defaults at an organization level
Server settings provide default configurations to new servers (and their agents) are brought on board. Organization administrators can customize these configurations and set specific defaults for each environment.
To set server defaults:
Under organization settings, select Servers.
Use the dropdown to choose the environment in which you want to apply the default (development, test or production). Check the box next to Set as default environment if you want to specify a default environment for future server configuration.
Use the dropdown to choose the Log Level. The default log level selection is ERROR.
Under Automatic server cleanup, enter the length of time that you would like servers to be offline before they are automatically cleaned up. The default value is 30 days.
A background task runs every five minutes to check if there is an organization with automatic server cleanup enabled.
If there are one or more servers with no activity received within the configured time frame, Contrast disables the servers automatically. They are no longer visible under Servers in the Contrast web interface.
Contrast keeps Information on vulnerabilities and attacks related to these servers even after they're disabled. Protect licenses from disabled servers return to the pool of licenses.
Under Assess, select which stacktraces should be captured (all, some or none).
Select the check box to Enable sampling for higher performance.
If Contrast sees the same URL being called multiple times, it analyzes the URL based on the the number of times specified in the Baseline setting.
Afterwards, if Contrast continues to see the same URL, it only checks it based on the Frequency setting.
Contrast retains samples for the number of seconds specified for the Window setting. After the time specified for the Window setting elapses, Contrast analyzes the URL again, according to the Baseline setting..
Under Protect, use the green toggle to enable Protect.
Important
Turning Protect on by default requires that Protect licenses are automatically applied to servers.
Administrators receive emails each time a server is licensed. As servers go up and down frequently, you may want to setup an email filter for any unwanted traffic.
Select the check box to Enable bot blocking.
Bot blocking blocks traffic from scrapers, attack tools and other unwanted automation.
To view blocked bot activity, under Attacks > Attack Events, use the Automated filter option.
Note
You can configure bot blocking in the YAML files for Java, .NET Framework, .NET Core, Ruby, and Python.
Select the checkbox to Enable output of Protect events to syslog.
Enter the IP Address and Port in the given fields. Use the dropdown to chose the Facility.
Click on the event severity badges, and use the dropdown to choose a message Severity level for each one. The defaults are:
1 - Alert for Exploited
4 - Warning for Blocked
5 - Notice for Probe
If allowed at a system level, you can check the box to Automatically apply licenses to new servers for Protect.
Turn the toggle on (green) to enable the Retain Library Data function. When enabled library details on the last server will be retained instead of being deleted from Contrast during server cleanup. | https://docs.contrastsecurity.com/en/organization-servers.html | 2022-09-24T19:07:15 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.contrastsecurity.com |
GDC Documentation Site
Description
The GDC Documentation Site1 is the official project documentation site of the GDC which provides users with detailed information on GDC tools and data.
Overview
The GDC Documentation Site1 contains in-depth information and tutorials on the usage of the GDC features. The website includes tutorials on using the Data Portal, API, Data Transfer Tool, Submission Portal, and Data Dictionary. A guide to interpreting the data available at the GDC, which includes guides to the file formats and bioinformatics pipelines used to process the data, is also included.
References
External Links
- N/A
Categories: Tool | https://docs.gdc.cancer.gov/Encyclopedia/pages/GDC_Documentation_Site/ | 2022-09-24T18:43:02 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.gdc.cancer.gov |
Qlik Sense
Qlik is a Business Intelligence and Analytics tool for data based decision making. Qlik allows you to make presentations, create dynamic charts and tables, and perform statistical analysis. Qlik can also be used as virtual database.
Features of Qlik:
With the help of direct discovery, Qlik allow users to perform Business discovery and visual analysis.
Mobile-ready, Roles, and Permissions.
Interactive dynamic apps and Dashboards.
Installation
Qlik Sense Desktop and Qlik Sense Cloud version is available for free, with Qlik account registration.
To get Qlik Sense Desktop (Windows only) or Qlik Sense Cloud visit Qlik.com.
Create a Qlik account.
Installation link will be sent to your email.
Open the email and follow the download guidelines.
After download is complete, run the file and follow the installation guide.
Once installation successfully completed you are ready to used your Qlik Sense application.
Examples
Below are a few examples of how to work with Qlik in combination with inmation. The examples will be based on inmation DemoData objects. To configure the demo data in your own environment, follow this guide.
Flat files
DataStudio allows file export, right click on the selected object and choose add item(s) to History Grid. Select the desired period and History grid will be displayed on your screen.
On the top left corner of the history grid use the export button to export the history data as a CSV, Excel or JSON file (Qlik supports all of these formats).
The chosen file will be opened in whatever application is set as default for that file format on your system. Save the file in an accessible location.
Open Qlik Sense application and click on My Computer tab. Choose the file and click on add data.
You are ready to work with your data.
Web API
Postman or Swagger and Web API can be used to generate the request URL, which in turn can be used to import data from DataStudio to Qlik Sense. Qlik Sense supports HTTP Get and Post methods.
Generate a URL using the Web API and Swagger, or Postman application.
Open Qlik Sense application, click on the New tile, which is located on the left side of the screen. Choose REST.
Paste the URL, select Get or Post method, authentication method and click Create.
The data is loaded and format is automatically recognized.
Web - Advanced Endpoint
Advanced Endpoints are a powerful way to embed your corporate logic directly in the Web API. Advanced Endpoint can be used for data export and Content-Type setting, for example to CSV instead of the usual JSON.
To create an advanced endpoint for reading historical data go to DataStudio, select the Web API Server object in the Server Model Context and click on Script Library which is located in the Object Properties panel. Add a new Script library by pressing the plus sign and name the library "my-lib". This simplified script example reads raw historical data of the process data item from the DemoData. Add this script to Lua Script Body section:
local inAPI = require('inmation.api') local lib = {} function lib.readhist(_, arg, req, hlp) arg = arg or {} local now = syslib.currenttime() local startTime if type(arg.starttime) == 'string' then -- Use starttime supplied by the caller startTime = arg.starttime else -- Fallback to a default relative starttime startTime = syslib.gettime(now-(5*60*1000)) end local endTime = syslib.gettime(now) local qry = { start_time = startTime, end_time = endTime, items = { { p = "/System/Core/Examples/Demo Data/Process Data/DC4711" } } } local respData = {} local res = inAPI:readrawhistoricaldata(qry, req, hlp) local rawData = res.data or {} local histData = rawData.historical_data or {} local queryData = histData.query_data or {} if #queryData > 0 then queryData = queryData[1] local items = queryData.items or {} if #items > 0 then items = items[1] for i,t in ipairs(items.t) do local value = items.v[i] local timestampISO = syslib.gettime(t) local row = ("%s,%s"):format(timestampISO, value) table.insert(respData, row) end end end local result = table.concat(respData, '\n') return hlp:createResponse(result, nil, 200, { ["Content-Type"] = "text/csv" }) end return lib
HTTP Get
Invoke the Lua script from Postman tool to test if everything is working. If your test was successful, go to your Qlik Sense application, choose New → REST, paste your URL → click ok. A preview tab should pop up, with your data transformed into data table.
Response body
2019-07-23T07:39:00.041Z,16.824450683594 2019-07-23T07:39:30.041Z,16.898840332031 2019-07-23T07:40:00.041Z,17.029431152344 2019-07-23T07:40:30.041Z,17.212634277344 2019-07-23T07:41:00.041Z,17.340710449219 2019-07-23T07:41:30.041Z,17.212640380859 2019-07-23T07:42:00.041Z,17.141229248047 2019-07-23T07:42:30.041Z,17.09482421875 2019-07-23T07:43:00.041Z,17.226696777344 2019-07-23T07:43:30.041Z,17.072930908203
For more information about Advanced Endpoints, visit the Web API Advanced Endpoints page. | https://docs.inmation.com/jumpstarts/1.84/bi-tools/qlik.html | 2022-09-24T19:20:05 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.inmation.com |
Some security incidents are best detected by a sequence of logs originating from different products, i.e. log types.
For example: an email attachment followed by a malware infection. In this example, the first event is detected by an email security product and the second event is detected by an endpoint security product. In this case 2 separate events can be defined using a correlated rule.
Correlated events are instrumental for reducing false-positives. By defining a more specific use case that contains 2 scenarios, the trigger can be more sensitive and reduce unwanted noise.
Configuring a correlated rule
To correlate events, we need to configure 2 search queries, each with its own trigger condition. For the rule to trigger, both conditions must be satisfied.
If you opt to join the queries, you must also select aggregation criteria and fields to join. When the queries are joined, the values of the join fields must match for the rule to trigger.
This tutorial assumes you are familiar with the process of configuring a single-query rule. It explains what’s different when correlating queries.
- Name the rule
- Add another query
- Query 1 & Query 2
- Joining the queries (optional)
- Trigger conditions
- Notification description
- Configure output per query
- Save it!
- Investigating correlated rules
Name the rule rule:
- OpenSearch Dashboards for each query independently. Click Preview in OpenSearch Dashboards rule to trigger.
- First, select the group by fields for each of the queries.
- You can select as many as 3 fields.
- The number of group by fields can differ between the queries.
Group by fields provide criteria for count aggregations.
As a result, the rule rule looks for values that are common to the field pairs selected for joining the queries. This means the rule will only trigger if it finds matching values for the join field pairs.
Joined fields are indicated by the link icon .
When the rule triggers, the event log will specify the matching values that triggered the rule rule has 2 queries, you can set a single condition for each of your queries, and a single severity.
As usual, each query can take a different condition for a field of your choice.
The rule rule.
Investigating correlated rules
You can view all recently triggered rules from your Summary dashboard.
When a correlated rule triggers, it writes 2 event logs - 1 per query. The event logs will be numbered 1/2 and 2/2, respectively. Each event log will have its own Investigate drilldown link.
The best way to begin investigating a correlated event is to filter for the matching join values to narrow the list. Hover over the field
logzio-alert-join-values and click to filter for its value. Then click the Investigate link to dive into the details. | https://docs.logz.io/user-guide/siem/security-correlated-queries/ | 2022-09-24T18:45:51 | CC-MAIN-2022-40 | 1664030333455.97 | [array(['https://dytvr9ot2sszz.cloudfront.net/logz-docs/correlated-alerts/add-another-query_aug2021.png',
'Add another query'], dtype=object)
array(['https://dytvr9ot2sszz.cloudfront.net/logz-docs/correlated-alerts/query1and2.png',
'empty alert form with 2 queries'], dtype=object)
array(['https://dytvr9ot2sszz.cloudfront.net/logz-docs/correlated-alerts/2-queries.png',
'rule with 2 queries'], dtype=object)
array(['https://dytvr9ot2sszz.cloudfront.net/logz-docs/correlated-alerts/correlated-join-queries.png',
'Add a group by field function for both queries'], dtype=object)
array(['https://dytvr9ot2sszz.cloudfront.net/logz-docs/correlated-alerts/correlated-trigger-conditions.png',
'Conditions and severity for correlated rule'], dtype=object)
array(['https://dytvr9ot2sszz.cloudfront.net/logz-docs/correlated-alerts/correlated-output-options.png',
'Notifications are auto-configured'], dtype=object)
array(['https://dytvr9ot2sszz.cloudfront.net/logz-docs/correlated-alerts/2-event-logs.png',
'Investigate correlated events'], dtype=object) ] | docs.logz.io |
RLEC 4.4 Release Notes (December 2016)
If you are upgrading from a previous version, make sure to review the upgrade instructions before beginning the upgrade process.
You can upgrade to this version from any 4.3 version. If you have a version older than 4.3 you must first upgrade to 4.3 and only then upgrade to this version.
New features
- Databases can now be configured to have multiple proxies for improved performance. Note that when you upgrade the cluster to this version and then upgrade existing databases, the databases will be updated to use the Single proxy policy and Dense shard placement policy. For additional details, refer to Multiple active proxies.
- Support for Redis version 3.2 added. When you install or upgrade the cluster the new default version for Redis databases will be 3.2 and when you upgrade the databases they will be updated to this version. If you would like to change the default version to Redis 3.0, refer to the instruction in the Upgrading databases If you would like to upgrade existing databases to the latest 3.0 minor version, refer to the Known Issues section below.
- The cluster can now be configured to support both private and public IPs to connect to database endpoints through both public and private networks. For additional details, refer to Private and Public Endpoints.
- rladmin status command output has been enhanced to include an indication on which node rladmin is running by adding the ‘*’ sign next to the node entry, and to show the host name of the machine the node is running on.
- Users can now be assigned security roles to control what level of the databases or cluster the users can view and/or edit.
Changes
- As result of adding the support for multiple proxies for a database, the following changes have been made:
- When you upgrade the cluster to this version and then upgrade existing databases, the databases will be updated to use the Single proxy policy and Dense shard placement policy.
- rladmin status command output has been updated.
- failover [db <db:id | name>] endpoint <id1 .. idN> and migrate [db <db:id | name> | node <origin node:id>] endpoint <id> target_node <id> commands are no longer relevant for databases using the single | all-master-shards | all-nodes proxy policy. Instead proxies can be bound or unbounded to databases as needed.
- New rladmin commands were added, such as bind and placement.
- RLEC has been updated to remove the need to use sudo in runtime. You still need to be root or use sudo when initially installing RLEC.
- You no longer need to be root or use sudo to run the rladmin command, now it is preferred to be a non-privileged user that is member of the redislabs group to run the command.
- All cluster services are now run using the supervisor mechanism. As a result starting, stopping and restarting RLEC services should be done using supervisorctl command from the OS CLI.
- Linux OS vm.swappiness is now advised to be set to zero, for more information see Disabling Swap in Linux.
Important fixed issues since 4.3.0
- RLEC-7542 - Add ability to create and manage role based user security
- RLEC-8283 - The cluster recovery process does not work properly when the cluster that needs to be recovered does not have a node with ID 1.
- RLEC-8284 - Add functionality to rladmin to mark a node as a quorum only node
- RLEC-8498 - Backup fails under rare conditions
- RLEC-8579 - rladmin supports uppercase for external_addr value
- RLEC-8656 - Fixed conflict with SELinux
- RLEC-8687 - Fixed issue where strong password requirements were not honored correctly.
- RLEC-8694 - DMC failed while creating DB with 75 (150 replicated) shards
- RLEC-8700 - Fixed issue with network split scenario
- RLEC-8833 - Fixed issue where in some cases endpoint were not getting new IPs after node replacement.
- RLEC-9069 - Fixed issue related to RHEL 7 and IPv6.
- RLEC-9156 - Fixed issue causing a full resync of data when a source or destination failure occurred.
- RLEC-9173 - Issue with writing data after master and replica failed
- RLEC-9235 - Issue with SSL connection error and self signed certificates
- RLEC-9491 - Fixed alerting issue due to incorrect measurement
- RLEC-9534 - Fixed issue with node remove command after RLEC uninstalled
- RLEC-9658 - Failed to import backup file from FTP server.
- RLEC-9737 - Fixed issue with backup process to use ephemeral storage when needed
- RLEC-9761 - UI had incorrect value increments
- RLEC-9827 - Server with a high number of cores and running RHEL can have issues running systune.sh
- RLEC-9853 - Fixed issues with logrotate on RHEL 7.1 so it runs as non-privileged user
- RLEC-9858 - If proxy crashed, in some cases this would prevent completion of redis failover process
- RLEC-9893 - DB recovery process doesn’t recognize original rack name when in uppercase
- RLEC-9905 - x.509 certificate signed by custom CA cannot be loaded in UI
- RLEC-9925 - master endpoint and shards goes down if co-hosted with master of the cluster and the node goes down (single proxy policy)
- RLEC-9926 - Master shard could remain down if on the same node as the master of the cluster and the entire node goes down
- RLEC-10340 - Fixed a typo that crashed rladmin status output in some cases
Changes in 4.4.2-42:
- RLEC-11941 - Upgrade to 4.4.2-35 on RHEL6 - leash failed when python2.6 is installed
- RLEC-11994 - RLEC 4.4.2-35: the UI doesn’t display the DBs with replication
Changes in 4.4.2 - 49
- RLEC-11209 - Unable to run upgrade due to running_actions check
- RLEC-12647 - Backup to S3 with periods in bucket name are failing in some cases.2 even if you updated the cluster’s Redis default version to 3.0. Workaround: If you would like to cancel the old version indication in rladmin status without upgrading the Redis version to 3.2 you should run the rladmin upgrade db command with the keep_current_version flag which will ensure the database is upgraded to the latest 3.0 version supported by RLEC.
- Issue: RLEC-9200 - in a database configured with multiple proxies, if a client sends the MONITOR, CLIENT LIST or CLIENT KILL commands, only commands from clients connected from the same proxy are returned instead of all commands from all connections. Workaround: If you would like to get a result across all clients, you need to send the monitor command to all proxies and aggregate them.
- Issue: RLEC-9296 - Different actions in the cluster, like node failure or taking a node offline, might cause the Proxy policy to change Manual. Workaround: You can use the rladmin bind [db <db:id | name>] endpoint <id> policy <single | all-master-shards | all-nodes> command to set the policy back to the required policy, which will ensure all needed proxies are bounded. Note that existing client connections might disconnected as result of this process.
- Issue: RLEC-8787 - In some cases when using the replica-of feature, if the source database(s) are larget than the target database, the memory limit on the target database is not enforced and that used memory of the target database can go over the memory limit set. Workaround: You should make sure that the total memory limit of all source databases is not bigger than the memory limit of the target database.
- Issue: RLEC-8487 - Some Redis processes stay running after purging RLEC from the machine and causes an attempt to reinstall RLEC to fail. Workaround: Run the purge process for a second time and ensure that the Redis processes were removed.
- Issue: RLEC-8747 - When upgrading to this version, if the UI is open in the browser the UI might not work properly after the upgrade. Workaround: Refresh the browser and the UI will return to work properly.
- “replica buffer” being exceeded. In this case, you will often see the status of the Replica Of process display as “Syncing”. Workaround: You must manually increase the “replica buffer” size through rladmin. To find the appropriate buffer size please contact support at: [email protected].
- Issue: In a cluster that is configured to support rack-zone awareness, if the user forces migration of a master or replica shard through rladmin to a node on the same rack-zone as its corresponding master or replica.
- Issue: DNS doesn’t change after having removed the external IP address. Workaround: Unbind IP from affected node and then bind it back.
- Issue: CCS gets an error and won’t start if /var/opt/redislabs/persist/ does not exist. Workaround: Make sure this directory is not deleted and continues to exist. | https://docs.redis.com/latest/rs/release-notes/legacy-release-notes/rlec-4-4-dec-2016/ | 2022-09-24T20:10:22 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.redis.com |
Homebuilt printer
You can select “Any Generic Printer” from the list of printers and enter your own goals and settings. If you want the printer to have a special image in the panel, you must at least have the "Hobby" plan as it will be possible to choose your own image which will be displayed in the panel. You select the image your printer should have by clicking on the gear on your printer and going into "Edit printer". There you will find a menu on the left side where you can upload your own image.
| https://docs.simplyprint.io/article/homebuilt-printer | 2022-09-24T19:24:19 | CC-MAIN-2022-40 | 1664030333455.97 | [array(['https://cdn.simplyprint.io/wiki/i/d8bb387c76971e8db3cbae6e2d6e01c0c1337f76.png',
None], dtype=object) ] | docs.simplyprint.io |
MD5 Checksum
Description
The MD5 Checksum is a computer algorithm that calculates and verifies 128-bit MD5 hashes generated from a specific file.
Overview
MD5 Checksum modification.
Every file in the GDC contains an md5sum to ensure file integrity.
When downloading/uploading files using the gdc-client, an md5 checksum will occur to ensure that the file has been transferred successfully and with integrity to the original file.
References
- N/A
External Links
- N/A
Categories: General | https://docs.gdc.cancer.gov/Encyclopedia/pages/MD5_Checksum/ | 2022-09-24T19:33:39 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.gdc.cancer.gov |
SpatialMaterial¶
Inherits: Material < Resource < Reference < Object
Default 3D rendering material.
Description¶
This provides a default material with a wide variety of rendering features and properties without the need to write shader code. See the tutorial below for details.
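For orientation only (this snippet is not part of the upstream reference), a minimal GDScript sketch of building a SpatialMaterial in code and assigning it to a mesh; the MeshInstance node path is an assumed placeholder:

    extends Spatial

    func _ready():
        # Build a basic material entirely from code, no shader needed.
        var mat = SpatialMaterial.new()
        mat.albedo_color = Color(0.8, 0.1, 0.1)
        mat.metallic = 0.2
        mat.roughness = 0.6
        # Override whatever material the mesh already has.
        $MeshInstance.material_override = mat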
Tutorials¶
Properties¶
Methods¶
Enumerations¶
enum TextureParam:
TEXTURE_ALBEDO = 0 --- Texture specifying per-pixel color.
TEXTURE_METALLIC = 1 --- Texture specifying per-pixel metallic value.
TEXTURE_ROUGHNESS = 2 --- Texture specifying per-pixel roughness value.
TEXTURE_EMISSION = 3 --- Texture specifying per-pixel emission color.
TEXTURE_NORMAL = 4 --- Texture specifying per-pixel normal vector.
TEXTURE_RIM = 5 --- Texture specifying per-pixel rim value.
TEXTURE_CLEARCOAT = 6 --- Texture specifying per-pixel clearcoat value.
TEXTURE_FLOWMAP = 7 --- Texture specifying per-pixel flowmap direction for use with anisotropy.
TEXTURE_AMBIENT_OCCLUSION = 8 --- Texture specifying per-pixel ambient occlusion value.
TEXTURE_DEPTH = 9 --- Texture specifying per-pixel depth.
TEXTURE_SUBSURFACE_SCATTERING = 10 --- Texture specifying per-pixel subsurface scattering.
TEXTURE_TRANSMISSION = 11 --- Texture specifying per-pixel transmission color.
TEXTURE_REFRACTION = 12 --- Texture specifying per-pixel refraction strength.
TEXTURE_DETAIL_MASK = 13 --- Texture specifying per-pixel detail mask blending value.
TEXTURE_DETAIL_ALBEDO = 14 --- Texture specifying per-pixel detail color.
TEXTURE_DETAIL_NORMAL = 15 --- Texture specifying per-pixel detail normal.
TEXTURE_MAX = 16 --- Represents the size of the TextureParam enum.
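As an illustrative sketch (not taken from the upstream reference), the TextureParam constants are what SpatialMaterial's set_texture and get_texture methods expect; the asset path below is an assumed placeholder:

    func _apply_albedo_texture(mat: SpatialMaterial) -> void:
        var albedo = load("res://textures/wood_albedo.png")  # assumed asset path
        mat.set_texture(SpatialMaterial.TEXTURE_ALBEDO, albedo)
        # get_texture() returns the same Texture resource that was assigned.
        var check = mat.get_texture(SpatialMaterial.TEXTURE_ALBEDO)
        assert(check == albedo)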
enum DetailUV:
DETAIL_UV_1 = 0 --- Use UV with the detail texture.
DETAIL_UV_2 = 1 --- Use UV2 with the detail texture.
enum Feature:
FEATURE_TRANSPARENT = 0 --- Constant for setting flags_transparent.
FEATURE_EMISSION = 1 --- Constant for setting emission_enabled.
FEATURE_NORMAL_MAPPING = 2 --- Constant for setting normal_enabled.
FEATURE_RIM = 3 --- Constant for setting rim_enabled.
FEATURE_CLEARCOAT = 4 --- Constant for setting clearcoat_enabled.
FEATURE_ANISOTROPY = 5 --- Constant for setting anisotropy_enabled.
FEATURE_AMBIENT_OCCLUSION = 6 --- Constant for setting ao_enabled.
FEATURE_DEPTH_MAPPING = 7 --- Constant for setting depth_enabled.
FEATURE_SUBSURACE_SCATTERING = 8 --- Constant for setting subsurf_scatter_enabled.
FEATURE_TRANSMISSION = 9 --- Constant for setting transmission_enabled.
FEATURE_REFRACTION = 10 --- Constant for setting refraction_enabled.
FEATURE_DETAIL = 11 --- Constant for setting detail_enabled.
FEATURE_MAX = 12 --- Represents the size of the Feature enum.
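A hedged sketch of toggling features through the Feature constants; set_feature is equivalent to setting the matching *_enabled property:

    func _make_emissive(mat: SpatialMaterial) -> void:
        # Same effect as mat.emission_enabled = true.
        mat.set_feature(SpatialMaterial.FEATURE_EMISSION, true)
        mat.emission = Color(0.2, 0.6, 1.0)
        print(mat.get_feature(SpatialMaterial.FEATURE_EMISSION))  # prints True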
enum BlendMode:
BLEND_MODE_MIX = 0 --- Default blend mode. The color of the object is blended over the background based on the object's alpha value.
BLEND_MODE_ADD = 1 --- The color of the object is added to the background.
BLEND_MODE_SUB = 2 --- The color of the object is subtracted from the background.
BLEND_MODE_MUL = 3 --- The color of the object is multiplied by the background.

enum Flags:

FLAG_UNSHADED = 0 --- No lighting is used on the object. Color comes directly from ALBEDO.
FLAG_USE_VERTEX_LIGHTING = 1 --- Lighting is calculated per-vertex rather than per-pixel. This can be used to increase the speed of the shader at the cost of quality.
FLAG_DISABLE_DEPTH_TEST = 2 --- Disables the depth test, so this object is drawn on top of all others. However, objects drawn after it in the draw order may cover it.
FLAG_ALBEDO_FROM_VERTEX_COLOR = 3 --- Set ALBEDO to the per-vertex color specified in the mesh.
FLAG_SRGB_VERTEX_COLOR = 4 --- Vertex color is in sRGB space and needs to be converted to linear. Only applies in the GLES3 renderer.
FLAG_USE_POINT_SIZE = 5 --- Uses point size to alter the size of primitive points. Also changes the albedo texture lookup to use POINT_COORD instead of UV.
FLAG_FIXED_SIZE = 6 --- Object is scaled by depth so that it always appears the same size on screen.
FLAG_BILLBOARD_KEEP_SCALE = 7 --- Shader will keep the scale set for the mesh. Otherwise the scale is lost when billboarding. Only applies when params_billboard_mode is BILLBOARD_ENABLED.
FLAG_UV1_USE_TRIPLANAR = 8 --- Use triplanar texture lookup for all texture lookups that would normally use
UV.
FLAG_UV2_USE_TRIPLANAR = 9 --- Use triplanar texture lookup for all texture lookups that would normally use
UV2.
FLAG_AO_ON_UV2 = 11 --- Use UV2 coordinates to look up from the ao_texture.
FLAG_EMISSION_ON_UV2 = 12 --- Use UV2 coordinates to look up from the emission_texture.
FLAG_USE_ALPHA_SCISSOR = 13 --- Use alpha scissor. Set by params_use_alpha_scissor.
FLAG_TRIPLANAR_USE_WORLD = 10 --- Use world coordinates in the triplanar texture lookup instead of local coordinates.
FLAG_ALBEDO_TEXTURE_FORCE_SRGB = 14 --- Forces the shader to convert albedo from sRGB space to linear space.
FLAG_DONT_RECEIVE_SHADOWS = 15 --- Disables receiving shadows from other objects.
FLAG_DISABLE_AMBIENT_LIGHT = 17 --- Disables receiving ambient light.
FLAG_ENSURE_CORRECT_NORMALS = 16 --- Ensures that normals appear correct, even with non-uniform scaling.
FLAG_USE_SHADOW_TO_OPACITY = 18 --- Enables the shadow to opacity feature.
FLAG_ALBEDO_TEXTURE_SDF = 19 --- Enables signed distance field rendering shader.
FLAG_MAX = 20 --- Represents the size of the Flags enum.

enum BillboardMode:

BILLBOARD_DISABLED = 0 --- Billboard mode is disabled.
BILLBOARD_ENABLED = 1 --- The object's Z axis will always face the camera.
BILLBOARD_FIXED_Y = 2 --- The object's X axis will always face the camera.
BILLBOARD_PARTICLES = 3 --- Used for particle systems when assigned to Particles and CPUParticles nodes. Enables particles_anim_* properties.
The ParticlesMaterial.anim_speed or CPUParticles.anim_speed should also be set to a positive value for the animation to play.
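For illustration, a minimal sketch of selecting a billboard mode from code; params_billboard_keep_scale mirrors FLAG_BILLBOARD_KEEP_SCALE described above:

    func _make_billboard_material() -> SpatialMaterial:
        var mat = SpatialMaterial.new()
        mat.params_billboard_mode = SpatialMaterial.BILLBOARD_ENABLED
        # Keep the mesh's own scale instead of losing it while billboarding.
        mat.params_billboard_keep_scale = true
        return mat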
enum TextureChannel:
TEXTURE_CHANNEL_RED = 0 --- Used to read from the red channel of a texture.
TEXTURE_CHANNEL_GREEN = 1 --- Used to read from the green channel of a texture.
TEXTURE_CHANNEL_BLUE = 2 --- Used to read from the blue channel of a texture.
TEXTURE_CHANNEL_ALPHA = 3 --- Used to read from the alpha channel of a texture.
TEXTURE_CHANNEL_GRAYSCALE = 4 --- Currently unused.
enum EmissionOperator:
EMISSION_OP_ADD = 0 --- Adds the emission color to the color from the emission texture.
EMISSION_OP_MULTIPLY = 1 --- Multiplies the emission color by the color from the emission texture.
enum DistanceFadeMode:
DISTANCE_FADE_DISABLED = 0 --- Do not use distance fade.
DISTANCE_FADE_PIXEL_ALPHA = 1 --- Smoothly fades the object out based on each pixel's distance from the camera using the alpha channel.
DISTANCE_FADE_PIXEL_DITHER = 2 --- Smoothly fades the object out based on each pixel's distance from the camera using a dither approach. Dithering discards pixels based on a set pattern to smoothly fade without enabling transparency. On certain hardware this can be faster than DISTANCE_FADE_PIXEL_ALPHA.
DISTANCE_FADE_OBJECT_DITHER = 3 --- Smoothly fades the object out based on the object's distance from the camera using a dither approach. Dithering discards pixels based on a set pattern to smoothly fade without enabling transparency. On certain hardware this can be faster than DISTANCE_FADE_PIXEL_ALPHA.
enum AsyncMode:

ASYNC_MODE_VISIBLE = 0 --- The real conditioned shader needed for each situation will be sent for background compilation. In the meantime, a very complex shader that adapts to every situation will be used ("ubershader"). This ubershader is much slower to render, but will keep the game running without stalling to compile. Once shader compilation is done, the ubershader is replaced by the traditional optimized shader.
ASYNC_MODE_HIDDEN = 1 --- Anything with this material applied won't be rendered while this material's shader is being compiled.
This is useful for optimization, in cases where the visuals won't suffer from having certain non-essential elements missing during the short time their shaders are being compiled.
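A short, hedged sketch of picking an async compilation mode (only meaningful when the project uses asynchronous shader compilation):

    func _hide_until_compiled(mat: SpatialMaterial) -> void:
        # Objects using this material are skipped while its shader compiles.
        mat.async_mode = SpatialMaterial.ASYNC_MODE_HIDDEN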
Property Descriptions¶
The material's base color.
Texture to multiply by albedo_color. Used for basic texturing of objects.
The strength of the anisotropy effect. This is multiplied by anisotropy_flowmap's alpha channel if a texture is defined there and the texture contains an alpha channel.
If
true, anisotropy is enabled. Anisotropy changes the shape of the specular blob and aligns it to tangent space. This is useful for brushed aluminium and hair reflections.
Note: Mesh tangents are needed for anisotropy to work. If the mesh does not contain tangents, the anisotropy effect will appear broken.
Note: Material anisotropy should not to be confused with anisotropic texture filtering. Anisotropic texture filtering can be enabled by selecting a texture in the FileSystem dock, going to the Import dock, checking the Anisotropic checkbox then clicking Reimport. The anisotropic filtering level can be changed by adjusting ProjectSettings.rendering/quality/filters/anisotropic_filter_level.
Texture that offsets the tangent map for anisotropy calculations and optionally controls the anisotropy effect (if an alpha channel is present). The flowmap texture is expected to be a derivative map, with the red channel representing distortion on the X axis and green channel representing distortion on the Y axis. Values below 0.5 will result in negative distortion, whereas values above 0.5 will result in positive distortion.
If present, the texture's alpha channel will be used to multiply the strength of the anisotropy effect. Fully opaque pixels will keep the anisotropy effect's original strength while fully transparent pixels will disable the anisotropy effect entirely. The flowmap texture's blue channel is ignored.
If
true, ambient occlusion is enabled. Ambient occlusion darkens areas based on the ao_texture.
Amount that ambient occlusion affects lighting from lights. If
0, ambient occlusion only affects ambient light. If
1, ambient occlusion affects lights just as much as it affects ambient light. This can be used to impact the strength of the ambient occlusion effect, but typically looks unrealistic.
If
true, use
UV2 coordinates to look up from the ao_texture.
Texture that defines the amount of ambient occlusion for a given point on the object.
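A minimal sketch of a typical ambient occlusion setup using the properties above; the asset path is an assumed placeholder:

    func _make_ao_material() -> SpatialMaterial:
        var mat = SpatialMaterial.new()
        mat.ao_enabled = true
        mat.ao_texture = load("res://textures/brick_ao.png")  # assumed asset path
        mat.ao_light_affect = 0.0  # only darken ambient light
        mat.ao_on_uv2 = true       # sample the AO map with the UV2 layer
        return mat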
TextureChannel ao_texture_channel
Specifies the channel of the ao_texture in which the ambient occlusion information is stored.
AsyncMode async_mode
If ProjectSettings.rendering/gles3/shaders/shader_compilation_mode is Synchronous (with or without cache), this determines how this material must behave in regards to asynchronous shader compilation.
ASYNC_MODE_VISIBLE is the default and the best for most cases.
Sets the strength of the clearcoat effect. Setting to
0 looks the same as disabling the clearcoat effect.
If
true, clearcoat rendering is enabled. Adds a secondary transparent pass to the lighting calculation resulting in an added specular blob. This makes materials appear as if they have a clear layer on them that can be either glossy or rough.
Note: Clearcoat rendering is not visible if the material has flags_unshaded set to
true.
Sets the roughness of the clearcoat pass. A higher value results in a smoother clearcoat while a lower value results in a rougher clearcoat.
Texture that defines the strength of the clearcoat effect and the glossiness of the clearcoat. Strength is specified in the red channel while glossiness is specified in the green channel.
If
true, the shader will read depth texture at multiple points along the view ray to determine occlusion and parrallax. This can be very performance demanding, but results in more realistic looking depth mapping.
If
true, depth mapping is enabled (also called "parallax mapping" or "height mapping"). See also normal_enabled.
Note: Depth mapping is not supported if triplanar mapping is used on the same material. The value of depth_enabled will be ignored if uv1_triplanar is enabled.
If
true, direction of the binormal is flipped before using in the depth effect. This may be necessary if you have encoded your binormals in a way that is conflicting with the depth effect.
If
true, direction of the tangent is flipped before using in the depth effect. This may be necessary if you have encoded your tangents in a way that is conflicting with the depth effect.
Number of layers to use when using depth_deep_parallax and the view direction is perpendicular to the surface of the object. A higher number will be more performance demanding while a lower number may not look as crisp.
Number of layers to use when using depth_deep_parallax and the view direction is parallel to the surface of the object. A higher number will be more performance demanding while a lower number may not look as crisp.
Scales the depth offset effect. A higher number will create a larger depth.
Texture used to determine depth at a given pixel. Depth is always stored in the red channel.
Texture that specifies the color of the detail overlay.
Specifies how the detail_albedo should blend with the current
ALBEDO. See BlendMode for options.
If
true, enables the detail overlay. Detail is a second texture that gets mixed over the surface of the object based on detail_mask. This can be used to add variation to objects, or to blend between two different albedo/normal textures.
Texture used to specify how the detail textures get blended with the base textures.
Texture that specifies the per-pixel normal of the detail overlay.
Note: Godot expects the normal map to use X+, Y+, and Z+ coordinates. See this page for a comparison of normal map coordinates expected by popular engines.
Specifies whether to use
UV or
UV2 for the detail layer. See DetailUV for options.
Distance at which the object appears fully opaque.
Note: If
distance_fade_max_distance is less than
distance_fade_min_distance, the behavior will be reversed. The object will start to fade away at
distance_fade_max_distance and will fully disappear once it reaches
distance_fade_min_distance.
Distance at which the object starts to become visible. If the object is less than this distance away, it will be invisible.
Note: If
distance_fade_min_distance is greater than
distance_fade_max_distance, the behavior will be reversed. The object will start to fade away at
distance_fade_max_distance and will fully disappear once it reaches
distance_fade_min_distance.
DistanceFadeMode distance_fade_mode
Specifies which type of fade to use. Can be any of the DistanceFadeModes.
The emitted light's color. See emission_enabled.
If
true, the body emits light. Emitting light makes the object appear brighter. The object can also cast light on other objects if a GIProbe or BakedLightmap is used and this object is used in baked lighting.
The emitted light's strength. See emission_enabled.
Use
UV2 to read from the emission_texture.
EmissionOperator emission_operator
Sets how emission interacts with emission_texture. Can either add or multiply. See EmissionOperator for options.
Texture that specifies how much surface emits light at a given point.
Forces a conversion of the albedo_texture from sRGB space to linear space.
Enables signed distance field rendering shader.
If
true, the object receives no ambient light.
If
true, the object receives no shadow that would otherwise be cast onto it.
If
true, the shader will compute extra operations to make sure the normal stays correct when using a non-uniform scale. Only enable if using non-uniform scaling.
If
true, the object is rendered at the same size regardless of distance.
If
true, depth testing is disabled and the object will be drawn in render order.
If
true, transparency is enabled on the body. See also params_blend_mode.
If
true, the object is unaffected by lighting.
If
true, render point size can be changed.
Note: This is only effective for objects whose geometry is point-based rather than triangle-based. See also params_point_size.
If
true, enables the "shadow to opacity" render mode where lighting modifies the alpha so shadowed areas are opaque and non-shadowed areas are transparent. Useful for overlaying shadows onto a camera feed in AR.
If
true, lighting is calculated per vertex rather than per pixel. This may increase performance on low-end devices, especially for meshes with a lower polygon count. The downside is that shading becomes much less accurate, with visible linear interpolation between vertices that are joined together. This can be compensated by ensuring meshes have a sufficient level of subdivision (but not too much, to avoid reducing performance). Some material features are also not supported when vertex shading is enabled.
See also ProjectSettings.rendering/quality/shading/force_vertex_shading which can globally enable vertex shading on all materials.
Note: By default, vertex shading is enforced on mobile platforms by ProjectSettings.rendering/quality/shading/force_vertex_shading's
mobile override.
Note: flags_vertex_lighting has no effect if flags_unshaded is
true.
If
true, triplanar mapping is calculated in world space rather than object local space. See also uv1_triplanar.
A high value makes the material appear more like a metal. Non-metals use their albedo as the diffuse color and add diffuse to the specular reflection. With non-metals, the reflection appears on top of the albedo color. Metals use their albedo as a multiplier to the specular reflection and set the diffuse color to black resulting in a tinted reflection. Materials work better when fully metal or fully non-metal, values between
0 and
1 should only be used for blending between metal and non-metal sections. To alter the amount of reflection use roughness.
Sets the size of the specular lobe. The specular lobe is the bright spot that is reflected from light sources.
Note: Unlike metallic, this is not energy-conserving, so it should be left at
0.5 in most cases. See also roughness.
Texture used to specify metallic for an object. This is multiplied by metallic.
TextureChannel metallic_texture_channel
Specifies the channel of the metallic_texture in which the metallic, normal mapping is enabled.
The strength of the normal map's effect.
Texture used to specify the normal at a given pixel. The
normal_texture only uses the red and green channels; the blue and alpha channels are ignored. The normal read from
normal_texture is oriented around the surface normal provided by the Mesh.
Note: The mesh must have both normals and tangents defined in its vertex data. Otherwise, the normal map won't render correctly and will only appear to darken the whole surface. If creating geometry with SurfaceTool, you can use SurfaceTool.generate_normals and SurfaceTool.generate_tangents to automatically generate normals and tangents respectively.
Note: Godot expects the normal map to use X+, Y+, and Z+ coordinates. See this page for a comparison of normal map coordinates expected by popular engines.
Threshold at which the alpha scissor will discard values.
If
true, the shader will keep the scale set for the mesh. Otherwise the scale is lost when billboarding. Only applies when params_billboard_mode is BILLBOARD_ENABLED.
BillboardMode params_billboard_mode
Controls how the object faces the camera. See BillboardMode.
Note: Billboard mode is not suitable for VR because the left-right vector of the camera is not horizontal when the screen is attached to your head instead of on the table. See GitHub issue #41567 for details.
The material's blend mode.
Note:.
Currently unimplemented in Godot.
The point size in pixels. See flags_use_point_size.
SpecularMode params_specular_mode
The method for rendering the specular blob. See SpecularMode.
If
true, the shader will discard all pixels that have an alpha value less than params_alpha_scissor_threshold.
The number of horizontal frames in the particle sprite sheet. Only enabled when using BILLBOARD_PARTICLES. See params_billboard_mode.
If
true, particle animations are looped. Only enabled when using BILLBOARD_PARTICLES. See params_billboard_mode.
The number of vertical frames in the particle sprite sheet. Only enabled when using BILLBOARD_PARTICLES. See params_billboard_mode.
Distance over which the fade effect takes place. The larger the distance the longer it takes for an object to fade.
If
true, the proximity fade effect is enabled. The proximity fade effect fades out each pixel based on its distance to another object.
If
true, the refraction effect is enabled. Refraction distorts transparency based on light from behind the object. When using the GLES3 backend, the material's roughness value will affect the blurriness of the refraction. Higher roughness values will make the refraction look blurrier.
The strength of the refraction effect. Higher values result in a more distorted appearance for the refraction.
Texture that controls the strength of the refraction per-pixel. Multiplied by refraction_scale.
TextureChannel refraction_texture_channel
Specifies the channel of the refraction_texture in which the refraction.
Sets the strength of the rim lighting effect.
If
true, rim effect is enabled. Rim lighting increases the brightness at glancing angles on an object.
Note: Rim lighting is not visible if the material has flags_unshaded set to
true.
Texture used to set the strength of the rim lighting effect per-pixel. Multiplied by rim. used to control the roughness per-pixel. Multiplied by roughness.
TextureChannel roughness, subsurface scattering is enabled. Emulates light that penetrates an object's surface, is scattered, and then emerges.
The strength of the subsurface scattering effect.
Texture used to control the subsurface scattering strength. Stored in the red texture channel. Multiplied by subsurf_scatter_strength.
The color used by the transmission effect. Represents the light passing through an object.
If
true, the transmission effect is enabled.
Texture used to control the transmission effect per-pixel. Added to transmission.
How much to offset the
UV coordinates. This amount will be added to
UV in the vertex function. This can be used to offset a texture.
How much to scale the
UV coordinates. This is multiplied by
UV in the vertex function.
If
true, instead of using
U.
How much to offset the
UV2 coordinates. This amount will be added to
UV2 in the vertex function. This can be used to offset a texture.
How much to scale the
UV2 coordinates. This is multiplied by
UV2 in the vertex function.
If
true, instead of using
UV.
If
true, the model's vertex colors are processed as sRGB mode.
If
true, the vertex color is used as albedo color.
Method Descriptions¶
Returns
true, if the specified Feature is enabled.
Returns
true, if the specified flag is enabled. See Flags enumerator for options.
Texture get_texture ( TextureParam param ) const
Returns the Texture associated with the specified TextureParam.
If
true, enables the specified Feature. Many features that are available in
SpatialMaterials need to be enabled before use. This way the cost for using the feature is only incurred when specified. Features can also be enabled by setting the corresponding member to
true.
If
true, enables the specified flag. Flags are optional behavior that can be turned on and off. Only one flag can be enabled at a time with this function, the flag enumerators cannot be bit-masked together to enable or disable multiple flags at once. Flags can also be enabled by setting the corresponding member to
true. See Flags enumerator for options.
void set_texture ( TextureParam param, Texture texture )
Sets the Texture to be used by the specified TextureParam. This function is called when setting members ending in
*_texture. | https://docs.godotengine.org/pt_BR/stable/classes/class_spatialmaterial.html | 2022-09-24T19:28:21 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.godotengine.org |
Configuration data for DOI minting
hres:doi:minting:safety-application-hostname
Expected type: String
This is used as a safety check to prevent the minting of DOIs on development and test systems. Set this as equal to the internal hostname in order to allow for minting of DOIs in the repository.
The internal hostname is accessible at:
hres:doi:minting:doi-prefix
Expected type: String – Usual Form: “10.12345/ABC.”
The DOI prefix for the repository, acquired from the institution/datacite. Datacite provides the numbers as the unique identifier, institutions will usually request something after the slash, e.g. their university acronym.
hres:doi:minting:doi-prefix-for-update
Expected type: Array of Strings
The DOI prefixes to keep updated if they change. Some institutions may have old DOI prefixes they wish to keep updated, at the very minimum this array must include the DOI prefix as shown above.
hres:doi:minting:service-url
Expected type: String
The URL of the minting service, used to send requests to datacite in order to update metadata or mint a DOI. | https://docs.haplo.org/app/repository/setup/configuration-data/required/doi | 2022-09-24T20:36:27 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.haplo.org |
Get group
/api/v0-user/group/[groupcode](GET only)
When generating URLs,
[groupcode] is the API code of a group
SecurityPrincipal
Response
This request can respond with following kinds, in addition to the generic kinds (see Introduction):
Along with the standard
success and
kind fields (see Introduction for explanation), the response has the following structure:
- users
- group
- id
- nameFirst
- nameLast
- name
- code
- ref
- isGroup
- isActive
- isSuperUser
- isServiceUser
- isAnonymous
- localeId
- directGroupMembership
The users field is an array of the IDs of all active users who are members of this group, whether directly, or indirectly through another group that is a member of this group. This array does not include groups.
Apart from
directGroupMembership, all the fields correspond directly to the equivalent field on the
SecurityPrincipal.
directGroupMembership is the equivalent of
directGroupIds, but returns an array of group API codes instead of group IDs.
Example response
{ "success": true, "kind": "haplo:api-v0:user:group-details", "users": [123, 456, 789], "group": { "id":130, "nameFirst":null, "nameLast":null, "name":"Test group", "code":"haplo:group:test", "email":null, "ref":null, "isGroup":true, "isActive":true, "isSuperUser":false, "isServiceUser":false, "isAnonymous":false, "localeId":"en", "tags": { "tagName": "tagValue" }, "directGroupMembership":[ "haplo:group:example", "group:321" ] } } | https://docs.haplo.org/rest-api/users/get-group | 2022-09-24T20:05:28 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.haplo.org |
Magento
MAGENTO INTEGRATION
Magento is the premier open source e-commerce app that is used by millions of customers each and every day. Magento integration enables you to streamline your business operations and improve the customers’ shopping experience. Due to its expandability and scalability, it has been named as one of the leaders in B2B e-commerce. It not only helps in adding productivity to your business but gives you real-time insights about the customers.
Read this article to know how to integrate IR with Magento so that you can run your campaigns on Magento.
INTEGRATE INVITEREFERRALS WITH MAGENTO
InviteReferrals is a marketing software solution that allows you to expand your customer base by rewarding your referrals. While you can create your referral campaigns in IR, you can enlarge customer experience through Magento integration. You can reach your existing customers and encourage them to convert their friends and acquaintances into customers. That means, By Integrating IR with Magento, you will be able to run your campaigns seamlessly on Magento and track conversions swiftly.
Below are the steps to integrate IR with Magento:
- In order to integrate Magento with Invitereferrals, you are required to first purchase the Invitereferrals software solutions from the Magento marketplace.
- Search for the InviteReferrals from the search category and choose the 2.3 (open source(CE))
store version.
- Simply click on the “add to cart” button to purchase the InviteReferral software solution. As
the product will be added to the cart, simply click on the Proceed to checkout button.
- Once you place your order, you will be navigated to the below page.
- Click on the download button to download the IR files and extract them.
Below are the files which you have to upload on your hosting site.
- Follow the below steps to upload your files :
We recommend you to duplicate your live store on a staging/test site and try installation on it in advance.
Backup Magento files and the store database.
Download FTP clients Recommend clients: FileZilla, WinSCP, cuteFtp.
Log into your hosting space via an FTP client
Create an “Invitereferrals/Invitereferrals” Dir under app/code/ dir.
Unzip extension package and upload them into app/code/Invitereferrals/Invitereferrals/.
Run the following at the command line.
I. php bin/magento setup:upgrade
II. php bin/magento setup:di:compile
III.php bin/magento setup:static-content:deploy
IV. php bin/magento cache:flush
- After adding the IR extension, you have to add brand id and encrypted key under your Magento Stores >> Configuration >> InviteReferrals.
Note: Navigate to the IR dashboard>integration to get the credentials for brand id and encrypted key
- Simply enter the required credentials and you are ready to create and run your campaigns on Magento.
- Navigate to the IR dashboard > Campaigns>create new campaigns to create IR campaigns.
Follow the above steps to integrate Magento with IR to run your campaigns on Magento and track key metrics for them in order to provide a seamless customer experience.
Updated almost 2 years ago | https://docs.invitereferrals.com/docs/magento | 2022-09-24T20:11:34 | CC-MAIN-2022-40 | 1664030333455.97 | [array(['https://files.readme.io/57ddbe9-image_1.png', 'image 1.png 1352'],
dtype=object)
array(['https://files.readme.io/57ddbe9-image_1.png',
'Click to close... 1352'], dtype=object)
array(['https://files.readme.io/535067c-image_2.png', 'image 2.png 1345'],
dtype=object)
array(['https://files.readme.io/535067c-image_2.png',
'Click to close... 1345'], dtype=object)
array(['https://files.readme.io/b5062df-image_3.png', 'image 3.png 1347'],
dtype=object)
array(['https://files.readme.io/b5062df-image_3.png',
'Click to close... 1347'], dtype=object)
array(['https://files.readme.io/cdaf1f0-image_4.png', 'image 4.png 1257'],
dtype=object)
array(['https://files.readme.io/cdaf1f0-image_4.png',
'Click to close... 1257'], dtype=object)
array(['https://files.readme.io/299acfd-image_5.png', 'image 5.png 1277'],
dtype=object)
array(['https://files.readme.io/299acfd-image_5.png',
'Click to close... 1277'], dtype=object)
array(['https://files.readme.io/cb44f06-iamge_6.png', 'iamge 6.png 945'],
dtype=object)
array(['https://files.readme.io/cb44f06-iamge_6.png',
'Click to close... 945'], dtype=object)
array(['https://files.readme.io/51e5b71-magento.jpg', 'magento.jpg 1366'],
dtype=object)
array(['https://files.readme.io/51e5b71-magento.jpg',
'Click to close... 1366'], dtype=object)
array(['https://files.readme.io/7ba602a-ir_magento_last_ss.png',
'ir magento last ss.png 512'], dtype=object)
array(['https://files.readme.io/7ba602a-ir_magento_last_ss.png',
'Click to close... 512'], dtype=object) ] | docs.invitereferrals.com |
You can attach the MonoDevelop debugger to an Android device with ADB via TCP/IP. The process is described below.
Enable “USB debugging” on your device and connect the device to your development machine via USB cable. Ensure your device is on the same subnet mask and gateway as your development machine. Also, make sure there are no other active network connections on the device, i.e. disable data access over mobile/cellular network.
On your development machine, open up your terminal/cmd and navigate to the location of the ADB. You can find the ADB tool in <sdk>/platform-tools/
Restart host ADB in TCP/IP mode with the following command:
adb tcpip 5555
This will have enabled ADB over TCP/IP using port 5555. If port 5555 is unavailable, you should use a different port (see ADB.) The following output should be produced:
restarting in TCP mode port: 5555
adb connect DEVICEIPADDRESS
DEVICEIPADDRESS is the actual IP address of your Android device. This should produce the following output:
connected to DEVICEIPADDRESS:5555
adb devices
This should produce the following output:
List of devices attached DEVICEIPADDRESS:5555 device
Build and run your Unity application to the device. Ensure you build your application with Development Build flag enabled and Script Debugging turned on.
Disconnect the USB cable as the device no longer needs to be connected to your development machine.
Finally, while the application is running on your device, open your script in MonoDevelop, add a breakpoint, select “Run” -> “Attach to Process” and select your device from the list. (It might take a few seconds for the device to appear in the list. It may not appear in the list if the application is not running or if the device’s display goes to sleep).
For some more details and for troubleshooting, see the Wireless Usage section in the Android developers guide for the ADB.
Note: The device sends multicast messages and the editor and MonoDevelop subscribe/listen for them. For the process to work, your network will need to be setup correctly for Multicasting. | https://docs.unity3d.com/560/Documentation/Manual/AttachingMonoDevelopDebuggerToAnAndroidDevice.html | 2022-09-24T20:02:13 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.unity3d.com |
Drilldown: Article
Choose a table:
- AppSection (43)
- Application (19)
- ArchitectureArticle (1)
- Article (6592)
- ArticleCloudRN (91)
- ArticleGlossary (0)
- ArticleHelmRN (45)
- ArticleIndexRN (2)
- ArticleKVP (3)
- ArticleMaturity (64)
- ArticlePEServiceArchitecture (57)
- ArticlePEServiceDeploy (57)
- ArticlePEServiceMetrics (104)
- ArticlePEServiceObservability (45)
- ArticlePEServicePorts (14)
- ArticlePEServiceUpgrade (28)
- ArticleServiceHA (46)
- ArticleServiceUpgrade (0)
- ArticleSimplified (25)
- ArticleUnstructured (677)
- ArticleUpgrades (0)
- BDSMetric (75)
- Boilerplate (9)
- Browser (7)
- CAPDetails (6)
- CSDTArticle (2)
- CapabilitiesTable (28)
- CapabilitiesUseCasesTable (11)
- CloudRN (56)
- Component (70)
- ComponentRN (1109)
- ContentAreaLandingPage (8)
- ContentAreaLandingPageSection (37)
- ContentAreaSection (8)
- CustomArticle1 (0)
- CustomSection1 (120)
- DRType (4)
- DefinePlatforms (6)
- DeploymentType (5)
- DeprecationNotices (21)
- EOL (708)
- EngageRole (4)
- EngageServices (4)
- Environment (4)
- FAQ (41)
- FAQ Article (8)
- FeatureSection (118)
- FeatureTopic (5)
- FeatureTopicCategory (13)
- GCXI Attribute (866)
- GCXI Folder (142)
- GCXI Form (132)
- GCXI Metric (2044)
- GCXI Report (210)
- GEC AzAWSDiffs UseCases (44)
- GEcCap (81)
- GEcCompliance (9)
- GEcTelCon (19)
- GEcUC (42)
- GenerateCargoContent (1)
- GlossaryTooltip (1)
- HAType (5)
- HIW (40)
- HelmChartParameter (0)
- IncludedService (64)
- IncludedServiceHA (18)
- IncludedServiceUpgradeStrategy (24)
- Issue (3617)
- IssueCategory (8)
- IssueType (0)
- JDTest FeatureSupport (5)
- JiraIssue (10)
- KBArticle (4)
- KVP (83)
- MintyDocsGlossary (0)
- MintyDocsManual (403)
- MintyDocsProduct (148)
- MintyDocsProductLandingSections (568)
- MintyDocsVersion (249)
- Notices (542)
- PDMColumn (3423)
- PDMIndexItem (274)
- PDMRef (1102)
- PDMTable (286)
- PDMView (28)
- PDMViewColumn (282)
- PDMs SubjectArea (0)
- PEAlert (848)
- PEArchitecture (17)
- PEArchitectureSection (0)
- PEBlueGreenUpgrade (5)
- PEC Applications MasterList (20)
- PEC User Role MasterList (4)
- PEConfigOption (140)
- PEConnections (833)
- PEEvent (66)
- PEEventAttr (94)
- PEMetric (2021)
- PEMetric JD2 (4)
- PEMonitoring (106)
- PEPorts (65)
- PEPrerequisites (56)
- PESystemMetric (18)
- PageType (11)
- Platform (0)
- PrimaryUnitValues (2)
- Product (8)
- ProductCategory (4)
- ProductLine (4)
- ProductLineCategory (4)
- Protocols (17)
- PublicUseCase (12)
- RNElements (8545)
- RNSection (49)
- RTR Stats (109)
- ReleaseCategory (0)
- ReleaseType (6)
- Role (7)
- SMART AdditionalBenefits (0)
- SMART Benchmark MasterList (58)
- SMART Benefits (830)
- SMART BusinessImageFlow (234)
- SMART Canonical (115)
- SMART CloudAssumptions (3)
- SMART DataSheetFlow (569)
- SMART DistributionImageFlow (17)
- SMART HybridAssumptions (1)
- SMART Meta (115)
- SMART Popular (9)
- SMART PremiseAssumptions (5)
- SMART SolutionAboutPage (40)
- SMART UseCase (148)
- SMART Versions (0)
- SOW (5)
- SalesUseCase (0)
- Section (17122)
- SectionBrowser (35)
- SectionThirdPartyItem (222)
- SecurityBoilerplate (110)
- SelfHelpManual (2)
- Service (27)
- ServiceGroup (0)
- SimpleArticle (4)
- SimpleSection (7)
- StandardMetadataFeatureTopic (0)
- StandardMetadataFeatureTopicCategory (0)
- StandardMetadataRole (0)
- TSSection (726)
- TaskSummary (71)
- Test Capabilities (89)
- ThirdPartyItem (19)
- TrafficTypes (4)
- Type (8)
- UITab (50)
- UpgradeMode (0)
- UpgradeStrategy (3)
- Version (4)
- VersionCategory (4)
- VideoGallery (12)
- Videos (664)
- WorkspaceArticle (164)
- All pages (17618)
Use the filters below to narrow your results.
Draft:PEC-REP · PEC-REP · Draft:PEC-WFM · PEC-WFM · Draft:ATC · Draft:PEC-GSM · ATC · Draft:WID · Draft:ContentAdmin · Draft:DES · DES · Draft:PEC-AS · Draft:PEC-ROU · Draft:PEC-REC · Draft:PEC-OU · Draft:PrivateEdition · Draft:GAWFM · PEC-OU · Draft:ReleaseNotes · Draft:BDS
DisplayName:
None (7) · Get started with Genesys Predictive Engagement for Genesys Multicloud CX (3) · Get started (3) · About the data we track (2) · initialized (2) · forms:track (2) · destroy (2) · debug (2) · Create a web messaging offer (2) · serialize (2) · Track the #hash portion of the URL fragment (2) · Action map performance (2) · identify (2) · Web chat lifecycle (2)
TocName:
None (124) · ActionMaps (4) · Analytics (5) · api.session (3) · ArchFlow (7) · Compliance (3) · ContentOffers (6) · Cookies (3) · DevTracking (6) · EventsMethods (4) · FormTrackAPI (1) · FreqCap (2) · initialization methods (3) · Journey Shaping (1) · JourneyData (7) · Modules (8) · Outcomes (5) · Segments (7) · Segments, Outcomes, ActionMaps (2) · Sessions (8) · UtilityMethods (4) · WebActions (5) · WebChat (6) · WebMessaging (6) · WebTrackingLimit (1)
Showing below up to 231 results in range #1 to #231.
View (previous 500 | next 500) (20 | 50 | 100 | 250 | 500)
- Draft:ATC/Current/AdminGuide/Create a web messaging offer
- Draft:ATC/Current/AdminGuide/Create messaging offer
- Draft:ATC/Current/AdminGuide/Custom attributes
- Draft:ATC/Current/AdminGuide/Custom Events Limit
- Draft:ATC/Current/AdminGuide/Custom sessions
- Draft:ATC/Current/AdminGuide/Custom Web sessions
- Draft:ATC/Current/AdminGuide/Customer details
- Draft:ATC/Current/AdminGuide/Customer journey map
- Draft:ATC/Current/AdminGuide/DANNA
- Draft:ATC/Current/AdminGuide/DeployTrackingSnippet
- Draft:ATC/Current/AdminGuide/Entities Limit
- Draft:ATC/Current/AdminGuide/Event types
- Draft:ATC/Current/AdminGuide/FAQs
- Draft:ATC/Current/AdminGuide/GDPR
- Draft:ATC/Current/AdminGuide/Get Started GenesysCloud
- Draft:ATC/Current/AdminGuide/Get Started GenesysEngage-cloud
- Draft:ATC/Current/AdminGuide/Get Started GenesysEngage-onpremises
- Draft:ATC/Current/AdminGuide/Get Started PureConnect
- Draft:ATC/Current/AdminGuide/Get Started PureEngage Cloud
- Draft:ATC/Current/AdminGuide/How throttling works
- Draft:ATC/Current/AdminGuide/Journey action map report
- Draft:ATC/Current/AdminGuide/Journey outcome report
- Draft:ATC/Current/AdminGuide/Journey segment report
- Draft:ATC/Current/AdminGuide/Key concepts
- Draft:ATC/Current/AdminGuide/Live Now
- Draft:ATC/Current/AdminGuide/Maintain
- Draft:ATC/Current/AdminGuide/Manage outcomes
- Draft:ATC/Current/AdminGuide/Manage segments
- Draft:ATC/Current/AdminGuide/Messaging migration
- Draft:ATC/Current/AdminGuide/Messenger configuration
- Draft:ATC/Current/AdminGuide/Monitor web chat performance
- Draft:ATC/Current/AdminGuide/Monitor web messaging performance
- Draft:ATC/Current/AdminGuide/MonitorArchFlows
- Draft:ATC/Current/AdminGuide/MonitorContentOffers
- Draft:ATC/Current/AdminGuide/Object limits
- Draft:ATC/Current/AdminGuide/Operators
- Draft:ATC/Current/AdminGuide/Outcome probability
- Draft:ATC/Current/AdminGuide/Outcome scores
- Draft:ATC/Current/AdminGuide/Outcomes
D cont.
- Draft:ATC/Current/AdminGuide/Outcomes Overview
- Draft:ATC/Current/AdminGuide/Override frequency capping
- Draft:ATC/Current/AdminGuide/Overview action maps
- Draft:ATC/Current/AdminGuide/OverviewArchFlows
- Draft:ATC/Current/AdminGuide/PreparePCArchFlows
- Draft:ATC/Current/AdminGuide/Prioritize
- Draft:ATC/Current/AdminGuide/PureEngagePrereqs
- Draft:ATC/Current/AdminGuide/Route
- Draft:ATC/Current/AdminGuide/ScenarioArchFlows
- Draft:ATC/Current/AdminGuide/Schedules
- Draft:ATC/Current/AdminGuide/Searches performed
- Draft:ATC/Current/AdminGuide/Segment examples
- Draft:ATC/Current/AdminGuide/Segments
- Draft:ATC/Current/AdminGuide/Segments assigned
- Draft:ATC/Current/AdminGuide/Session attributes
- Draft:ATC/Current/AdminGuide/Session card
- Draft:ATC/Current/AdminGuide/Session library
- Draft:ATC/Current/AdminGuide/Sessions events overview
- Draft:ATC/Current/AdminGuide/Tracking snippet
- Draft:ATC/Current/AdminGuide/Trigger
- Draft:ATC/Current/AdminGuide/Unknown users
- Draft:ATC/Current/AdminGuide/Usage
- Draft:ATC/Current/AdminGuide/Use the Architect flow with an action map
- Draft:ATC/Current/AdminGuide/View audit logs
- Draft:ATC/Current/AdminGuide/Visitor Activity
- Draft:ATC/Current/AdminGuide/Visitor journey (Genesys Cloud beta)
- Draft:ATC/Current/AdminGuide/Web chat lifecycle
- Draft:ATC/Current/AdminGuide/Web chat overview
- Draft:ATC/Current/AdminGuide/Web messaging overview
- Draft:ATC/Current/AdminGuide/Web sessions
- Draft:ATC/Current/AdminGuide/Web tracking
- Draft:ATC/Current/AdminGuide/Web tracking limit
- Draft:ATC/Current/AgentGuide
- Draft:ATC/Current/AgentGuide/Additional information icons
- Draft:ATC/Current/AgentGuide/Customer journey information
- Draft:ATC/Current/AgentGuide/CustomerJourney
- Draft:ATC/Current/AgentGuide/Device icons
- Draft:ATC/Current/AgentGuide/FAQs
- Draft:ATC/Current/AgentGuide/GenesysEngage
- Draft:ATC/Current/AgentGuide/Get started GenesysCloud
- Draft:ATC/Current/AgentGuide/Get started GenesysEngage
- Draft:ATC/Current/AgentGuide/Get started PureCloud
- Draft:ATC/Current/AgentGuide/Get started PureConnect
- Draft:ATC/Current/AgentGuide/GPE Customer journey
- Draft:ATC/Current/AgentGuide/How Predictive Engagement enriches your chat experience
- Draft:ATC/Current/AgentGuide/Map icons
- Draft:ATC/Current/AgentGuide/PureConnect
- Draft:ATC/Current/AgentGuide/Visitor information
- Draft:ATC/Current/Deployment
- Draft:ATC/Current/Event
- Draft:ATC/Current/Event/6 Secs
- Draft:ATC/Current/Event/About event tracking
- Draft:ATC/Current/Event/About scenarios and best practices
- Draft:ATC/Current/Event/About tag managers
- Draft:ATC/Current/Event/Adobe Launch
- Draft:ATC/Current/Event/Button click
- Draft:ATC/Current/Event/Chat-related tags
- Draft:ATC/Current/Event/Event click-related tags
- Draft:ATC/Current/Event/Examples of events
- Draft:ATC/Current/Event/Form action
- Draft:ATC/Current/Event/Google Tag Manager
- Draft:ATC/Current/Event/Messaging-related tags
- Draft:ATC/Current/Event/Page-related tags
- Draft:ATC/Current/Event/Product-related tags
- Draft:ATC/Current/Event/Scroll to bottom
- Draft:ATC/Current/Event/Wait too long
- Draft:ATC/Current/MessengerSDK/debug
- Draft:ATC/Current/MessengerSDK/destroy
- Draft:ATC/Current/MessengerSDK/forms:track
- Draft:ATC/Current/MessengerSDK/identify
- Draft:ATC/Current/MessengerSDK/initialized
- Draft:ATC/Current/MessengerSDK/Journey.pageview
- Draft:ATC/Current/MessengerSDK/Journey.ready
- Draft:ATC/Current/MessengerSDK/Journey.record
- Draft:ATC/Current/MessengerSDK/serialize
- Draft:ATC/Current/MessengerSDK/Track
- Draft:ATC/Current/MessengerSDK/Track the
D cont.
-/SDK/About modules
- Draft:ATC/Current/SDK/About the tracking snippet
- Draft:ATC/Current/SDK/api.session.getCustomerCookieId
- Draft:ATC/Current/SDK/api.session.getData
- Draft:ATC/Current/SDK/api.session.getId
- Draft:ATC/Current/SDK/autotrackClick
- Draft:ATC/Current/SDK/autotrackIdle
- Draft:ATC/Current/SDK/autotrackInViewport
- Draft:ATC/Current/SDK/autotrackOfferStateChangesInAdobeAnalytics
- Draft:ATC/Current/SDK/autotrackScrollDepth
- Draft:ATC/Current/SDK/autotrackURLChange
- Draft:ATC/Current/SDK/Configure advanced tracking
- Draft:ATC/Current/SDK/Content offer lifecycle
- Draft:ATC/Current/SDK/Cookie usage
- Draft:ATC/Current/SDK/customAttribute
- Draft:ATC/Current/SDK/debug
- Draft:ATC/Current/SDK/Destroy
- Draft:ATC/Current/SDK/Display icons in the Journey gadget
- Draft:ATC/Current/SDK/dom - ready
- Draft:ATC/Current/SDK/Exclude URL query parameters
- Draft:ATC/Current/SDK/Form tracking API
- Draft:ATC/Current/SDK/Forms:track
- Draft:ATC/Current/SDK/Get started
- Draft:ATC/Current/SDK/Identify
- Draft:ATC/Current/SDK/Init
- Draft:ATC/Current/SDK/Initialization Methods
- Draft:ATC/Current/SDK/initialized
- Draft:ATC/Current/SDK/Load modules
- Draft:ATC/Current/SDK/Method reference
- Draft:ATC/Current/SDK/off
- Draft:ATC/Current/SDK/on
- Draft:ATC/Current/SDK/once
- Draft:ATC/Current/SDK/Pageview
- Draft:ATC/Current/SDK/Record
- Draft:ATC/Current/SDK/serialize
- Draft:ATC/Current/SDK/Session methods
- Draft:ATC/Current/SDK/Track hash portion
- Draft:ATC/Current/SDK/Tracking Methods
- Draft:ATC/Current/SDK/Traits mapper
- Draft:ATC/Current/SDK/Types of tracked data
- Draft:ATC/Current/SDK/Use canonical URL
- Draft:ATC/Current/SDK/Use Events methods with content offers
- Draft:ATC/Current/SDK/Use Events methods with web actions
- Draft:ATC/Current/SDK/Use Events methods with web chats
- Draft:ATC/Current/SDK/Utility Methods
- Draft:ATC/Current/SDK/Web chat lifecycle
- Draft:ATC/Current/SDK/Web tracking API
- Draft:ATC/Current/WDEPlugin
- Draft:ATC/Current/WDEPlugin/Configure GenAdmin
- Draft:ATC/Current/WDEPlugin/InstallWDEplugin
- Draft:ATC/Current/WDEPlugin/PrerequisitesArticle
- Draft:ATC/Data retention change
- Draft:ATC/Deprecation-API endpoint change for web gateway service
- Draft:ATC/Deprecation-Journey Reporting Service
- Draft:ATC/Deprecation-Select trait mapper traits
- Draft:ATC/Deprecation-Smart Tags
- Draft:ATC/Deprecation-Webhooks
- Draft:ATC/GenesysEngage-cloudPrereqs
- Draft:ATC/GenesysEngagePrereqs
- Draft:ATC/GenesysWidgetsIntegration
- Draft:ATC/Glossary
- Draft:ATC/GPEandWidgets
- Draft:ATC/GPESFLightning
- Draft:ATC/outcome limitation change
- Draft:ATC/Predictive Engagement Developers
- Draft:ATC/Predictive Engagement overview
- Draft:ATC/ProvisioningMulticloudCX
- Draft:ATC/RequiredPCDomains
View (previous 500 | next 500) (20 | 50 | 100 | 250 | 500) | https://all.docs.genesys.com/index.php?title=Special:Drilldown/Article&limit=500&offset=0&productshort=Draft%3AATC | 2022-09-24T19:12:40 | CC-MAIN-2022-40 | 1664030333455.97 | [] | all.docs.genesys.com |
Prometheus/Grafana Alert Manager Integration¶
The Integration of Grafana / Prometheus AlertManager allows alerts triggered by AlertManager to appear on Komodor's timelines.
Installation¶
The Alert Manager integration involves 3 steps:¶
- Enabling the integration in Komodor.
- Creating webhook on the alert manager / Grafana.
- Adding labels to the alert.
1. Enabling the integration in Komodor¶
To enable the Komodor Prometheus Alert Manager integration go to Komodor integrations page, select Prometheus/Grafana Alert Manager, and install the integration.
2. (Option A) The following steps are for manual configuration on alert manager¶
Creating webhook¶
- Open your
alertmanager.ymlconfiguration file
- Add a receiver to your receivers list, name the receiver
komodorand attach a sink to
webhook_configs. In the
urlfield, put the URL that was provided to you during the integration setup. Also set the field
send_resolvedto
true.
receivers: - name: komodor webhook_configs: - url: "<URL_FROM_KOMODOR>" send_resolved: true
- Next, in
alertmanager.yml, configure a route so that your alert is routed to komodor
route: receiver: komodor
If you already have configured routes you can config multiple as follows:
routes: - match: severity: critical receiver: pagerduty - match: receiver: komodor
A note about the
continue configuration of AlertManager routing rules. If it is set to false, AlertManager sends the alert to the first matching route and stops.
continue default value is false.
A full YAML configuration example:
global: group_interval: 5m repeat_interval: 12h routes: - match: receiver: komodor receivers: - name: komodor webhook_configs: - url: "<URL_FROM_KOMODOR>" send_resolved: true
2. (Option B) Configure alerts from Grafana¶
- Go to
Alerting->
Contact points.
- Click
New contact point.
- Enter in name
komodor, in
Contact point typeselect
Webhookand in
Urlinsert the URL from the UI.
- Click
Alerting->
Notification policies.
- In the notification policies, configure komodor contact endpoint as you wish to configure alerts to Komodor.
3. Adding labels to the alert.¶
To relate the alert to the relevant workload and make the alerts visible on workloads timelines - adding labels to alerts is required. Each alert without a label will be added to the system without mapping to a specific service.
- Please note to specify 2 labels on the alert in order to connect them to the Kubernetes workload:
service: <workload-name> cluster: <cluster-name>
- [Optional] defining custom description to the alert on Komodor (specify it in the annotation):
description: <content> | https://docs.komodor.com/Integrations/AlertManager.html | 2022-09-24T20:34:11 | CC-MAIN-2022-40 | 1664030333455.97 | [array(['img/AlertManagerGrafana/alerting_contact_points.png',
'AlertingContactPoints'], dtype=object)
array(['img/AlertManagerGrafana/new_contact_point.png', 'NewContactPoint'],
dtype=object)
array(['img/AlertManagerGrafana/configure_new_contact_point.png',
'ConfigureNewContactPoint'], dtype=object)
array(['img/AlertManagerGrafana/notification_policies.png',
'NotificationPolicies'], dtype=object) ] | docs.komodor.com |
OpsRamp helps you discover, integrate seamlessly and access details of how your Google Cloud services are performing. Google Cloud consists of a set of physical assets, such as computers and hard disk drives, and virtual resources, such as virtual machines (VMs), that are contained in Google data centers around the globe. This page explains how to configure integration with Google Cloud Platform.
Prerequisite
The service account created to give access to the Google Cloud resources must at least be assigned the viewer role. For more details on IAM roles, see Google cloud documentation on Understanding roles. To grant a service account access to a project, see Creating and managing service accounts.
Google Cloud Platform configuration
To install GCP integration:
- Log into your Google Cloud portal.
- Select the project that you are assigned to and click Open.
- Copy the Project ID from the Project info section.
- On the left pane, click IAM & Admin > Service Accounts.
- From the displayed service accounts for your project, copy and paste the service account email ID in a text editor such as Notepad.
- From the available options, under the Actions column, select Create key.
- From the Create private key for “project-name” window, select P12 and click CREATE.
- Download the file at a safe location.
A new window opens, confirming the downloading of the file and the private key password.
- Copy the private key password to a text editor at a safe location.
You will not be able to see the password again. You need this password to use the private key.
OpsRamp configuration
After you have copied all the details from your Google Cloud console, use the details to install the Google Cloud integration on the OpsRamp console.
To install GCP integration:
Select a client from the drop-down list in which you want to install the Google Cloud app.
Go to Setup > Integrations and Apps. If apps are already installed, the INSTALLED APPS page is displayed, else the AVAILABLE APPS page is displayed.
Search for Google Cloud app using the search option. You can also use the All Categories dropdown and select the appropriate public cloud category.
Click ADD. The Add Google Cloud page is displayed.
Provide the details in the fields:
- Provide a suitable name for the integration.
- Enter the service account email ID.
- Enter the Project ID.
- For Service Account Management Certificate, click Choose File and select the private key P12 file through the file browser.
- In the Management Certificate PassPhrase field, enter the password of the private key file. Google Cloud app is installed.
All the discovered services are visible in the Infrastructure page under
Resources > Google Cloud
Click Google Cloud. The list of installed Google Cloud integrations are displayed. You can perform actions like Edit, Uninstall, Rescan. | https://docs.opsramp.com/integrations/google-cloud/installation-and-configuration/gcp-install/ | 2022-09-24T20:45:27 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.opsramp.com |
Migrating from Scaffolding Server to Scaffolding Cloud
In this page:
This page outlines the steps to migrate your Scaffolding data from Confluence Server or Data Center to Confluence Cloud.
Prework
Before you start, read and understand the following information:
- Understand the macro differences and feature differences between Scaffolding for Confluence Server and Confluence Cloud to have an idea about what the data will look like after migration to Cloud.
- Your Scaffolding Structures and Live Template macros will be migrated automatically along with Confluence data by using the current version of Atlassian's Cloud migration assistant.
Guide
At the end of this guide, you will have:
- Upgraded Scaffolding to version 8.25.0 or above.
- Prepared the Scaffolding data and templates in Server.
- Performed the migration using latest Confluence Cloud Migration tool.
- Checked and re-configured your Scaffolding data in Confluence Cloud.
These may take a few hours to complete depending on how large your Confluence data is.
Step 1 - Plan out your migration
Follow through with the steps in Step 1 - Plan Out Your Migration (Scaffolding)
Overview:
- Find and identify the pages that are using the Scaffolding macros.
- Prepare the Scaffolding data in Server.
- (Optional) Prepare a set of test data and a staging instance to perform a pre-migration before performing the production migration.
- Schedule the migration window.
Step 2 - Proceed with Cloud migration
Follow through with the steps in Step 2 - Proceed with Cloud migration (Scaffolding)
Overview:
- Have your Confluence Cloud instance ready.
- Perform the migration using Confluence Cloud Migration Assistant (CCMA) tool.
Step 3 - Post-migration Steps
Follow through with the steps in Step 3 - Post-migration Steps (Scaffolding)
Overview:
- All pages with Scaffolding macros will arrive with a clickable button "Migrate Scaffolding". Click on this to resume with the migration of the page.
- Check and compare the results of the migration.
Next Steps:
To go back to the migration hub, click here. | https://docs.servicerocket.com/migration/confluence-server-to-confluence-cloud/migrating-from-scaffolding-server-to-scaffolding-cloud | 2022-09-24T20:03:00 | CC-MAIN-2022-40 | 1664030333455.97 | [array(['/migration/files/195855107/224493723/2/1650967620331/Scaffolding+Cloud+Migration+Steps.png',
'Scaffolding Cloud Migration Steps'], dtype=object) ] | docs.servicerocket.com |
interval in seconds at which physics and other fixed frame rate updates (like MonoBehaviour's FixedUpdate) are performed.
Unity does not adjust fixedDeltaTime based on Time.timeScale. The fixedDeltaTime interval is always relative to the in-game time which Time.timeScale affects.
See Time and Frame Rate Management in the User Manual for more information about how this property relates to the other Time properties. | https://docs.unity3d.com/2019.4/Documentation/ScriptReference/Time-fixedDeltaTime.html | 2022-09-24T21:20:54 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.unity3d.com |
.
Tool Settings
A horizontal strip at the top or bottom of the editor (similar to the header) containing settings for the currently selected tool. Just like the header, it can be hidden and moved through its context menu.
Adjust Last Operation
Adjust Last Operation is a region that allows tweaking an operator after running it. For example, if you just added a cube, you can use this region to tweak its size.
Arranging
Scrolling
A region can be scrolled vertically and/or horizontally by dragging it with the MMB. If the region has no zoom level, it can also be scrolled by using the Wheel while the mouse hovers over it.
Some regions, in particular animation timelines, have scrollbars with added control points to adjust the vertical or horizontal range of the region. These special scrollbars will have added widgets at the ends, as shown in the following image:
Scrollbars with zoom widgets.
This can be used to stretch or compress the range to show more or less detail within the available screen space. Simply drag one of the dots to either increase or decrease the displayed range. You can also quickly adjust both the horizontal and vertical range by dragging in the editor with Ctrl-MMB.
Changing the Size and Hiding
Resizing regions works by dragging their border, the same way as Areas.
To hide a region, resize it down to nothing. A hidden region leaves a little arrow sign. LMB on this icon to make the region reappear.
Scaling
The scale of certain regions (such as the Toolbar) can be changed by dragging inside them with Ctrl-MMB, or using NumpadPlus and NumpadMinus while hovering the mouse cursor over them. Press Home to reset the scale to the default. | https://docs.blender.org/manual/zh-hant/dev/interface/window_system/regions.html | 2022-09-24T19:24:42 | CC-MAIN-2022-40 | 1664030333455.97 | [array(['../../_images/interface_window-system_regions_3d-view.png',
'../../_images/interface_window-system_regions_3d-view.png'],
dtype=object)
array(['../../_images/editors_3dview_introduction_3d-view-header-object-mode.png',
'../../_images/editors_3dview_introduction_3d-view-header-object-mode.png'],
dtype=object)
array(['../../_images/interface_window-system_regions_scrollbar_widget.png',
'../../_images/interface_window-system_regions_scrollbar_widget.png'],
dtype=object) ] | docs.blender.org |
Lookup lists are custom lists that you can use for simpler, easier query filtering in Kibana.
Instead of adding a long list of elements to your query, you can create lookup lists and use them to filter results by adding the operator
in lookups or
not in lookups. For example, you can create lookup lists of allowlisted or blocklisted usernames, IP addresses, regions, or domains.
Lookup list values are only string-based and do not support ranges. Kibana, however, supports range-based searches, such as IP: [127.0.0.0 TO 127.*].
Each list you create is added to the main Lookup lists library: Because the lookup lists are centrally managed, any list can be easily updated and changed without requiring manually updating multiple dashboards, saved searches, security rules, and so on.
To view and create lookup lists, from the Cloud SIEM menu, go to More Options > Lookups.
Static and Dynamic lookups
Logz.io offers two main lookup lists, Static and Dynamic.
A Static lookup list is created by adding individual values or uploading a CSV file with the different fields and values you want to track. While you can update the list, it has to be done manually and maintained by you.
a Dynamic lookup list uses a query as its source of data. The query’s results fill the list with the different fields and values, updating and maintaining it independently.
Learn more about the two lookup lists and how to use them:
- Create a Static lookup
- Create a Dynamic lookup
- Filter by lookup lists in Kibana
- Add a lookup list filter to a security rule
- Delete a lookup list
Filter by lookup lists in Kibana
You can filter by lookup lists in Kibana dashboards, security rules, and searches.
For example, go to the SIEM Kibana page or open a Dashboard. Click Add a filter to show the filter dialog box.
- Field - Select a field to filter by.
- Operator - Select the operator in lookups or not in lookups.
- Value - Select the lookup you want to filter by.
Add a lookup list filter to a security rule
If you want to use your lookup lists as a reference when creating security rules, navigate to the Create a rule page, and select Add a filter.
Select the field you want to filter by, and select whether it’s included or excluded from a lookup.
Next, select the lookup list you’d like to refer to from the dropdown menu.
Save your filter, and continue editing the rule.
Learn more about managing security rules.
Delete a lookup list
To delete a lookup, hover over the item and click delete to delete it. You’ll be asked to confirm the deletion. | https://docs.logz.io/user-guide/lookups/ | 2022-09-24T19:41:10 | CC-MAIN-2022-40 | 1664030333455.97 | [array(['https://dytvr9ot2sszz.cloudfront.net/logz-docs/siem-lookups/lookuplist-nav.gif',
'Open Lookup lists'], dtype=object)
array(['https://dytvr9ot2sszz.cloudfront.net/logz-docs/siem-lookups/lookup_filter-kibana_or_dashbd.gif',
'Filter by lookup'], dtype=object)
array(['https://dytvr9ot2sszz.cloudfront.net/logz-docs/siem-lookups/filter-with-lookup_rules.png',
'Select a filter'], dtype=object) ] | docs.logz.io |
-
Release Notes
August 24, 2022 Release Notes
Reseller notifications and Colt to Colt virtual circuits.
July 25, 2022 Release Notes
Aggregate capacity.
July 7, 2022 Release Notes
Provisionable quotes for Reseller Admin Users and their customers.
June 13, 2022 Release Notes
Sales user group and customer feature flags.
May 31, 2022 Release Notes
Self-service renewals and Cloud Router page UI improvements.
May 16, 2022 Release Notes
Multiple EPL virtual circuits on ENNI ports.
April 28, 2022 Release Notes
Multiple BGP sessions for dedicated port Cloud Router connections, resend invites to pending users.
March 24, 2022 Release Notes
Quoting tool for resellers, various usability improvements to Cloud Router pages.
February 8, 2022 Release Notes
Various bug fixes and optimizations.
2021 Releases
Release notes from 2021.
2020 Releases
Release notes from 2020.
Updated on 29 Jun 2020 | https://docs.packetfabric.com/releases/ | 2022-09-24T20:25:02 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.packetfabric.com |
To start, VoltDB requires a recent release of the Linux operating system. The supported operating systems for running production VoltDB databases are:
CentOS V6.6 or later. Including CentOS 7.0 and later
Red Hat (RHEL) V6.6 or later, including Red Hat 7.0 and later
Ubuntu 14.04 and 16.04
It may be possible to run VoltDB on other versions of Linux. Also, an official release for Macintosh OS X 10.9 and later is provided for development purposes. However, the preceding operating system versions are the only fully tested and supported base platforms for running VoltDB in production.
In addition to the base operating system, VoltDB requires the following software at a minimum:
Java 8
NTP
Python 2.5 or later
Sun Java SDK 8 is recommended, but OpenJDK 8 is also supported. Note that although the VoltDB server requires Java 8, the Java client is also compatible with Java 7.
VoltDB works best when the system clocks on all cluster nodes are synchronized to within 100 milliseconds or less. However, the clocks are allowed to differ by up to 200 milliseconds before VoltDB refuses to start. NTP, the Network Time Protocol, is recommended for achieving the necessary synchronization. NTP is installed and enabled by default on many operating systems. However, the configuration may need adjusting (see Section 2.5, “Configure NTP” for details) and in cloud instances where hosted servers are run in a virtual environment, NTP is not always installed or enabled by default. Therefore you need to do this manually.
Finally, VoltDB implements its command line interface through Python. Python 2.4 or later is required to use the VoltDB shell commands. | https://docs.voltdb.com/v7docs/AdminGuide/adminserversw.php | 2022-09-24T20:32:15 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.voltdb.com |
Bug Squad/Communication
From Joomla! Documentation
< Portal:Bug Squad
The Joomla! Bug Squad uses. Additionally, you might want to subscribe to the Joomla! CMS Development Mailing list.
Joomla! uses GitHub as its code repository and the Joomla! Issue Tracker for reporting bugs. | https://docs.joomla.org/index.php?title=Portal:Bug_Squad/Communication&oldid=365898 | 2022-09-24T20:47:25 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.joomla.org |
Config Change API Integration¶
Config change API allows users to send changes in their config (from internal tools and infrastructure), and see them as part of the Komodor Service view.
How to use¶
Request URL¶
Mandatory query params will be used for service selection:
- serviceName
- namespace
- clusterName
URL example"
Authentication¶
To authenticate the request use API Key on your "REST API" integration tile in the Komodor app and add it to a header with
X-API-KEY name.
The REST API key can be found in the Integration page.
If REST API integration isn't available for your account, please contact your account manager in Komodor.
Body¶
This is the event itself with the relevant configuration you want to be connected to the service as JSON.
{ key1: value1, key2: value2… }
Config map and Secrets¶
Configmap and Secrets can be shown in events tab, please contact us if you want this option. Configmaps that include the coming words will be ignored:
- "istio"
- "cluster-autoscaler-status"
Full Example¶
curl -H "X-API-KEY: <rest api key>" -H "Content-Type: application/json" -d '{"key":"value"}' "" | https://docs.komodor.com/Learn/config-changes.html | 2022-09-24T19:39:17 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.komodor.com |
Recent Activities
First Activity
Since our last report we have held our meeting at ALA's Annual Conference in Washington DC. There we talked about Makerspace technology that participants were interested in learning about, programs that have worked for us using these technologies.
Meets LITA’s strategic goals for Education and Professional Development, Member Engagement
Second Activity
We held our elections for co-chair and secretary
Meets LITA’s strategic goals for Organizational Stability and Sustainability
Third Activity
Since then our small committee has talked about the idea of holding the meetings at Makerspaces and ways to address the website.
Meets LITA’s strategic goals for Education and Professional Development, Member Engagement
Fourth Activity
We added new people to the Maker Tech listserv.
Meets LITA’s strategic goals for Member Engagement
What will your group be working on for the next three months?
Changes to the website and the possibility of holding our meeting in Philadelphia at a Makerspace.
Is there anything LITA could have provided during this time that would have helped your group with its work?
Information on whether or not we are allowed to hold our meeting outside of a conference room.
Submitted by Erik Carlson on 09/28/2019 | https://docs.lita.org/2019/09/maker-technology-interest-group-february-2019-report-2/ | 2022-09-24T19:56:20 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.lita.org |
Creating Wholesale Orders
Wholesale orders can be created manually or via CSV upload on the Wholster platform. Here we will review these two order creation options.
Manual Order Creation (from the merchant facing dashboard)
The first step is to select the customer you would like to create an order for by navigating to Wholster Admin –> Actions –> Create New Order –> Select Customer.
Once you select a customer, you will be redirected to the catalog page, where you can select products and enter quantities for the order.
Next, select the shipping rate that you would like to apply to the order, and click ‘Continue‘.
Once here, you can choose how the order will be finalized. If you are sending the order as an invoice to a customer, please see this article:
Bulk / CSV Upload Order Creation (from the merchant facing dashboard)
To begin the CSV Order Creation process, go to Actions –> Create New Order from the Wholster dashboard.
Once here, in Step. 1, pick your customer. In Step.2, choose Upload Via CSV.
Now you are taken to the speadsheet order creation tool, where orders can be created in bulk, in different ways.
If you are sending a spreadsheet to a customer to have filled out, you can choose “Download Blank File” at the top of the page.:
Once shipping is selected, you are then taken to the checkout page. Select a saved credit card, or fill out out the necessary information and check “Confirm Order”.
_13<<
Once an account is created, you can login as a customer by going to wholester.io/login, and start creating orders on connected shops:
| https://docs.wholster.com/article/creating-wholesale-orders-2/ | 2022-09-24T20:29:40 | CC-MAIN-2022-40 | 1664030333455.97 | [array(['https://docs.wholster.com/wp-content/uploads/sites/5/2021/06/Screen-Shot-2021-06-24-at-4.01.02-PM-1024x357.png',
None], dtype=object)
array(['https://docs.wholster.com/wp-content/uploads/sites/5/2021/06/Screen-Shot-2021-06-24-at-4.08.23-PM-1024x684.png',
None], dtype=object)
array(['https://docs.wholster.com/wp-content/uploads/sites/5/2021/06/Screen-Shot-2021-06-25-at-4.00.00-PM-1024x312.png',
None], dtype=object)
array(['https://docs.wholster.com/wp-content/uploads/sites/5/2021/06/Screen-Shot-2021-06-25-at-4.11.34-PM-1024x588.png',
None], dtype=object)
array(['https://docs.wholster.com/wp-content/uploads/sites/5/2020/09/image-5-1024x610.png',
None], dtype=object)
array(['https://docs.wholster.com/wp-content/uploads/sites/5/2020/09/image-6-1024x469.png',
None], dtype=object)
array(['https://docs.wholster.com/wp-content/uploads/sites/5/2020/09/Screen-Shot-2020-09-18-at-2.48.29-PM-1024x944.png',
None], dtype=object)
array(['https://docs.wholster.com/wp-content/uploads/sites/5/2020/09/image-7-1024x691.png',
None], dtype=object)
array(['https://docs.wholster.com/wp-content/uploads/sites/5/2020/09/Screen-Shot-2020-09-18-at-2.50.12-PM-1024x535.png',
None], dtype=object)
array(['https://docs.wholster.com/wp-content/uploads/sites/5/2020/09/Screen-Shot-2020-09-18-at-2.49.01-PM-1024x386.png',
None], dtype=object)
array(['https://docs.wholster.com/wp-content/uploads/sites/5/2020/09/Screen-Shot-2020-09-18-at-2.51.31-PM-1-1024x476.png',
None], dtype=object)
array(['https://docs.wholster.com/wp-content/uploads/sites/5/2020/08/image-14-1024x675.png',
None], dtype=object)
array(['https://docs.wholster.com/wp-content/uploads/sites/5/2020/08/image-13-1024x661.png',
None], dtype=object)
array(['https://docs.wholster.com/wp-content/uploads/sites/5/2020/09/image-8-1024x155.png',
None], dtype=object)
array(['https://docs.wholster.com/wp-content/uploads/sites/5/2020/09/image-9-1024x470.png',
None], dtype=object)
array(['https://docs.wholster.com/wp-content/uploads/sites/5/2020/09/image-10-1024x670.png',
None], dtype=object) ] | docs.wholster.com |
-21 for more information about using custom Authorization providers. Data Services Platform artifacts is as follows:
Figure 6-2 illustrates the securable resources in a Data Services Platform application.
Figure 6-2 Securable Resources.
Note: Note that the DSP Console itself constitutes an administrative resource you can secure with security policies. application access policy:
Figure 6-5 Securing an Application
When this option is selected, access to all resources is blocked by default and security policies are applied. You can either keep this restrictive policy as the default and selectively configure security policies on individual resources, or choose to permit access by default.
This permits access to all resources, even to unauthenticated users, unless a more specific policy blocks it.
You can now set function or element level security policies on Data Services Platform resources.
A data service typically has several functions, including one or more read functions, navigation functions, and a submit function. A submit function allows users to save data changes to back-end data sources. Function-level security policies enable you to control who can use data service functions and when. This enables you to set stricter controls on the ability to change data, for example, compared to the ability to read data. an another data service, the policy is not evaluated against the caller.
Note: For the purposes of security, data service functions are identified by name and number of parameters. This means that if you modify the number of parameters, you will need to reconfigure the security settings for the function.
To create a function security policy:
The functions in the data service appear, as illustrated in Figure 6-6.
Figure 6-6 Security Policy Function List
For more information, see Using the WebLogic Policy Editor.
Note: You must enable access control for the application to have function-level security policies applied to users. For more information, see Securing Applications.
Element-level security associates a security policy with a data element within a data service's return type. If the policy condition is not met, the corresponding data is not included in the result.
Warning: Any element for which you want to create a security policy must be defined as optional or repeating in the schema definition of the data service type. Mandatory elements in the schema definition are not securable to ensure conformance with the XSD.
An element-level security policy applies across all functions of the data service. However, note that it applies only in the context of that data service..
Figure 6-7 Secured Elements Tab
Selecting a parent node includes all children of the parent.
The element now appears in the resources list as an element type.-18.
Notice that the function uses the BEA extension XQuery function
is-access-allowed(). This function tests whether a user associated with the current request context can access the specified resource, which is denoted by a element name and a resource identifier...
Figure 6-11 Resource Identifier Format
The resource can be any of the following:
{ld:DataServices/ElectronicsWS/
getProductList}getProductList:1
ld:submit.
is-access-allowedfunction, for example.
These are generated when you select an element in the Secured Element tab of the DSP Console, as shown in Figure 6-12.
Figure 6-12 Element Resources | http://e-docs.bea.com/aldsp/docs20/admin/security.html | 2009-07-04T16:21:05 | crawl-002 | crawl-002-015 | [] | e-docs.bea.com |
This topic provides a basic overview of WebLogic Portal security. The other portal topics, listed in Topics Included in this Section, provide implementation instructions.
WebLogic Portal uses the underlying WebLogic Server security architecture to let you create secure portal applications. The ultimate goal of portal security is to restrict access to portal resources and administrative functions to only those users who need access to those resources and functions.
Note: Implementing security in a portal requires a basic understanding of standard security concepts, many of which are outside the scope of WebLogic documentation; for example, encryption, injection of SQL statements at login, and secure socket layers (SSL). The Related Topics section contains, among other things, links to information that will help give you a broader, more complete view of security and the issues surrounding it.
WebLogic Portal provides built-in functionality for authentication ("Who are you?") and authorization ("What can you access?").
WebLogic Portal provides many authentication samples that you can incorporate into your portals. WebLogic Portal also provides many tools for user/group management.
Implementing Authentication contains details about the authentication examples contained in the Tutorial Portal.
WebLogic Portal also provides two sample login portlets you can reuse in your portals to authenticate WebLogic users:
You can also build other types of authentication supported by WebLogic Server.
The WebLogic Administration Portal provides tools for managing users, groups, and setting user/group properties. For information on managing users and groups, see:
There are three fundamental categories of things that can be secured in portals:
Using the WebLogic Server concept of security roles, WebLogic Portal lets you dynamically match users to roles at login. Different roles are, in turn, assigned to different portal resources, administrative tools, and J2EE resources so users can access only the resources and tools that their assigned roles allow.
The WebLogic Administration Portal provides the tools for managing users, portal delegated administration roles, visitor entitlement roles, interaction management rules, content management, and portal resources.
You can lock down the WebLogic administration portal with delegated administration, which provides secure administrative access to the WebLogic Administration Portal tools. Delegated administration security is based on the delegated administration roles you create.
The WebLogic Workshop Portal Extensions and the WebLogic Administration Portal provide tools for creating and managing portals, desktops, shells, books, pages, layouts, look & feels, and portlets. You can control access to portal resources for two types of users: administrators and visitors.
Administrators - You can control the portal resources that can be managed by portal administrators using delegated administration.
Visitors - You can control visitor access to portals and portal resources with visitor entitlements. Visitor entitlements are based on the visitor entitlement roles you create.
J2EE resources are the application framework and logic (Web applications, JSPs, EJBs, and so on) for which you can control visitor access. Security on J2EE resources is based on global security roles set up in WebLogic Server and applied to the individual J2EE resources. Security roles for J2EE resources are different than security roles that users can belong to, though both types of roles use the same roles architecture.
The portal sample domain <BEA_HOME>\<WEBLOGIC_HOME>\samples\domains\portal and any portal domain you create with the Configuration Wizard include the following default users. You can add these usernames and passwords to your existing domains.
The Open Web Application Security Project (OWASP)
Unified User Profiles (edocs)
Creating User Profile Properties
Using Portal Controls (for user/group management)
User/Group Management JSP Tags
For details on managing users and groups, see the WebLogic Administration portal online help system, also available on e-docs. | http://e-docs.bea.com/workshop/docs81/doc/en/portal/security/securityIntro.html | 2009-07-04T16:20:51 | crawl-002 | crawl-002-015 | [] | e-docs.bea.com |
Using WebLogic Logging Services for Application Logging
The following sections provide an overview of localization and internationalization:
BEA Using Message Catalogs with WebLogic Server for more detailed information about message catalogs.
To view the WebLogic Server message catalogs, see the Index of Messages by Message Range..
In addition to message text, include information about the type and placement of any run-time values that the message contains. For details, see Using the WebLogic Server Message Editor..
For more detailed information, see Writing Messages to the WebLogic Server Log. | http://e-docs.bea.com/wls/docs90/i18n/Overview.html | 2009-07-04T16:20:43 | crawl-002 | crawl-002-015 | [] | e-docs.bea.com |
A single web service may communicate with multiple clients at the same time, and it may communicate with each client multiple times. In order for the web service to track data for the client during asynchronous communication, it must have a way to remember which data belongs to which client and to keep track of where each client is in the process of operations. In WebLogic Workshop, you use conversations to uniquely identify a given communication between a client and your web service and to maintain state between operations.
Conversations are essential for any web service involved in asynchronous communication. This includes web services that communicate with clients using callbacks or polling interfaces, and web services that use controls with callbacks.
The following sample web services use conversations:
For more information about samples, see Samples.
Getting Started: Using Asynchrony to Enable Long-Running Operations | http://e-docs.bea.com/workshop/docs81/doc/en/workshop/guide/converse/navMaintainingStatewithConversations.html | 2009-07-04T16:25:25 | crawl-002 | crawl-002-015 | [] | e-docs.bea.com |
Installing WebLogic Integration
The following sections provide information that you need to know before installing the BEA WebLogic IntegrationTM 8.5 software and the BEA WebLogic PlatformTM 8.1 components required for that installation:
WebLogic Integration 8.5 is distributed and installed using the BEA Installation and Distribution System, which provides a complete framework for the following:
The installation program installs the following:
Optionally, the installation program also installs the following products. These products are provided as a convenience to WebLogic Integration users who may incorporate one or more of them into their applications and domains:
The installation program also enables support for the following products:
You can use the WebLogic Integration installation program to install WebLogic Express 8.1. can use the WebLogic Integration installation program to install WebLogic Server Process EditionTM..
When you install WebLogic Integration using any of the installation procedures described in this document, all the components required for WebLogic Server Process Edition—namely, WebLogic Integration, WebLogic Server, and WebLogic Workshop—are installed by default.
For "The WebLogic Server Process Edition Mode" in the WebLogic Server Process Edition Overview at the following URL:
Note: WebLogic Integration and WebLogic Platform licenses include support for WebLogic Server Process Edition functionality, and can also be used for WebLogic Server Process Edition applications. For more information about WebLogic Platform licenses, see the Licensing page at the following URL:
Instructions for installing a license with WebLogic Server Process Edition support are provided in Installing and Updating WebLogic Integration 8.5 License Files.
For more information about WebLogic Server Process Edition, see the WebLogic Server Process Edition documentation at the following URL:
You can use the WebLogic Integration installation program to install WebLogic Platform ISV Edition. WebLogic Platform ISV Edition is a special software package tailored for Independent Software Vendors (ISVs) who have a current agreement with BEA to build value-added solutions on WebLogic Platformor one of its components. WebLogic Platform ISV Edition comprises a set of WebLogic Platform components specifically packaged to help ISVs jumpstart their service-oriented architecture (SOA) initiatives.
To use WebLogic Platform ISV Edition, you must install the following WebLogic Platform product components using the WebLogic Integration installation program:
You can do so using any of the installation methods described in this document. For standard installation procedures, refer to the following chapters:
The development license installed on your system when you install WebLogic Integration" in Installing WebLogic Platform at the following URL:
For more information about WebLogic Platform ISV Edition, see the WebLogic Platform ISV Partners' Guide at the following URL:
The BEA installation program supports the following methods for installing the BEA WebLogic Integration software:
Graphical-mode installation is an interactive, GUI-based method for installing WebLogic Integration. It can be run on both Windows and UNIX systems. For installation procedures, see Installing WebLogic Integration Using Graphical-Mode Installation. WebLogic Integration, from the command line, on either a UNIX system or a Windows system. For instructions about using this method, see Installing WebLogic Integration Using Console-Mode Installation.
Silent-mode installation is a noninteractive method of installing WebLogic Integration Installing WebLogic Integration Using Silent-Mode Installation.
WebLogic Integration provides a development and run-time framework for joining business process management and application integration capabilities into a single, flexible environment. If you select the entire WebLogic Integration component, the installation program installs the program files for the WebLogic Integration server, the WebLogic Workshop Integration Extension, and the examples.
Note: You must install WebLogic Integration to have access to the WebLogic Server Process Edition controls. For more information about WebLogic Server Process Edition, see the WebLogic Server Process Edition documentation at the following URL:
WebLogic Integration consists of the following subcomponents:
The BPEL Import tool imports BPEL 1.1 compliant code into a Process Definition for Java (JPD) process file, where it can be used in the WebLogic Workshop design environment.
The BPEL Export tool exports the semantics of JPD code into BPEL, where it can be used in a BPEL-compatible design environment.
Note: The installation of this subcomponent requires the installation of the WebLogic Workshop Integration Extension.
The TIBCO RV control is a Java control for WebLogic Workshop that enables seamless connection to, and transfer of data with, TIBCO Rendezvous using the Rendezvous daemon.
The TIBCO RV event generator is a WebLogic Integration event generator that listens for TIBCO Rendezvous messages and raises events to Message Broker channels..5, Platform, which is available from the WebLogic Integration installation program, consists of the following software, in addition to WebLogic Integration: Portal, provided as a convenience with this installation program, Integration 8.5 package, and is supported by WebLogic Integration and all WebLogic Platform components. In addition, it is available as a standalone SDK.
The BEA JRockit 1.4.2 SDK is installed automatically when you install WebLogic Integration 8.5 Integration 8.5, see Supported Configurations for WebLogic Platform 8.5 WebLogic Integration, to ensure that you do not uninstall a component that is required by another component.
WebLogic Integration 8.5 is distributed on both the BEA Web site and CD-ROM.
You can download the WebLogic Integration 8.5 software, including the WebLogic Platform 8.1 components it requires, from the BEA Web site at.
Two methods are available for download:
Before the download begins, the net installer prompts you to provide the following information: downloaded correctly.
Note: If you are planning to install the software in silent mode, you must download the package installer. Silent-mode installation is not supported by the net installer.
If you purchased WebLogic Integration from your local sales representative, you will find the following items in the product box:
Any service packs and rolling patches are included in the latest distribution of WebLogic Integration 8.5, which you can download as described in Web Distribution. If you do not have WebLogic Integration 8.5 installed, you should install this distribution.
If you already have WebLogic Integration 8.5 installed, and if you have a BEA eSupport account, you can upgrade your software in one of the following ways:, without downloading the entire WebLogic Integration distribution.
Details about installing WebLogic Integration service packs and rolling patches are provided in Installing Service Packs and Rolling Patches.
If you do not have a BEA eSupport account, go to the following URL to get one:
The following sections specify the installation prerequisites for a WebLogic Integration installation:
The system requirements for WebLogic Integration are given in the following table.
The BEA installation program uses a temporary directory in which it extracts the files necessary to install WebLogic Integration the following URL:
If WebLogic Workshop is installed on a Windows system by a user with Administrator privileges, then all users must have Administrator privileges to use WebLogic Workshop. Otherwise, you may encounter an error when running WebLogic Workshop applications.
The WebLogic Integration software cannot be used without a valid license. When you install WebLogic Integration 8.5, the installation program installs two non-expiring licenses: a license to use for development (
license.bea) and a license to use for limited scale production deployments (
license_scale_limited.bea).
To use WebLogic Integration, and the WebLogic Platform components it requires, in a full-scale production environment, you must purchase a production license. To find out how to do so, contact your sales representative or visit the BEA corporate Web site at.
For more information about development, scale-limited, and production licenses, see "Installing and Updating WebLogic Platform License Files" in Installing WebLogic Platform at the following URL:
Development and production licenses for version 8.1 of WebLogic Platform will continue to work with WebLogic Integration 8.5.
Secure sockets layer (SSL) encryption software is available with two levels of encryption: 56-bit and 128-bit. In WebLogic Integration 8.5 and higher, the license files installed with WebLogic Integration 8.5 enable both 56-bit and 128-bit encryption by default.
The WebLogic Integration installation program provides two types of installation: Complete and Custom.
In a complete installation, the following components are automatically installed:
The WebLogic Server Node Manager is not installed as a Windows service.
In a custom installation, you have the following options:
Two SDKs are installed on Windows platforms only: the Sun Java 2 SDK 1.4.2 and the BEA JRockit 1.4.2 SDK. For more information, see BEA JRockit SDK.
When installing WebLogic Integration the managed server instances of WebLogic Integration. Integration, the service is also uninstalled. Integration and all other WebLogic Platform components Server Integration, you need to specify locations for the following directories:
When you install WebLogic Integration, you are prompted to specify a BEA Home directory. This directory serves WebLogic Integration installation program, that includes a bundled SDK.
This illustration depicts only the files and directories required in the BEA Home directory. However, if you choose the default product installation directory, you will see additional directories in the BEA Home directory, such as
weblogic81_wli85. Although the default location for the product installation directory is within the BEA Home directory, you can select a different location outside the BEA Home directory.
Note: For some UNIX platforms, the WebLogic Integration installation program does not install an SDK.
During installation of WebLogic Integration, you are prompted to choose an existing BEA Home directory or specify a path to create a new BEA Home directory. If you choose to create a new directory, the WebLogic Integration Integration 8.5 in a BEA Home directory, but that BEA Home directory may also contain an instance of WebLogic Platform 7.0.
The files and directories in the BEA Home directory are described in the following table.
Although it is possible to create more than one BEA Home directory, we recommend that you avoid doing so. In almost all situations, a single BEA Home directory is sufficient. There can the initial installation of WebLogic Integration, to choose a product installation directory. If you accept the default, the WebLogic Integration software, and the WebLogic Platform components it requires, are installed in the following directory:
C:\bea\weblogic81_wli85
Here,
C:\bea is the BEA Home directory and
weblogic81_wli85 is the product installation directory for the WebLogic Integration 8.5 software. However, you can specify any name and location on your system for your product installation directory. There is no requirement that you name the directory
weblogic81_wli85 Integration Product Directory Structure.:
platform
XXX
_win32.exe -log=C:\logs\platform_install.log
In this filename,
XXX represents the version number of the software you are installing. installing service packs, see Installing Service Packs and Rolling Patches.
For information about upgrading WebLogic Integration components from previous releases, see BEA WebLogic Integration Upgrade Guide at the following URL:. | http://e-docs.bea.com/wli/docs85/install/prepare.html | 2009-07-04T16:25:19 | crawl-002 | crawl-002-015 | [] | e-docs.bea.com |
The following sections describe the different migration mechanisms supported by WebLogic Server:
These sections focus on migration of failed server instances and services. WebLogic Server also supports replication and failover at the application level. For more information, server migration, the server instance is migrated to a different physical machine upon failure. Server Migration.
WebLogic Server provides two separate implementations of the leasing functionality. Which one you use depends on your requirements and your environment.:
<WEBLOGIC:. For general information on configuring Node Manager, see Using Node Manager to Control Servers.
For general information on leasing, see Leasing...
Automatic singleton service migration allows the automatic health monitoring and migration of singleton services. A singleton service is a. The process of migrating these services to another server is handled via the Migration Master. See Migration Master on page 7. | http://e-docs.bea.com/wls/docs92/cluster/migration.html | 2009-07-04T16:25:05 | crawl-002 | crawl-002-015 | [] | e-docs.bea.com |
The.
Warranty. A copy of the specific warranty terms applicable to your Hewlett-Packard product and replacement parts can be obtained from your local Sales and Service Office.
Restricted Rights Legend. Use, duplication, or disclosure by the U.S. Government Department.
Use of this manual and flexible disk(s) or tape cartridge(s) supplied for this pack is restricted to this product only. Additional copies of the programs may be made for security and back-up purposes only. Resale of the programs in their present form or with alterations, is expressly prohibited.
Reproduction, adaptation, or translation of this document without prior written permission is prohibited, except as allowed under the copyright laws.
(C)copyright 1979, 1980, 1983, 1985-93 Regents of the University of California
This software is based in part on the Fourth Berkeley Software Distribution under license from the Regents of the University of California.
(C)copyright 1980, 1984, 1986 Novell, Inc.
(C)copyright 1986-1992 Sun Microsystems, Inc.
(C)copyright 1985-86, 1988 Massachusetts Institute of Technology.
(C)copyright 1989-93 The Open Software Foundation, Inc.
(C)copyright 1986 Digital Equipment Corporation.
(C)copyright 1990 Motorola, Inc.
(C)copyright 1990, 1991, 1992 Cornell University
(C)copyright 1989-1991 The University of Maryland.
(C)copyright 1988 Carnegie Mellon University.
Trademark Notices. UNIX is a registered trademark in the United States and other countries, licensed exclusively through X/Open Company Limited.
X Window System is a trademark of the Massachusetts Institute of Technology.
MS-DOS and Microsoft are U.S. registered trademarks of Microsoft Corporation.
OSF/Motif is a trademark of the Open Software Foundation, Inc. in the U.S. and other countries.
Second Edition: April (HP-UX Release 10.20)
NOTE: In this white paper, words in double braces ({{word}}) represents a parameter or argument that you must replace with an actual value.
From the time you boot your system to the time you get a "login:" prompt, the system performs several important tasks automatically. The system tests computer hardware, loads and initializes HP-UX, communicates messages to users, and runs scheduled routines. When these system startup tasks are finished, HP-UX is in state where users can log in and execute processes.
NOTE: In case of trouble, you can boot from a recovery system (a minimal system kept on DDS media or cartridge tape). Be sure to make a recovery system as soon as you verify that the system is installed correctly. For more information, refer to the chapter "Backing Up and Restoring Data" in the HP-UX System Administration Tasks manual and the Support Media User's Manual.
System startup occurs in three phases:
As these phases occur, a series of messages appears quickly on the console screen. You can review HP-UX kernel messages once you have a login prompt by invoking the command /sbin/dmesg with superuser privilege. To see what happens during /sbin/rc and startup script execution, see /etc/rc.log.
The Boot ROM startup sequence is different for each architecture, but the HP-UX startup sequence is similar.
This white paper describes the Boot ROM startup sequence for Series 700 and 800 computers and the HP-UX startup sequence. By understanding them, you can tailor system startup to your unique needs.
Conceptually, the boot program performs the same basic functions regardless of architecture. When the system is powered up, the boot program initializes and tests hardware to bring the system into a usable state by the operating system.
This includes:
Stable storage is the non-volatile memory reserved for maintaining critical configuration parameters used during system boot. For example, the primary and alternate boot paths, console path, and autoboot settings are stored in Stable Storage.
After checking the viability of the system, the boot program searches for a copy of the secondary loader. It searches a list of potential sources, including disks and network (LAN), as shown by Figure 2-1. (LIF implementation differs, depending on architecture, but the basic concept remains true.)
Boot Program ----------------------------------------- | | | bootROM | | | v | _______________________ | | | | | LIF Volume | v |_____________________| ..._________________... LAN ..._________________...
Figure 2-1. The Boot Program Searches for an Operating System
The boot program software works with a specific hardware architecture. The secondary loader is a larger program, whose additional code provides the flexibility to deal with the changes to the booting process from operating-system release to release. (From the perspective of the boot program, the terms operating system and secondary loader are synonymous.)
Further, the secondary loader is stored in LIF (Logical Interchange Format) disks that are understood by HP 1000, 3000, and 9000 systems.
The boot program finds the secondary loader on the boot media (typically in the LIF volume of a mass storage media), loads it into memory, and starts it running.
The automatic boot process on the Series 700 resembles the Series 800 due to similarities in the Series 700 and 800 processor-dependent code (PDC).
Switching on the system or pressing the system transfer-of-control (TOC) button causes PDC and I/O-dependent code (IODC) firmware to be executed, to verify hardware and system integrity. The PDC runs self-tests and locates the console, using IODC and the paths stored in Stable Storage. The PDC displays on the console screen copyright information, PDC and IODC ROM revisions, and amount of memory configured. (See the manual page for pdc(1M) in the HP-UX Reference.)
Once checking is complete, one of two things might happen depending on your system (refer to your system's Owner's Guide for details):
PDC displays the following to the console:
Selecting a system to boot. To stop selection process, press and hold the Escape key.
If the Escape key is not pressed, PDC loads the initial system loader (ISL) from the primary path in Stable Storage (for example, a SCSI disk with address scsi.6.0) and transfers control to it.
The ISL, scheduled by the autoboot sequence, finds the autoexecute file and executes the command specified in it (typically "hpux" or "hpux boot disc(;0)/stand/vmunix"). By default, the autoexecute arguments load the program /stand/vmunix into memory and execute it.
While loading, the secondary loader displays information about the device and file being booted, the text, data, and BSS size of the kernel and the kernel's startup address.
If necessary, you have 10 seconds to override the default autoboot sequence by pressing the Escape key and enter Attended Mode.
For details on the Series 700 boot capabilities, including boot sequences from a variety of devices, autoboot, and restore, see hpux(1M) in the HP-UX Reference. Also, see the Owner's Guide for your Series 700 system for information on secondary loader enhancements.
Each time the computer is powered on, you have the opportunity to interact with it by entering Attended Mode. For example, you might need to interrupt the boot sequence for either of the following reasons:
Pressing the Escape key halts the automatic boot sequence and puts you into Attended Mode. The system searches the SCSI, LAN, and EISA interfaces for all potential boot devices -- devices for which boot I/O code (IODC) exists. Then, depending on your system, the system may display a table, such as the following:
Device Selection Device Path Device Type and Utilities P0 scsi.6.0 QUANTUM PD210S P1 scsi.5.0 QUANTUM PD210S P2 scsi.4.0 {{DDS_tape_drive_identifier}} P3 scsi.3.0 TOSHIBA CD-ROM DRIVE:XM P4 lan.123456-789abc homebase
At this point, the Boot Console User Interface main menu offers the following options:
b) Boot from specified device s) Search for bootable devices a) Enter Boot Administration mode x) Exit and continue boot sequence ?) Help
Using the b) option, you can direct hpux to boot from a specific device. For example, to direct hpux to boot from the QUANTUM PD210S disk drive at SCSI ID 5, you would type the key sequence "b p1".
Using the s) option of the Boot Console User Interface main menu, you direct the system to search through the list of potential boot devices for only those that have a LIF volume and initial program loader (IPL). The system then displays a table listing only those devices from which you can boot.
For example, a system might return the following selection:
Device Selection Device Path Device Type & Utilities ---------------- ----------- ----------------------- P0 scsi.6.0 Quantum PD210S IPL P1 scsi.5.0 Quantum PD210S IPL P2 scsi.1.0 HP 2213A IPL
Using the a) option of the Boot Console User Interface main menu, you can enter Boot Administration mode, from which you can alter default behaviors exhibited at boot-up or obtain useful information about the hardware as configured. The following commands are available at the Boot Administration Mode menu:
AUTO Display state of Autoboot/Autosearch flags AUTOSEARCH Set state of Autosearch flag AUTOBOOT Set state of Autoboot flag BOOT Boot from Primary/Alternate path or Specified Device DATE Read/Set the Real-Time Clock EXIT Return to previous menu FASTSIZE Display/Set FASTSIZE memory parameter HELP item Display Help information for item INFO Display boot/revision information LAN_ADDR Display LAN Station Address OS Display/Select Operating System PATH Display/Modify Path Information PIM_INFO Display Processor Internal Memory Information RESET Reset the System SEARCH Search for boot device SECURE Display/set secure boot mode SHOW Display the results of the previous search
For detailed information on using the Boot Administration mode, see the Owner's Guide for HP-UX Users for your Series 700 system.
When more than one operating system is present on the system's mass storage devices, both the location of the operating systems and the type of media on which they are stored determine which operating system is loaded. The primary boot path in Stable Storage determines the default boot path.
When you turn the computer on, the Boot ROM goes through the following sequence:
On Series 800 systems, storage on each disk may be divided into a set of partitions, or may be contained in a single "whole-disk" partition. For disks that are divided into partitions, the disk on which your bootable system resides has a boot area and a root partition. The boot area contains the bootstrap program and other files needed for bringing up the system. See isl(1M) in the HP-UX Reference for more information on these files. For disks that are divided into partitions, the disk partition for the boot area is typically Section 6. Root partitions are usually located on other sections of the disk.
The following describes what happens at powerup or reset for some systems. Refer to your system's Owner's Guide for details.
At powerup or reset:
Booting from default path To interrupt, press any key within 10 seconds
If you press a key the system asks if booting will be done from the primary autoboot path, the secondary autoboot path, or ask for an alternate path.
Once the operating system is loaded, the Boot ROM passes program control to the operating system. The operating system then controls the system until you re-boot the system. The section in this white paper called "HP-UX Startup Sequence" describes what HP-UX does between the time it takes control and the time you see the "login:" prompt.
The Boot ROM initializes the primary boot path, loads ISL, and allows you to select either the manual or autoboot mode. In manual mode, you can select the boot device from all the available peripheral devices. In autoboot mode, the Boot ROM automatically boots the operating system from the primary boot path defined in Stable Storage.
You should use autoboot except for first-time installation and operating system reconfiguration. The ISL "autoboot on" command enables autoboot. No reboot or automatic reboot on panic is possible. Panic is a condition when the system becomes inoperative due to an abnormal condition detected by the kernel. Before using autoboot, make sure the boot device is fully powered up and ready for operation before you turn on your computer. Autoboot is selected if you let the 10 second override period expire.
Manual boot can be entered by pressing any key during the 10-second override period in the beginning of the autoboot sequence. When manual mode is activated, the Boot ROM prompts for the path to be used. The primary boot path is not altered or disabled.
If you do not want to boot automatically from the primary boot path during the boot process, disable the autoboot flag in Stable Storage. You can do this using the ISL "autoboot off" command. However, this is not normally done because the autoboot feature makes your system administration tasks more efficient. Disabling autoboot requires your intervention each time the system is rebooted. To re-enable autoboot, use the ISL "autoboot on" command.
In the Autosearch mode, If the system cannot locate the console using the console path from Stable Storage, it searches for a console device. Then, if the system cannot locate the boot device using the primary or alternate boot paths and the autosearch flag is set, the system continues to search for a boot device.
The autosearch flag is much like the autoboot flag. It is in Stable Storage and can be enabled or disabled. Use the ISL" autosearch on" command to enable autosearch and the ISL "autosearch off" command to disable the feature.
When autosearch is invoked (if the two paths specified in Stable Storage fail), the following messages appear on the console:
Autosearch for boot device enabled. To override, press any key within 10 seconds.
If you press a key, the system responds:
Do you want to continue an interactive search? (Y or N)?>
If you answer "no," autosearch halts at that point and proceeds to manual boot. If you answer "yes," the system searches for a boot path, states it, and asks you if you want to boot from it. If you respond "yes," the system uses that path. Otherwise, it presents other logical paths until you respond positively or until it finds no other paths. It then proceeds to manual boot.
Once HP-UX takes control from the intermediate loader, it performs several tasks:
When the HP-UX kernel starts, it locates and initializes the system hardware, such as memory, I/O devices, bus interfaces, and devices found on I/O buses. Device drivers in the HP-UX kernel are associated to I/O devices at this time. A series of messages is printed out on the system console at this time; these messages can be reviewed later by executing the command /sbin/dmesg.
The major data structures of the HP-UX kernel are created and initialized when the kernel starts up. These data structures include the tables that describe the system processes, system memory, file systems, and devices. Status messages are also printed out to the system console at this time. Virtual memory is initialized, and the system is prepared to start user space processes.
Once HP-UX starts, it searches for the root file system. The root file system is the portion of the file system that forms the base of the global file system hierarchy -- that is, the base of the file system on which other file systems can be mounted. The root file system contains critical system files. The root file system is often found on the disk from which HP-UX booted.
After finding the root file system, the operating system starts a shell to execute a series of commands from the file /sbin/pre_init_rc. Among these commands is fsck(1M), which checks the root file system, which at that point is mounted in a read-only state. (Please refer to the man page for fsck(1M) or to sections on the fsck operation in white papers on HP-UX file systems. Inconsistencies or problems noted in the file system that fsck is unable to fix at this time are corrected later by a more exhaustive fsck operation in the startup script /sbin/bcheckrc. After fsck exits, the operating system remounts the root file system in a read-write state.
Do not modify the /sbin/pre_init_rc script; changes made to this script may cause the system to be unbootable.
HP-UX starts the init process, /sbin/init. The init process has process ID one (1) and no parent process. The init process is ultimately responsible for starting all other user processes.
The init process reads the /etc/inittab initialization file to define the environment for normal working conditions.
The init process reads the /etc/inittab file one line at a time, each line containing an entry that describes an action to take.
The syntax of inittab entries is:
id:run-levels:action:process id One- to four-character ID that uniquely identifies the entry. run-levels Defines the run-levels in which the entry can be processed. You can specify multiple run-levels. If this field is empty, all run-levels are assumed. The following describes the run-levels: Run-Level Description --------- ----------- 1 Provides essential services, such as mounting file systems and configuring essential system parameters. 2 Used for general multi-user run state. 3 Used for export of certain types of network file systems. 4 Used for HP VUE or presentation manager startup. Typically, entries tell init to run a process at specific run-levels. If no run-levels are specified, the process can execute in any run-level. For example, the following entry tells init to run the /usr/sbin/getty process in all run-levels: cons::respawn:/usr/sbin/getty console H action Identifies what action to take for this entry. The actions are as follows: sysinit Performs system initialization on devices needed by init for obtaining run-level information at the console, such as tty characteristics. sysinit entries must finish executing before /etc/inittab continues. initdefault Causes the initial (default) run-level to be the value of the run-levels field. If more than one run-level is specified in run-levels, init uses the highest specified run-level. boot Run the command specified in the process field at boot-time only. Do not wait for process to exit before reading the next entry. Before enabling other users to access the system, init executes all /etc/inittab entries marked boot or bootwait. These processes are known as boot processes. bootwait Run the command specified in the process field at boot-time only. Wait for process to exit before reading the next entry. wait On entering the run-level that matches the run-levels field of this entry, run process and wait for it to exit before reading the next entry. respawn On entering the run-level that matches the run-levels field of this entry, run process if it is not already running. Do not wait for process to exit before reading the next entry. When the process exits, run it again. process This is a shell command to be run, if the entry's run-levels matches the run-level and/or the action field indicates such action.
HP-UX recognizes any text following the # symbol as commentary.
Each system architecture has its own /etc/inittab file, with some unique architecture-specific /etc/inittab characteristics. For example, systems containing graphical consoles typically start a presentation manager such as HP VUE or CDE to manage the display of multiple windows on the console. The inittab entry corresponding to the startup of such a presentation manager is as follows:
vue :4:respawn:/usr/vue/bin/vuerc # VUE invocation
Systems that do not contain a graphical user interface would not have such an entry in /etc/inittab.
The /etc/inittab file sets up system run-levels. Run-levels are defined as collections of processes that allow the system to operate with certain properties. The entry marked initdefault sets the default run-level (typically 3 or 4):
init:3:initdefault:
Among the /etc/inittab entries are programs that ensure system integrity and set up essential processes. These programs are discussed in the next subheadings.
If you are implementing the Logical Volume Manager (LVM), bcheckrc calls /sbin/lvmrc to activate LVM volume groups.
On all systems, /sbin/bcheckrc verifies that the system was properly shut down, and that file systems were consistently saved on the disk. Depending on the type of the file system, bcheckrc may invoke a series of operations such as fsck that verify file system integrity, and correct it if necessary. If a file system has become damaged and fsck cannot repair it automatically without loss of data, then bcheckrc will start a shell at the system console with the prompt "(in bcheckrc)#", instructing you to run fsck interactively. If this occurs, you must run fsck to correct the integrity of your file system.
Some file system problems must be fixed in this way to minimize risk of data loss. After running fsck interactively, you may be instructed to reboot the system. If so, reboot the system using "reboot -n" to bring down the system without writing the in-core filesystem map out to disk; this maintains the disk file system's integrity. If fsck does not tell you to reboot, exit the shell by typing CTRL-D. This returns control to bcheckrc.
When fsck has verified the consistency of the file systems, it exits.
The following entry invokes /sbin/rc:
sqnc::wait:/sbin/rc >/dev/console 2>&1 # system initialization
The /sbin/rc script is a general purpose sequencer program that runs whenever there is a change in the system run-level (such as a change in the run-level from 2 to 3). The system executes /sbin/rc at startup as this is a change from run-level 0 (halted) to the initdefault level.
/sbin/rc invokes startup scripts appropriate for the run-level. When entering state 0, /sbin/rc starts scripts in /sbin/rc0.d. When a system is booted to a particular run-level, it will execute startup scripts for all run-levels up to and including the specified level. For example, if you are booting to run-level 4, the /sbin/rc sequencer script executes the start scripts in this order: /sbin/rc1.d, /sbin/rc2.d, /sbin/rc3.d, and /sbin/rc4.d.
Current start scripts and sequence numbers are listed in an accompanying file. Note that the entries on your system may vary depending on your configuration. The scripts are run in alphanumeric sequence. For a description of the script, see the ASCII-readable script file on your system. Also note that kill scripts for start scripts in directory /sbin/rc{{n}}.d reside in /sbin/rc({{n}}-1).d.
The init process waits until /sbin/rc exits before processing the next entry in /etc/inittab.
Once /sbin/rc has finished control returns to init, which runs the commands from the process field for appropriate run-level entries in /etc/inittab. Typically, entries in /etc/inittab for a given run-level invoke the /usr/sbin/getty command, one for each terminal on which users log in. (When you add a new terminal with the SAM utility, it automatically adds an appropriate /usr/sbin/getty entry to /etc/inittab.) The /usr/sbin/getty command runs the login process on the specified terminal, allowing users to login on the terminal.
For example, the following /etc/inittab entry runs a getty at the system console:
cons:123456:respawn:/usr/sbin/getty console console # system console
The respawn action field tells init to restart the getty process after it exits. This means that each time you log off the system console, a new "login:" prompt is displayed, so you can log in the next time. The 123456 run-levels field indicates that init runs getty in run-levels 1 through 6.
The default configuration for /etc/inittab invokes /usr/sbin/getty only for the system console. If your system has additional terminals on which you wish to support logins, you must add the appropriate getty entries to /etc/inittab. (Note, however, that SAM automatically creates these entries when you use it to add terminals).
The /usr/sbin/getty command is the first command executed for each login terminal. It specifies the location of the terminal and its default communication protocol, as defined in the /etc/gettydefs file. It prints the /etc/issue file (if present) and it causes the first "login:" prompt to be displayed. Eventually, the getty process is replaced by your shell's process (see the "Login" white paper).
When you logout, the /sbin/init process is signaled and takes control again. The init process then checks /etc/inittab to see if the process that signaled it is flagged as continuous (denoted by respawn). If the process is continually respawned, init again invokes the command in the command field of the appropriate inittab entry as described above (that is, the getty runs and a new "login:" prompt appears). If the process is not flagged as continuous, it is not restarted.
NOTE: Do not add /usr/sbin/getty entries to /etc/inittab for unconfigured terminals, unless action is "off".
If the system finds itself having to respawn entries too rapidly, it assumes that a problem exists and goes to sleep for five minutes before trying to respawn again. If the problem involves the getty for the system console, the system might not be bootable without repair.
Users can log in at all terminals for which getty processes have been executed.
A local powerfail is a power failure that halts the computer by affecting its central bus. On some architectures, the HP-UX operating system provides a mechanism for recovery from a local powerfail to ensure that any program running on the system at the time of failure can resume executing when power to the bus is restored.
The HP-UX kernel must be correctly configured to support the powerfail operation. You can change the value of the powerfail variable (the default is zero) by using the reconfiguring the kernel section of sam(1M).
Then, if need be, powerfail can be disabled by reconfiguring an operating-system parameter as follows:
pfail_enabled 0;
After the hardware detects a powerfail condition and notifies the HP-UX kernel, the kernel invokes user space processes to handle the event. The powerfail routine that will be invoked when powerfail is detected is configured as an entry in the file /etc/inittab:
powf::powerwait:/sbin/powerfail >/dev/console 2>&1 #power fail routines
The /sbin/powerfail script is an executable script that can be customized by users. The script is separated into three parts:
The configuration variables for powerfail reside in the /etc directory and follow the same rules as /etc/rc.config files.
The powerfail execution script resides under the /sbin or /usr/sbin directories and cannot be modified.
At the end of an HP-supplied script, users can add their own commands. These scripts reside in /etc/local/%%script_name%%.
For example, /sbin/powerfail can look like this:
source /etc/powerfail.cfg {{HP commands}} {{Additional HP commands}} source /etc/local/powerfail
where the contents of /etc/powerfail.cfg is:
VARIABLE=VALUE VARIABLE2=VALUE2
and the contents of /etc/local/powerfail are user-chosen commands.
Be sure to follow guidelines for correct shutdown and start-up of a system necessitated by powerfail. These guidelines are given in Chapter 3, "Starting and Stopping HP-UX" of the HP-UX System Administration Tasks manual.
The sections in this white paper describe the default operation of the system as shipped to you. However, by altering certain configuration or system files, any of the procedures can change. If, for example, you write your own /sbin/rc script, the paragraphs which follow may no longer apply.
Table 2-2 shows where to look for additional information.
Table 2-2. Additional Startup Information --------------------------------------------------------------------- To Learn More About... || Refer to... --------------------------------------------------------------------- /sbin/init || init(1M) in the HP-UX Reference /etc/inittab || inittab(4) in the HP-UX Reference /sbin/bcheckrc || /sbin/bcheckrc file /sbin/rc || /sbin/rc file /usr/sbin/getty || getty(1M) in the HP-UX Reference | http://docs.hp.com/en/935/boot.html | 2009-07-04T16:25:31 | crawl-002 | crawl-002-015 | [] | docs.hp.com |
Before you begin
A directory for the file store that must already exist on your file system, so be sure to create it before completing this page.
A file store is a physical repository for storing subsystem data, such as persistent JMS messages and durable subscribers.
Note: Once you create a file store, you cannot rename it. Instead, you must delete it and create another one that uses the new name.. | http://e-docs.bea.com/wls/docs92/ConsoleHelp/taskhelp/stores/CreateFileStores.html | 2009-07-04T16:26:05 | crawl-002 | crawl-002-015 | [] | e-docs.bea.com |
WebLogic Workshop provides Java controls that make it easy for you to encapsulate business logic and to access enterprise resources such as databases, legacy applications, and web services. There are three different types of Java Controls: built-in Java controls, portal controls, and custom Java controls.
Built-in controls provide easy access to enterprise resources. For example, the Database control makes it easy to connect to a database and perform operations on the data using simple SQL statements, whereas the EJB control enables you to easily access an EJB. Built-in controls provide simple properties and methods for customizing their behavior, and in many cases you can add methods and callbacks to further customize the control.
A portal control is a kind of built-in Java control specific to the portal environment. If you are building a portal, you can use portal controls to expose tracking and personalization functions in multi-page portlets.
You can also build your own custom Java control from scratch. Custom Java controls are especially powerful when used to encapsulate business logic in reusable components. It can act as the nerve center of a piece of functionality, implementing the desired overall behavior and delegating subtasks to built-in Java controls (and/or other custom Java controls). This use of a custom Java control ensures modularity and encapsulation. Web services, JSP pages, or other custom Java controls can simply use the custom Java control to obtain the desired functionality, and changes that may become necessary can be implemented in one software component instead of many.
If you are connecting to an enterprise resource that exposes a standards-based, J2EE, or Web Services interface, you can create a custom Java control to directly connect to that application. However, if you are connecting to an external resource that is proprietary or does not expose standard J2EE APIs, you may need to use a JCA (Java Connector Architecture) adaptor and an Application View control rather than a Java control to connect to that resource. JCA adaptors and the Application View control are available through WebLogic Integration. For more information on using JCA adaptors and the Application View control, see Overview: Application Integration. | http://e-docs.bea.com/workshop/docs81/doc/en/workshop/guide/controls/navWorkingWithJavaControls.html | 2009-07-04T16:25:43 | crawl-002 | crawl-002-015 | [] | e-docs.bea.com |
Getting Help

HP Browse provides a comprehensive online help system that offers useful information about functions and the context in which they are used. To access the online help facility, do any one of the following:
- Press the Help function key.
- Press H.
- Press h.
- Press ?.

HP Browse presents you with three types of help. The first type lists several tasks you can do with HP Browse. It describes how to accomplish each task. The second type lists all the HP Browse functions and gives a description of each function. The third type lists all the HP Browse command keys and the functions they perform.

To leave the help facility and return to browsing your file, do either of the following:
- Press the Exit Help function key.
- Press Return one or more times until you get back to your file.

When you press Return, HP Browse displays the previous help screen unless you are at the main help menu. In this case, your text file is displayed.
MPE/iX 5.0 Documentation | http://docs.hp.com/cgi-bin/doc3k/B3638490001.10244/9 | 2009-07-04T16:26:01 | crawl-002 | crawl-002-015 | [] | docs.hp.com |
Note: The Tix module has been renamed to tkinter.tix in Python 3.0. The 2to3 tool will automatically adapt imports when converting your sources to 3.0.
See also
Tix.Tk: Toplevel widget of Tix which represents mostly the main window of an application. It has an associated Tcl interpreter.
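As a minimal sketch (the widget choice and options here are illustrative, not taken from the original page), a Tix application is started by creating a Tix.Tk toplevel instead of a plain Tkinter.Tk:

import Tix  # renamed to tkinter.tix in Python 3.0

root = Tix.Tk()                              # use Tix.Tk so the Tix widgets are available
entry = Tix.LabelEntry(root, label="Name:")  # one of the widgets Tix adds to Tkinter
entry.pack(padx=10, pady=10)
Tix.Button(root, text="Quit", command=root.destroy).pack()
root.mainloop()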
Classes. There is a demo of all the Tix widgets in the Demo/tix directory of the standard distribution.
The Tix module adds:. | http://docs.python.org/library/tix.html | 2009-07-06T13:54:05 | crawl-002 | crawl-002-015 | [] | docs.python.org |
Note
The repr module has been renamed to reprlib in Python 3.0. The 2to3 tool will automatically adapt imports when converting your sources to 3.0.
The repr module provides a means for producing object representations with limits on the size of the resulting strings.
Limits on the number of entries represented for the named object type. The default is 4 for maxdict, 5 for maxarray, and 6 for the others.
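For example, a customized Repr instance might be used like this (a minimal sketch):

import repr as _repr   # the module, not the built-in; renamed reprlib in Python 3.0

r = _repr.Repr()
r.maxlist = 3               # show at most 3 list entries
print r.repr(range(10))     # prints something like '[0, 1, 2, ...]'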
New in version 2.4: maxset, maxfrozenset, and maxdeque. | http://docs.python.org/library/repr.html | 2009-07-09T12:34:35 | crawl-002 | crawl-002-015 | [] | docs.python.org
User Profile
s2docs's Journal
Created on 2003-12-10 15:52:46 (#1524038), last updated 2004-01-08
27 comments received, 0 comments posted
Permanent Account
7 Journal Entries, 0 Tags, 0 Memories, 0 Virtual Gifts, 2 Userpics
Contact:[email protected]
This will eventually transmogrify itself into the official documentation reference journal for LiveJournal's new and improved recipe S2 style system.
This journal is not the place to ask questions about S2. Any comments to entries in this journal asking for assistance with your journal are subject to deletion and/or a pointer to this page.
Here are some useful S2 resources that are already available to you:
- First and foremost, LiveJournal's FAQ has an entire section devoted to S2, as well as an FAQ on the differences between LiveJournal's two style systems.
- Tutorials on how to do things to your existing S2 style are being added every day to the s2howto community, and if you feel like contributing your own tutorial you're welcome to participate in the howto_userdoc community.
- General discussion about S2 is conducted in the s2styles community, which you're recommended to watch if you have any interest in learning more about S2 or following its progress.
- If you're a paid user who currently uses the "Component" S2 style with your journal, Component-specific tutorials and information can be found in the s2component community. The style even has a dedicated support community, in the form of component_help.
Finally, if you have any S2-specific queries that aren't answered in any of the above journals, communities or FAQs, please submit a support request. Support volunteers are standing by.
Interests (22):
customization, customizing, documentation, hacking, layers, layouts, livejournal, livejournal styles, lj, lj layout, lj style, making s2 less scary, new style system, s2, s2 documentation, s2 styles, style developers, style development, style system, styles, themes, tutorials
External Services:
Friends (5):
Friend of (28):
agochic, agreyowl, ardisia, arie, ballroomriot, bunney, byulibnida, cmshaw, dictyostelium, eatenberry, eeriedescent, eli_lilly, entirelysonja, irea-6242 [deadjournal], kamara, kunzite1, miegamice, mk_tortie, nodisc, quillscribe, s2_comms, silverthistle, squeecakes, starlog_entry, tinyjo, tjackson, undo, xella
Watching (0)
| http://s2docs.livejournal.com/profile | 2009-07-11T02:33:08 | crawl-002 | crawl-002-015 | [http://l-stat.livejournal.com/img/profile_icons/arrow-down.gif, http://l-stat.livejournal.com/img/profile_icons/arrow-down.gif, http://l-stat.livejournal.com/img/profile_icons/arrow-down.gif, http://l-stat.livejournal.com/img/userinfo.gif, http://l-stat.livejournal.com/img/community.gif, http://l-stat.livejournal.com/img/syndicated.gif] | s2docs.livejournal.com
September 21st, 2008 - San Diego, CA, USA - Mobile SMS Text and Email services provider Bulletin.net Inc today announced the launch of its full A2P messaging products and services and mTag technology into the China market.
This follows the company establishing a new messaging connectivity hub in Hong Kong to support both inbound and outbound SMS Text Messaging and Mobile Email. The service is now fully operational.
Bruce Herbert, President and Chief Operating Officer of Bulletin.net Inc said, "We are thrilled to be able to provide our existing and new customers access to our full suite of SMS Text products and services for messaging to and from China. The Hong Kong Gateway is this latest major addition to our global SMS Gateway and responds to the fast increasing demand for carrier-grade, secure and reliable application to mobile text services in the region."
The launch of Bulletin's global connectivity and patented mTag technology in China enables multiple two-way conversations via SMS directly between computers and mobile phones from and to anywhere in the world. Leading products offered by the company which are now available both inbound and outbound in the China market include SMS Text Messaging from PCs, Email or the Web, a unique Mobile Email service, Mobile Marketing and an Applications-Connectivity solution with full APIs and SDKs.
"With the phenomenal growth in demand for SMS Text and Mobile-Email services in China," continued Herbert, "Bulletin's Hong Kong Gateway provides enterprises, integrators and content providers with an even more powerful way of connecting with the expanding and enthusiastic mobile content audience."
Bulletin Wireless is a world-renowned developer and supplier of mobile-device messaging software and services, available to carriers, enterprises, integrators and content providers, with successful businesses established in the USA, UK, Australia, New Zealand, Brazil, The Philippines and Hong Kong. Bulletin's patents include the only known method of automatically enabling the 'reply' button on today's three billion cellphones to be used to seamlessly respond to software application initiated SMS text messages.
The company delivers high-performance and secure global mobile messaging products, connectivity and support services to an extensive list of clients around the world including many blue chip and Fortune 500 enterprises and telecommunication companies.
Anthony Kelly
Bulletin.net Inc.
4445 Eastgate Mall
Suite 200
San Diego, CA 92121
USA
Phone: +1 858 812 3169
[email protected] | http://docs.bulletinwireless.net/display/public/2008/09/23/Bulletin.net+Inc+opens+a+new+SMS+Gateway+in+Hong+Kong+and+launches+its+full+range+of+products+and+services+in+China | 2009-07-02T19:21:58 | crawl-002 | crawl-002-015 | [] | docs.bulletinwireless.net |
Installing VMware Tools in the Appliance
Nexthink recommends installing VMware Tools in any Appliance that runs on top of VMware virtualization products such as vSphere. VMware Tools significantly improves the performance and manageability of virtualized Appliances.
Starting from Nexthink V6, the Appliance is distributed with the open-vm-tools package already pre-installed. Therefore, no action is required on your part. When you deploy the Appliance in a VMware environment, it directly benefits from the features provided by the package. In addition, the package is automatically updated via the Appliance updates whenever a new version is available.
If for some reason you need to install the commercial version of VMware Tools, uninstall the open-vm-tools package first and then proceed as follows. Note however that VMware recommends the use of open-vm-tools on those platforms where the package is available, so do not install the commercial version of VMware Tools unless you really know what you are doing.
To install the commercial version of VMware Tools in the Appliance:
Open the vSphere Web Client and log in to connect to your vCenter Server.
On the left-side pane, click vCenter and select Virtual Machines from the Inventory Lists section.
Click the name of the virtual machine that runs the Appliance.
In the Summary tab, a yellow warning box displays the message VMware Tools is not installed on this virtual machine.
Click the link to the right of the warning message that reads Install VMware Tools.
Click Mount in the pop up dialog. A virtual CD with VMware Tools is now attached to your VM.
Open a terminal connection to the Nexthink Appliance (e.g. click Launch Console or connect to it via ssh) and log in to its CLI.
Type the following commands to mount the virtual CD:
sudo mkdir /mnt/cdrom
sudo mount -t iso9660 /dev/cdrom /mnt/cdrom
Check whether the mount was successful by listing the contents of the cdrom folder. The file VMwareTools-<version>.tar.gz must appear in the list:
ls /mnt/cdrom/
Copy the VMware Tools file to the tmp folder and extract its contents:
cp /mnt/cdrom/VMwareTools-*.tar.gz /tmp/
cd /tmp
tar -xvzf VMwareTools-*.tar.gz
cd vmware-tools-distrib
Install VMware Tools by executing the following script:
sudo ./vmware-install.pl
Press Enter to accept the default option whenever asked during the installation process.
Reboot the Appliance after install:
sudo reboot
After installing VMware Tools, you should be able to see the IP addresses of the VM hosting the Appliance in the Summary tab. The warning message about the installation of VMware Tools disappears.
| https://docs-v6.nexthink.com/V6/6.30/Installing-VMware-Tools-in-the-Appliance.330370933.html | 2022-05-16T19:46:10 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs-v6.nexthink.com
Senate
Record of Committee Proceedings
Committee on Elections, Ethics and Rural Issues
Senate Bill 293
Relating to: automatic voter registration and the integration of registration information with information maintained by the Department of Transportation and other state agencies and granting rule-making authority.
By Senators Hansen, Johnson, Bewley, Carpenter, Larson, Ringhand, Risser, Schachtner, L. Taylor, Wirch and Smith; cosponsored by Representatives Crowley, Gruszynski, Vining, Anderson, Billings, Bowen, Brostoff, Cabrera, Considine, Doyle, Emerson, Fields, Goyke, Haywood, Hebl, Hesselbein, Hintz, Kolste, McGuire, L. Myers, Neubauer, Ohnstad, Pope, Riemer, Sargent, Shankland, Sinicki, Spreitzer, Stubbs, Subeck, C. Taylor, Vruwink and Zamarripa.
June 21, 2019 Referred to Committee on Elections, Ethics and Rural Issues
March 26, 2020 Failed to pass pursuant to Senate Joint Resolution 1
______________________________
Scott Nelson
Committee Clerk | https://docs.legis.wisconsin.gov/2019/related/records/senate/elections_ethics_and_rural_issues/1566312 | 2022-05-16T18:06:37 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs.legis.wisconsin.gov |
Timeline
06/29/09:
- 19:54 Changeset [5255] by
- Add ECN0024 - Improve GPS ground and PCB layout
- 19:07 Changeset [5254] by
- Werner did a GPS review.
- 17:01 Changeset [5253] by
- Missing annotation
- 16:54 Changeset [5252] by
- Use net names instead of global labels
- 14:37 Changeset [5251] by
- Added convenience target "make update" to check out the latest SVN …
- 14:34 Changeset [5250] by
- Removed generated libraries.
- 14:29 Changeset [5249] by
- Added HT110 (single LED) and ISC5804AT2 (single transistor) to expanded …
- 14:26 Changeset [5248] by
- Renamed cpu_power.sch to cpu-power.sch.
- 12:17 Changeset [5247] by
- add spi
- 11:14 Changeset [5246] by
- RF trace impedance notes
- 11:04 Changeset [5245] by
- Missing caps annotation
- 11:00 Changeset [5244] by
- Missing inductor annotation
- 10:54 Changeset [5243] by
- Finish GPS schematic
- 01:10 Changeset [5242] by
- more vertical labels on cpu sheet
- 00:58 Changeset [5241] by
- use horizontal text in preference to vertical
06/28/09:
- 22:32 Changeset [5240] by
- space caps a little, and actually connect them up!
- 20:14 Changeset [5239] by
- Removed TST text from test points in IO schematics.
- 20:07 Changeset [5238] by
- Cleaned up vibrator circuit to remove crossings and follow the 4-grid …
- 18:53 Changeset [5237] by
- debug connector complete
- 18:05 Changeset [5236] by
- Remove unused/not needed components
06/27/09:
- 18:32 Changeset [5235] by
- add debug connector, more CPU parts to I/O sheet. Not yet complete.
- 12:36 Changeset [5234] by
- renumbering of ht110 to allow use of ht210's footprint.
- 12:23 Changeset [5233] by
- status / authors update
- 12:19 Changeset [5232] by
- Use FLASH_1v8 where we should.
- 12:15 Changeset [5231] by
- new power symbol - FLASH_1v8
- 12:04 Changeset [5230] by
- CPU_POWER sheet
- 11:39 Changeset [5229] by
- further pin moves for position
- 11:22 Changeset [5228] by
- move some pins on power module to more logically group function
- 10:58 Changeset [5227] by
- Bold sheet labels
- 10:33 Changeset [5226] by
- componant info for ht110
- 02:36 Changeset [5225] by
- one additional componant, and CPU is done. ish.
- 00:11 Changeset [5224] by
- New componant, ht110 (single LED) initial work for CPU sheet
06/26/09:
- 23:11 Changeset [5223] by
- labels for nand usage selection
- 22:21 Changeset [5222] by
- prep cpu sheet, move cpu timer block to CPU sheet.
- 18:55 Changeset [5221] by
- revert unintended elements of the last commit.
- 18:41 Changeset [5220] by
- use USB_DN / USB_DP labels for Bluetooth usb nets.
- 18:01 Changeset [5219] by
- swaped pin LRC and BCLK in symbol WM8753LGEFL added CPU parts to USB and …
- 10:10 Changeset [5218] by
- Update GPS ext. ant. removal info
- 06:26 Changeset [5217] by
- Updated ECNs 1 & 16 to reflect changes that were made to the schematics. …
- 06:03 Changeset [5216] by
- Removed duplicate CPU instantiaions from IO and LCM sheets. Updated LCM …
- 00:20 Changeset [5215] by
- ecn0022.txt: added ECN number in ECN0015 reference and corrected title
- 00:00 Changeset [5214] by
- updated ECN0022 and ECN0010, chaged status to discuss moved R4116/R4117 …
06/25/09:
- 19:50 Changeset [5213] by
- added Ref to C4111 and C4110 mirrored some components that Ref is on top, …
- 12:35 Changeset [5212] by
- add subsystem label to LCM sheet
- 12:32 Changeset [5211] by
- use descriptive lables for BT/WLAN 2-wire co-existence
- 12:04 Changeset [5210] by
- include lcm.sch in gta02-core.sch overview
- 12:01 Changeset [5209] by
- Added sheet name in 200 mil bold text to each sheet.
- 10:40 Changeset [5208] by
- r1837 has a bewildering number of build issues. tentative fix.
- 10:10 Changeset [5207] by
- fix-eeschema-libs.patch has made it upstream. We're now at r1837
- 08:58 Changeset [5206] by
- Minor logistics: - added new sheets to STATUS and AUTHORS - added …
- 07:51 Changeset [5205] by
- Initial LCM schematic created. Initial IO schematic created. Updated …
- 02:09 Changeset [5204] by
- Initial commit of Bluetooth and WLAN schematics
06/24/09:
- 18:22 Changeset [5203] by
- New sheet - bt.sch
- 07:04 Changeset [5202] by
- added microphone and speaker sections (work in progress)
06/23/09:
- 23:18 Changeset [5201] by
- added more components (work in progress)
06/22/09:
- 21:29 Changeset [5200] by
- Completed ECN0009.
- 21:24 Changeset [5199] by
- GPS work in progress
- 20:45 Changeset [5198] by
- started ecn removing audio amplifier
- 19:02 Changeset [5197] by
- Renamed ambiguous state "Review" to "Discuss". More new ECNs: 0018 Add …
- 16:01 Changeset [5196] by
- Added two more ECNs: 0016 Remove upper acceleration sensor (U7801) 0017 …
- 15:27 Changeset [5195] by
- Renamed state "Editing" to "Edit". Described ECN states in README. …
- 14:48 Changeset [5194] by
- One more ECN: 0008 Remove external GPS antenna connector and circuit …
- 14:45 Changeset [5193] by
- Completed ECNs 0003, 0005, and 0006. Added ECN0007: 0007 Remove KEEPACT …
- 13:59 Changeset [5192] by
- One more ECN: 0006 Either remove or populate R1701 (PMU.ADAPTSNS …
- 13:48 Changeset [5191] by
- Three new ECNs: 0003 Remove SHUTDOWN net and connect PMU.SHUTDOWN …
- 10:15 Changeset [5190] by
- PMU - some small fixes
- 10:10 Changeset [5189] by
- PMU - some small fixes
- 07:36 Changeset [5188] by
- work in progress (AUDIO)
- 07:08 Changeset [5187] by
- Updated memory.sch -Gave values to RPxxxx -Rotated RP2201 to match OM …
- 04:33 Changeset [5186] by
- replaced inductor symbol for B4902 with FILTER
- 03:43 Changeset [5185] by
- removed H- prefix of test points added ac voltage to varistors on the usb …
- 03:30 Changeset [5184] by
- Added ECN status file.
- 02:49 Changeset [5183] by
- Added schematics review status file.
- 02:38 Changeset [5182] by
- Allocated ECNs for Glamo and NOR removal.
- 02:35 Changeset [5181] by
- Added link to KLB0603K601SA data sheet (commented out for now).
- 02:32 Changeset [5180] by
- fixed some bugs and did some design changes werner mentioned
- 01:41 Changeset [5179] by
- Added convenience targets "gv" and "xpdf" to top-level Makefile and …
- 01:39 Changeset [5178] by
- memory_updates_06212009_1543.diff from Luke Duncan <duncan72187@…>
- 01:01 Changeset [5177] by
- reduced size of choke EXC24Bxxx symbol
- 00:33 Changeset [5176] by
- start working on audio codec schematic
- 00:32 Changeset [5175] by
- added AVCC_CODEC anf DVCC_CODEC to gta02_power.lib
06/21/09:
- 20:02 Changeset [5174] by
- Renamed all-sheets-ps to all-sheets
- 20:01 Changeset [5173] by
- Make overviews in PDF as well. Pre-shrink main schematics to A4. …
- 19:23 Changeset [5172] by
- Oops. SC32442 changes had been made without turning off pin sharing, …
- 19:03 Changeset [5171] by
- Upgraded compilation-making system for main schematics: - gta02-core.sch, …
- 18:15 Changeset [5170] by
- Rearranged SC32442-SDRAM according to Luke's schematics.
- 18:13 Changeset [5169] by
- Added convenience make target "make sch" to all places where we have a set …
- 13:38 Changeset [5168] by
- Added Alvaro's new GPS components to STATUS and AUTHORS. Regenerated …
- 01:19 Changeset [5167] by
- Update from Luke Duncan <duncan72187@…> Note that the CPU needs …
06/20/09:
- 21:11 Changeset [5166] by
- Proposed ECN process.
- 19:56 Changeset [5165] by
- GPS (work in progress)
- 19:46 Changeset [5164] by
- rearranged and resized sheets in gta02-core.sch
- 19:14 Changeset [5163] by
- GPS (work in progress)
- 19:13 Changeset [5162] by
- More power symbols
- 18:49 Changeset [5161] by
- added some missing References fixed some things Alvaro mentioned, missing …
- 18:07 Changeset [5160] by
- Add GPS sheet
- 17:52 Changeset [5159] by
- Add missing symbols - zxct1009 and upg2012tb for GPS
- 17:30 Changeset [5158] by
- added usb to gta02-core.sch added usb schematics
- 17:23 Changeset [5157] by
- added USB_VBUS to gta02_power.lib
- 16:14 Changeset [5156] by
- PMU (work in progress) - Fix annotation
- 16:11 Changeset [5155] by
- PMU (work in progress)
06/19/09:
- 23:51 Changeset [5154] by
- Updated generated libs for R_US change.
- 23:50 Changeset [5153] by
- Switched to combined library.
- 23:38 Changeset [5152] by
- Added text explaining the limited purpose of this file as well.
- 23:15 Changeset [5151] by
- New "schematics" for component editing only, so that we can use only the …
- 22:56 Changeset [5150] by
- Fixed placement of command in the LIBS line of .sch files.
- 21:37 Changeset [5149] by
- K4M51323PE was incorrectly defined as a power symbol.
- 21:33 Changeset [5148] by
- changed R_US to a more symetrical design
- 13:31 Changeset [5147] by
- PMU (work in progress)
- 13:31 Changeset [5146] by
- Add more power symbols
- 12:55 Changeset [5145] by
- Definition of SDRAM_1V8 was called IO_3V3. Oops. Added powers to expanded …
- 12:37 Changeset [5144] by
- Missed the other SDRAM_1V8.
- 12:26 Changeset [5143] by
- Added memory.sch by Luke Duncan <duncan72187@…> (Changed to use …
- 10:21 Changeset [5142] by
- Add GTA02 power
06/18/09:
- 21:41 Changeset [5141] by
- Add btp-03ja4g library
- 20:46 Changeset [5140] by
- PMU (ongoing)
- 20:35 Changeset [5139] by
- PMU (ongoing)
- 10:44 Changeset [5138] by
- Added Alvie's junction enlargement patch.
- 10:38 Changeset [5137] by
- More work on PMU - output stages
- 10:37 Changeset [5136] by
- Fix pin positions of PMU chip
- 01:44 Changeset [5135] by
- Added r_us to the meta- and generated files.
06/17/09:
- 21:34 Changeset [5134] by
- PMU schematic (work in progress). Add some basic resistors and connections
- 21:07 Changeset [5133] by
- Add R US-like symbol
- 16:54 Ticket #2296 (Cannot disable suspend anymore) created by
- In Om2009 Testing 5, setting "suspend-time" to -1 in Paroli, or "Suspend …
06/16/09:
- 19:19 Ticket #2295 (cpufreq: serial ports fail after suspend/resume: rxerr: port=1 ch=0x24, ...) created by
- Steps to reproduce: 1) boot andy-tracking 5a6ed99264c704e5 with Rask's …
- 19:03 Changeset [5132] by
- Removed 2. GND pin in the symbol of MS2V-T1S Reorderd pin numbers …
- 15:24 Changeset [5131] by
- Example for non-numerical pin numbers.
06/14/09:
- 22:48 Changeset [5130] by
- Review of: - rt9013, rt9702a, rt9711_bd_5, si1040x - sw_push_4, vibrator, …
- 07:45 Changeset [5129] by
- Changed the PMU's VISA from BiDi? to Passive. Reported by Rene.
- 02:50 Changeset [5128] by
- Review of k4m51323pe and pcf50633-04-n3
06/13/09:
- 21:39 Changeset [5127] by
- changed DTC123 pin numbering accordingly to what NXP and ON Semi. is using
06/12/09:
- 22:57 Changeset [5126] by
- Review of aat1275, ht210 and lis302dl
- 16:21 Changeset [5125] by
- Include all possible signals for multi-use pins
- 07:48 Changeset [5124] by
- Revert previous commit. Right adjustment confuses plotting.
- 07:44 Changeset [5123] by
- Updated sc32442.lib to use right-adjusted text for pseudo pins. Requires …
- 06:48 Changeset [5122] by
- Review of 74x1g125, 74x2g126, exc24cb102u, emh4 and varistor
- 06:29 Changeset [5121] by
- Removed text alignment. Jean-Pierre beat us to it :-)
- 00:36 Changeset [5120] by
- Reviewed "coax" and "dfbm-cs320". Some changes to dfbm-cs320: - changes …
06/11/09:
- 16:12 Changeset [5119] by
- Added Rene's component from June 7: - JKK4401: Headset jack - JAR02-062101 …
06/10/09:
- 22:09 Changeset [5118] by
- Review fa2012, wm8753l and 74x1g00_5
- 15:22 Changeset [5117] by
- Andre reviewed the SD/SIM connector smsn16. Added a remark that we name …
- 14:51 Changeset [5116] by
- Andre reviewed atr0610.
- 02:11 Changeset [5115] by
- Integrated Rene's recent components: - added new components to the merged …
06/09/09:
- 16:55 Changeset [5114] by
- Bumped priority of text alignment. Added a bit more details about the push …
- 16:19 Changeset [5113] by
- STATUS: - clarified that "component" is the file name (of the .lib file) …
- 01:32 Changeset [5112] by
- fixes to the atr0635 from Rene Harder (rehar@…) VCC2 change to E4, …
06/08/09:
- 20:00 Changeset [5111] by
- Update review status
- 17:07 Changeset [5110] by
- I've done a partial review of the atr0635
- 16:33 Changeset [5109] by
- update AUTHORS and STATUS
- 16:16 Changeset [5108] by
- Additional components from Rene Harder rehar@… - gps chipset …
- 16:09 Changeset [5107] by
- Additional component info - from rehar@…
- 15:10 Changeset [5106] by
- First draft of the (incomplete) KiCad? feature wishlist.
- 14:36 Changeset [5105] by
- STATUS updates: - Tobias reviewed si1040x, smsn16, sw_push_4, tas4025a - …
- 02:10 Changeset [5104] by
- Tobias reviewed the SC32442 as well.
06/07/09:
- 18:07 Ticket #2294 (xf86-video-glamo: stopping X can crash the whole system (not even JTAG ...) created by
- Steps to reproduce: 1) sudo gdb 2) attach <pid-of-X> 3) break …
- 00:27 Changeset [5103] by
- Andre's review of the sc32442 is finished.
- 00:15 Changeset [5102] by
- Review of mini-usb connector
06/06/09:
- 23:56 Changeset [5101] by
- Andre found a lot more table 1-1 victims: - VD16/GPD8/SPIMOSI1 is actually …
- 19:16 Changeset [5100] by
- review of b7840, emh4, exc24cb102u, fa2012, ht210, r3113d, rt9013, …
- 17:46 Changeset [5099] by
- nFLG is Open Drain
- 17:27 Changeset [5098] by
- nFLG is an open-drain output
- 16:35 Changeset [5097] by
- More 2442 issues found by Andre: - VDDIARM_1 should be VDDIARM - rename …
- 16:18 Changeset [5096] by
- Andre reviewed the si1040x. Updated compilations for recent fixes.
- 16:03 Changeset [5095] by
- correct pin numbering of pin 5
- 03:45 Changeset [5094] by
- A bit of cleanup and small improvements of components/STATUS: - added …
06/05/09:
- 22:16 Changeset [5093] by
- The cpu-power sheet was not included in the sheet collection.
- 20:27 Changeset [5092] by
- Review atr0610
- 20:15 Changeset [5091] by
- Review aat1275 - change FLT pin type
- 19:49 Changeset [5090] by
- Add Rene's LCD connector FH23-39S-03SHW for TD028TTEC1 module
- 19:47 Changeset [5089] by
- Add Rene's WLAN connector header and socket
- 00:11 Changeset [5088] by
- Updated generated libraries.
- 00:10 Changeset [5087] by
- Andre had reviewed the S3C2442.
- 00:10 Changeset [5086] by
- Renamed NCON0 to NCON. Reported by Andre <andre.knispel@…>
06/04/09:
- 22:03 Ticket #2293 (linux/andy-tracking: gps does not always honor power_on) created by
- Steps to reproduce: 1) echo 1 > …
06/03/09:
- 22:44 Changeset [5085] by
- Marked components reviewed by Andre <andre.knispel@…>
- 20:03 Changeset [5084] by
- Make PCF50633 VISC/REFC pins passive
- 19:41 Changeset [5083] by
- Add DFBM-CS320 symbol from Rene
- 19:41 Changeset [5082] by
- Update component review status
06/02/09:
- 10:15 Changeset [5081] by
- Update review list
06/01/09:
- 20:07 Changeset [5080] by
- Accelerometer: corrected naming of multi-use pins and made SDI and SDO …
05/30/09:
- 00:36 Changeset [5079] by
- Added IT3205BE by Rene Harder <rehar@…> Also added the .lib files …
- 00:00 Changeset [5078] by
- Added U7601 (ATR0610-PQQW) by Rene Harder <rehar@…>
Note: See TracTimeline for information about the timeline view. | http://docs.openmoko.org/trac/timeline?from=2009-06-29&daysback=30 | 2013-12-05T00:07:18 | CC-MAIN-2013-48 | 1386163037893 | [] | docs.openmoko.org |
You can declare a maximum of 10 outputs in an AWS CloudFormation template.
Each output is composed of a double-quoted key name, a single colon, and one or more properties.
There are two Output properties:
Value (required). The value of the property that is returned by the aws cloudformation describe-stacks command.
Description (optional). A String type up to 4K in length describing the output value.
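For instance, after a stack has been created, the declared outputs can be read back via the describe-stacks call mentioned above. The following Python sketch uses the boto3 library and a stack named mystack; both are illustrative assumptions, not part of the original example:

import boto3

cfn = boto3.client("cloudformation")
stack = cfn.describe_stacks(StackName="mystack")["Stacks"][0]
for output in stack.get("Outputs", []):
    # each output carries the declared Value plus the optional Description
    print(output["OutputKey"], "=", output["OutputValue"], "-", output.get("Description", ""))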
Output properties are declared like any other property. In the following example, the output named LoadBalancer returns the information for the resource with the logical name BackupLoadBalancer.

"Outputs" : {
  "LoadBalancer" : {
    "Value" : { "Ref" : "BackupLoadBalancer" }
  }
}
| http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/outputs-section-structure.html | 2013-12-05T00:07:00 | CC-MAIN-2013-48 | 1386163037893 | [] | docs.aws.amazon.com
Search by category
You can browse for locations by categories, such as restaurants, gas stations, or entertainment for a certain area. Your current location is the default search area. Certain categories can be refined to provide more specific results.
- On the home screen, select Go To.
- Select Find a Place. To search in a region that is not your current location, select My Location. Enter an address or choose an address from the listed options. Select the new address.
- Select the appropriate category.
- Select Enter.
| http://docs.blackberry.com/en/smartphone_users/deliverables/49200/rob1348172597318.jsp | 2013-12-05T00:07:16 | CC-MAIN-2013-48 | 1386163037893 | [] | docs.blackberry.com
This section describes the manual approach to building Python extensions written in C or C++.
To build extensions using these instructions, you need to have a copy of the Python sources of the same version as your installed Python. You will need Microsoft Visual C++ "Developer Studio"; project files are supplied for the supported VC++ version. In the transcript below, C> is the DOS prompt and >>> is the Python prompt; note that build information and various debug output from Python may not match this screen dump exactly:
C>..\..\PCbuild\python_d
Adding parser accelerators ...
Done.
Python 2.2 (#28, Dec 19 2001, 23:26:37) [MSC 32 bit (Intel)] on win32
Type "copyright", "credits" or "license" for more information.
>>> import example
[4897 refs]
>>> example.foo()
Hello, world
[4903 refs]
>>>
Congratulations! You've successfully built your first Python extension module.
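If you would rather not maintain project files by hand, the same module can usually be built with the distutils package instead; the following setup.py is a minimal sketch (the file name example.c and module name are carried over from the example above), run with "python setup.py build":

# setup.py -- distutils-based alternative to hand-maintained project files
from distutils.core import setup, Extension

setup(name="example",
      version="1.0",
      ext_modules=[Extension("example", ["example.c"])])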
Now your options are:
Copy example.sln and example.vcproj, rename them to spam.*, and edit them by hand, or
Create a brand new project; instructions are below.
In either case, copy example_nt\example.def to spam\spam.def, and edit the new spam.def so its second line contains the string 'initspam'.
If your module creates a new type, you may have trouble with this line:
PyVarObject_HEAD_INIT(&PyType_Type, 0)
Static type object initializers in extension modules may cause compiles to fail with an error message like “initializer not a constant”. This shows up when building DLL under MSVC. Change it to:
PyVarObject_HEAD_INIT(NULL, 0)
and add the following to the module initialization function:
if (PyType_Ready(&MyObject_Type) < 0)
    return NULL;
| http://docs.python.org/3.2/extending/windows.html | 2013-12-05T00:17:04 | CC-MAIN-2013-48 | 1386163037893 | [] | docs.python.org
numpy.fft.irfftn¶
numpy.fft.irfftn(a, s=None, axes=None)
Compute the inverse of the N-dimensional FFT of real input; in other words, irfftn(rfftn(a), a.shape) == a to within numerical accuracy. (The a.shape argument is necessary for the same reason that len(a) is for irfft.)
| http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.fft.irfftn.html | 2013-12-05T00:06:15 | CC-MAIN-2013-48 | 1386163037893 | [] | docs.scipy.org
Welcome to Coverage Implementation Central. This page is a touchstone for all things related to the Coverage development effort in GeoAPI and GeoTools. You may use this page as a means of keeping track of the coverage development effort, or you may use it to prepare yourself to jump in and help.
The implementation is being broken down into small work units which occupy separate Jira tasks. Generally there will be parallel Implementation Work Unit Groups (IWUGs) in GeoAPI and GeoTools space. Also generally, the GeoTools IWUG will depend on the corresponding GeoAPI IWUG. If you are interested in keeping up with the latest developments in coverage land and you have enough time available to complete an IWUG, feel free to assign the issue to yourself in Jira.
I am currently generating Coverage IWUGs, as the implementation phase is just beginning (timestamp=March 3, 2006). My own involvement will be limited to the generation of IWUGs until all the IWUGs are generated. Then I'll switch gears to implementation mode and start taking issues myself. The moral of this story is: I know there's only one IWUG now. Keep checking this page for new and interesting things to do!
There is a single design and implementation master document which is being written in sections. The master document contains some administrative information, such as the overall package layout and the intended scope of the implementation effort. Each section is contained in a separate document and tackles a logically related subset of the coverage classes. The master document contains links to the component documents. Therefore, to see the entire document, you must download all of the source documents into the same folder, open the master document (*.odm), and respond "yes" when it asks you to update links. If there's problems with this procedure, email me.
The intent of this document is to describe the design in enough detail that implementors can use it to write code. Later on, it is hoped that this same document will serve as a users reference guide.
In the course of implementation, some changes may have to be made to the design in order to reflect reality. Do not be shy about making these changes. However, I do ask that you propagate your changes back up into the design/implementation document so that it is accurate.
Each section of the document follows a pattern.
The implementation documents and many supporting documents are written in OpenDocument Format (ODF) using OpenOffice.org (2.0). An updated list of applications which support ODF may be found here. I will occasionally convert the entire document to PDF, but the PDF version tends to be large. You can ensure that all sections contain the most current information by downloading all of the component documents. The following table associates the document sections with the corresponding GeoAPI and GeoTools Jira tasks.
During the course of this implementation effort, a Poseidon UML model is gradually going to be built up. This model contains the GeoAPI interfaces and the GeoTools implementation classes. This model is responsible for the creation of all the figures in the reference guide. The community edition of Poseidon seems to have an effective upper limit to the size of the model you can use. There's no error message, it's just that the program gets slower as the model gets bigger. Hence, the possibility exists that the model will have to be broken into two or more pieces.
This model, for the moment, can be found distributed among the GeoTools Jira issues. Each model concerns only the Implementation Work Unit Group (IWUG) to which it is attached.
The coverage effort in GeoTools depends in large part on the development of ISO 19123 interfaces in the GeoAPI project. For each IWUG, there exists a corresponding GeoAPI and GeoTools Jira task. For the most part, ISO 19123 interfaces in GeoAPI's pending directory just need to be moved to the final package and modified to match the UML diagram. Martin has done an extensive commenting job which should be preserved as much as possible. However, at the time these interfaces were generated, "Record" and "RecordType" did not exist, nor did any of the NameSpace infrastructure. As such, finalizing the GeoAPI interfaces is largely a matter of updating the existing classes to match the UML diagram.
Implementation of the Coverage classes in GeoTools should be aided greatly by the Reference Guide. However, the process of implementation is not a copy and paste operation. The Poseidon models and the UML diagrams included as "GeoTools Design" figures in the reference guide don't even include all the methods required to implement the interfaces. The diagrams are constructed to be as clear as possible about what items refer to GeoAPI interfaces and what refers to the implementation classes.
The implementation effort employs the Eclipse Modeling Framework (EMF). By doing so, we hope to realize some time savings without sacrificing robustness. Details on how to employ EMF are provided in the draft of Use of EMF.
The GeoTools implementation is taking place in the coverage_branch branch. This split off from trunk as a development branch last October. It is synchronized regularly with trunk to ease the eventual merging back into the main code base. The precise GeoTools version number at the time of this future merging is unknown. I am hesitant to venture a guess because the schedule has already slipped beyond what I was hoping for. (I got distracted by shiny Feature Models and didn't realize I'd have to implement things like Record and NameSpace along the way.)
Standards bodies aren't perfect. Published standards can be expected to contain some errors. They will never get fixed unless people implementing the standards serve as editors. This document is a table containing observations made during the implementation of ISO 19123. I don't exactly know what to do with it when I'm done, but I'll burn that bridge when I come to it. Feel free to review, comment on, or add to the list I've made. I do expect this list to slowly accrete elements as the implementation effort goes forward.
A number of documents have already been produced in the course of this effort. These are collected here for easy reference. | http://docs.codehaus.org/plugins/viewsource/viewpagesrc.action?pageId=46835 | 2013-12-05T00:18:14 | CC-MAIN-2013-48 | 1386163037893 | [] | docs.codehaus.org |
Using Tahoe-LAFS with an anonymizing network: Tor, I2P¶
- Overview
- Use cases
- Software Dependencies
- Connection configuration
- Anonymity configuration
- Performance and security issues
Overview¶
Tor is an anonymizing network used to help hide the identity of internet clients and servers. Please see the Tor Project’s website for more information:
I2P is a decentralized anonymizing network that focuses on end-to-end anonymity between clients and servers. Please see the I2P website for more information:
Use cases¶
There are three potential use-cases for Tahoe-LAFS on the client side:
- User wishes to always use an anonymizing network (Tor, I2P) to protect their anonymity when connecting to Tahoe-LAFS storage grids (whether or not the storage servers are anonymous).
- User does not care to protect their anonymity but they wish to connect to Tahoe-LAFS storage servers which are accessible only via Tor Hidden Services or I2P.
- Tor is only used if a server connection hint uses tor:. These hints generally have a .onion address.
- I2P is only used if a server connection hint uses i2p:. These hints generally have a .i2p address.
- User does not care to protect their anonymity or to connect to anonymous storage servers. This document is not useful to you… so stop reading.
For Tahoe-LAFS storage servers there are three use-cases:
The operator wishes to protect their anonymity by making their Tahoe server accessible only over I2P, via Tor Hidden Services, or both.
The operator does not require anonymity for the storage server, but they want it to be available over both publicly routed TCP/IP and through an anonymizing network (I2P, Tor Hidden Services). One possible reason to do this is because being reachable through an anonymizing network is a convenient way to bypass NAT or firewall that prevents publicly routed TCP/IP connections to your server (for clients capable of connecting to such servers). Another is that making your storage server reachable through an anonymizing network can provide better protection for your clients who themselves use that anonymizing network to protect their anonymity.
Storage server operator does not care to protect their own anonymity nor to help the clients protect theirs. Stop reading this document and run your Tahoe-LAFS storage server using publicly routed TCP/IP.
See this Tor Project page for more information about Tor Hidden Services:
See this I2P Project page for more information about I2P:
Software Dependencies¶
Tor¶
Clients who wish to connect to Tor-based servers must install the following.
Tor (tor) must be installed. See here: . On Debian/Ubuntu, use
apt-get install tor. You can also install and run the Tor Browser Bundle.
Tahoe-LAFS must be installed with the [tor] "extra" enabled. This will install txtorcon:

pip install tahoe-lafs[tor]
Manually-configured Tor-based servers must install Tor, but do not need
txtorcon or the
[tor] extra. Automatic configuration, when
implemented, will need these, just like clients.
I2P¶
Clients who wish to connect to I2P-based servers must install the following. As with Tor, manually-configured I2P-based servers need the I2P daemon, but no special Tahoe-side supporting libraries.
I2P must be installed. See here:
The SAM API must be enabled.
- Start I2P.
- Visit in your browser.
- Under “Client Configuration”, check the “Run at Startup?” box for “SAM application bridge”.
- Click “Save Client Configuration”.
- Click the “Start” control for “SAM application bridge”, or restart I2P.
Tahoe-LAFS must be installed with the [i2p] "extra" enabled, to get txi2p:

pip install tahoe-lafs[i2p]
Connection configuration¶
See Connection Management for a description of the [tor] and [i2p] sections of tahoe.cfg. These control how the Tahoe client will connect to a Tor/I2P daemon, and thus make connections to Tor/I2P-based servers.

The [tor] and [i2p] sections only need to be modified to use unusual configurations, or to enable automatic server setup.
The default configuration will attempt to contact a local Tor/I2P daemon listening on the usual ports (9050/9150 for Tor, 7656 for I2P). As long as there is a daemon running on the local host, and the necessary support libraries were installed, clients will be able to use Tor-based servers without any special configuration.
However note that this default configuration does not improve the client’s
anonymity: normal TCP connections will still be made to any server that
offers a regular address (it fulfills the second client use case above, not
the third). To protect their anonymity, users must configure the
[connections] section as follows:
[connections]
tcp = tor
With this in place, the client will use Tor (instead of an IP-address -revealing direct connection) to reach TCP-based servers.
Anonymity configuration¶
Tahoe-LAFS provides a configuration “safety flag” for explicitly stating whether or not IP-address privacy is required for a node:
[node]
reveal-IP-address = (boolean, optional)
When reveal-IP-address = False, Tahoe-LAFS will refuse to start if any of the configuration options in tahoe.cfg would reveal the node's network location:
- [connections] tcp = tor is required: otherwise the client would make direct connections to the Introducer, or any TCP-based servers it learns from the Introducer, revealing its IP address to those servers and a network eavesdropper. With this in place, Tahoe-LAFS will only make outgoing connections through a supported anonymizing network.
- tub.location must either be disabled, or contain safe values. This value is advertised to other nodes via the Introducer: it is how a server advertises its location so clients can connect to it. In private mode, it is an error to include a tcp: hint in tub.location. Private mode rejects the default value of tub.location (when the key is missing entirely), which is AUTO, which uses ifconfig to guess the node's external IP address, which would reveal it to the server and other clients.
This option is critical to preserving the client’s anonymity (client use-case 3 from Use cases, above). It is also necessary to preserve a server’s anonymity (server use-case 3).
This flag can be set (to False) by providing the --hide-ip argument to the create-node, create-client, or create-introducer commands.
Note that the default value of
reveal-IP-address is True, because
unfortunately hiding the node’s IP address requires additional software to be
installed (as described above), and reduces performance.
Client anonymity¶
To configure a client node for anonymity,
tahoe.cfg must contain the
following configuration flags:
[node]
reveal-IP-address = False
tub.port = disabled
tub.location = disabled
Once the Tahoe-LAFS node has been restarted, it can be used anonymously (client use-case 3).
Server anonymity, manual configuration¶
To configure a server node to listen on an anonymizing network, we must first
configure Tor to run an “Onion Service”, and route inbound connections to the
local Tahoe port. Then we configure Tahoe to advertise the
.onion address
to clients. We also configure Tahoe to not make direct TCP connections.
- Decide on a local listening port number, named PORT. This can be any unused port from about 1024 up to 65535 (depending upon the host’s kernel/network config). We will tell Tahoe to listen on this port, and we’ll tell Tor to route inbound connections to it.
- Decide on an external port number, named VIRTPORT. This will be used in the advertised location, and revealed to clients. It can be any number from 1 to 65535. It can be the same as PORT, if you like.
- Decide on a "hidden service directory", usually in /var/lib/tor/NAME. We'll be asking Tor to save the onion-service state here, and Tor will write the .onion address here after it is generated.
Then, do the following:
Create the Tahoe server node (with tahoe create-node), but do not launch it yet.

Edit the Tor config file (typically in /etc/tor/torrc). We need to add a section to define the hidden service. If our PORT is 2000, VIRTPORT is 3000, and we're using /var/lib/tor/tahoe as the hidden service directory, the section should look like:
HiddenServiceDir /var/lib/tor/tahoe
HiddenServicePort 3000 127.0.0.1:2000
Restart Tor, with systemctl restart tor. Wait a few seconds.

Read the hostname file in the hidden service directory (e.g. /var/lib/tor/tahoe/hostname). This will be a .onion address, like u33m4y7klhz3b.onion. Call this ONION.
Edit tahoe.cfg to set tub.port to use tcp:PORT:interface=127.0.0.1, and tub.location to use tor:ONION.onion:VIRTPORT. Using the examples above, this would be:

[node]
reveal-IP-address = false
tub.port = tcp:2000:interface=127.0.0.1
tub.location = tor:u33m4y7klhz3b.onion:3000

[connections]
tcp = tor
Launch the Tahoe server with
tahoe start $NODEDIR
The tub.port section will cause the Tahoe server to listen on PORT, but bind the listening socket to the loopback interface, which is not reachable from the outside world (but is reachable by the local Tor daemon). Then the tcp = tor section causes Tahoe to use Tor when connecting to the Introducer, hiding its IP address. The node will then announce itself to all clients using tub.location, so clients will know that they must use Tor to reach this server (without revealing its IP address through the announcement). When clients connect to the onion address, their packets will flow through the anonymizing network and eventually land on the local Tor daemon, which will then make a connection to PORT on localhost, which is where Tahoe is listening for connections.
Follow a similar process to build a Tahoe server that listens on I2P. The same process can be used to listen on both Tor and I2P (tub.location = tor:ONION.onion:VIRTPORT,i2p:ADDR.i2p). It can also listen on both Tor and plain TCP (use-case 2), with tub.port = tcp:PORT, tub.location = tcp:HOST:PORT,tor:ONION.onion:VIRTPORT, and anonymous = false (and omit the tcp = tor setting, as the address is already being broadcast through the location announcement).
Server anonymity, automatic configuration¶
To configure a server node to listen on an anonymizing network, create the node with the --listen=tor option. This requires a Tor configuration that either launches a new Tor daemon, or has access to the Tor control port (and enough authority to create a new onion service). On Debian/Ubuntu systems, do apt install tor, add yourself to the control group with adduser YOURUSERNAME debian-tor, and then logout and log back in: if the groups command includes debian-tor in the output, you should have permission to use the unix-domain control port at /var/run/tor/control.
This option will set reveal-IP-address = False and [connections] tcp = tor. It will allocate the necessary ports, instruct Tor to create the onion service (saving the private key somewhere inside NODEDIR/private/), obtain the .onion address, and populate tub.port and tub.location correctly.
Performance and security issues¶
If you are running a server which does not itself need to be anonymous, should you make it reachable via an anonymizing network or not? Or should you make it reachable both via an anonymizing network and as a publicly traceable TCP/IP server?
There are several trade-offs effected by this decision.
NAT/Firewall penetration¶
Making a server be reachable via Tor or I2P makes it reachable (by Tor/I2P-capable clients) even if there are NATs or firewalls preventing direct TCP/IP connections to the server.
Anonymity¶
Making a Tahoe-LAFS server accessible only via Tor or I2P can be used to
guarantee that the Tahoe-LAFS clients use Tor or I2P to connect
(specifically, the server should only advertise Tor/I2P addresses in the
tub.location config key). This prevents misconfigured clients from
accidentally de-anonymizing themselves by connecting to your server through
the traceable Internet.
Clearly, a server which is available as both a Tor/I2P service and a regular TCP address is not itself anonymous: the .onion address and the real IP address of the server are easily linkable.
Also, interaction, through Tor, with a Tor Hidden Service may be more protected from network traffic analysis than interaction, through Tor, with a publicly traceable TCP/IP server.
XXX is there a document maintained by Tor developers which substantiates or refutes this belief? If so we need to link to it. If not, then maybe we should explain more here why we think this?
Linkability¶
As of 1.12.0, the node uses a single persistent Tub key for outbound connections to the Introducer, and inbound connections to the Storage Server (and Helper). For clients, a new Tub key is created for each storage server we learn about, and these keys are not persisted (so they will change each time the client reboots).
Clients traversing directories (from rootcap to subdirectory to filecap) are likely to request the same storage-indices (SIs) in the same order each time. A client connected to multiple servers will ask them all for the same SI at about the same time. And two clients which are sharing files or directories will visit the same SIs (at various times).
As a result, the following things are linkable, even with reveal-IP-address = false:
- Storage servers can recognize multiple connections from the same not-yet-rebooted client. (Note that the upcoming Accounting feature may cause clients to present a persistent client-side public key when connecting, which will be a much stronger linkage).
- Storage servers can probably deduce which client is accessing data, by looking at the SIs being requested. Multiple servers can collude to determine that the same client is talking to all of them, even though the TubIDs are different for each connection.
- Storage servers can deduce when two different clients are sharing data.
- The Introducer could deliver different server information to each subscribed client, to partition clients into distinct sets according to which server connections they eventually make. For client+server nodes, it can also correlate the server announcement with the deduced client identity.
Performance¶
A client connecting to a publicly traceable Tahoe-LAFS server through Tor incurs substantially higher latency and sometimes worse throughput than the same client connecting to the same server over a normal traceable TCP/IP connection. When the server is on a Tor Hidden Service, it incurs even more latency, and possibly even worse throughput.
Connecting to Tahoe-LAFS servers which are I2P servers incurs higher latency and worse throughput too.
Positive and negative effects on other Tor users¶
Sending your Tahoe-LAFS traffic over Tor adds cover traffic for other Tor users who are also transmitting bulk data. So that is good for them – increasing their anonymity.
However, it makes the performance of other Tor users’ interactive sessions – e.g. ssh sessions – much worse. This is because Tor doesn’t currently have any prioritization or quality-of-service features, so someone else’s ssh keystrokes may have to wait in line while your bulk file contents get transmitted. The added delay might make other people’s interactive sessions unusable.
Both of these effects are doubled if you upload or download files to a Tor Hidden Service, as compared to if you upload or download files over Tor to a publicly traceable TCP/IP server.
Positive and negative effects on other I2P users¶
Sending your Tahoe-LAFS traffic over I2P adds cover traffic for other I2P users who are also transmitting data. So that is good for them – increasing their anonymity. It will not directly impair the performance of other I2P users’ interactive sessions, because the I2P network has several congestion control and quality-of-service features, such as prioritizing smaller packets.
However, if many users are sending Tahoe-LAFS traffic over I2P, and do not have their I2P routers configured to participate in much traffic, then the I2P network as a whole will suffer degradation. Each Tahoe-LAFS router using I2P has their own anonymizing tunnels that their data is sent through. On average, one Tahoe-LAFS node requires 12 other I2P routers to participate in their tunnels.
It is therefore important that your I2P router is sharing bandwidth with other routers, so that you can give back as you use I2P. This will never impair the performance of your Tahoe-LAFS node, because your I2P router will always prioritize your own traffic. | https://tahoe-lafs.readthedocs.io/en/latest/anonymity-configuration.html | 2020-02-16T21:59:17 | CC-MAIN-2020-10 | 1581875141430.58 | [] | tahoe-lafs.readthedocs.io |