content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
Intro Examples¶ Usage¶ This page contains introductory examples of bifacial_radiance usage. We recommend to look at the Jupyter Notebook tutorials for more updated and better documented examples. from bifacial_radiance import RadianceObj # the main container for working with radiance Now that the module is loaded, let’s use it. demo = RadianceObj(name = 'Testrun', path = 'myfolder') #create a new demo run. Files will have the Testrun prefix, and be saved to 'myfolder' demo.setGround(0.3) # input albedo number or material name like 'concrete'. To see options, run this without any input. # Now download an EPW climate file for any global lat/lon value : epwfile = demo.getEPW(37.5,-77.6) # pull EPW data for any global lat/lon # let's load this epw file into our MetObj container class. metdata = demo.readEPW(epwfile) # read in the EPW weather data as metdata object. Run this with no input parameters to load a graphical picker # if you'd rather use a TMY3 file, select one that you've already downloaded: metdata = demo.readTMY() # select an existing TMY3 climate file. return metdata object. Now that we have ground albedo and a climate file loaded, we need to start designing the PV system. Fixed tilt systems can have hourly simulations with gendaylit, or annual simulations with gencumulativesky # create cumulativesky skyfiles and save it to the \skies\ directory, along with a .cal file in root demo.genCumSky(demo.epwfile) — optionally —- demo.gendaylit(metdata,4020) # pass in the metdata object, plus the integer number of the hour in the year you want to run (0 to 8759) # note that for genCumSky, you pass the *name* of the EPW file. for gendaylit you pass the metdata object. The nice thing about the RadianceObject is that it keeps track of where all of your skyfiles and calfiles are being saved. Next let’s put a PV system together. The details are saved in a dictionary and passed into makeScene. Let’s start with a PV module: # Create a new moduletype: Prism Solar Bi60. width = .984m height = 1.695m. demo.makeModule(name='Prism Solar Bi60',x=0.984,y=1.695) #x is assumed module width, y is height. # Let's print the available module types demo.printModules() the module details are stored in a module.json file in the bifacial_radiancedata directory so you can re-use module parameters. Each unit module generates a corresponding .RAD file in objectswhich is referenced in our array scene. There are some nifty module generation options including stacking them (e.g. 2-up or 3-up but any number) with a gap, and torque tube down the middle of the string. See the makeModule() documentation for all the options, or the Jupyter Notebook tutorials for examples and visualizations of what is possible. To define the orientation it has to be done in the makeModule step, assigning the correct values to the x and y of the module. x is the size of the module along the row, therefore for a landscape module x > y. # make a 72-cell module 2m x 1m arranged 2-up in portrait with a 10cm torque tube behind. # a 5cm offset between panels and the tube, along with a 5cm array gap between the modules: demo.makeModule(name = '1axis_2up', x = 1.995, y = 0.995, torquetube = True, tubetype = 'round', diameter = 0.1, zgap = 0.05, ygap = 0.05, numpanels = 2) Now we make a sceneDict with details of our PV array. We’ll make a rooftop array of Prism Solar modules in landscape at 10 degrees tilt. module_name = 'Prism Solar Bi60' sceneDict = {'tilt':10,'pitch':1.5,'clearance_height':0.2,'azimuth':180, 'nMods': 20, 'nRows': 7} # this is passed into makeScene to generate the RADIANCE .rad file scene = demo.makeScene(module_name,sceneDict) #makeScene creates a .rad file with 20 modules per row, 7 rows. OK, we’re almost done. RADIANCE has to combine the skyfiles, groundfiles, material (*.mtl) files, and scene geometry (.rad) files into an OCT file using makeOct. Instead of having to remember where all these files are, the RadianceObj keeps track. Or call .getfilelist() octfile = demo.makeOct(demo.getfilelist()) # the input parameter is optional - maybe you have a custom file list you want to use The final step is to query the front and rear irradiance of our array. The default is a 9-point scan through the center module of the center row of the array. The actual scan values are set up by .makeScene and returned in your sceneObj (sceneObj.frontscan, sceneObj.backscan). To do this we use an AnalysisObj. analysis = AnalysisObj(octfile, demo.name) # return an analysis object including the scan dimensions for back irradiance analysis.analysis(octfile, demo.name, scene.frontscan, scene.backscan) # compare the back vs front irradiance print('Annual bifacial ratio average: %0.3f' %( sum(analysis.Wm2Back) / sum(analysis.Wm2Front) ) ) We can also query specific scans along the array: # Do a 4-point scan along the 5th module in the 2nd row of the array. scene = demo.makeScene(module_name,sceneDict) octfile = demo.makeOct() analysis = AnalysisObj(octfile, demo.name) frontscan, backscan = analysis.moduleAnalysis(scene, sensorsy = 4, modWanted = 5, rowWanted = 2) frontresults,backresults = analysis.analysis(octfile, demo.name, scene.frontscan, scene.backscan) print('Annual bifacial ratio on 5th Module average: %0.3f' %( sum(analysis.Wm2Back) / sum(analysis.Wm2Front) ) ) # And you can run the scanning for another module. frontscan, backscan = analysis.moduleAnalysis(scene, sensorsy = 4, modWanted = 1, rowWanted = 2) frontresults,backresults = analysis.analysis(octfile, demo.name, scene.frontscan, scene.backscan) print('Annual bifacial ratio average on 1st Module: %0.3f' %( sum(analysis.Wm2Back) / sum(analysis.Wm2Front) ) ) For more usage examples including 1-axis tracking examples, carport examples, and examples of scenes with multiple sceneObjects (different trackers/modules/etc) see the Jupyter notebooks in docs Functions¶ RadianceObj(basename,path): This is the basic container for radiance projects. Pass in a basename string to name your radiance scene and append to various result and image files. path points to an existing or empty Radiance directory. If the directory is empty it will be populated with appropriate ground.rad and view files. Default behavior: basename defaults to current date/time, and path defaults to current directory RadianceObj.getfilelist() : return list of material, sky and rad files for the scene RadianceObj.returnOctFiles() : return files in the root directory with .oct extension RadianceObj.setGround(material_or_albedo, material_file): set the ground to either a material type (e.g. ‘litesoil’) or albedo value e.g. 0.25. ‘material_file’ is a filename for a specific material RAD file to load with your material description RadianceObj.getEPW(lat,lon) : download the closest EnergyPlus EPW file for a give lat / lon value. return: filename of downloaded file RadianceObj.readWeatherFile(weatherFile) : call readEPW or readTMY functions to read in a epw or tmy file. Return: metdata RadianceObj.readEPW(epwfilename) : use pyepw to read in a epw file. Return: metdata RadianceObj.readTMY(tmyfilename) : use pvlib to read in a tmy3 file. Return: metdata RadianceObj.gendaylit(metdata,timeindex) : pass in data read from a EPW file. Select a single time slice of the annual timeseries to conduct gendaylit Perez model for that given time RadianceObj.gencumsky(epwfilename, startdt, enddt) : use gencumulativesky.exe to do an entire year simulation. If no epwfilename is passed, the most recent EPW file read by readEPW will be used. startdt and enddt are optional start and endtimes for the gencumulativesky. NOTE: if you don’t have gencumulativesky.exe loaded, look in bifacial_radiance/data/ for a copy RadianceObj.makeOct(filelist, octname): create a .oct file from the scene .RAD files. By default this will use RadianceObj.getfilelist() to build the .oct file, and use RadianceObj.basename as the filename. - RadianceObj.makeScene(moduletype, sceneDict) : create a PV array scene with nMods modules per row and nRows number of rows. moduletype specifies the type of module which be one of the options saved in module.JSON (makeModule adds a customModule to the Json file). Pre-loaded module options are ‘simple_panel’, which generates a simple 0.95m x 1.59m module, or ‘monopanel’ which looks for ‘objects/monopanel_1.rad’. sceneDict is a dictionary containing the following keys: ‘tilt’,’pitch’,’clearance_height’,’azimuth’, ‘nMods’, ‘nRows’. - Return: SceneObj which includes details about the PV scene including frontscan and backscan details - RadianceObj.getTrackingGeometryTimeIndex(metdata, timeindex, angledelta, roundTrackerAngleBool, backtrack, gcr, hubheight, sceney): returns tracker tilt and clearance height for a specific point in time. - Return: tracker_theta, tracker_height, tracker_azimuth_ang AnalysisObj(octfile,basename) : Object for conducting analysis on a .OCT file. AnalysisObj.makeImage(viewfile,octfile, basename) : create visual render of scene ‘octfile’ from view ‘views/viewfile’ AnalysisObj.makeFalseColor(viewfile,octfile, basename) : create false color Wm-2 render of scene ‘octfile’ from view ‘views/viewfile’ AnalysisObj.analysis(octfile, basename, frontscan, backscan) : conduct a general front / back ratio analysis of a .oct file. frontscan, backscan: dictionary input for linePtsMakeDict that is passed from AnalysisObj.makeScene.
https://bifacial-radiance.readthedocs.io/en/stable/introexamples.html
2020-11-23T22:37:27
CC-MAIN-2020-50
1606141168074.3
[]
bifacial-radiance.readthedocs.io
User Identity and Access This section covers user authentication, access control, and multi-tenancy. User authentication Commander is fully integrated with AD/LDAP so that you can leverage your existing group hierarchies. It also provides single-sign-on (SSO) with SAML2 or Windows Session Authentication. For more information, see User Authentication. Access control Commander has both an administrative console and a separate, web-based Service Portal interface. The Service Portal provides users with an information-rich view of resources without allowing any access to the underlying private or public cloud infrastructure. To control access to the administrative console and the Service Portal, distinct roles are used to govern where users are permitted to sign in. By assigning roles, you can ensure that administrators have the right level of access to the various parts of your virtual infrastructure and users that aren't administrators, but do consume IT services and resources, are appropriately segregated. For more information, see Access Control. Organizations and Multi-tenancy Multi-tenancy allows you to share your cloud resources effectively and securely amongst users. Organizations form the basis of a multi-tenant model — they are defined groups of users with a common business purpose. Using organizations allows you to: - ensure that user groups can access only the resources assigned to them - set up distinct cloud automation configurations for your user groups - delegate administrative tasks to consumers, allowing you to lighten the load on administrators For more information, see Organizations and Multi-Tenancy.
https://docs.embotics.com/commander/user_mgmt.htm
2020-11-23T21:38:22
CC-MAIN-2020-50
1606141168074.3
[]
docs.embotics.com
You must configure ONTAP RBAC on the storage system if you want to use it with SnapCenter Plug-in for VMware vSphere. From within ONTAP, you must perform the following tasks: ONTAP 9 Administrator Authentication and RBAC Power Guide This storage system credential is needed to allow you to configure the storage systems for the Plug-in for VMware vSphere. You do this by entering the credentials in the Plug-in for VMware vSphere..
https://docs.netapp.com/ocsc-41/topic/com.netapp.doc.ocsc-con/GUID-D5A4E3CA-3946-471A-9707-C630BD13024E.html?lang=en
2020-11-23T23:01:57
CC-MAIN-2020-50
1606141168074.3
[]
docs.netapp.com
The StorageGRID Webscale system provides you with the capabilities to monitor the daily activities of the system including its health. Alarms and notifications help you evaluate and quickly resolve trouble spots that sometimes occur during the normal operation of a StorageGRID Webscale system. The StorageGRID Webscale system also includes support for NetApp’s AutoSupport feature. The StorageGRID Webscale system also includes an auditing feature that retains a record of all system activities through audit logs. Audit logs are managed by the Audit Management System (AMS) service, which is found on Admin Nodes. The AMS service logs all audited system events to a text file on the Admin Node. For more information about auditing, see the Audit Message Reference Guide.
https://docs.netapp.com/sgws-110/topic/com.netapp.doc.sg-admin/GUID-CBFD5F02-517A-44BD-88BA-71FC6ECB0F67.html?lang=en
2020-11-23T23:06:57
CC-MAIN-2020-50
1606141168074.3
[]
docs.netapp.com
CurveExpert Professional 2.7.3 documentation In some cases, the data to be analyzed is in the form of a picture or image. In CurveExpert Professional, you can use the built-in digitization capabilities in order to extract the data present on an image into a dataset (to “digitize”, in this context, means to convert content from an image into a usable dataset). To successfully digitize information from an image into data, information about how pixels in an image map to real data must be collected by the application, and the digitizer leads you through this process. The digitizer supports Cartesian (XY) and polar plots; XY plots can have semilog or log-log axes. Skewed axes are also supported intrinsically by the transformations performed by the digitizer engine. In order to invoke the digitizer, choose Data->Digitize from image from the main toolbar. This will begin the digitization process; first, you will be able to select the image, and then the dialog leads you through the digitization steps as documented in the following sections. In general, there are three primary steps to creating a successful digitization: * select the image * calibrate * select data points Selecting the image is straightforward; the image can either be read from a file on disk, or pasted from the clipboard. Calibration is the means by which you describe where the axes of the plot are located, and how those axes map to real numbers. Selecting of the data points tells the digitizer which locations you would like to have digitized into data. The layout of the digitizer dialog is shown below: The toolbar is shown across the top, with the appropriate buttons only being enabled at the appropriate times as you move through the digitization process. Beneath the toolbar, an informational message will appear that is intended to remind you what you are working on in the digitizing process. The data values that have been digitized so far are shown in the “Dataset” section. The image, current calibration, and currently selected points are shown in the canvas area in the bottom right. The bottom left area is a magnifier, where the contents of the image and overlay underneath the mouse crosshairs are magnified by a factor of 2X of the 100% image size. Finally, the path to the currently selected image is shown along the bottom of the window, to the left of the OK and Cancel buttons. To exit the digitizer and save the current dataset, click OK; to exit the digitizer while discarding your current work on the dataset, click Cancel. First, the image can either be read from a file, or pasted from the clipboard. To read from file, click the first button in the toolbar, and select your file. Supported file formats are PNG, BMP, JPEG, GIF, PCX, PPM, TGA, TIFF, WMF, XBM, XPM, and Photoshop 2.0/3.0 PSD files. If the image on the clipboard is desired, either click the Paste icon in the toolbar, or right-click the image and select Paste Image. Note that once the image is imported into the digitizer, the file that it was imported from (if it was not pasted) does not need to remain on disk. The image data will be saved as part of the digitizer, and is also persistent in the CurveExpert Professional document. Once the image is in place, you may select the type of the graph in the image from the toolbar. Most commonly, plots are simple XY plots with linear x and y axes. However, CurveExpert Professional supports digitizing from XY plots with the x and/or y axis on a log scale, and also supports Polar plots. The calibration may now be performed. Click the Calibrate… button, and follow the prompts that are given beneath the toolbar. For an XY plot, you will first click on the leftmost tick mark possible on the x axis, and then click on the rightmost tick mark possible on the x axis. You will soon be asked which data values (in the x direction) these two tick marks correspond to. Also note that you can fine-tune the location of the point you chose by hovering the mouse over the red dot that you just created, and using the keyboard arrows to shift the dot until it adequately lines up with both the axis and tick mark. Use the magnifier to aid in this fine tuning. A red dot will be placed to indicate each of your calibration points, connected by a red line. The canvas will look similar to the screenshot below. Note You may fine-tune the calibration points at a later time as well. After specifying the X axis calibration, we will now specify the calibration for the Y axis. In the same manner as the X axis was calibrated, click on the Y axis at the lowest possible tick mark on the Y axis, and then at the highest possible tick mark on the Y axis. The Y axis calibration points will be indicated by green points connected with a green line. Once the second Y calibration point is selected, the calibration dialog will appear to allow you to input the data values that corresponding to each of the calibration points; this is shown in the screenshot below. For XY plots, you will specify the data values for the two points selected on the X axis, and then for the two points selected on the Y axis. After filling in this information, your calibration data values will display in red or green next to the calibration points in the image. Remember that you can still fine-tune the location of the calibration points by pointing at them with the mouse and using the arrow keys to move the calibration point to the desired location. Any data points that you have already created will be automatically adjusted as the calibration changes. Also, if you desire to replace the current calibration with a completely new one (as opposed to fine-tuning the existing calibration), simply click the Calibrate… button again. As you move the mouse over the image, you will notice that the Image Coordinates and Data Coordinates information bars will begin to show information concerning the current mouse location. The image coordinate is the pixel location within the image. The data coordinate is the transformed data value subject to the current calibration (i.e., the mapping between coordinates in the image and the real data coordinates). The procedure for calibrating polar plots is slightly different than XY style plots; in the case of a polar plot, you will only be asked to click on the center of the polar plot, and also to click on the zero-degree axis at the tick farthest from the center. The line formed by these two points define the angle on the image that corresponds to zero degrees, and you provide the two data values (radii, in this case) that correspond to the two clicked points. The calibration dialog will query you for the Rmin and Rmax values. Now that the calibration is in place, click on the image to create data points where desired. The most common method is to simply click on the middle of each data point. As you click on the image to create data points, the points are both displayed on the image as a ‘+’ marker, and the data values show in the area of the dialog as well. Like calibration points, you can fine-tune a point’s exact location in the image by pointing at its marker in the image (it will turn magenta when active), and using the keyboard arrows. The fine-tuning arrows also work immediately after the mouse click that creates the marker. As the mouse passes over a marker, both the marker and its corresponding data value in the dataset are highlighted in magenta. To delete an existing marker (and by extension the data value), point at it with the mouse, and press the Del key. If in polar mode, the X value (first column) in the dataset is filled in with the angle in degrees, and the Y value (second column) in the data set is populated with the distance from center corresponding to your Rmin/Rmax calibration. In some cases, there are too many data points to manually pick, in these cases, the point autodetection should be utilized. There are two controls in the digitizer toolbar that control how autodetection behaves: the sensitivity setting (normal, aggressive, or conservative), and the autodetect button. The idea behind the point autodetection is that you provide the digitizer with a small piece of the image that represents the data point (called the template), and then the digitizer searches the image for other instances of that template in the image. Therefore, you will want to provide a data point marker that is well separated from any other elements in the image, such as background text, axes, other data markers, or curves. Also, it helps for accurate selection to maximize the digitizer window (in order to maximize the size of the image canvas), or right-click the image canvas and select a larger zoom value. During the point autodetection, the toolbar will turn red. To provide the data point marker template, click the autodetect button on the toolbar. Draw a box around a data point marker, and as soon as the box is finished, the digitizer will start the computation to detect other similar data point markers in the image. These will be marked with a ‘+’ overlay marker just as if you selected them yourself. For best results, try to draw a box tightly around the marker, such that the marker is centered in the box. Inevitably, the automatically detected markers need to be cleaned up. At this time, you can clean up the results by pointing at markers that aren’t quite right, and using the arrow keys to fine tune them. Markers that are missed can be added simply by clicking on the marker. Markers that are completely incorrect can be selected by pointing at them, and pressing the Del key. Simply press OK in order to import the digitization results into the CurveExpert Professional spreadsheet. At any time, you can modify the digitization by again selecting Data->Digitize from image and modifying your existing digitization of the image. The image canvas is the primary means by which interaction takes place to define the digitization. You will use the canvas to select points for calibration, pick points for digitization, and interact with those points. The image is displayed in “Fit to Window” mode by default. In this mode, you will always be able to see the entire image; however, since it is resized to fit the window, the image quality will not be optimal. Use of the magnifier at the bottom left mostly mitigates this shortcoming. In order to view the image at fixed magnification settings, right click the image canvas and select the desired magnification of 50%, 100%, or 200%. A magnification of 100% shows the image at its native resolution, pixel-for-pixel. As mentioned several times above, the calibration settings are shown in red and green, overlaying the base image being digitized. Digitized data points are shown with a plus (‘+’) marker. You may delete a marker by pointing at it and pressing the Del key. Fine-tuning a marker is performed by pointing at a marker, and pressing the arrow keys on the keyboard to move the marker into the desired position.
https://docs.curveexpert.net/curveexpert/pro/html/digitizing.html
2020-11-23T22:40:06
CC-MAIN-2020-50
1606141168074.3
[]
docs.curveexpert.net
1.0.0.RC1 Table of Contents This project provides support for orchestrating long-running (streaming) and short-lived (task/batch) data microservices to Kubernet. Of note, the docker-compose-kubernetes is not among those options, but it was also used by the developers of this project to run a local Kubernetes cluster using Docker Compose.; Determine the location of your Kubernetes Master URL, for example: $ kubectl cluster-info Kubernetes master is running at ...other output omitted.... The settings are specified in the src/etc/kubernetes/scdf-controller.yml file. Modify the <<URL-for-Kubernetes-master>> setting to match your output from the command above. Also modify <<mysql-username>>, <<mysql-password>> and DB schema name to match what you used when creating the service.-controller.yml $ kubectl create -f src/etc/kubernetes/scdf-service.0.0.RC1.jar Configure the Data Flow server URI with the following command (use the IP address from previous step and at the moment we are using port 9393): ____ ____ _ __ / ___| _ __ _ __(_)_ __ __ _ / ___| | ___ _ _ __| | \___ \| '_ \| '__| | '_ \ / _` | | | | |/ _ \| | | |/ _` | ___) | |_) | | | | | | | (_| | | |___| | (_) | |_| | (_| | |____/| .__/|_| |_|_| |_|\__, | \____|_|\___/ \__,_|\__,_| ____ |_| _ __|___/ __________ | _ \ __ _| |_ __ _ | ___| | _____ __ \ \ \ \ \ \ | | | |/ _` | __/ _` | | |_ | |/ _ \ \ /\ / / \ \ \ \ \ \ | |_| | (_| | || (_| | | _| | | (_) \ V V / / / / / / / |____/ \__,_|\__\__,_| |_| |_|\___/ \_/\_/ /_/_/_/_/_/ 1.0.0.BUILD-SNAPSHOT dataflow:>task execution list ╔═════════╤══╤════════════════════════════╤════════════════════════════╤═════════╗ ║Task Name│ID│ Start Time │ End Time │Exit Code║ ╠═════════╪══╪════════════════════════════╪════════════════════════════╪═════════╣ ║task1 │1 │Fri Jun 03 18:12:05 EDT 2016│Fri Jun 03 18:12:05 EDT 2016│0 ║ ╚═════════╧══╧════════════════════════════╧════════════════════════════╧═════════╝ Destroy the task dataflow:>task destroy --name task1 goes into more detail about how you can work with Spring Cloud Tasks. It covers topics such as creating and running task applications. If you’re just starting out with Spring Cloud Data Flow, you should probably read the Getting Started guide before diving into this section. A task executes a process on demand. In this case a task is a Spring Boot application that is annotated with @EnableTask. Hence a user launches a task that performs a certain process, and once complete the task ends. An example of a task would be a boot application that exports data from a JDBC repository to an HDFS instance. Tasks record the start time and the end time as well as the boot exit code in a relational database. The task implementation is based on the Spring Cloud Task project. Before we dive deeper into the details of creating Tasks, we need to understand the typical lifecycle for tasks in the context of Spring Cloud Data Flow: Register a Task App with the App Registry using the Spring Cloud Data Flow Shell app register command. You must provide a unique name and a URI that can be resolved to the app artifact. For the type, specify "task". Here are a, this would be a valid properties file: task.foo= task.bar= Then use the app import command and provide the location of the properties file via --uri: will not be overridden by default. If you would like to override the pre-existing task app, then include the --force option. Create a Task Definition from a Task App by providing a definition name as well as properties that apply to the task execution. Creating a task definition can be done via the restful API or the shell. To create a task definition using the shell, use the task create command to create the task definition. For example: dataflow:>task create mytask --definition "timestamp --format=\"yyyy\"" Created new task 'mytask' A listing of the current task definitions can be obtained via the restful API or the shell. To get the task definition list using the shell, use the task list command.: A user can check the status of their task executions via the restful API or by the shell. To display the latest task executions via the shell use the task execution list command. To get a list of task executions for just one task definition, add --name and the task definition name, for example task execution list --name foo. To retrieve full details for a task execution use the task display command with the id of the task execution , for example task display --id 549. Destroying a Task Definition will remove the definition from the definition repository. This can be done via the restful API or via the shell. To destroy a task via the shell use the task destroy command. For Example: dataflow:>task destroy mytask Destroyed task 'mytask' The task execution information for previously launched tasks for the definition will remain in the task repository. Note: This will not stop any currently executing tasks for this definition, this just removes the definition. Out of the box Spring Cloud Data Flow offers an embedded instance of the H2 database. The H2 is good for development purposes but is not recommended for production use. To add a driver for the database that will store the Task Execution information, a dependency for the driver will need to be added to a maven pom file and the Spring Cloud Data Flow will need to be rebuilt. Since Spring Cloud Data Flow is comprised of an SPI for each environment it supports, please review the SPI’s documentation on which POM should be updated to add the dependency and how to build. This document will cover how to setup the dependency for local SPI. also tap into various task/batch events when the task is launched. If the task is enabled to generate task and/or batch events (with the additional dependencies spring-cloud-task-stream and spring-cloud-stream-binder-kafka, in the case of Kafka as the binder), those events are published during the task lifecycle. By default, the destination names for those published events on the broker (rabbit, kafka etc.,) are the event names themselves (for instance: task-events, job-execution-events etc.,). dataflow:>task create myTask --definition “myBatchJob" dataflow:>task launch myTask dataflow:>stream create task-event-subscriber1 --definition ":task-events > log" --deploy You can control the destination name for those events by specifying explicit names when launching the task such as: dataflow:>task launch myTask --properties "spring.cloud.stream.bindings.task-events.destination=myTaskEvents" dataflow:>stream create task-event-subscriber2 --definition ":myTaskEvents > log" --deploy The default Task/Batch event and destination names on the broker are enumerated below:..).
https://docs.spring.io/spring-cloud-dataflow-server-kubernetes/docs/1.0.0.RC1/reference/htmlsingle/
2020-11-23T22:26:24
CC-MAIN-2020-50
1606141168074.3
[]
docs.spring.io
Performance Optimization is a complex topic because different optimizations can be enacted at each layer of the web stack. It's not a "front-end problem" or "back-end problem" or a "devops problem". Everyone involved in building and deploying the app should analyze their slice of the stack and work together to adhere to a performance budget. Cache as much of the things as you can! ⚠️ TODO: The caching page in Docs has addition content and examples on this topic. Need to merge together (probably under guides). Sitecore cache Enable Sitecore HTML cache (aka output cache or component cache) on components or groups of components that get reused often (ex. Header, Footer), and on process-intensive components. Avoid caching personalized components, unless a custom way to fetch personalized data has been added. Follow the official guidelines for enabling and configuring HTML caching . - Use HTTP cache headers to make browsers cache static assets that don’t change often Reference: “Increasing Application Performance with HTTP Cache Headers” by Heroku - Avoid using JavaScript Renderings unless there’s absolutely no other way. This type of rendering initializes a new Node instances for each rendering, for even having more than 2-3 on a page can noticeably influence load times. Node Server Cache Warning: Node is a 3rd party software, so using Node caching modules is not something supported by Sitecore. Since the output from Node is the HTML for the entire page, Node caching is all-or-nothing. In other words, if you are using personalization on parts of the page, there is no way to tell Node to exclude parts of the page from the cache. Node caching modules: Considerations: implement a cache-invalidation strategy Service worker and HTTP cache Refer to this guide from Google Developers Minimize Requests to the Server A common pitfall we've noticed teams stumble on is setting up routing in such a way that every route-change by the end-user loads an SSR-ed page. This makes node a bottle-neck for your app, and wastes all those great built-in optimizations that front-end frameworks contain for working with the DOM in a browser. The expected behavior for a SPA is that, only the first page that the end-user loads is SSR-ed. Afterwards, client-side rendering should take over, and as the end-user navigates through the site, routes load via CSR. If you're not sure whether your app is set up correctly, inspect the Network tab as you change routes. If the response from the server is Layout Service JSON only, then all is good. But if you're getting back a full HTML page each time, then you are experiencing the above issue. For instructions on how to resolve this, see the routing guide. Infrastructure Node clusters By default, Node executes code on a single thread (for memory efficiency). The Cluster module allows you to handle a greater processing load by taking advantage of multi-core systems and create child processes that each run on their own single thread. Generally speaking, it's a good idea to have at least as many node processes as physical/virtual/hyper-threaded cores. Performance testing will give the best guidance for fine-tuning this in a specific codebase/environment. Node docs: how to run a cluster Azure WebApps It's technically possible to have the CD server and Node.js running on a single WebApp, but this is not recommended. Sharing the service plan means sharing the Azure CPU resources between Node and CD, which can have unpredictable results for scalability. The best solution for scaling is to have dedicated app service plans for the headless Node rendering farm and the CDs. Integrated Mode in Prod To clarify some confusion about application modes, we want to reiterate that in Production, - Content Management servers must use Integrated mode (this is required to support Experience Editor) - Content Delivery servers are recommended to avoid using Integrated Mode because it does not scale well and cannot support high traffic. Enable Keep-alive keep-alive is an HTTP mechanism that allows the same TCP connection to be kept open and reused across many HTTP requests. It reduces the amount of hand-shaking, and thus latency, for every request that’s kept alive by reusing one TCP connection. - For responses handled by Node: keep-aliveis not enabled by default in Node servers, unless the agentkeepalive package is installed. - For all other responses (for example, API calls that bypass the Node proxy): add Connection: keep-aliveto the response headers. Reference and additional information: Microsoft Docs - Troubleshooting intermittent outbound connection errors in Azure App Service Note: keep alive will be included in future versions of the node-proxy sample. Git reference. Special thank you to Una Verhoeven for insights on this topic. Reference Framework-Specific Guides Every framework provides guidance on configuration changes that should be turned on in production that enable additional optimizations. - React's 'Optimizing Performance' Guide - Vue's 'Production Deployment' Guide - Angular's 'Production optiimizations' Guide Don't forget the browser layer All the little rules we've heard over the years - avoid JS bloat, avoid render-blocking scripts above the fold, defer image loading, etc - individually, they may seem insignificant compared to the huge boosts seen by something like enabling keep-alive, - but together they do add up and they do impact your front-end load speeds. Modern browsers come with amazing developer tools that not only identify these issues, but tell us how to fix them. Reference Google Developers' guide "Fast load times" for a large collection of guides on this topic.
https://jss-docs-preview3.herokuapp.com/guides/performance/
2020-11-23T21:19:27
CC-MAIN-2020-50
1606141168074.3
[]
jss-docs-preview3.herokuapp.com
CloudBees Feature Management offers multiple subscriptions, including a free full-featured Community to get you started. For each plan, the number of users refers to users within your organization. The number of active monthly users (MAU) refers to users that interact with flag configurations for which impression data is captured. Refer to Flag impressions to learn more. You can upgrade your subscription at any time. CloudBees Feature Management offers several subscription plans to choose from: Community: A free, full-featured edition for up to 15 users, and 250k MAU. Team: Monthly and annual pricing based on a range of users (up to 25) and a range of MAU (up to 1M). Enterprise: Custom pricing based on an unlimited number of users, custom MAU, SSO/SAML, and dedicated support. Refer to CloudBees Feature Management Pricing for more information. Verifying your role for managing an existing account If you are unsure whether you have access to manage your billing plan, you can verify your role in Organization management. To verify that you are an administrator: From the CloudBees Feature Management Home page, select your account from the top right corner, and then select the organization. Select your account from the top right corner again, and then select Organization management. From the center of your screen, in the Settings tab, verify your Organization role to the far right of your username. If your Organization role is Admin, you can make changes to the billing plan. Refer to Managing teams and permissions for information on how to change your team permissions to administrator. Subscribing to the free Community edition The Pricing page provides access to create your account. Alternatively, you can sign up directly from the CloudBees Feature Management Sign Up tab. To sign up for the Community edition: From the Pricing page, select Sign Up for Free!. Enter your contact information and password, and select Submit. (Alternative) From CloudBees Feature Management, in the Sign Up tab, choose Sign up with GitHub, Sign up with Google, or enter your email, password, first, and last name. Then select Log in. An email is sent to your account titled CloudBees Email Verification. Open the email and select Verify email address to verify your account and return to the Log in screen. Your account is created. Sign in to begin using the CloudBees Feature Management Community edition. Managing a Team plan You can sign up for the Team plan from the Pricing page. If you already have an account, you can sign up from within CloudBees Feature Management. To sign up for a Team subscription: From within CloudBees Feature Management, select Upgrade Plan in the banner. You are now on the Organization management page. Select the Team Plan tab in the center of your screen. Enter the following: Users: Select the number of seats (up to 25). MAU: Select the number of monthly active users. Choose Billing Cycle: Select Pay by the month or Pay by the year. Billing Details: Enter your First Name, Last Name, and Email. Payment Details: Enter your payment information. Order Summary: Review the details of your order summary. If you have a promo code. Click I have a promo code and enter the promo code. Click Apply. Select Pay or Update info. The Thank You screen and the Billing Summary page appears. Managing an Enterprise plan If you would like to upgrade to an unlimited number of users, enable SAML/single sign-on, and receive support, you can sign up for the Enterprise plan. To sign up for the Enterprise subscription: From the CloudBees Feature Management Home page, select your account from the top right corner, and then select the organization. Select your account from the top right corner again, and then select Organization management. Select the Billing Plan tab from the center of your screen. In Plan Details, select the Enterprise Plan tab. Select Contact Sales to begin a conversation and request a quote. If you do not receive an immediate response, you can close the conversation, and someone will reach out to you by email. Managing billing information You can update your billing information at any point after signing up for a subscription. To update your billing information: From the CloudBees Feature Management Home page, select your account from the top right corner, and then select the organization that you want to manage. Select your account from the top right corner again, and then select Organization management. Select the Billing Plan tab from the center of your screen. Under Plan Details select Edit and enter the following: Billing First Name and Last Name Billing Email Card Information Company Name Country Billing Street Address, City, and Zip / Postal Code Select Update info. Your billing plan information is updated.
https://docs.cloudbees.com/docs/cloudbees-feature-management/latest/administration/billing
2022-08-08T00:57:10
CC-MAIN-2022-33
1659882570741.21
[array(['../_images/admin-lookup.ef833da.png', 'Viewing the Organizaton role'], dtype=object)]
docs.cloudbees.com
Introduction As part of our December 2021 release, we have revamped how service checklists work within VGM and the embeddable online booking system. This feature is only available to those using our latest booking system (shown below). If you are using one of our older booking systems, get in touch to migrate you across as this is a free upgrade. Configuring a service checklist You can now have a different service checklist for each slot type group containing service slot types. Service checklists are a great way to allow your customers to compare the different service options available. To create a new checklist, navigate to Config > Vehicle Checklists > Checklist Templates. From this grid, click the new button. From this screen, you’ll be able to start building up your checklist. Give it a name and add a description and the estimated hours optionally. Once this is done, navigate to the template items tab. You will first want to create top-level groups for your items (e.g. Under Bonnet Checks). You can then add individual items to the groups by selecting the group name and clicking Add Item. Once you’ve added all of the possible items to the checklist, click save. Next, we’ll need to navigate to the slot type group that we want to link to this checklist. Navigate to config > slots > slot type groups. Each checklist can only be linked to one slot type group. Double click the slot type group and select your newly created checklist next to the checklist. Click save, and this will now create a link to that checklist. We need to select which service types check each item in the checklist. So navigate to config > slots > slot types. Find the first service in the slot type group you just linked the checklist to, and double click it to open the edit slot type window. Within that window, if you navigate to web > checklist templates, you’ll now have a list of all the groups and items, and you can select which items are included using the checkboxes to the left. Repeat this process for each service type. Viewing it as part of the booking system Once you have configured the checklist, there will be a link next to that slot type group within your booking system, which will launch a complete comparison checklist. Note: If you don’t see this, get in touch as you may be using a legacy booking system.
https://docs.motasoft.co.uk/adding-service-checklists-to-your-online-booking-system/
2022-08-08T01:27:09
CC-MAIN-2022-33
1659882570741.21
[array(['https://docs.motasoft.co.uk/wp-content/uploads/2021/12/image-45-1024x778.png', None], dtype=object) array(['https://docs.motasoft.co.uk/wp-content/uploads/2021/12/image-46-1024x566.png', None], dtype=object) array(['https://docs.motasoft.co.uk/wp-content/uploads/2021/12/image-47-1024x567.png', None], dtype=object) array(['https://docs.motasoft.co.uk/wp-content/uploads/2021/12/image-48.png', None], dtype=object) array(['https://docs.motasoft.co.uk/wp-content/uploads/2021/12/image-49.png', None], dtype=object) array(['https://docs.motasoft.co.uk/wp-content/uploads/2021/12/image-45-1024x778.png', None], dtype=object) ]
docs.motasoft.co.uk
Use Azure Key Vault secrets in Azure Pipelines Azure DevOps Services | Azure DevOps Server 2020 | Azure DevOps Server 2019 Azure Key Vault enables developers to securely store and manage secrets such as API keys, credentials or certificates. Azure Key Vault service supports two types of containers: vaults and managed HSM (hardware security module) pools. Vaults support storing software and HSM-backed keys, secrets, and certificates, while managed HSM pools only support HSM-backed keys. In this tutorial, you will learn how to: - Create an Azure Key Vault using Azure CLI - Add a secret and configure access to Azure key vault - Use secrets in your pipeline Prerequisites - An Azure DevOps organization. If you don't have one, you can create one for free. - An Azure subscription. Create an Azure account for free if you don't have one already. Create an Azure Key Vault Sign in to the Azure Portal, and then select the Cloud Shell button in the upper-right corner. If you have more than one Azure subscription associated with your account, use the command below to specify a default subscription. You can use az account listto generate a list of your subscriptions. az account set --subscription <your_subscription_name_or_ID> Set your default Azure region. You can use az account list-locationsto generate a list of available regions. az config set defaults.location=<your_region> For example, this command will select the westus2 region: az config set defaults.location=westus2 Create a new resource group. A resource group is a container that holds related resources for an Azure solution. az group create --name <your-resource-group> Create a new key vault. az keyvault create \ --name <your-key-vault> \ --resource-group <your-resource-group> Create a new secret in your Azure key vault. az keyvault secret set \ --name "Password" \ --value "mysecretpassword" \ --vault-name <your-key-vault-name> Create a project If you don't have any projects in your organization yet, select Create a project to get started. Otherwise, select New project in the upper-right corner. Create a repo We will use YAML to create our pipeline but first we need to create a new repo. Sign in to your Azure DevOps organization and navigate to your project. Select Repos, and then select Initialize to initialize a new repo with a README. Create a new pipeline Select Pipelines, and then select New Pipeline. Select Azure Repos Git (YAML). Select the repository you created in the previous step. Select the Starter pipeline template. The default pipeline will include a few scripts that run echo commands. Those are not needed so we can delete them. Your new YAML file should look like this: trigger: - main pool: vmImage: 'ubuntu-latest' steps: Select Show assistant to expand the assistant panel. This panel provides convenient and searchable list of pipeline tasks. Select your Azure subscription and then select Authorize. Select your Key vault from the dropdown menu, and then select Add to add the task to your YAML pipeline. Note The Make secrets available to whole job feature is not supported in Azure DevOps Server 2019 and 2020. Your YAML file should look like the following: trigger: - main pool: vmImage: ubuntu-latest steps: - task: AzureKeyVault@2 inputs: azureSubscription: 'Your-Azure-Subscription' KeyVaultName: 'Your-Key-Vault-Name' SecretsFilter: '*' RunAsPreJob: false - task: CmdLine@2 inputs: script: 'echo $(Your-Secret-Name) > secret.txt' - task: CopyFiles@2 inputs: Contents: secret.txt targetFolder: '$(Build.ArtifactStagingDirectory)' - task: PublishBuildArtifacts@1 inputs: PathtoPublish: '$(Build.ArtifactStagingDirectory)' ArtifactName: 'drop' publishLocation: 'Container' Don't save or queue your pipeline just yet. We must first give our pipeline the right permissions to access Azure Key Vault. Keep your browser tab open, we will resume the remaining steps once we set up the key vault permissions. Set up Azure Key Vault access policies In order to access our Azure Key Vault, we must first set up a service principal to give access to Azure Pipelines. Follow this guide to create your service principal and then proceed with the next steps in this section. Navigate to Azure portal. Use the search bar to search for the key vault you created earlier. Under Settings Select Access policies. Select Add Access Policy to add a new policy. For Secret permissions, select Get and List. Select the option to select a service principal and search for the one you created in the beginning of this section. A security principal is an object that represents a user, group, service, or application that's requesting access to Azure resources. Select Add to create the access policy, then select Save when you are done. Run and review the pipeline Return to the previous tab where we left off. Select Save, and then select Save again to commit your changes and trigger the pipeline. You may be asked to allow the pipeline access to Azure resources, if prompted select Allow. You will only have to approve your pipeline once. Select the CmdLine task to view the logs. Return to pipeline summary and select the published artifact. Select the secret.txt artifact to open it. The text file should contain our secret: mysecretpassword. Warning This tutorial is for educational purposes only. For security best practices and how to safely work with secrets, see Manage secrets in your server apps with Azure Key Vault. Clean up resources Follow the steps below to delete the resources you created: If you created a new organization to host your project, see how to delete your organization, otherwise delete your project. All Azure resources created during this tutorial are hosted under a single resource group PipelinesKeyVaultResourceGroup. Run the following command to delete the resource group and all of its resources. az group delete --name PipelinesKeyVaultResourceGroup Q: I'm getting the following error: "the user or group does not have secrets list permission" what should I do? A: If you encounter an error indicating that the user or group does not have secrets list permission on key vault, run the following commands to authorize your application to access the key or secret in the Azure Key Vault: $ErrorActionPreference="Stop"; $Credential = Get-Credential; Connect-AzAccount -SubscriptionId <YOUR_SUBSCRIPTION_ID> -Credential $Credential; $spn=(Get-AzureRmADServicePrincipal -SPN <YOUR_SERVICE_PRINCIPAL_ID>); $spnObjectId=$spn.Id; Set-AzureRmKeyVaultAccessPolicy -VaultName key-vault-tutorial -ObjectId $spnObjectId -PermissionsToSecrets get,list; Next steps Feedback Submit and view feedback for
https://docs.microsoft.com/en-us/azure/devops/pipelines/release/azure-key-vault?view=azure-devops
2022-08-08T02:52:03
CC-MAIN-2022-33
1659882570741.21
[]
docs.microsoft.com
This section describes how to setVirtualize up for accessing IBM MQ. Access to IBM MQ is achieved by configuring Parasoft tools. Sections include: To use the MQ option, you must add jars from a WebSphere MQ client or server, complete the following: After selecting Websphere MQ from the Transport drop-down menu within the Transport tab of an appropriate tool, the following options display in the left panel: When a failure occurs, MQ returns a reason code for a failure. SOAtest & Virtualize error messages report these same reason codes in order for users to interpret them. For a list of MQ reason codes and their meaning, please refer to the IBM Knowledge Center at. The following are best practices for sending and receiving character data with client tools (e.g., SOAP Client, REST Client, Messaging Client, etc.) When sending character data such as XML, CSV, fixed-length, or plain text, the format type must be set to the value of the MQFMT_STRING constant which is MQSTR. For the SOAP client tool, the character set used to encode the request is specified using the tool’s Misc tab> Outgoing Message Encoding option. Other applicable tools (e.g., REST Client, Messaging Client) use the character encoding configured in the product’s Misc preferences. The applicable options are: If you specify a different encoding, then the character set of the MQ messages will be defaulted to MQCCSI_Q_MGR, which means "Character data in the message is in the queue manager’s character set." The MQGMO_CONVERT box should be enabled under the client tool’s MQGetMessageOptions (in the Transport tab). This will instruct the queue manager to convert the message to the client tool's character set. This is important if the message's original characterSet is not one of those supported by the client (IBM_037, IBM_437, etc.). The character set used to perform the conversion is configured the same as for put messages (described above).
https://docs.parasoft.com/plugins/viewsource/viewpagesrc.action?pageId=51918182
2022-08-08T02:04:31
CC-MAIN-2022-33
1659882570741.21
[]
docs.parasoft.com
This topic provides a checklist of software and hardware requirements for creating a vSAN cluster. You can also use the checklist to verify that the cluster meets the guidelines and basic requirements. Requirements for vSAN Cluster Before you get started, verify specific models of hardware devices, and specific versions of drivers and firmware in the VMware Compatibility Guide website at. The following table lists the key software and hardware requirements supported by vSAN. Caution: Using uncertified software and hardware components, drivers, controllers, and firmware might cause unexpected data loss and performance issues. For detailed information about vSAN cluster requirements, see Requirements for Enabling vSAN. For in-depth information about designing and sizing the vSAN cluster, see the VMware vSAN Design and Sizing Guide.
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vsan-planning.doc/GUID-D2AAEC0C-D5C3-4885-A2C2-789DC0212850.html
2022-08-08T01:23:15
CC-MAIN-2022-33
1659882570741.21
[]
docs.vmware.com
NAT gateways A NAT gateway is a Network Address Translation (NAT) service. You can use a NAT gateway so that instances in a private subnet can connect to services outside your VPC but external services cannot initiate a connection with those instances. When you create a NAT gateway, you specify one of the following connectivity types: Public – (Default) Instances in private subnets can connect to the internet through a public NAT gateway, but cannot receive unsolicited inbound connections from the internet. You create a public NAT gateway in a public subnet and must associate an elastic IP address with the NAT gateway at creation. You route traffic from the NAT gateway to the internet gateway for the VPC. Alternatively, you can use a public NAT gateway to connect to other VPCs or your on-premises network. In this case, you route traffic from the NAT gateway through a transit gateway or a virtual private gateway. Private – Instances in private subnets can connect to other VPCs or your on-premises network through a private NAT gateway. You can route traffic from the NAT gateway through a transit gateway or a virtual private gateway. You cannot associate an elastic IP address with a private NAT gateway. You can attach an internet gateway to a VPC with a private NAT gateway, but if you route traffic from the private NAT gateway to the internet gateway, the internet gateway drops the traffic. The NAT gateway replaces the source IP address of the instances with the IP address of the NAT gateway. For a public NAT gateway, this is the elastic IP address of the NAT gateway. For a private NAT gateway, this is the private IP address of the NAT gateway. When sending response traffic to the instances, the NAT device translates the addresses back to the original source IP address. When you provision a NAT gateway, you are charged for each hour that your NAT gateway is available and each Gigabyte of data that it processes. For more information, see Amazon VPC Pricing most traffic through your NAT gateway is to AWS services that support interface endpoints or gateway endpoints, consider creating an interface endpoint or gateway endpoint for these services. For more information about the potential cost savings, see AWS PrivateLink pricing . Contents NAT gateway basics Each NAT gateway is created in a specific Availability Zone and implemented with redundancy in that zone. There is a quota on the number of NAT gateways that you can create in each Availability Zone. For more information, see Amazon VPC quotas.. The following characteristics and rules apply to NAT gateways: A NAT gateway supports the following protocols: TCP, UDP, and ICMP. NAT gateways are supported for IPv4 or IPv6 traffic. For IPv6 traffic, NAT gateway performs NAT64. By using this in conjunction with DNS64 (available on Route 53 resolver), your IPv6 workloads in a subnet in Amazon VPC can communicate with IPv4 resources. These IPv4 services may be present in the same VPC (in a separate subnet) or a different VPC, on your on-premises environment or on the internet. A NAT gateway supports 5 Gbps of bandwidth and automatically scales up to 100 Gbps. If you require more bandwidth, you can split your resources into multiple subnets and create a NAT gateway in each subnet. A NAT gateway can process one million packets per second and automatically scales up to ten million packets per second. Beyond this limit, a NAT gateway will drop packets. To prevent packet loss, split your resources into multiple subnets and create a separate NAT gateway for each subnet. A NAT gateway can support up to 55,000 simultaneous connections to each unique destination. This limit also applies if you create approximately 900 connections per second to a single destination (about 55,000 connections per minute). Monitor NAT gateways with Amazon CloudWatch. You can associate exactly one Elastic IP address with a public private NAT gateway receives an available private IP address from the subnet in which it is configured. The assigned private IP address persists until you delete the private NAT gateway. You cannot detach the private IP address and you cannot attach additional private IP addresses. You cannot associate a security group with a NAT gateway. You can associate security groups with your instances to control inbound and outbound traffic. You can use a network ACL to control the traffic to and from the subnet for your NAT gateway. NAT gateways use ports 1024–65535. For more information, see Control traffic to subnets using Network ACLs. A NAT gateway receives a network interface that's automatically assigned a private IP address from the IP address range of the subnet. You can view the network interface for the NAT gateway using the Amazon EC2 console. For more information, see Viewing details about a network interface. You cannot modify the attributes of this network interface. A NAT gateway cannot be accessed through a ClassicLink connection that is associated with your VPC. You cannot route traffic to a NAT gateway through a VPC peering connection, a Site-to-Site VPN connection, or AWS Direct Connect. A NAT gateway cannot be used by resources on the other side of these connections. Control the use of NAT gateways By default, IAM users do not have permission to work with NAT gateways. You can create an IAM user policy that grants users permissions to create, describe, and delete NAT gateways. For more information, see Identity and access management for Amazon VPC. Work with NAT gateways You can use the Amazon VPC console to create and manage your NAT gateways. Create a NAT gateway To create a NAT gateway, enter an optional name, a subnet, and an optional connectivity type. With a public NAT gateway, you must specify an available elastic IP address. A private NAT gateway receives a primary private IP address selected at random from its subnet. You cannot detach the primary private IP address or add secondary private IP addresses. To create a NAT gateway Open the Amazon VPC console at . In the navigation pane, choose NAT Gateways. Choose Create NAT Gateway and do the following: (Optional) Specify a name for the NAT gateway. This creates a tag where the key is Nameand the value is the name that you specify. Select the subnet in which to create the NAT gateway. For Connectivity type, select Private to create a private NAT gateway or Public (the default) to create a public NAT gateway. (Public NAT gateway only) For Elastic IP allocation ID, select an Elastic IP address to associate with the NAT gateway. (Optional) For each tag, choose Add new tag and enter the key name and value. Choose Create a NAT Gateway. The initial status of the NAT gateway is Pending. After the status changes to Available, the NAT gateway is ready for you to use. Be sure to update your route tables as needed. For examples, see NAT gateway use cases. If the status of the NAT gateway changes to Failed, there was an error during creation. For more information, see NAT gateway creation fails. Tag User Guide. For more information about setting up a cost allocation report with tags, see Monthly cost allocation report in About AWS Account Billing. Delete a NAT gateway If you no longer need a NAT gateway, you can delete it. After you delete a NAT gateway, its entry remains visible in the Amazon VPC console for about an hour, after which it's automatically removed. You cannot remove this entry yourself. Deleting a NAT gateway disassociates its Elastic IP address, but does not release the address from your account. If you delete a NAT gateway, the NAT gateway routes remain in a blackhole status until you delete or update the routes. To delete a NAT gateway Open the Amazon VPC console at . In the navigation pane, choose NAT Gateways. Select the radio button for the NAT gateway, and then choose Actions, Delete NAT gateway. When prompted for confirmation, enter deleteand then choose Delete. If you no longer need the Elastic IP address that was associated with a public NAT gateway, we recommend that you release it. For more information, see Release an Elastic IP address. API and CLI overview You can perform the tasks described on this page using the command line or API. For more information about the command line interfaces and a list of available API operations, see Working with) Tag a NAT gateway create-tags (AWS CLI) New-EC2Tag (AWS Tools for Windows PowerShell) CreateTags (Amazon EC2 Query API) Delete a NAT gateway delete-nat-gateway (AWS CLI) Remove-EC2NatGateway (AWS Tools for Windows PowerShell) DeleteNatGateway (Amazon EC2 Query API)
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html
2022-08-08T02:52:05
CC-MAIN-2022-33
1659882570741.21
[]
docs.aws.amazon.com
CoreStack Product Overview Get a better understanding of all the product offerings CoreStack has to offer. Introducing CoreStack CoreStack is a next-generation multi-cloud governance platform that empowers enterprises to rapidly achieve autonomous and continuous cloud governance and compliance at scale. CoreStack is used today by many leading global enterprises across multiple industries. CoreStack is delivered to end users in the from of multiple product offerings, bundled for a specific set of cloud governance pillars based on your needs. This guide provides an overview of the different CoreStack offerings and associated cloud governance pillars that are being offered as part of it. Product offerings CoreStack FinOps is a solution offering that is designed to help you develop a culture of financial accountability and realize the benefits of the cloud faster. CoreStack SecOps is a solution offering designed to help keep your cloud assets secure and compliant. CoreStack CloudOps is a solution offering designed to help optimize cloud operations and cost management in order to provide accessibility, availability, flexibility, and efficiency while also boosting business agility and outcomes. CoreStack Compass is designed to help you adopt best practices according to well-architected frameworks, gain continuous visibility, and manage risk for your cloud workloads with assessments, policies, and reports that allow you to review the state of your applications and get a clear understanding of risk trends over time. Cloud Governance Pillars. Getting started with CoreStack Now that you are more familiar with the basic of CoreStack, its core products, and best practices -- you're ready to start your journey with the platform! Updated 2 months ago Please refer to the links below to start taking the next steps towards cloud optimization with CoreStack:
https://docs.corestack.io/docs/product-overview-20
2022-08-08T01:07:33
CC-MAIN-2022-33
1659882570741.21
[]
docs.corestack.io
IShellLinkDataList interface (shobjidl_core.h) Exposes methods that allow an application to attach extra data blocks to a Shell link. These methods add, copy, or remove data blocks. Inheritance The IShellLinkDataList interface inherits from the IUnknown interface. IShellLinkDataList also has these types of members: Methods The IShellLinkDataList interface has these methods. Remarks The data blocks are in the form of a structure. The first two members are the same for all data blocks. The first member gives the structure's size. The second member is a signature that identifies the type of data block. The remaining members hold the block's data. There are five types of data block currently supported. This interface is not implemented by applications. Use this interface if your application needs to add extra data blocks to a Shell link.
https://docs.microsoft.com/en-us/windows/win32/api/shobjidl_core/nn-shobjidl_core-ishelllinkdatalist
2022-08-08T02:32:52
CC-MAIN-2022-33
1659882570741.21
[]
docs.microsoft.com
Yellow Network Search… Introduction Background The Problem The Solution Design Network Architecture Finance Business model Whitepaper Future Release Conclusions GitBook The Problem The market challenges Yellow Network is addressing. The Problem Scalability A monolithic business structure of current crypto exchanges isn't scalable. Traditional finance vs. Crypto finance Many Independents Blockchains Currently, the cryptocurrency industry is highly fragmented. There are over 200 notable exchanges and over 6000 cryptocurrencies. Around 100 of them use their own blockchain, making it hard to achieve secure interoperability. Low Liquidity Centralized and decentralized exchanges suffer from low liquidity, spread over dozens of different markets and exchanges that are forced to compete with one another. Every new blockchain project is adding more to this chaos and significantly slowing down the growth and scaling of the cryptocurrency industry. CEX Security Centralized exchanges fully control the assets you deposit on the platform. You have to trust the platform entirely to store your funds securely. Big exchanges take security seriously, especially recently, but it comes at a high cost; we still read news about exchanges being hacked and users' funds being drained by attackers. CEX Complicated Regulations Small exchanges prefer to target only a single market by complying with a single regulator and miss out on inter-market financial operations. Exchange registration in an unregulated country does not protect end-users and is perceived as a risk. CEX High Operational Costs The security aspect requires the full-time focus of a dedicated team. Each of the multiple blockchains supported by the platform needs to be monitored 24/7 to make sure it keeps processing blocks. DEX Limitation of Blockchains Throughput Blockchains technologies are not scaling well. The throughput of blockchains is minimal; users may experience network congestion, resulting in high fees, delays in transactions, or even in transactions being entirely dropped. DEX Front-Run Problem All on-chain transactions are visible publicly before they are mined in a block. A bot can "front-run" a transaction by setting a higher gas price, while the user's transaction would execute with a worse price than expected or would even be rejected. DEX Multi-Step Asset Movements It is too complicated and expensive for the end-user to move assets from one staking protocol to another or from one blockchain to another, requiring multiple complex steps and incurring fees. Background The Solution Last modified 23d ago Copy link Outline The Problem
https://docs.yellow.org/the-problem
2022-08-08T01:04:45
CC-MAIN-2022-33
1659882570741.21
[]
docs.yellow.org
Enable TLS/SSL for HiveServer You can secure client-server communications using symmetric-key encryption in the TLS/SSL (Transport Layer Security/Secure Sockets Layer) protocol. To encrypt data exchanged between HiveServer and its clients, you can use Cloudera Manager to configure TLS/SSL. - HiveServer has the necessary server key, certificate, keystore, and trust store set up on the host system. - The hostname variable ( $(hostname -f)-server.jks) was used with Java keytool commands to create keystore, as shown in this example: $ ssl=true;sslTrustStore=<path_to_truststore>. Truststore password requirements depend on the version of Java running in the cluster: - Java 11: the truststore format has changed to PKCS and the truststore password is required; otherwise, the connection fails. - Java 8: The trust store password does not need to be specified. - In Cloudera Manager, navigate to . - In Filters, select HIVE for the scope. - Select Security for the category. - Accept the default Enable TLS/SSL for HiveServer2, which is checked for Hive (Service-Wide). - Enter the path to the Java keystore on the host system. /opt/cloudera/security/pki/keystore_name.jks - Enter the password for the keystore you used on the Java keytool command-line when the key and keystore were created.The password for the keystore must match the password for the key. - Enter the path to the Java trust store on the host system. - Click Save Changes. - Restart the Hive service. - Construct a connection string for encrypting communications using TLS/SSL. jdbc:hive2://#<host>:#<port>/#<dbName>;ssl=true;sslTrustStore=#<ssl_truststore_path>; \ trustStorePassword=#<truststore_password>;#<otherSessionConfs>?#<hiveConfs>#<hiveVars>
https://docs.cloudera.com/cdp-private-cloud-base/7.1.7/security-encrypting-data-in-transit/topics/hive-enable-tls.html
2022-08-08T01:58:05
CC-MAIN-2022-33
1659882570741.21
[]
docs.cloudera.com
5. Revision Log¶ 5.1. pypi version 1.60¶ - added feature: ability to update NamedRanges wb.update_nr(name, val), see issue #72 - added feature: ability to find where a NamedRange is wb.nr_loc(name) - added feature: ability to fill a range with a single value: wb.ws(‘Sheet1’).update_range(address=’A1:B3’, val=10) - update: NamedRanges now add the worksheets if they are not already in the workbook. Note that using readxl with worksheet names specified will also ignore NamedRanges from being read in from the sheet that are not read in. - update: updated quickstart docs with the new feature demo scripts 5.2. pypi version 1.59¶ 5.3. pypi version 1.58¶ 5.4. pypi version 1.57¶ 5.5. pypi version 1.56¶ - imporvement: added support for non-standard excel file xml tags, see issue #44 - bug fix: fixed keyrow bug, see issue #47 - bug fix: addressed csv writing issue related to cells that contain ‘n’ that previous started a new row. New version replaces ‘n’ with ‘’, see issue #49 - bug fix: newly written workbooks written by pylightxl could not create new worksheets within excel after opening. The fix was to removed sheetView xml tag, see issue #50 - improvement: added encoding=’utf-8’ to write altworksheets to support chinese encoding error, see issue #51 5.6. pypi version 1.55¶ - added comment parsing, see issue #41 - DEPRECATION WARNING: all indexing method that use “formula” as an argument will be replaced with “output” in future version. Please update your codebase to use “output” instead of “formula”. This was done to simplify indexing the value ( output='v'), the formula ( output='f') or the comment ( output='c'). - added file stream reading for readxlthat now supports with blockfor reading. See issue #25 5.8. pypi version 1.53¶ - bug fix: writing to existing file previously would only write to the current working directory, it now can handle subdirs. In addition inadvertently discovered a bug in python source code ElementTree.iterparse where sourcepassed as a string was not closing the file properly. We submitted a issue to python issue tracker. 5.9. pypi version 1.52¶ - updated reading error’ed cells “#N/A” - updated workbook indexing bug from program generated workbooks that did not index from 1 5.12. pypi version 1.49¶ - bug-fix: updated encoding for string cells that contained xml-like data (ex: cell A1 “<cell content>”) 5.13. pypi version 1.48¶ - add feature to writecsvto be able to handle pathlibobject and io.StreamIOobject - refactored readxl to remove regex, now readxl is all cElementTree - refactored readxl/writexl to able to handle excel files written by openpyxl that is generated differently than how excel write files. 5.14. pypi version 1.47¶ - added new function: db.nr('table1')returns the contents of named range “table1” - added new function: db.ws('Sheet1').range('A1:C3')that returns the contents of a range it also has the ability to return the formulas of the range - updated db.ws('Sheet1').row()and db.ws('Sheet1').col()to take in a new argument formualthat returns the formulas of a row or col - bugfix: write to existing without named ranges was throwing a “repair” error. Fixed typo on xml for it and added unit tests to capture it - added new function: xl.readcsv(fn, delimiter, ws)to read csv files and create a pylightxl db out of it (type converted) - added new function: xl.writecsv(db, fn, ws, delimiter)to write out a pylightxl worksheet as a csv 5.15. pypi version 1.46¶ - bug fix: added ability to input an empty string into the cell update functions (previously entering val=’’) threw and error 5.16. pypi version 1.45¶ added support for cell values that have multiple formats within a single cell. previous versions did not support this functionality since it is logged differently in sharedString.xml added support for updating formulas and viewing them: - view formula: db.ws('Sheet1').address('A1', formula=True) - edit formula: db.ws('Sheet1').update_address('A1', val='=A1+10') updated the following function arguments to drive commonality: - was: readxl(fn, sheetnames)new: readxl(fn, ws) - was: writexl(db, path)new: writexl(db, fn) - was: db.ws(sheetname)new: db.ws(ws) - was: db.add_ws(sheetname, data)new: db.add_ws(ws, data) added new feature to be able to read-in NamedRanges, store it in the Database, update it, remove it, and write it. NamedRanges were integrated with existing function to handle semi-structured-data db.add_nr(name'range1', ws='sheet1', address='A1:C2') db.remove_nr(name='range1') db.nr_names add feature to remove worksheet: db.remove_ws(ws='Sheet1') add feature to rename worksheet: db.rename_ws(old='sh1', new='sh2') added a cleanup function upon writing to delete _pylightxl_ temp folder in case an error left them added feature to write to file that is open by excel by appending a “new_” tag to the file name and a warning message that file is opened by excel so a file was saved as “new_” + filename 5.17. pypi version 1.44¶ - bug fix: accounted for num2letter roll-over issue - new feature: added a pylightxl native function for handling semi-structured data 5.18. pypi version 1.43¶ - bug fix: accounted for reading error’ed out cell “#N/A” - bug fix: accounted for bool TRUE/FALSE cell values not registering on readxl - bug fix: accounted for edge case that was prematurely splitting cell tags <c r /> by formula closing bracket <f /> - bug fix: accounted for cell address roll-over 5.19. pypi version 1.42¶ - added support for pathlib file reading - bug fix: previous version did not handle merged cells properly - bug fix: database updates did not update maxcol maxrow if new data addition was larger than the initial dataset - bug fix: writexl that use linefeeds did not read in properly into readxl (fixed regex) - bug fix: writexl filepath issues 5.20. pypi version 1.41¶ - new-feature: write new excel file from pylightxl.Database - new-feature: write to existing excel file from pylightxl.Database - new-feature: db.update_index(row, col, val) for user defined cell values - new-feature: db.update_address(address, val) for user defined cell values - bug fix for reading user defined sheets - bug fix for mis-alignment of reading user defined sheets and xml files 5.21. pypi version 1.3¶ - new-feature: add the ability to call rows/cols via key-value ex: db.ws('Sheet1').keycol('my column header')will return the entire column that has ‘my column header’ in row 1 - fixed-bug: fixed leading/trailing spaced cell text values that are marked <t xml:in the sharedString.xml 5.22. pypi version 1.2¶ - fixed-bug: fixed Sheet number to custom Sheet name matching for 10+ sheets that were previously only sorting alphabetical which resulted with sorting: Sheet1, Sheet10, Sheet11, Sheet2… and so on.
https://pylightxl.readthedocs.io/en/latest/revlog.html
2022-08-08T01:32:11
CC-MAIN-2022-33
1659882570741.21
[]
pylightxl.readthedocs.io
Turn Based Tanks Demo¶ The goal of this demo is to serve as an example of one approach to an asynchronous, turn based game using Colyseus. This demo is designed to work with Colyseus version 0.14.7 and Unity version 2019.4.20 will need to change the Colyseus Server Address and Colyseus Server Port values accordingly. Demo Overview¶ Room Metadata¶ This demo makes use of the room's metadata to track the players in the game with their username. When a player joins or creates a room their username will be stored in a property called either team0 or team1 where team0 represents the player that created the room and team1 represents the player that has joined an available room to challenge the creator. this.metadata.team0 this.metadata.team1 this.setMetadata({"team0": options["creatorId"]}); The usernames set in the metadata are then used to filter the available rooms displayed in the lobby. Within the lobby users are able to see any rooms they have created or are available based on whether the room is waiting for a challenger to join the game. Rooms that you have not created and have two players will not be shown in the lobby. private TanksRoomsAvailable[] TrimRooms(TanksRoomsAvailable[] originalRooms) { List<TanksRoomsAvailable> trimmedRooms = new List<TanksRoomsAvailable>(); for (int i = 0; i < originalRooms.Length; ++i) { //Check a rooms metadata. If its one of our rooms OR waiting for a player, we show it TanksRoomMetadata metadata = originalRooms[i].metadata; if (metadata.team1 == null || (metadata.team1.Equals(ExampleManager.Instance.UserName) || metadata.team0.Equals(ExampleManager.Instance.UserName))) { trimmedRooms.Add(originalRooms[i]); } } return trimmedRooms.ToArray(); } Keeping the Room Alive¶ In order to make this demo an asynchronous turn based game we need to keep the room alive even after both players have left the room. The room is kept alive by setting the autoDispose flag to false. (You can see this in the TanksRoom server code within the onCreate handler). this.autoDispose = false; We know to disconnect the room after the boolean flag inProcessOfQuitingGame has been set true after performing checks to determine if the room should be closed. These checks are performed when a user has quit the game. // Check if creator has quit before anyone else has joined if(this.metadata.team0 && this.metadata.team1 == null) { disconnectRoom = true; } // No other users are in the room so disconnect if(this.inProcessOfQuitingGame && this.state.networkedUsers.size <= 1 && this.connectedUsers <= 1) { disconnectRoom = true; } // Should the room disconnect? if(disconnectRoom) { this.disconnect(); } Pausing the Room¶ Since this is an example of an asynchronous game, our room could have no users connected to it for any amount of time. When there are no users connected to the room the server doesn't need to update the simulation loop. When users disconnect from the room a check is performed to look if there are no more users connected to the room. When no more users are connected to the room the simulation interval effectively gets paused by setting the delay to a high value. In this case the value is a little more than 24 days. // Within the room's `onLeave` handler // Check if the server should pause the simulation loop because // there are no users connected to the room let anyConnected: boolean = false; this.state.players.forEach((player, index) => { if(player.connected) { anyConnected = true; } }); if(anyConnected == false) { // There are no users connected so pause the server updates this.setServerPause(true); } private setServerPause(pause: boolean) { if(pause) { this.setSimulationInterval(dt => this.gameLoop(dt), this.pauseDelay); } else { // Set the Simulation Interval callback this.setSimulationInterval(dt => this.gameLoop(dt)); } this.serverPaused = pause; } When users rejoin a room that has been paused, the simulation interval is restored. // Within the room's `onJoin` handler // Check if the server needs to be unpaused if(this.serverPaused) { // The server is currently paused so unpause it since a player has connected this.setServerPause(false); } Playing the Demo¶ Start the player in the scene "TanksLobby" located at Assets\TurnBasedTanks\Scenes\TanksLobby. Input your username and create a room to begin. If you cannot reach the room creation screen, confirm your local server is working properly and check the Unity Editor for error logs. If you are successful, the client will load the "TankArena" scene. This demo is an asynchronous turn based game. You can leave a room at any point and later return to a game in progress and it will pick up where it last left off. Only two players can play in a game. Goal being to destroy your opponent's tank. Each player has 3 Hit Points displayed in the top corners of the screen. When you create a room you can immediately take your turn without another player having joined yet. All controls are displayed in the ESC menu. You have the ability to leave the room at any time using the Exit option in the ESC menu, or you can Surrender the game to your opponent. You have 3 Action Points for your turn. Moving left/right consumes one AP and firing consumes two AP. Movement can be blocked by terrain that is too tall. To fire your tank's weapon you left click with the mouse and hold to charge the shot. Releasing the left mouse will fire. You have 3 weapons of varying range to choose from. Select your weapon with the 1-3 numbered keys. Between each movement or firing action there is a 2 second delay before another action can be taken. When a game ends due to a player's tank getting destroyed, or someone surrendering, a game over menu, showing a win/loss message, will be displayed with the options to either request a rematch or to quit the game. If the other player requests a rematch before you leave, a message will be displayed on the game over menu. There is an "online indicator" next to your opponents name to signal whether they are in the room at the same time with you. - Red = offline - Green = online. You have the option to skip your remaining turn by pressing the SPACEBAR. Adjusting the Demo¶ As you play around with this demo, you may want to make some adjustments to better familiarize yourself with what is happening. Below, you'll learn how to make these minor adjustments. Game Rules and Weapon Data¶ Both the Game Rules and Weapon Data values can be found in the server code at ArenaServer\src\rooms\tanks\rules.ts. The Game Rules control movement and firing costs as well as how many action points players get. The data in weaponList specifies the max charge, charge time, impact radius, and impact damage of each weapon. const GameRules = { MaxActionPoints: 3, MovementActionPointCost: 1, FiringActionPointCost: 2, ProjectileSpeed: 30, MaxMovement: 3, MaxHitPoints: 3, MovementTime: 2, } const weaponList = [ { name: "Short Range", maxCharge: 5, chargeTime: 1, radius: 1, impactDamage: 1, index: 0 }, { name: "Mid Range", maxCharge: 8, chargeTime: 2, radius: 1, impactDamage: 1, index: 1 }, { name: "Long Range", maxCharge: 10, chargeTime: 5, radius: 1, impactDamage: 1, index: 2 } ]
https://docs.colyseus.io/colyseus/demo/turn-based-tanks/
2022-08-08T01:25:06
CC-MAIN-2022-33
1659882570741.21
[array(['../common-images/scriptable-object.png', 'ScriptableObject'], dtype=object) array(['GameplayWithLabels.png', 'Lobby'], dtype=object)]
docs.colyseus.io
Tests that backdrop_html_to_text() wraps before 1000 characters. RFC 3676 says, "The Text/Plain media type is the lowest common denominator of Internet email, with lines of no more than 998 characters." RFC 2046 says, "SMTP [RFC-821] allows a maximum of 998 octets before the next CRLF sequence." RFC 821 says, "The maximum total length of a text line including the <CRLF> is 1000 characters." File Class - BackdropHtmlToTextTestCase - Unit tests for backdrop_html_to_text(). Code function testVeryLongLineWrap() { $input = 'Backdrop<br /><p>' . str_repeat('x', 2100) . '</p><br />Backdrop'; $output = backdrop_html_to_text($input); // This awkward construct comes from includes/mail.inc lines 8-13. $eol = settings_get('mail_line_endings', MAIL_LINE_ENDINGS); // We must use strlen() rather than backdrop_strlen() in order to count // octets rather than characters. $line_length_limit = 1000 - backdrop_strlen($eol); $maximum_line_length = 0; foreach (explode($eol, $output) as $line) { // We must use strlen() rather than backdrop_strlen() in order to count // octets rather than characters. $maximum_line_length = max($maximum_line_length, strlen($line . $eol)); } $verbose = 'Maximum line length found was ' . $maximum_line_length . ' octets.'; $this->pass($verbose); $this->assertTrue($maximum_line_length <= 1000, $verbose); }
https://docs.backdropcms.org/api/backdrop/core%21modules%21simpletest%21tests%21mail.test/function/BackdropHtmlToTextTestCase%3A%3AtestVeryLongLineWrap/1
2022-08-08T00:20:38
CC-MAIN-2022-33
1659882570741.21
[]
docs.backdropcms.org
. Please note that if you would like for deepforest to save the config file on reload (using deepforest.save_model), the config.yml must be updated instead of updating the dictionary of an already loaded model. # val_accuracy_interval:. batch_size¶ Number of images per batch during training. GPU memory limits this usually between 5-10 = <> Epochs¶ The number of times to run a full pass of the dataloader during model training. fast_dev_run¶ A useful pytorch lightning flag that will run a debug run to test inputs. See preload_images¶ For large training runs, the time spent reading each image and passing it to the GPU can be a significant performance bottleneck. If the training dataset is small enough to fit into GPU memory, pinning the entire dataset to memory before training will increase training speed. Warning, if the pinned memory is too large, the GPU will overflow/core dump and training will crash. val_accuracy_interval¶ Compute and log the classification accuracy of the predicted results computed every X epochs. This incurs some reductions in speed of training and is most useful for multi-class models. To deactivate, set to an number larger than epochs.
https://deepforest.readthedocs.io/en/latest/ConfigurationFile.html
2022-08-08T01:28:32
CC-MAIN-2022-33
1659882570741.21
[]
deepforest.readthedocs.io
Connect to Prevedere with Power BI Gain access to exclusive and critical financial information to confidently and proactively drive your business forward. Connect to the Prevedere content pack for Power BI. Note If you are not an existing Prevedere user, please use the sample key to try it out. How to connect. System requirements This content pack requires access to a Prevedere API key or the sample key (see below). Finding parameters Existing customers can access their data using their API key. If you are not yet a customer, you can see a sample of the data and analyses using the sample key. Troubleshooting The data may take some time to load depending on the size of your instance.
https://docs.microsoft.com/en-us/power-bi/service-connect-to-prevedere
2017-12-11T07:23:34
CC-MAIN-2017-51
1512948512584.10
[]
docs.microsoft.com
Sometimes, the backend implementation of a web service does not exactly match your desired API design. In that case, you do various message transformations and orchestrate multiple backend services to achieve the design you want. In this tutorial, you create a custom sequence using the WSO2 Tooling Plug-in and use it in your APIs to mediate the incoming API calls. The API Cloud comes with a powerful mediation engine that can transform and orchestrate API calls on the fly. It is built on WSO2 ESB and supports a variety of mediators that you can use as building blocks for your sequences. See the list of mediators supported in the API Cloud and WSO2 ESB. You can extend the API Gateway's default mediation flow to do custom mediation by providing an extension as a synapse mediation sequence. You need to design all sequences using a tool like the WSO2 Tooling Plug-in and then store the sequence in the Gateway's registry. Let's get started. See the video tutorial here or a step-by-step walk-through of the video tutorial below. Here's a step-by-step walk-though of the video tutorial: Log in to the API Publisher. Click Add New API, Design a new API, and Start creating to create an API with the following information and then click Implement. The Implement tab opens. Select Managed API, provide the information given in the table below and click Next: Manage >. Provide the following information in the Manage tab and click Save & Publish once you are done. Download and install the WSO2 API Manager Tooling Plug-in if you have not done so already. Start Eclipse by double clicking on the Eclipse application, which is inside the downloaded folder. On the Window menu click Perspective, Open Perspective, Other to open the Eclipse perspective selection window. - On the dialog box that appears, click WSO2 API Manager and click OK. On the APIM perspective, click the Login icon as shown below. Enter your cloud username (in the format <email@company_name>) and password, and click OK in the dialog box that appears. - WSO2 API. - Navigate to the File menu, and click Save to save the sequence. - Right-click on the sequence and click Commit File, and thereafter click Yes to push the changes to the Publisher server. - Sign in to WSO2 API Publisher again, search for the API that you created earlier, and click the Edit link to go to the edit wizard. Click Implement and Manage API. Thereafter, click the Enable Message Mediation checkbox, and select the sequence that you created for the In flow. Next, Save the API. Tip : It might take a few minutes for the sequence to be uploaded into the API Publisher. If it isn't there, please check again later. When selecting a mediator, make sure that it is a non-blocking mediator as blocking mediators are not supported in API Gateway custom mediations. For more details, see Adding Mediation Extensions. -" } } } } } } In this tutorial, you created a sequence to change the default mediation flow of API requests, deployed it in the API Gateway and invoked an API using the custom mediation flow.
https://docs.wso2.com/display/APICloud/Change+the+Default+Mediation+Flow+of+API+Requests
2017-12-11T07:18:57
CC-MAIN-2017-51
1512948512584.10
[]
docs.wso2.com
LazyList¶ - class menpo.base. LazyList(callables)[source]¶ Bases: Sequence, Copyable An immutable sequence that provides the ability to lazily access objects. In truth, this sequence simply wraps a list of callables which are then indexed and invoked. However, if the callable represents a function that lazily access memory, then this list simply implements a lazy list paradigm. When slicing, another LazyList is returned, containing the subset of callables. copy()[source]¶ Generate an efficient copy of this LazyList - copying the underlying callables will be lazy and shallow (each callable will not be called nor copied) but they will reside within in a new list. index(value) → integer -- return first index of value.¶ Raises ValueError if the value is not present. - classmethod init_from_index_callable(f, n_elements)[source]¶ Create a lazy list from a callable that expects a single parameter, the index into an underlying sequence. This allows for simply creating a LazyList from a callable that likely wraps another list in a closure. - classmethod init_from_iterable(iterable, f=None)[source]¶ Create a lazy list from an existing iterable (think Python list) and optionally a callable that expects a single parameter which will be applied to each element of the list. This allows for simply creating a LazyList from an existing list and if no callable is provided the identity function is assumed. map(f)[source]¶ Create a new LazyList where the passed callable fwraps each element. fshould take a single parameter, x, that is the result of the underlying callable - it must also return a value. Note that mapping is lazy and thus calling this function should return immediately. Alternatively, fmay be a list of callable, one per entry in the underlying list, with the same specification as above. repeat(n)[source]¶ Repeat each item of the underlying LazyList ntimes. Therefore, if a list currently has Ditems, the returned list will contain D * nitems and will return immediately (method is lazy). Examples >>> from menpo.base import LazyList >>> ll = LazyList.init_from_list([0, 1]) >>> repeated_ll = ll.repeat(2) # Returns immediately >>> items = list(repeated_ll) # [0, 0, 1, 1]
http://docs.menpo.org/en/stable/api/base/LazyList.html
2017-12-11T07:24:37
CC-MAIN-2017-51
1512948512584.10
[]
docs.menpo.org
. 1 Commentcomments.show.hide Sep 21, 2012 Ondrej Zizka How can one use org.jboss.resteasy.annotations? It's marked jboss.api=private.
https://docs.jboss.org/author/display/AS71/JAX-RS+Reference+Guide
2017-12-11T07:24:34
CC-MAIN-2017-51
1512948512584.10
[]
docs.jboss.org
Designing and Implementing Reports Using Report Builder 1.0 Reporting Services, Report Builder leverages the full reporting platform to bring ad hoc reporting to all users. Report Builder can run on any platform that supports the Microsoft .NET Framework 2.0. If the .NET Framework 2.0 is not installed on the client computer, users will be prompted to install it. Report Builder Model Designer is the user application for defining, editing and publishing report models. Modelers can launch the Model Designer and start designing directly against a data source or they can auto-generate the model based on a set of predefined rules as the starting point for model design. Model Designer can generate models against SQL Server 2000 or later databases and Oracle databases running version 9.2.0.3 or later. Note Report models based on SQL Server 2005 or later Analysis Services databases are generated by using Report Manager or SharePoint Services. Integration with Reporting Services. Report Manager Integration The user interface for helping to secure and manage models and model items is integrated in Report Manager. Models are managed here similar to other Report Server items. Model security and management through Report Manager. New management APIs for models. In This Section See Also
https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008/ms159750%28v%3Dsql.100%29
2019-06-15T22:38:48
CC-MAIN-2019-26
1560627997501.61
[]
docs.microsoft.com
Functional Dependency Profile Request Options (Data Profiling Task) SQL Server, including on Linux Azure SQL Database Azure SQL Data Warehouse Parallel Data Warehouse Use the Request Properties pane of the Profile Requests page to set the options for the Functional Dependency Profile Request selected in the requests pane. A Functional Dependency profile reports the extent to which the values in one column (the dependent column) depend on the values in another column or set of columns (the determinant column). This profile can also help you identify problems in your data such as invalid values. For example, you profile the dependency between a Zip Code/Postal Code column and a United States state column. In this profile, the same Zip Code should always have the same state, but the profile discovers violations of the dependency. Note The options described in this topic appear on the Profile Requests page of the Data Profiling Task Editor. For more information about this page of the editor, see Data Profiling Task Editor (Profile Requests Page). For more information about how to use the Data Profiling Task, see Setup of the Data Profiling Task. For more information about how to use the Data Profile Viewer to analyze the output of the Data Profiling Task, see Data Profile Viewer. Understanding the Selection of Determinant and Dependent Columns A Functional Dependency Profile Request computes the degree to which the determinant side column or set of columns (specified in the DeterminantColumns property) determines the value of the dependent side column (specified in the DependentColumn property). For example, a United States state column should be functionally dependent on a United States Zip Code column. That is, if the Zip Code (determinant column) is 98052, the state (dependent column) should always be Washington. For the determinant side, you can specify a column or a set of columns in the DeterminantColumns property. For example, consider a sample table that contains columns A, B, and C. You make the following selections for the DeterminantColumns property: When you select the (*) wildcard, the Data Profiling task tests each column as the determinant side of the dependency. When you select the (*) wildcard and another column or columns, the Data Profiling task tests each combination of columns as the determinant side of the dependency. For example, consider a sample table that contains columns A, B, and C. If you specify (*) and column C as the value of the DeterminantColumns property, the Data Profiling task tests the combinations (A, C) and (B, C) as the determinant side of the dependency. For the dependent side, you can specify a single column or the (*) wildcard in the DependentColumn property. When you select (*), the Data Profiling task tests the determinant side column or set of columns against each column. Note If you select (*), this option might result in a large number of computations and decrease the performance of the task. However, if the task finds a subset that satisfies the threshold for a functional dependency, the task does not analyze additional combinations. For example, in the sample table described above, if the task determines that column C is a determinant column, the task does not continue to analyze the composite candidates. Request Properties Options For a Functional Dependency Profile Request, the Request Properties pane displays the following groups of options: Data, which includes the DeterminantColumns and DependentColumn options General Options Data Options ConnectionManager Select the existing ADO.NET connection manager that uses the .NET Data Provider for SQL Server (SqlClient) to connect to the SQL Server database that contains the table or view to be profiled. TableOrView Select the existing table or view to be profiled. DeterminantColumns Select the determinant column or set of columns. That is, select the column or set of columns whose values determine the value of the dependent column. For more information, see the sections, "Understanding the Selection of Determinant and Dependent Columns" and "DeterminantColumns and DependentColumn Options," in this topic. DependentColumn Select the dependent column. That is, select the column whose value is determined by the value of the determinant side column or set of columns. For more information, see the sections, "Understanding the Selection of Determinant and Dependent Columns" and "DeterminantColumns and DependentColumn Options," in this topic. DeterminantColumns and DependentColumn Options The following options are presented for each column selected for profiling in DeterminantColumns and in DependentColumn. For more information, see the section, "Understanding the Selection of Determinant and Dependent Columns," earlier in this topic. IsWildCard Specifies whether the (*) wildcard has been selected. This option is set to True if you have selected (*) to profile all columns. It is False if you have selected an individual column to be profiled. This option is read-only. ColumnName Displays the name of the selected column. This option is blank if you have selected (*) to profile all columns. This option is read-only. StringCompareOptions Select options for comparing string values. This property has the options listed in the following table. The default value of this option is Default. Note When you use the (*) wildcard for ColumnName, CompareOptions is read-only and is set to the Default setting. If you select DictionarySort, you can also select any combination of the options listed in the following table. By default, none of these additional options are selected. General Options RequestID Type a descriptive name to identify this profile request. Typically, you do not have to change the autogenerated value. Options ThresholdSetting Specify the threshold setting. The default value of this property is Specified. FDStrengthThreshold Specify the threshold (by using a value between 0 and 1) above which the functional dependency strength should be reported. The default value of this property is 0.95. This option is enabled only when Specified is selected as the ThresholdSetting. MaxNumberOfViolations Specify the maximum number of functional dependency violations to report in the output. The default value of this property is 100. This option is disabled when Exact is selected as the ThresholdSetting. See Also Data Profiling Task Editor (General Page) Single Table Quick Profile Form (Data Profiling Task)
https://docs.microsoft.com/en-us/sql/integration-services/control-flow/functional-dependency-profile-request-options-data-profiling-task?view=sql-server-2017
2019-06-15T23:55:04
CC-MAIN-2019-26
1560627997501.61
[]
docs.microsoft.com
sp_droprolemember (Transact-SQL) SQL Server (starting with 2008) Azure SQL Database Azure SQL Data Warehouse Parallel Data Warehouse Removes a security account from a SQL Server role in the current database. Important This feature is in maintenance mode and may be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work, and plan to modify applications that currently use this feature. Use ALTER ROLE instead. Transact-SQL Syntax Conventions Syntax Syntax for both SQL Server and Azure SQL Database sp_droprolemember [ @rolename = ] 'role' , [ @membername = ] 'security_account' Syntax for both Azure SQL Data Warehouse and Parallel Data Warehouse sp_droprolemember 'role' , 'security_account' Arguments [ . Return Code Values 0 (success) or 1 (failure) Remarks ALTER ROLE to add a member to a role. Permissions Requires ALTER permission on the role. Examples The following example removes the user JonB from the role Sales. EXEC sp_droprolemember 'Sales', 'Jonb'; Examples: Azure SQL Data Warehouse and Parallel Data Warehouse The following example removes the user JonB from the role Sales. EXEC sp_droprolemember 'Sales', 'JonB' See Also Security Stored Procedures (Transact-SQL) sp_addrolemember (Transact-SQL) sp_droprole (Transact-SQL) sp_dropsrvrolemember (Transact-SQL) sp_helpuser (Transact-SQL) System Stored Procedures (Transact-SQL) Feedback Send feedback about:
https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-droprolemember-transact-sql?view=sql-server-2017
2019-06-15T23:21:41
CC-MAIN-2019-26
1560627997501.61
[]
docs.microsoft.com
InkToolbar control for Windows Forms and WPF The InkToolbar control provides an interface to manage an InkCanvas for Windows Ink-based user interaction InkToolbar control. This control wraps an instance of the UWP Windows.UI.Xaml.Controls.InkToolbar control. Requirements Before you can use this control, you must follow these instructions to configure your project to support XAML Islands. Known issues and limitations See our list of known issues for WPF and Windows Forms controls in the Windows Community Toolkit repo. Syntax <Window x: <controls:InkToolbarCustomToolButton x: </controls:InkToolbar> Properties The following properties wrap corresponding properties of the wrapped UWP Windows.UI.Xaml.Controls.InkToolbar object. See the links in this table for more information about each property. Events The following events wrap corresponding events of the wrapped UWP Windows.UI.Xaml.Controls.InkToolbar object. See the links in this table for more information about each property. Requirements API source code Related topics Feedback Send feedback about:
https://docs.microsoft.com/en-us/windows/communitytoolkit/controls/wpf-winforms/inktoolbar
2019-06-15T22:57:57
CC-MAIN-2019-26
1560627997501.61
[array(['../../resources/images/controls/inkcanvas.png', 'InkToolbar example'], dtype=object) ]
docs.microsoft.com
Connecting to a Socket For a client to communicate on a network, it must connect to a server. To connect to a socket Call the connect function, passing the created socket and the sockaddr structure as parameters. Check for general errors. // Connect to server. iResult = connect( ConnectSocket, ptr->ai_addr, (int)ptr->ai_addrlen); if (iResult == SOCKET_ERROR) { closesocket(ConnectSocket); ConnectSocket = INVALID_SOCKET; } // Should really try the next address returned by getaddrinfo // if the connect call failed // But for this simple example we just free the resources // returned by getaddrinfo and print an error message freeaddrinfo(result); if (ConnectSocket == INVALID_SOCKET) { printf("Unable to connect to server!\n"); WSACleanup(); return 1; } The getaddrinfo function is used to determine the values in the sockaddr structure. In this example, the first IP address returned by the getaddrinfo function is used to specify the sockaddr structure passed to the connect. If the connect call fails to the first IP address, then try the next addrinfo structure in the linked list returned from the getaddrinfo function. The information specified in the sockaddr structure includes: - the IP address of the server that the client will try to connect to. - the port number on the server that the client will connect to. This port was specified as port 27015 when the client called the getaddrinfo function. Next Step: Sending and Receiving Data on the Client Related topics
https://docs.microsoft.com/en-us/windows/desktop/WinSock/connecting-to-a-socket
2019-06-15T23:29:10
CC-MAIN-2019-26
1560627997501.61
[]
docs.microsoft.com
Proxy Authentication IP Authentication The easiest and most secure authentication method is IP Authentication. To use it: - Add your IP from your account dashboard That's it! Now any requests you make to a proxy server from the IPs that you've added will successfully authenticate. Note: There is no limit on authenticated IPs. Although your dashboard form has room for only a few entries, once you submit an entry you can fill in the form again to add more. If for any reason IP authentication fails, it could be that your IP has changed. You can update your IPs again from your account dashboard. However, if your access is temporarily blocked because of too many 407 errors (indicating incorrect authentication), be aware that an IP update performed during a ban does not immediately lift the ban. You will need to wait until the ban expires, typically after 4 hours. IP Authentication for Multiple Accounts The system does not allow the same IP to be authenticated for multiple accounts. An attempt to do so will trigger a message that the IP is already in the system. An authenticated IP becomes a primary key in the ProxyMesh database and is not unique to a single account. With multiple accounts authorizing the same IP, a proxy server could not determine which account was making a request. Domain Authentication If you have a dynamic, frequently changing IP, then you may want to use domain authentication with a dynamic DNS service such as No-IP. Once you have dynamic DNS setup, you can add your domain on the same page where you add IP addresses. Domains are resolved every 10 minutes, causing a potential lag between the DNS IP change, and ProxyMesh's receipt of the update, but this method can work well for you. Username:Password Authentication The typical HTTP proxy authentication method is with the Proxy-Authorization header using the Basic access authentication method. Most HTTP client libraries support this authentication method. But if you need to create the header yourself, the steps are - Base64 encode your username:password - Send a header that looks like Proxy-Authorization: Basic base64-encoded-username:password HTTPS Authentication Although Python Requests supports username:password authentication with HTTPS URLs, most other libraries do not. The reason is that the Proxy-Authorization header must be sent with the initial CONNECT method, instead of with the rest of the request headers. Otherwise there is no way for the proxy server to read the header. For HTTPS requests, IP authentication is the most reliable method.
https://docs.proxymesh.com/article/10-proxy-authentication
2019-06-15T22:50:37
CC-MAIN-2019-26
1560627997501.61
[]
docs.proxymesh.com
Contents Now Platform Custom Business Applications Previous Topic Next Topic Server test step: Record Update Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Server test step: Record Update Changes field values on a record on the server. Note: To ensure that the changes were applied, follow this step with a Record ValidationCheck to enforce ACLs. TableThe table containing the record to be updated. Record ID for the record to update. ConditionsSpecific field values to set when the test runs this step. On this page Send Feedback Previous Topic Next Topic
https://docs.servicenow.com/bundle/jakarta-application-development/page/administer/auto-test-framework/reference/atf-record-update.html
2019-06-15T23:05:44
CC-MAIN-2019-26
1560627997501.61
[]
docs.servicenow.com
If there is less than 10% of free heap memory available during publication of item then Tigase pubsub component will trigger Garbage Collection and it there is still very little amount of free memory will slow down delivery of notifications for published items (waiting about 1 second before continuing). If you have assigned a lot of memory to Tigase XMPP Server or in your case this delay is not acceptable you can adjust it by pubsub component properties: - setting pubsub-high-memory-usage-levelto percentage of heap memory accepted as near OOM state - setting pubsub-low-memory-delayto number of milliseconds to wait to throttle delivery of notifications pubsub () { pubsub-high-memory-usage-level = 95 pubsub-low-memory-delay = 100 }
https://docs.tigase.net/tigase-server/snapshot/Administration_Guide/webhelp/_tune_handling_of_low_memory.html
2019-06-15T22:49:30
CC-MAIN-2019-26
1560627997501.61
[]
docs.tigase.net
Setup Face API registration Once you have your Azure sandbox subscription, you can add a Face API resource and obtain the Face API URL and key needed to connect to this API. Go to the Azure Cognitive Services Face Service and click on Try Face. Click on the Sign In button in the Existing Azure Account box. This will take you to the Azure Portal and open the Create Face dialog box. Enter the following information: - a name for the Azure Face resource Concierge Subscriptionfor the subscription - leave the location at its default value F0for pricing tier Sandbox Resource Groupfor the Resource Group Once the Azure resource has been deployed successfully, go to this resource and make a note of the API Endpoint and keys. You will need one of the keys to call the Face API. You can use either of the keys. Slack workspace To create a Slack command, you need administrator privileges for a Slack workspace. - If you already have a Slack workspace where you have admin privileges, you can use that. You can also create a brand new Slack workspace. Local setup You do most of the exercises in this module on your local machine, deploying to your Azure sandbox as the final step. Install Visual Studio Code If you don't have it already, download and install Visual Studio Code Add Azure Extensions to Visual Studio Code If you don't have them already, install these VS Code extensions: Install node and npm If you don't already have them installed locally on your machine, Install node and npm for your operating system. Clone the starter code Cloning the starter code will help you get the most out of this module. All of the code you need to complete the app, and some initial bootstrap code, is available for free to get you started. git clone You should be able to complete the project using the code provided, but if you can't figure something out, then you can find the completed code in the completedbranch. Change into the directory containing the cloned source repository, then install the required packages: cd mslearn-the-mojifier npm install Compile the TypeScript code into JavaScript You are going to write your app using TypeScript. TypeScript has support for type-checking and class definitions, which we will make use of in our Mojifier code. Node.js does not know how to run TypeScript, so as you develop, you'll need to convert your TypeScript code to JavaScript. The TypeScript compiler tscis installed when you run the npm installcommand above, and the package.jsonis configured to run it in the build stage. Run the following command: npm run build Keep this command running in a terminal shell. This watches for any changes to the TypeScript files and converts them to JavaScript files. If you aren't seeing the behavior you expect in the exercises, check the console output in the terminal window as there may be errors in your TypeScript code. Need help? See our troubleshooting guide or provide specific feedback by reporting an issue.
https://docs.microsoft.com/en-us/learn/modules/replace-faces-with-emojis-matching-emotion/3-setup
2019-06-15T22:47:19
CC-MAIN-2019-26
1560627997501.61
[]
docs.microsoft.com
@ MasterType Provides a way to create a strongly typed reference to the ASP.NET master page when the master page is accessed from the Master property. <%@ MasterType attribute="value" [attribute="value"...] %> Attributes - TypeName Specifies the type name for the master page. - VirtualPath Specifies the path to the file that generates the strong type. Remarks Use the @ MasterType directive to set the strong type for the master page, as accessed through the Master property. Note if VirtualPath is not defined, the type must exist in one of the currently linked assemblies, such as App_Bin or App_Code. If both attributes, TypeName and VirtualPath, are defined, the @ MasterType directive will fail. Example The following code example demonstrates how to set the virtual path to an ASP.NET master page. <%@ MasterType VirtualPath="~/masters/SourcePage.master”" %>
https://docs.microsoft.com/en-us/previous-versions/dotnet/netframework-2.0/ms228274%28v%3Dvs.80%29
2019-06-15T23:14:31
CC-MAIN-2019-26
1560627997501.61
[]
docs.microsoft.com
GraphicsPath::GetPointCount method The GraphicsPath::GetPointCount method gets the number of points in this path's array of data points. This is the same as the number of types in the path's array of point types. Syntax INT GetPointCount( ); Parameters This method has no parameters. Return Value Type: Type: INT This method returns the number of points in the path's array of data points.. Examples The following example creates a path that has one ellipse and one line. The code calls the GraphicsPath::GetPointCount method to determine the number of data points stored in the path. Then the code calls the GraphicsPath::GetPathPoints method to retrieve those data points. Finally, the code fills a small ellipse at each of the data points. VOID GetPointCountExample(HDC hdc) { Graphics graphics(hdc); // Create a path that has one ellipse and one line. GraphicsPath path; path.AddEllipse(10, 10, 200, 100); path.AddLine(220, 120, 300, 160); // Find out how many data points are stored in the path. INT count = path.GetPointCount(); // Draw the path points. SolidBrush redBrush(Color(255, 255, 0, 0)); PointF* points = new PointF[count]; path.GetPathPoints(points, count); for(INT j = 0; j < count; ++j) graphics.FillEllipse( &redBrush, points[j].X - 3.0f, points[j].Y - 3.0f, 6.0f, 6.0f); delete [] points; } Requirements See Also Constructing and Drawing Paths GraphicsPath::GetPathData GraphicsPath::GetPathTypes
https://docs.microsoft.com/en-us/windows/desktop/api/gdipluspath/nf-gdipluspath-graphicspath-getpointcount
2019-06-15T22:45:11
CC-MAIN-2019-26
1560627997501.61
[]
docs.microsoft.com
The state is the current processing status for the resource. For example, the normal state for a resource on the primary system is ISP – in service, protected. The normal state for a resource on the secondary system is OSU – out of service, unimpaired. It is recommended that you accept the default initial state. If you set the Initial State to OSU, you must manually bring the resource into service. Feedback Thanks for your feedback. Post your comment on this topic.
http://docs.us.sios.com/sps/8.6.1/en/topic/initial-state
2019-06-15T23:37:32
CC-MAIN-2019-26
1560627997501.61
[]
docs.us.sios.com
The Keeper Bridge is an enterprise-class service application that supports the ability to automatically sync Nodes, Users, Roles and Teams to your Keeper Enterprise account from an Active Directory or LDAP service. To activate and install the Keeper Bridge, follow the below steps: Login to the Admin Console and turn on Show Node Structure from Configurations Create a Node to sync with your Active Directory Visit the Provisioning tab and select "Add Method" and select Active Directory Sync. Download the Keeper Bridge and proceed with setup. Keeper Bridge supports single and multi-domain, multiple forest domains and other complex environments. The Bridge also supports high-availability mode and a variety of custom configuration options based on your AD/LDAP environment. The Keeper AD Bridge Guide documents the full setup process. The Keeper Bridge does not authenticate users into their vault with their Active Directory password. For seamless user authentication, consider our Keeper SSO Connect add-on as described in the next section which authenticates against Active Directory via AD FS. Automated Team provisioning requires the Keeper Administrator to authenticate on the Keeper Bridge. The Bridge will poll for users who have created their Keeper account after invitation, then the Bridge will encrypt the Team Key with the user's public key, and distribute the Team Key to the user. Once any member of the team logs into the Vault, all members of that team are approved. Once the Active Directory Bridge is syncing, we recommend not making manual user or team changes directly on the Admin Console. Delegate all user and team provisioning to the bridge through Active Directory. Role enforcement policy changes should still be made on the Admin Console. For detailed Bridge setup and install instructions see the Keeper Bridge Guide.
https://docs.keeper.io/enterprise-guide/user-and-team-provisioning/syncing-active-directory-or-ldap
2019-06-15T23:15:07
CC-MAIN-2019-26
1560627997501.61
[]
docs.keeper.io
Contents Now Platform Capabilities Previous Topic Next Topic Share a Visual Task Board in a Connect conversation Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Share a Visual Task Board in a Connect conversation You can share a Visual Task Board in a Connect Chat or Connect Support conversation. Before you beginRole required: none Procedure Navigate to Self-Service > Visual Task Boards. Drag a task board to a Connect mini window. A link to the task board appears in the conversation. The task board is also listed in the conversation tools, which are visible in the Connect workspace. Only conversation members who are members of the board can access it. If you share a task board in a record conversation, it appears as a URL in the record activity stream. Related ConceptsConnectVisual Task Boards On this page Send Feedback Previous Topic Next Topic
https://docs.servicenow.com/bundle/istanbul-servicenow-platform/page/use/collaboration/task/t_ShareVTB.html
2019-06-15T23:14:08
CC-MAIN-2019-26
1560627997501.61
[]
docs.servicenow.com
Contents Security Operations Previous Topic Next Topic Check Session Status activity Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Check Session Status activity Determines the status of a Carbon Black session within the workflow. The Check Session Status activity can be used with any workflow to check Carbon Black session status within the workflow. Results Possible results for this activity are: Table 1. Results Result Description Success Session status returned. Failure Session status error. More error information is available in the activity output error. Input variables Input variables determine the initial behavior of the activity. Variable Description api_token Carbon Black API key. endpoint_base Base URL of the Carbon Black API. session_id Session Information on Carbon Black running processes. Capability Execution Tracking - Failure activityThe Capability Execution Tracking - Failure workflow activity records a failure to the audit record. On this page Send Feedback Previous Topic Next Topic
https://docs.servicenow.com/bundle/kingston-security-management/page/product/security-operations-integrations/reference/check-session-status-activity.html
2019-06-15T23:06:30
CC-MAIN-2019-26
1560627997501.61
[]
docs.servicenow.com
To' } } Note Because some clients do not refresh a registration form after an unsuccessful attempt, this option allows 3 retries with the same CAPTCHA. 3 unsuccessful attempts will result in the captcha being invalidated and a client will receive an error message.
https://docs.tigase.net/tigase-server/snapshot/Administration_Guide/webhelp/XEP0077CAPCHA.html
2019-06-15T22:38:41
CC-MAIN-2019-26
1560627997501.61
[]
docs.tigase.net
ProvisionByoipCidr. Request Parameters The following parameters are for this specific action. For more information about required and optional parameters that are common to all actions, see Common Query Parameters. - Cidr The public IPv4 address range, in CIDR notation. The most specific prefix that you can specify is /24. The address range cannot overlap with another address range that you've brought to this or another Region. Type: String Required: Yes - CidrAuthorizationContext A signed document that proves that you are authorized to bring the specified IP address range to Amazon using BYOIP. Type: CidrAuthorizationContext object Required: No - Description A description for the address range and the address pool. Type: String Required: No - Errors. See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_ProvisionByoipCidr.html
2019-06-15T23:38:02
CC-MAIN-2019-26
1560627997501.61
[]
docs.aws.amazon.com
Amazon Corretto 11 Installation Instructions for macOS 10.13 or later This topic describes how to install and uninstall Amazon Corretto 11 on a host running the Mac OS version 10.13 or later. You must have administrator permissions to install and uninstall Amazon Corretto 11. Install Amazon Corretto 11 Download the Mac .pkgfile from the Downloads page. Double-click the downloaded file to begin the installation wizard and follow the steps in the wizard. Once the wizard completes, Amazon Corretto 11 is installed in /Library/Java/JavaVirtualMachines/. You can run the following command in a terminal to get the complete installation path. /usr/libexec/java_home --verbose Run the following command in the terminal to set the JAVA_HOMEvariable to the Amazon Corretto 11 version of the JDK. If this was set to another version previously, it is overridden. export JAVA_HOME=/Library/Java/JavaVirtualMachines/amazon-corretto-11.jdk/Contents/Home Uninstall Amazon Corretto 11 You can uninstall Amazon Corretto 11 by running the following commands in a terminal. cd /Library/Java/JavaVirtualMachines/ sudo rm -rf amazon-corretto-11.jdk
https://docs.aws.amazon.com/corretto/latest/corretto-11-ug/macos-install.html
2019-06-15T23:29:04
CC-MAIN-2019-26
1560627997501.61
[]
docs.aws.amazon.com
Contents Now Platform Administration Previous Topic Next Topic Zing removes stop words from queries Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Zing removes stop words from queries Remove common words from search queries that do not produce meaningful results. Stop words are common words that are not indexed because they are not meaningful in search results. Articles, conjunctions, personal pronouns, and prepositions are examples of stop words that are not used in keyword searches. Administrators can configure stop words for all indexed tables and for specific tables. By default, the system maintains two types of stop words. Table 1. Types of stop words Stop word type Description System-wide text index stop words The system always ignores system-wide text index stop words when generating text indexes. Any search for a system-wide stop word returns no search results. Table-specific stop words The system uses the table-specific Text Index record to determine whether to index the stop word or to just remove it from keyword search queries against the table. By default, the system has stops words for common English words. Search administrators typically create stop words from search terms that produce too many search results such as articles, conjunctions, personal pronouns, and prepositions. Configure a global stop wordConfigure stop words that should not be indexed by the search.Configure a table-specific stop wordYou can configure stop words for a specific table.Enable automatic stop words for a tableThe system can identify and generate stop words when a search term exceeds an occurrence threshold.Related ConceptsAvailable search optionsZing generates search results in four phasesZing filters search results with access controlsZing computes document scores using three componentsZing indexes wordsZing can include attachments in search resultsZing matches derived words with stemmingRelated ReferenceFeatures of Zing text indexing and search engineInstalled with Zing On this page Send Feedback Previous Topic Next Topic
https://docs.servicenow.com/bundle/helsinki-platform-administration/page/administer/search-administration/concept/stop-words-removed-from-queries.html
2019-06-15T23:02:03
CC-MAIN-2019-26
1560627997501.61
[]
docs.servicenow.com
. Is there a chance that it will be possible in the future to append more than one cronjob to one alert?
https://docs.splunk.com/Documentation/Splunk/7.2.6/Alert/CronExpressions
2019-06-15T22:53:57
CC-MAIN-2019-26
1560627997501.61
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Release notes and notices This section provides information about what is new or changed in TrueSight Smart Reporting for Server Automation, including urgent issues, documentation updates, feature packs, and fix packs. Tip - To stay informed of changes to this list, click the this page.icon on the top of Ready-made PDFs are available on the PDFs page. You can also create a custom PDF.Click here to see the steps Was this page helpful? Yes No Submitting... Thank you
https://docs.bmc.com/docs/tssrsa/1901/release-notes-and-notices-862843470.html
2021-04-10T15:17:16
CC-MAIN-2021-17
1618038057142.4
[]
docs.bmc.com
Cardano tracking tools¶ Since Cardano is a public blockchain ledger, it is possible to easily track all recent transactions, block details, and epoch data using different tools. Exploring transactions and blocks¶ Cardano Explorer Cardano Explorer is a user-oriented tool that fetches data from the main database and reflects it in a straightforward and convenient web interface. The Explorer shows the latest epoch details. You can click the latest epoch and see: a number of blocks produced during this epoch time the epoch started time of last produced block number of processed transactions total output in ada Figure 1. Latest epoch summary By choosing a specific block, you can explore it in more detail to see its ID, size, epoch and block details, number of included transactions and confirmations: Figure 2. Block summary You can also search for specific epochs, transactions or blocks by pasting their IDs in the search field. Here is a list of other explorers to consider: Exploring assets¶ Cardano supports multi-asset creation and management. To see a list of created assets and tokens, you can use these tools: Exploring stake pools¶ To find a list of all registered stake pools, their tickers, pool names, and IDs, you can use these tools: Note: IOHK has developed a stake pool metadata aggregation server (SMASH) to provide the community with a list of verified stake pools with valid metadata. Smash is integrated with the Daedalus wallet, and users can see a list of valid stake pools in the delegation center tab.
https://docs.cardano.org/en/latest/explainers/getting-started-with-cardano/tools.html
2021-04-10T14:42:17
CC-MAIN-2021-17
1618038057142.4
[array(['../../_images/latest_epoch_summary.png', 'epoch_summary'], dtype=object) array(['../../_images/block_summary.png', 'block_summary'], dtype=object)]
docs.cardano.org
It gives individuals the right to access, edit and delete personal information stored by organisations. Individuals also have the right to be informed about the purpose of the data collection, how long the information will be stored, who has access to the data, and when it was last edited. Organisations that store personal data should also be able to export the data in a machine readable format. In order to support this Litium has introduced new functionality in version 4.8 and higher. In addition to the new features listed below, the functionality can be extended by changing the business logic for finding, filtering and exporting personal data. In Litium, this type of information is primarily stored in the Sales and Customers areas. For maintenance purposes historical orders can be deleted automatically based on the age and state of the order. This is done through a scheduled task that you set up in web.config in the solution. The orders that should be deleted can be specified in back office via Settings > Sales > Delete old orders. The following line in the taskSettings section in web.config will delete the specified orders once every day. <scheduledTask type="Litium.Foundation.Modules.ECommerce.Orders.OrderCleanupScheduler, Litium.Studio" startTime="00:40" interval="1d" /> The orders defined in the scheduled task will not be deleted through the GDPR features in the Sales and Customers areas. The personal information related to the orders will not be deleted when the scheduled task is run. Only the orders will be deleted. This is because the customer might have placed newer orders than the ones that are deleted through the scheduled task. If you want to extend the functionality to also delete customers that do not have any registered orders you can do one of the following: A person and all orders related to that person can be deleted. The function can be extended by the project to also be able to delete personal information in third party systems. Deletion of personal data will store information outside the database to be able to trigger deletion of the same person and orders in case a database restore occurs. Personal data can be exported from back office in JSON format. The export includes the personal data of the person object and any orders placed by the person. The function can be extended by the project to also be able to include personal information in third party systems. Extra audit information is added for all logins that are executed where the person has administration access to customers or orders. To view the audit information you can run the following SQL statement on the database: SELECT AT.TransactionDateTimeUtc, AT.IdentityName FROM Auditing.AuditTransactionItem ATI INNER JOIN Auditing.AuditTransaction AT ON AT.SystemId = ATI.AuditTransactionSystemId WHERE ATI.EntityType = 'Litium.Security.AuthenticationService, Litium.Abstractions' ORDER BY AT.TransactionDateTimeUtc There are two ways to delete a person and all orders related to that person manually in back office. You can't delete several people at the same time in back office: or Note that the deletion feature can be customised in the project and hence work differently in different projects. Below you can find the regulation in its entirety. The features above relate to articles 15, 16, 17 and 20 in particular. About Litium Join Litium Support System status
https://docs.litium.com/documentation/architecture/gdpr-support
2021-04-10T14:55:21
CC-MAIN-2021-17
1618038057142.4
[]
docs.litium.com
Conversations are a discussion forum for your Collective. They are public threads that everyone can read and participate to. Use them to get feedback from your community or to organize your actions! Conversations are enabled by default for new Collectives. If you want to enable conversations for an old Collective or an organization, go to Collective settings > Conversations. As a collective admin, you can delete any comment submitted in a conversation. You can also delete a conversation by deleting its root comment: Who can see conversations? All conversations are public. Anyone can see them and respond to them. Who is moderating? The administrators of this collective can remove conversations that are not appropriate for the community. Please be a good citizen of the collective. How can I find out when someone replied? You will receive an email notification whenever someone replies. You can unsubscribe from those notifications at any time.
https://docs.opencollective.com/help/collectives/conversations
2021-04-10T14:58:35
CC-MAIN-2021-17
1618038057142.4
[]
docs.opencollective.com
components. Indexer Splunk indexers provide data processing and storage for local and remote data and host the primary Splunk data store. See How indexing works in the Managing Indexers and Clusters manual for more information. Search head A search head is a Splunk Enterprise instance that distributes searches to indexers (referred to as "search peers" in this context). Search heads can be either dedicated or not, depending on whether they also perform indexing. Dedicated search heads don't have any indexes of their own, other than the usual internal indexes. Instead, they consolidate and display results that originate from remote search peers. To configure a search head to search across a pool of indexers, see What is distributed search in the Distributed Search Manual Forwarder Forwarders are Splunk instances that forward data to remote indexers for data processing and storage. In most cases, they do not index data themselves. See the About forwarding and receiving topic in the Forwarding Data manual. Deployment server A Splunk Enterprise instance can also serve as a deployment server. The deployment server is a tool for distributing configurations, apps, and content updates to groups of Splunk Enterprise instances. You can use it to distribute updates to most types of Splunk components: forwarders, non-clustered indexers, and non-clustered search heads. See About deployment server and forwarder management in the Updating Splunk Enterprise Instances manual. Functions at a glance Index replication and indexer clusters An indexer cluster is a group of indexers configured to replicate each others' data, so that the system keeps multiple copies of all data. This process is known as index replication. By maintaining multiple, identical copies of data, indexer clusters prevent data loss while promoting data availability for searching. Splunk Enterprise clusters feature automatic failover from one indexer to the next. This means that, if one or more indexers fail, incoming data continues to get indexed and indexed data continues to be searchable. In addition to enhancing data availability, clusters have other features that you should consider when you are scaling a deployment, for example, a capability to coordinate configuration updates easily across all indexers in the cluster. Clusters also include a built-in distributed search capability. See About!
https://docs.splunk.com/Documentation/Splunk/8.1.2/Capacity/ComponentsofaSplunkEnterprisedeployment
2021-04-10T14:46:54
CC-MAIN-2021-17
1618038057142.4
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
The scope of a function - where it is "visible" and may be called from - can be altered with one of the following attributes after the NAME attribute in its declaration, in decreasing order of visibility: PUBLICA PUBLICfunction is visible everywhere - to the file it is declared in, to other linked-in modules or scripts (see here for a discussion of library modules), and to users, i.e. it may be the start function for a script. EXPORTAn EXPORTfunction (not to be confused with the EXPORTdirective, here) is visible to the file it is declared in, and to other linked-in modules or scripts. However it is not visible to the user, and therefore cannot be the start function. The EXPORTattribute is used in library modules to make sensitive functions available to other scripts but not to the outside world. The EXPORTattribute is available in version 2.6.936300000 19990902 and later. PRIVATEA PRIVATEfunction is visible only to the file it is declared in. It cannot be a start function, nor can other linked modules or scripts see it. Indeed other modules could redeclare their own distinct function with the same name. An attempt to call a function outside its scope will have the same result as if the function doesn't exist. For example, trying to enter a script at a PRIVATE or EXPORT function will start at main instead. PRIVATE functions provide a measure of security by preventing web users from entering a script at an unintended point. For example, a function such as this: <A NAME=deluser PRIVATE> <SQL NOVARS "delete from users where User = $User"> </SQL> User $User was deleted. </A> could be dangerous if invoked by the user at a point not controlled by the script: the $User variable might not have been verified. For similar reasons, all user and builtin functions are inherently PRIVATE. However, the script function main must always be PUBLIC, as it is the default start point. If a function does not have its scope declared, Vortex will try to default it to PRIVATE, as an additional security measure. However, this is not always possible, for back-compatibility reasons. Thus it is wise to declare explicitly the scope of all functions, and to use the lowest scope possible (e.g. PUBLIC only if specifically required). A function is PRIVATE if the following is true: PRIVATE, or otherwise it is PUBLIC. These arcane rules maintain back-compatibility with Vortex versions prior to 2.1.895000000 19980513, where all script functions were PUBLIC (and had no parameters). Again, it's easier to simply always declare function scopes explicitly.
https://docs.thunderstone.com/site/vortexman/function_scope.html
2021-04-10T15:21:30
CC-MAIN-2021-17
1618038057142.4
[]
docs.thunderstone.com
Currently available for NetSuite integrations, Celigo NetSuite Integrator 2.0 is built as a SuiteApp on SuiteCloud Development Framework (SDF). This technology upgrade offers several advantages over the NetSuite bundles that are coded for the popular but less robust SuiteScript v1.0. Contents - What’s new - Install the SuiteApp - Apply SuiteScript 2.0 to a NetSuite export - Apply SuiteScript 2.0 to a NetSuite import - Update hooks What’s new Customers who switch to NetSuite Integrator 2.0 will have access to the following features: - You’re now able to write hooks in SuiteScript 2.0, when implementing Celigo Integrator 2.0 - Faster SuiteApp upgrades - Native functionality that takes advantage of the new NetSuite SDF architecture, now and going forward - Support for newer NetSuite record types that might be supported only in SuiteScript 2.0 - Unlimited dynamic lookups for NetSuite imports – that is, you won’t have to worry about exceeding SuiteScript point restrictions in common import operations Known differences SuiteScript 2.0 differences may have a slight effect on your current integration logic: - Checkboxes have the standard value of true or false (instead of the earlier "T" or "F") - SuiteScript hooks have a slightly different sequence Beta release limitations We are committed to an aggressive timeline to be able to offer you integration features and performance at least comparable with the NetSuite bundles. For the initial rollout, NetSuite Integrator 2.0 does not support the following functions: - Subrecords on import (sublists work as expected) - NetSuite realtime (listener) exports - Automatic migration – you must follow the steps below - Integration Apps (options available only to custom flows) We will continue to update this page with fully tested and released enhancements. Keep in mind that the above limitations would work as expected in SuiteScript 1.0. Install the SuiteApp At present, Celigo NetSuite Integrator 2.0 relies on the integrator.io NetSuite (SuiteScript 1.0) bundle to return additional information. Before you begin, install the current NetSuite bundle (20038). To install any SuiteApp, you must be signed in to NetSuite as an administrator. Then, you can choose either installation method, simply as a matter of convenience: When creating an export or import (see below) using SuiteScript 2.0, integrator.io will try to confirm whether Celigo Integrator on SuiteScript 2.0 SuiteApp is already installed in your NetSuite account. If it cannot be confirmed, you are presented with a hyperlink to suggest you install the SuiteApp. Once logged into NetSuite, you will be redirected you to the page below. Click Install to proceed. Apply SuiteScript 2.0 to a NetSuite export - Open a new export from NetSuite to modify its settings. - Expand the Advanced section. - For the NetSuite API version, select SuiteScript 2.0 (beta). - Save the export, and verify that any SuiteScript hooks in your integration are written for the corresponding version (SuiteScript 1.0 and 2.0 are incompatible). Tip: Depending on your flows’ requirements, you may apply different SuiteScript versions to each flow step and they will work correctly. For example, most of your imports may stay at the default SuiteScript 1.0, while another import is set to SuiteScript 2.0. Or, an export using SuiteScript 2.0 may send records to an import using SuiteScript 1.0. Apply SuiteScript 2.0 to a NetSuite import - Open a new import to NetSuite, and modify its settings. - Expand the Advanced section. - For the NetSuite API version, select SuiteScript 2.0 (beta). - Save the import, and verify that any SuiteScript hooks in your integration are written for the corresponding version (SuiteScript 1.0 and 2.0 are incompatible). Updates to integrator.io SuiteScript hooks Your integration hooks will differ slightly after you upgrade to NetSuite Integrator 2.0. integrator.io SuiteScript hooks (for NetSuite SuiteScript 2.0) can be written in SuiteScript 2.0 and can use completely native NetSuite modules. Please sign in to leave a comment.
https://docs.celigo.com/hc/en-us/articles/360050643132-Celigo-NetSuite-Integrator-SuiteApp-2-0-beta-
2021-04-10T14:20:00
CC-MAIN-2021-17
1618038057142.4
[array(['/hc/article_attachments/360076123591/ss2-install-5.png', None], dtype=object) array(['/hc/article_attachments/360073313351/SuiteScript2-export.png', None], dtype=object) array(['/hc/article_attachments/360073046612/SuiteScript2.png', None], dtype=object) ]
docs.celigo.com
Using services in a flow); ServiceHub.cordaServiceshould not be called during initialisation of a flow and should instead be called in line where needed or set after the flow’s callfunction has been triggered. Starting Flows from a Service.
https://docs.corda.net/docs/corda-enterprise/4.6/cordapps/api-service-classes.html
2021-04-10T14:58:09
CC-MAIN-2021-17
1618038057142.4
[]
docs.corda.net
The Purpose of an Enterprise Architecture Framework Can. The purpose of a model is support a given reasoning process: - We want to guide thought in ourselves. - We want to guide thought in others. - We want to answer a question asked of us. - We want to examine the results of asking a question that we find particularly interesting.. The Purpose of a Framework. And the Credit goes to….
https://docs.microsoft.com/en-us/archive/blogs/nickmalik/the-purpose-of-an-enterprise-architecture-framework
2021-04-10T15:44:05
CC-MAIN-2021-17
1618038057142.4
[]
docs.microsoft.com
UIElement. Visibility Property Definition Gets or sets the visibility of a UIElement. A UIElement that is not visible is not rendered and does not communicate its desired size to layout. Equivalent WinUI property: Microsoft.UI.Xaml.UIElement.Visibility. public: property Visibility Visibility { Visibility get(); void set(Visibility value); }; Visibility Visibility(); void Visibility(Visibility value); public Visibility Visibility { get; set; } var visibility = uIElement.visibility; uIElement.visibility = visibility; Public Property Visibility As Visibility <uiElement Visibility="Visible"/> -or- <uiElement Visibility="Collapsed"/> Property Value A value of the enumeration. The default value is Visible.. <VisualState x: <Storyboard> <ObjectAnimationUsingKeyFrames Storyboard. <DiscreteObjectKeyFrame KeyTime="0" Value="Visible"/> </ObjectAnimationUsingKeyFrames> </Storyboard> </VisualState>.
https://docs.microsoft.com/en-us/uwp/api/windows.ui.xaml.uielement.visibility?view=winrt-19041
2021-04-10T14:38:08
CC-MAIN-2021-17
1618038057142.4
[]
docs.microsoft.com
Best practices for basic scheduler features in Azure Kubernetes Service (AKS) As you manage clusters in Azure Kubernetes Service (AKS), you often need to isolate teams and workloads. The Kubernetes scheduler lets you control the distribution of compute resources, or limit the impact of maintenance events. This best practices article focuses on basic Kubernetes scheduling features for cluster operators. In this article, you learn how to: - Use resource quotas to provide a fixed amount of resources to teams or workloads - Limit the impact of scheduled maintenance using pod disruption budgets - Check for missing pod resource requests and limits using the kube-advisortool Enforce resource quotas Best practice guidance Plan and apply resource quotas at the namespace level. If pods don't define resource requests and limits, reject the deployment. Monitor resource usage and adjust quotas as needed. Resource requests and limits are placed in the pod specification. Limits are used by the Kubernetes scheduler at deployment time to find an available node in the cluster. Limits and requests work at the individual pod level. For more information about how to define these values, see Define pod resource requests and limits To provide a way to reserve and limit resources across a development team or project, you should use resource quotas. These quotas are defined on a namespace, and can be used to set quotas on the following basis: - Compute resources, such as CPU and memory, or GPUs. - Storage resources, including the total number of volumes or amount of disk space for a given storage class. - Object count, such as maximum number of secrets, services, or jobs can be created. Kubernetes doesn't overcommit resources. Once your cumulative resource request total passes the assigned quota, all further deployments will be unsuccessful. When you define resource quotas, all pods created in the namespace must provide limits or requests in their pod specifications. If they don't provide these values, you can reject the deployment. Instead, you can configure default requests and limits for a namespace. The following example YAML manifest named dev-app-team-quotas.yaml sets a hard limit of a total of 10 CPUs, 20Gi of memory, and 10 pods: apiVersion: v1 kind: ResourceQuota metadata: name: dev-app-team spec: hard: cpu: "10" memory: 20Gi pods: "10" This resource quota can be applied by specifying the namespace, such as dev-apps: kubectl apply -f dev-app-team-quotas.yaml --namespace dev-apps Work with your application developers and owners to understand their needs and apply the appropriate resource quotas. For more information about available resource objects, scopes, and priorities, see Resource quotas in Kubernetes. Plan for availability using pod disruption budgets Best practice guidance To maintain the availability of applications, define Pod Disruption Budgets (PDBs) to make sure that a minimum number of pods are available in the cluster. There are two disruptive events that cause pods to be removed: Involuntary disruptions Involuntary disruptions are events beyond the typical control of the cluster operator or application owner. Include: - Hardware failure on the physical machine - Kernel panic - Deletion of a node VM Involuntary disruptions can be mitigated by: - Using multiple replicas of your pods in a deployment. - Running multiple nodes in the AKS cluster. Voluntary disruptions Voluntary disruptions are events requested by the cluster operator or application owner. Include: - Cluster upgrades - Updated deployment template - Accidentally deleting a pod Kubernetes provides pod disruption budgets for voluntary disruptions,letting you plan for how deployments or replica sets respond when a voluntary disruption event occurs. Using pod disruption budgets, cluster operators can define a minimum available or maximum unavailable resource count. If you upgrade a cluster or update a deployment template, the Kubernetes scheduler will schedule extra pods on other nodes before allowing voluntary disruption events to continue. The scheduler waits to reboot a node until the defined number of pods are successfully scheduled on other nodes in the cluster. Let's look at an example of a replica set with five pods that run NGINX. The pods in the replica set are assigned the label app: nginx-frontend. During a voluntary disruption event, such as a cluster upgrade, you want to make sure at least three pods continue to run. The following YAML manifest for a PodDisruptionBudget object defines these requirements: apiVersion: policy/v1beta1 kind: PodDisruptionBudget metadata: name: nginx-pdb spec: minAvailable: 3 selector: matchLabels: app: nginx-frontend You can also define a percentage, such as 60%, which allows you to automatically compensate for the replica set scaling up the number of pods. You can define a maximum number of unavailable instances in a replica set. Again, a percentage for the maximum unavailable pods can also be defined. The following pod disruption budget YAML manifest defines that no more than two pods in the replica set be unavailable: apiVersion: policy/v1beta1 kind: PodDisruptionBudget metadata: name: nginx-pdb spec: maxUnavailable: 2 selector: matchLabels: app: nginx-frontend Once your pod disruption budget is defined, you create it in your AKS cluster as with any other Kubernetes object: kubectl apply -f nginx-pdb.yaml Work with your application developers and owners to understand their needs and apply the appropriate pod disruption budgets. For more information about using pod disruption budgets, see Specify a disruption budget for your application. Regularly check for cluster issues with kube-advisor Best practice guidance Regularly run the latest version of kube-advisoropen source tool to detect issues in your cluster. If you apply resource quotas on an existing AKS cluster, run kube-advisorfirst to find pods that don't have resource requests and limits defined. The kube-advisor tool is an associated AKS open source project that scans a Kubernetes cluster and reports identified issues. kube-advisor proves useful in identifying pods without resource requests and limits in place. While the kube-advisor tool can report on resource request and limits missing in PodSpecs for Windows and Linux applications, the tool itself must be scheduled on a Linux pod. Schedule a pod to run on a node pool with a specific OS using a node selector in the pod's configuration. Tracking pods without set resource requests and limits in an AKS cluster hosting multiple development teams and applications can be difficult. As a best practice, regularly run kube-advisor on your AKS clusters, especially if you don't assign resource quotas to namespaces. Next steps This article focused on basic Kubernetes scheduler features. For more information about cluster operations in AKS, see the following best practices:
https://docs.microsoft.com/nb-no/azure/aks/operator-best-practices-scheduler
2021-04-10T15:04:26
CC-MAIN-2021-17
1618038057142.4
[]
docs.microsoft.com
An Organization is a profile that represents a company or entity instead of an individual. Companies that become Financial Contributors, as well as legal entities that are Fiscal Hosts, are Organizations on Open Collective. Have your company show up as a Financial Contributor to Collectives Enable your employees to support Collectives on behalf of your company Make bulk transfers so you can send money once and distribute it to Collectives as you wish Go to your profile menu (top right) and look for the My Organizations section. Click "+". Once set up, you will be able to select your individual or organization profile when making a contribution. Go through the process of contributing to a Collective, and you'll be able to create both the individual and organization profile during checkout. Use the cog icon next to the organization that you want to edit. From there you can change your Info, add or remove Team members,. Sometimes, the Manage contributions button can be nested inside the Actions menu
https://docs.opencollective.com/help/financial-contributors/organizations
2021-04-10T14:23:53
CC-MAIN-2021-17
1618038057142.4
[]
docs.opencollective.com
Changes related to "Resources" This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold. 19 March 2021 - (diff | hist) . . Template:CheahaTflops; 15:42 . . (-1) . . [email protected] (Talk | contribs) (update ghz) - (diff | hist) . . Template:CheahaTflops; 15:41 . . (-594) . . [email protected] (Talk | contribs) (Update tables for new generations; remove retired generations.)
https://docs.uabgrid.uab.edu/w/index.php?title=Special:RecentChangesLinked&hideminor=1&days=30&limit=500&target=Resources
2021-04-10T14:59:56
CC-MAIN-2021-17
1618038057142.4
[]
docs.uabgrid.uab.edu
9.0.006.07 Genesys Knowledge Center Plug-in for Workspace Desktop Edition Release Notes Helpful Links Releases Info Product Documentation Genesys Products What's New This release contains the following new feature and enhancement: - The Genesys Knowledge Center (GKC) now enables you to enable/disable Proactive Knowledge in the Chat feature associated with the Genesys Knowledge plugin for WDE. (GK-8425) Resolved Issues This release contains the following resolved issues: When working in an environment with Genesys Knowledge plugin for WDE, the Category tree loading speed and the speed and stability of Category tree browsing has significantly improved. Previously, loading speed took longer than expected and documentation hierarchy was not available in the Browse tab. (GK-8420, GK-8416) Upgrade Notes No special procedure is required to upgrade to release 9.0.006.07. This page was last edited on January 21, 2020, at 10:58.
https://docs.genesys.com/Documentation/RN/latest/gkc-pl-wde90rn/gkc-pl-wde9000607
2021-04-10T16:07:08
CC-MAIN-2021-17
1618038057142.4
[]
docs.genesys.com
This is the guide for building a sample ERP connector integration application. ERP Connect integration application would connect to a Litium server instance, through web api registration. The process is described below. ERP Connector Sample demo integration application and sample postman collection could be download in Litium github here >>. This sample integration application would be running its own life-cycle as a seperate web application. Also please note that the samples in litium github are not maintained and updated when new versions of Litium are released. In web.config we could config the endpoint for Litium Connect server, here is an example appSettings section in the demo application to connect to Litium Connect server: <appSettings> <add key="MS_WebHookReceiverSecret_Litium" value="<Secret value>" /> <add key="WebHookSelfRegistrationCallbackHost" value="<Application call back url>"/> <add key="WebHookSelfRegistrationHost" value="<Litium Server instance>"/> <add key="WebHookSelfRegistrationClientId" value="<ServiceAccount username>"/> <add key="WebHookSelfRegistrationClientSecret" value="<ServiceAccount password>"/> </appSettings> ERP Connector application sample has a build in auto registration (file name Global.asax). These code will be executed at startup and will try to subscribe to OrderConfirmed event for a given host using the info we added in step #1. Optional: One may manually register/ double check the webhooks registration using web api endpoints as described here >>. Any error that happens during the seft-registration process will be logged in src\ErpDemo.log file. Litium Connect server uses Asp.net Webhook (more info here) to broacast messages when an given event is triggered. Erp connector application is acting as a webhook recever so we need to follow the same pattern to receive and parse the message. Litium Receiver implement WebHookReceiver and has ReceiveAsync as the main processing method. If the incoming message to receiver is an Get request , it should be an webhook Verification request and the receiver must echo the message back to confirm what was just received. If the incoming message was a Post request then that was an webhook update from server, the code will trigger Litium Handler to process the message. Litium Handler is defined to process the message which was sent over from Connect server, currently we wrote all the raw data into log files that looks like this: Litium use OpenAPI Specification to define the Litium Api so you can generate the client code easily. You can go to Litium server to get the new specification files Or you can use the existing one which are included to Litium.SampleApps.Erp project. And then go to the Service References to add/update the options for generated code. The generated code will be generated under obj folder There's also another way to generate the client code by using NSwagStudio. More detail could be found here. The application could be tested by importing the sample query collection to Postman, which could be dowloaded here >>. For setting up postman, please see this article. There are 12 steps in the postman collection, which will calling to application to do these steps: 1: Retrieve the order, to run this step order externalId must be provided. We could try to book an order on Litium Connect for any two items with the quality = 2 each and copy the order number to Postman variable: 2: Create partial shipment: Create a partial shipment for the first two items of the order. Shipping quality = 1 for each item. 3: Notify the first shipment delivered: Update the first shipment status to delivered. This will make Connect raise ReadyToShip event. 4: Build Rma from return split: Start return process to return the first item in the delivered shipment. More info on sales return management process could be found here>>. 5 to 7: Change Rma states all the way to Approved. 8 and 9: Retrieve and Confirm the Sales Return Order. 10: Refund the money for the sales return order. 11: Create a second shipment, ship another two items of the order. 12: Notify the second shipment as delivered. If these was the last shipment of the order then the order state should be changed to Completed after this step. About Litium Join Litium Support System status
https://docs.litium.com/documentation/litium-connect/develop-integration-applications/demo-erp-connector-application
2021-04-10T14:39:11
CC-MAIN-2021-17
1618038057142.4
[]
docs.litium.com
Method. Partial support in Safari refers to support for object-fit but not object-position. Partial support in Edge refers to object-fit only supporting <img> (see this comment) Data by caniuse.com Licensed under the Creative Commons Attribution License v4.0.
https://docs.w3cub.com/browser_support_tables/object-fit
2021-04-10T15:18:02
CC-MAIN-2021-17
1618038057142.4
[]
docs.w3cub.com
iOS SDK v3.0.2 Release Notes - March 19th, 2019 We have released version 3.0.2 of the Remote Pay SDK for iOS. Added tipSuggestions transaction setting tipSuggestionstransaction setting The SDK now allows each transaction to include tip suggestions (amounts and text) that override the default and merchant-configured tips. Improved messages for onTxStartResponse and onAuthTipAdjustedResponse failures onTxStartResponseand onAuthTipAdjustedResponsefailures Instead of a basic “Failure” indicator, the SDK now passes complete failure messages back to consuming apps. Improved Interac card support Added Interac to the list of supported card types to ensure transactions with these cards process correctly. Updated example app The example POS now shows errors when the user tries to partially refund an auth that has not been closed. Custom per-transaction tip suggestions can also be configured using the app. Bug fixes See the GitHub release page for information about bug fixes in this release.
https://docs.clover.com/docs/ios-sdk-v302-release-notes-march-19th-2019
2022-06-25T08:29:57
CC-MAIN-2022-27
1656103034877.9
[]
docs.clover.com
] Associates a link to a device. A device can be associated to multiple links and a link can be associated to multiple devices. The device and link must be in the same global network and the same site. See also: AWS API Documentation See 'aws help' for descriptions of global parameters. associate-link --global-network-id <value> --device-id <value> --link-id <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>] --global-network-id (string) The ID of the global network. --device-id (string) The ID of the device. --link-id (string) The ID of the link The following associate-link example associates link link-11112222aaaabbbb1 with device device-07f6fd08867abc123. The link and device are in the specified global network. aws networkmanager associate-link \ --global-network-id global-network-01231231231231231 \ --device-id device-07f6fd08867abc123 \ --link-id link-11112222aaaabbbb1 \ --region us-west-2 Output: { "LinkAssociation": { "GlobalNetworkId": "global-network-01231231231231231", "DeviceId": "device-07f6fd08867abc123", "LinkId": "link-11112222aaaabbbb1", "LinkAssociationState": "PENDING" } } For more information, see Device and Link Associations in the Transit Gateway Network Manager Guide.
https://docs.aws.amazon.com/cli/latest/reference/networkmanager/associate-link.html
2022-06-25T09:23:28
CC-MAIN-2022-27
1656103034877.9
[]
docs.aws.amazon.com
PutPartnerEventsRequestEntry The details about an event generated by an SaaS partner. Contents - Detail A valid JSON string. There is no other schema imposed. The JSON string may contain fields and nested subobjects. Type: String Required: No - DetailType A free-form string, with a maximum of 128 characters, used to decide what fields to expect in the event detail. Type: String Required: No - Resources AWS resources, identified by Amazon Resource Name (ARN), which the event primarily concerns. Any number, including zero, may be present. Type: Array of strings Length Constraints: Maximum length of 2048. Required: No - Source The event source that is generating the entry. Type: String Length Constraints: Minimum length of 1. Maximum length of 256. Pattern: aws\.partner(/[\.\-_A-Za-z0-9]+){2,} Required: No - Time The date and time of the event. Type: Timestamp Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_PutPartnerEventsRequestEntry.html
2022-06-25T09:38:04
CC-MAIN-2022-27
1656103034877.9
[]
docs.aws.amazon.com
Changing the name of the links legend To change the name of the links legend, navigate to the Links menu, and change the text in the Legend Title option. The default value is Link colors. Note that you can change this value only after selecting the Add link legend option, as described in Adding the Link Legend.
https://docs.cloudera.com/data-visualization/7/howto-customize-visuals/topics/viz-change-link-legend.html
2022-06-25T08:50:10
CC-MAIN-2022-27
1656103034877.9
[array(['../images/viz-links-legend-title.png', None], dtype=object)]
docs.cloudera.com
Base Object Configuration Contents The screenshot shows the link to the Base Object Configuration page on the Administration module navigation pane. Access Permissions Visibility of the agent groups and queues on the Base Object Configuration page is determined by the tenants to which the administrator has access. Note that the access permission is determined only at the tenant level. If the administrator has access to a given tenant, all the objects under that tenant are displayed in the Base Object Configuration page, irrespective of whether the administrator has access to individual objects in it. For an administrator to be able to view objects to publish in the Base Object Configuration page, either the user, or the user’s access group, must be granted at least Read access permission to the tenants under which the administrator will be publishing the objects. Configuring Genesys Objects Statistics distribution is handled automatically by the Data Manager. The associations that display on the Base Object Configuration page are no longer tied to a selected adapter, but instead represent a global configuration for CCAdv/WA. For more information, see Performance Management Advisors Deployment Guide. Starting in release 8.5.0, you must deploy the Contact Center Advisor application (including XML Generator) and configure the Genesys metric sources before you can use the Base Object Configuration page in the Administration module. Data manager requests no statistics for pre-configured objects until the CCAdv module, XML Generator, and Genesys metric data sources are deployed and working. On the Base Object Configuration page, you can: - configure objects (queues and agent groups): - assign objects to filters on the Base Object to Filter Mapping tab - assign filters to an object on the Mapping to Base Object tab - identify and filter objects by object type - view the count of configured objects - search each listbox You require Read access to one or more tenants to use the Base Object Configuration page. You see only agent groups and queues in the Base Object Configuration page for the tenant(s) to which you have Read access permission. The Base Object Configuration page prevents contradictory configuration. For example, if you select No Filter for an object, and later attempt to assign a filter, you receive an error message. You must de-select No Filter before a filter can be assigned to that object. Filter categorization is not applicable for interaction queue statistics. No Filter is the only option you can successfully apply to interaction queues. If you attempt to combine filters with an interaction queue, the filters are discarded and the No Filter option is automatically selected again. For detailed information about the filters and objects that display on the Base Object Configuration page, see Data Manager content in the Performance Management Advisors Deployment Guide. Working with Filters on the Base Object Configuration Page You can map filters to base objects on the Base Object to Filter Mapping and Mapping to Base Object tabs to segment a selected queue or agent group into one or more application or agent groups. The filters that are specified in the Advisors Business Attributes section in Genesys Configuration Server display on the Base Object Configuration page. On the CCAdv dashboard, each filtered combination displays on a separate line. For example, if you select a queue, you can then use filter selection to achieve one of the following results: - If you select No Filter and save the No Filter/queue combination, you create an unfiltered application object. If you select a specific filter and save the filter/queue combination, you create a filtered application object. All application-level metrics are automatically filtered in the Stat Server based on the configured filtering criteria. Statistic values are reported for the filter conditions that are satisfied. For example, if the filter expression is "Agents in a Not Ready state with a reason code of Break", then only agents who satisfy that filter condition are considered when reporting the statistic value. These types of filters are applied to all metrics; therefore, your filter needs to be applicable to all or most metrics of a particular object type. If there are metrics that you want to exclude from the filter, then go to the Report Metrics administration page and select the Exclude Base Object Filter check box for those metrics. (Any metric that is excluded from the base object configuration filter is shown on a separate line as an unfiltered metric for the selected agent group or queue.) You can use this method to segment the queue into multiple application line items on the CCAdv/WA dashboard. For example, you might have filters that divide the queues into segments based on the service that is provided by the agents in those queues. Another example is the use of filters to segment the queues based on the call type. Typically, Stat Server filters are specified in terms of the call-level attached data, for which Stat Server can count the statistic values when the specific filter condition is satisfied. For more information about the filter expression syntax that you can use with Stat Server, consult the Stat Server User's Guide. If you select multiple filters, the result is multiple segments. In addition, both tabs on the Base Object Configuration page include a Filters panel with which you can refine the list of filters and objects you view on the page. For example, if you want to view only filters that are assigned to objects, select the box beside Selected under Filter and ensure the box beside Unselected is not checked. The list of object filters now shows only filters that have been assigned to objects. Unassigned filters are hidden. The Filters panel also includes a Search field. Use the Search field to quickly find a filter or object by typing its name in the field and clicking the icon beside the field. Procedure: Map Objects to a Filter Purpose: On the Base Object to Filter Mapping tab, you select a filter and map objects to it. Use this procedure to quickly assign multiple objects to one filter. If you select No Filter for an object, and later attempt to assign a filter, the system prevents you from proceeding. You must de-select No Filter before a filter can be assigned to that object. Steps - Open the Base Object to Filter Mapping tab. - Select a filter. The list of available agent groups and queues displays in the pane to the right. The list of filters and available objects is configured in the Genesys Configuration Server. If you do not see a filter or object that you require, contact your system administrator. Object visibility is controlled by permissions. - Click the checkbox beside an object to select it and assign it to the filter. - After you have selected the objects to assign to the filter, click Save to save the assignments or click Cancel to discard the assignments. Procedure: Map Filters to an Object Purpose: On the Mapping to Base Object tab, you can select an object and map filters to it. Use this procedure to quickly assign multiple filters to an object, and to discover what filters are assigned to an object. Steps - Open the Mapping to Base Object tab. - Select an object from the list of available agent groups or queues. The list of relevant filters displays in the pane to the right. Filters that are already assigned to the selected object have a checkmark beside the filter name. The list of filters and available objects is configured in the Genesys Configuration Server. If you do not see a filter or object that you require, contact your system administrator. Object visibility is controlled by permissions. - Click the checkbox beside a filter to select it and assign it to the object. - After you have selected the filters to assign to the object, click Save to save the assignments or click Cancel to discard the assignments.
https://docs.genesys.com/Documentation/PMA/8.5.1/CCAWAUser/BaseObjectConfiguration
2022-06-25T08:27:29
CC-MAIN-2022-27
1656103034877.9
[]
docs.genesys.com
중요 The end-of-life date for this agent version is July 29, 2019. To update to the latest agent version, see Update the agent. For more information, see End-of-life policy. Notes Exception list We've improved exception message handling for applications running in high security mode. Enabling 'high_security' now removes exception messages entirely rather than simply obfuscating any SQL. By default this feature affects all exceptions, although you can configure an allow list of exceptions for messages you want to remain intact. More details: Fix a race condition affecting some Rails applications at startup Some Rails applications using 'newrelic_rpm' were affected by a race condition at startup that manifested as an error when model classes with associations were first loaded. The cause of these errors has been addressed by moving the generation of the agent's 'EnvironmentReport' on startup from a background thread to the main thread.
https://docs.newrelic.com/kr/docs/release-notes/agent-release-notes/ruby-release-notes/ruby-agent-364122/
2022-06-25T07:44:57
CC-MAIN-2022-27
1656103034877.9
[]
docs.newrelic.com
Device Control Protect your Windows endpoints from connecting to malicious USB-connected removable devices. By default, all external USB devices are allowed to connect to your Cortex XDRendpoints. To protect endpoints from connecting USB-connected removable devices—such as disk drives, CD-ROM drives, floppy disk drives, and other portable devices—that can contain malicious files, Cortex XDRprovides device control. For example, with device control, you can: - Block all supported USB-connected devices for an endpoint group. - Block a USB device type but add to your allow list a specific vendor from that list that will be accessible from the endpoint. - Temporarily block only some USB device types on an endpoint. Depending on your defined user scope permissions, creating device profiles, policies, exceptions, and violations may be disabled. The following are prerequisites to enforce device control policy rules on your endpoints: If you are running Cortex XDRagents 7.3 or earlier releases, device control rules take effect on your endpoint only after the Cortex XDRagent deploys the policy. If you already had a USB device connected to the endpoint, you have to disconnect it and connect it again for the policy to take effect. Device Control Profiles To apply device control in your organization, define device control profiles that determine which device types Cortex XDRblocks and which it permits. There are two types of profiles: Device Configuration and Device Exceptions profiles are set for each operating system separately. After you configure a device control profile, Apply Device Control Profiles to Your Endpoints. Add a New Configuration Profile - Log in toCortexXDR.Go toand selectEndpointsPolicy managementExtensionProfiles+ New ProfileorImport from File. - SelectPlatformand click.Device ConfigurationNext - Fill in the General Information.Assign the profileNameand add an optionalDescription. The profile Type and Platform are set byCortexXDR. - Configure the Device Configuration.For each group of device types, select whether toAlloworBlockthem on the endpoints. For Disk Drives only, you can also choose to allow to connect inRead-onlymode. To use the default option defined by Palo Alto Networks, leaveUse Defaultselected.Currently, the default is set to Use Default (Allow) however Palo Alto Networks may change the default definition at any time. - Save your profile.When you’re done,Createyour device profile definitions.If needed, you can edit, delete, or duplicate your profiles.You cannot edit or delete the default profiles pre-defined inCortexXDR. - (Optional) To define exceptions to your Device Configuration profile, Add a New Exceptions Profile. Add a New Exceptions Profile - Log in toCortexXDR.Go toand selectEndpointsPolicy managementExtensionProfiles+ New ProfileorImport from File. - SelectPlatformand clickDevice ExceptionsNext - Fill in the General Information.Assign the profileNameand add an optionalDescription. The profileTypeandPlatformare set by the system. - Configure Device Exceptions.You can add devices to your allow list according to different sets of identifiers-vendor, product, and serial numbers. - (Disk Drives only)Permission—Select the permissions you want to grant:Read onlyorRead/Write. - Type—Select the Device Type you want to add to the allow list (Disk Drives, CD-Rom, Portable, or Floppy Disk). - Vendor—Select a specific vendor from the list or enter the vendor ID in hexadecimal code. - (Optional)Product—Select a specific product (filtered by the selected vendor) to add to your allow list, or add your product ID in hexadecimal code. - (Optional)Serial Number—Enter a specific serial number (pertaining to the selected product) to add to your allow list. Only devices with this serial number are included in the allow list. - Save your profile.When you’re done,Createyour device exceptions profile.If needed, you can later edit, delete, or duplicate your profiles.You cannot edit or delete the predefined profiles inCortexXDR. Apply Device Control Profiles to Your Endpoints After you define the required profiles for Device Configuration and Exceptions, you must configure Device Control Policies and enforce them on your endpoints. Cortex XDRapplies Device Control policies on endpoints from top to bottom, as you’ve ordered them on the page. The first policy that matches the endpoint is applied. If no policies match, the default policy that enables all devices is applied. - Log in toCortexXDR.Go toand selectEndpointsPolicy managementExtensionPolicy Rules+ New PolicyorImport from File.When importing a policy, select whether to enable the associated policy targets. Rules within the imported policy are managed as follows: - New rules are added to top of the list. - Default rules override the default rule in the target tenant. - Rules without a defined target are disabled until target is specified. - Configure settings for the Device Control policy. - Assign a policy name and select the platform. You can add a description.The platform will automatically be assigned to Windows. - Assign the Device Type profile you want to use in this rule. - ClickNext. - Select the target endpoints on which to enforce the policy.Use filters or manual endpoint selection to define the exact target endpoints of the policy rules. If exists, theGroup Nameis filtered according to the groups within your defined user scope. - ClickDone. - Configure policy hierarchy.Drag and drop the policies in the desired order of execution. The default policy that enables all devices on all endpoints is always the last one on the page and is applied to endpoints that don’t match the criteria in the other policies. - Savethe policy hierarchy.After the policy is saved and applied to the agents,CortexXDRenforces the device control policies on your environment. - (Optional) Manage your policy rules.In theProtection Policy Rulestable: you can view and edit the policy you created and the policy hierarchy. - View your policy hierarchy. - Right-click toView Policy Details,Edit,Save as New,Disable, andDelete. - Select one ore more policies, right-click and selectExport Policies. You can choose to include the associatedPolicy Targets,Global Exceptions, and endpoint groups. - Monitor device control violations.After you apply Device Control rules in your environment, use thepage to monitor all instances where end users attempted to connect restricted USB-connected devices andEndpointsDevice Control ViolationsCortexXDRblocked them on the endpoint. All violation logs are displayed on the page. You can sort the results, and use the filters menu to narrow down the results. For each violation eventCortexXDRlogs the event details, the platform, and the device details that are available.If you see a violation for which you’d like to define an exception on the device that triggered it, right-click the violation and select one of the following options: - Add device to permanent exceptions—To ensure this device is always allowed in your network, select this option to add the device to the Device Permanent Exceptions list. - Add device to temporary exceptions—To allow this device only temporarily on the selected endpoint or on all endpoints, select this option and set the allowed time frame for the device. - Allow device to a profile exception—Select this option to allow the device within an existing Device Exceptions profile. - Tune your device control exceptions.To better deploy device control in your network and allow further granularity, you can add devices on your network to your allow list and grant them access to your endpoints. Device control exceptions are configured per device and you must select the device category, vendor, and type of permission that you want to allow on the endpoint. Optionally, to limit the exception to a specific device, you can also include the product and/or serial number.CortexXDRenables you to configure the following exceptions: - Create a Permanent Exception.Permanent device control exceptions are managed in the Permanent Exception list and are applied to all devices regardless of the endpoint platform. - If you know in advance which device you’d like to allow throughout your network, create a general exception from the list: - Go toand selectEndpointsPolicy ManagementExtensionsDevice Permanent Exceptionson the left menu. The list of existing Permanent Exceptions is displayed. - Select:Type,Permission, andVendor. - (Optional) Select a specific product and/or enter a specific serial number for the device. - Click the adjacent arrow andSave. The exception is added to the Permanent Exceptions list and will be applied in the next heartbeat. - Otherwise, you can create a permanent exception directly from the violation event that blocked the device in your network: - On theDevice Control Violationspage, right-click the violation event triggered by the device you want to permanently allow. - SelectAdd device to permanent exceptions. Review the exception data and change the defaults if necessary. - ClickSave. - Create a Temporary Exception. - On theDevice Control Violationspage, right-click the violation event triggered by the device you want to temporarily allow. - SelectAdd device to temporary exceptions. Review the exception data and change the defaults if necessary. For example, you can configure the exception to this endpoint only or to all endpoints in your network, or set which device identifiers will be included in the exception. - Configure the exceptionTIME FRAMEby defining the number of days or number of hours during which the exception will be applied, up to 30 days. - ClickSave. The exception is added to the Device Temporary Exceptions list and will be applied in the next heartbeat. - Create an Exception within a Profile. - On theDevice Control Violationspage, right-click the violation event triggered by the device you want to add to a Device Exceptions profile. - Select thePROFILEfrom the list. - ClickSave. The exception is added to the Exceptions Profile and will be applied in the next heartbeat. Add a Custom Device Class ( Windows only) You can include custom USB-connected device classes beyond Disk Drive, CD-ROM, Windows Portable Devices and Floppy Disk Drives, such as USB connected network adapters. When you create a custom device class, you must supply Cortex XDRthe official ClassGuid identifier used by Microsoft. Alternatively, if you configured a GUID value to a specific USB connected device, you must use this value for the new device class. After you add a custom device class, you can view it in Device Management and enforce any device control rules and exceptions on this device class. To create a custom USB-connected device class: - Go to.EndpointsPolicy ManagementSettingsDevice ManagementThis is the list of all your custom USB-connected devices. - Create the new device class.Select+New Device. Set aNamefor the new device class, supply a valid and unique GUIDIdentifier. For each GUID value you can define one class type only. - Save.The new device class is now available inCortexXDRas all other device classes. Add a Custom User Notification ( Requires a) You can personalize the Cortex XDRagent 7.5 or a later release for Windows Cortex XDRnotification pop-up on the endpoint when the user attempts to connect a USB device that is either blocked on the endpoint or allowed in read-only mode. To edit the notifications, refer to the Agent Settings Profile. Recommended For You Recommended Videos Recommended videos not found.
https://docs.paloaltonetworks.com/cortex/cortex-xdr/cortex-xdr-prevent-admin/endpoint-security/hardened-endpoint-security/device-control
2022-06-25T07:37:14
CC-MAIN-2022-27
1656103034877.9
[]
docs.paloaltonetworks.com
Deep Linking What are Deep Links? Deep links are a type of link that sends users directly to an app instead of a website or a store. They are used to send users straight to specific in-app locations, saving users time and energy locating a particular page themselves. For instance, if a Youtube Link is shortened and enabled with the Deep Link feature, it will open the desired live stream, video, or profile inside the YouTube application whenever clicked on. But If the application is not installed, the user will get the web preview of the destination URL. Deep links are allowed on up to 70 applications under Social Media, e-Commerce, Music, Video, and Productivity categories. The use of deep linking increases the chances to engage and convert your mobile users by +44% on average. 🚀 Indeed, it provides a better user experience, redirecting to an in-app version of a page, where the navigation is easier, and where the user will be already logged-in and have all his information registered. Such an experience is more engaging because it has fewer frictions as it requires fewer actions from the user. Utilization of the Deep Links Feature requires 3 simple steps. Step 1 Login to the Replug Application. Navigate to the Replug Links page. Click on the New Link button and proceed with the Link Creation Steps. Step 2 Change the toggle state for Deep Links. If the destination URL can be used as a Deep Link, the following indication will appear. If the destination URL can not be used as a Deep Link, the following indication will appear. Note: Deep links only work with Shortener Campaigns and supported URLs. Step 3 - Once you are done with the above steps click on Save Link Button at the bottom.
https://docs.replug.io/article/883-deep-linking
2022-06-25T07:15:12
CC-MAIN-2022-27
1656103034877.9
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/619492d412c07c18afde8ae0/file-lCbtTOgX56.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/6195ea0764e42a671b6381f1/file-Y4oBKJr3h2.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/6195ea6b64e42a671b6381f3/file-HPTmcWEdDl.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/6195ec8d2b380503dfe061dd/file-cxFmF4MDPJ.png', None], dtype=object) ]
docs.replug.io
The Network Configuration Manager Geo Diverse solution provides a simple method for integrating Network Configuration Manager service management into a data replication system. Distributed as a Perl script, the solution automates the process of starting and stopping services, and re-homing Device servers, so they point to the active Application server. The script, which is compatible with Red Hat Enterprise Linux 6.x and 7.x, can easily be wrapped up into a larger Geo Diverse solution. The Network Configuration Manager Geo Diverse solution does not provide the data replication software, and only the replication of Application servers is supported. As offered, the Network Configuration Manager Geo Diverse script is not a hands-off solution, but may be integrated with a larger framework to provide automated fail-over.
https://docs.vmware.com/en/VMware-Smart-Assurance/10.1.1/ncm-installation-guide-10.1.1/GUID-33288558-F64E-4809-AFE8-B55E3C3D2E0B.html
2022-06-25T09:14:43
CC-MAIN-2022-27
1656103034877.9
[]
docs.vmware.com
One or more pages of device and primitive specific parameter overrides are displayed. Overriding Calculated Parameters The Clocking Wizard selects optimal settings for the parameters of the clocking primitive. You can override any of these calculated parameters as per your requirement. When the Allow Override mode is selected, the overridden values are used as primitive parameters instead of the calculated values. The wizard uses the settings for timing calculations, and any settings changed here are reflected in the summary pages. Important: It is important to verify that the values you are choosing to override are correct because the wizard implements what you have chosen even if it causes issues with the generated network.Parameters listed are relevant for the physical clocks on the primitive, rather than the logical clocks created in the source code. For example, to modify the settings calculated for the highest priority CLK_OUT1, you must modify CLKOUT0*parameters, and not the CLKOUT1*parameters for a MMCM or PLL.
https://docs.xilinx.com/r/en-US/pg321-clocking-wizard/Primitive-Overrides
2022-06-25T07:15:19
CC-MAIN-2022-27
1656103034877.9
[]
docs.xilinx.com
f5networks.f5_modules.bigip_lx_package module – Manages Javascript LX packages_lx_package. New in version 1.0.0: of f5networks.f5_modules Synopsis Manages Javascript LX packages on a BIG-IP. This module allows you to deploy LX packages to the BIG-IP and manage their lifecycle. Requirements The below requirements are needed on the host that executes this module. Requires BIG-IP >= 12.1.0 The ‘rpm’ tool installed on the Ansible controller Parameters Notes Note Requires the RPM tool be installed on the host. This can be accomplished in different ways on each platform. On Debian based systems with apt; apt-get install rpm. On Mac with brew; brew install rpm. This command is already present on RedHat based systems. Requires BIG-IP >= 12.1.0, because the required functionality is missing on prior versions. The module name bigip_iapplx_packagehas been deprecated in favor of bigip_lx_package.: Install AS3 bigip_lx_package: package: f5-appsvcs-3.5.0-3.noarch.rpm provider: password: secret server: lb.mydomain.com user: admin delegate_to: localhost - name: Add an LX package stored in a role bigip_lx_package: package: "{{ roles_path }}/files/MyApp-0.1.0-0001.noarch.rpm'" provider: password: secret server: lb.mydomain.com user: admin delegate_to: localhost - name: Remove an LX package bigip_lx_package: package: MyApp-0.1.0-0001.noarch.rpm state: absent provider: password: secret server: lb.mydomain.com user: admin delegate_to: localhost - name: Install AS3 and don't delete package file bigip_lx_package: package: f5-appsvcs-3.5.0-3.noarch.rpm retain_package_file: yes provider: password: secret server: lb.mydomain.com user: admin delegate_to: localhost Collection links Issue Tracker Homepage Repository (Sources)
https://docs.ansible.com/ansible/latest/collections/f5networks/f5_modules/bigip_lx_package_module.html
2022-06-25T08:13:32
CC-MAIN-2022-27
1656103034877.9
[]
docs.ansible.com
This page will help you get started with the InsightCloudSec API Welcome to the InsightCloudSec API documentation! Here you can learn how to interact with InsightCloudSec programmatically, enabling you to securely and simply automate your daily and/or most tedious workflows within the product. All endpoints can be used but we caution the use of prototype-namespaced endpoints as documentation and support may vary. Contact us through the Customer Support Portal if you have any questions or concerns, and feel free to use the "Suggest Edits" feature to provide any documentation feedback. Using. Understanding the API Docs Below the listed method, path, and short description for each endpoint, there is a list of parameters organized by type (path, body, query, and header). To the right of the parameters is the request and response examples. User Type Effects Access Only certain types of users have access to all endpoints documented here. Verify your user type and the endpoint description before testing anything out. Parameters - Each parameter will have a text field or drop-down menu (depending on the type of field) that you can interact with. Most endpoints should have a description and type for each parameter as well as indicate if it's required for the request (the field will be highlighted in red if it's required) - If the parameter has a default value, it will be displayed in inactive text or pre-selected (depending on the type of field) - If the endpoint has an example request associated with it (see Examples for more information), click the field to display example values that you can click to input automatically ("Use Example Value") Parameters Examples - Each endpoint has at least one request and response example. If the endpoint does not have a language-specific example, the request example will be autogenerated - If the request example is autogenerated: - You may select a language to see the autogenerated example in the desired language - Input your API key to update the request example with your API key; the browser will save this value so it will be available when you interact with other endpoints - Within the URL, click the domain to open a text field where you can input and update the request example with your custom InsightCloudSec domain; the browser will save this value so it will be available when you interact with other endpoints - Click the "Examples" drop-down menu in the upper right corner of the request box to display a list of pre-fabricated example requests. Select an example to update the parameter fields and request accordingly - Editing parameters will affect the request example - Click one of the listed examples or the "Examples" drop-down menu in the upper right corner of the response box to display a list of pre-fabricated example responses. Select an example to display the response accordingly Examples Authentication There are currently two methods of authenticating when using the InsightCloudSec API: - API Key: API Keys can be associated with all types of InsightCloudSec user accounts, e.g., basic users, domain admins, etc. An active API key allows the user to programmatically access InsightCloudSec. The API Key is the preferred method of authentication, and as such, is the only authentication type listed in the examples for each endpoint - Auth Token: Auth tokens are generated using the Login endpoint in conjunction with a user's username and password. This token can then be passed to subsequent endpoints to allow the user to programmatically access InsightCloudSec. This token is available per session, so when the user is logged out of the product for whatever reason, they must generate a new auth token. Note: Only one of API Key or Auth Token is required. If both are provided, only the API Key will be used. Single Sign On (SSO) Users If you're a customer that uses SSO to login to InsightCloudSec, we advise that you interact with the API using an API key instead, especially if you only want to create workflow automation scripts or you are planning to utilize API-only flows. API Key Endpoints are authenticated via a user's API key when it is explicitly passed in the header of a request. You can obtain an API key using the InsightCloudSec user interface or using the API (with an existing user's ID). Note: any existing API key for a user will be deactivated upon generating a new API key. Below is a sample of how you can use the API with an API key using Python or Bash/cURL. This example lists all of the organizations inside InsightCloudSec. # Script to list all organizations in DivvyCloud using an API Key import json import requests import getpass requests.packages.urllib3.disable_warnings() # verify=False throws warnings otherwise # API Key api_key = '' # API URL base_url = '' # Param validation if not api_key: key = getpass.getpass('API Key:') else: key = api_key if not base_url: base_url = input('Base URL (EX: or): ') headers = { 'Content-Type': 'application/json;charset=UTF-8', 'Accept': 'application/json', 'Api-Key': key } # Get Org info def get_org(): data = {} response = requests.get( url = base_url + '/v2/prototype/domain/organizations/detail/get', data = json.dumps(data), verify = False, headers = headers ) return response.json() # Execute functions org_info = get_org() print(org_info) # API key to authenticate against the API api_key="" # DivvyCloud URL EX: or base_url="" # Get org info org_url=`echo $base_url/v2/prototype/domain/organizations/detail/get` curl \ --request GET \ --header "content-type: application/json" \ --header "accept-encoding: gzip" \ --header "Api-Key: $api_key" \ $org_url | gunzip | jq # Sample output: # { # "organizations": [ # { # "status": "ok", # "smtp_configured": true, # "clouds": 63, # "name": "DivvyCloud Demo", # "resource_id": "divvyorganization:1", # "organization_id": 1, # "bots": 17, # "users": 21 # } # ] # } Auth Token Endpoints are authenticated via auth token when a user's session ID is passed in the header of a request. You can obtain this session ID from the object returned upon successfully using the Login endpoint with your InsightCloudSec username and password. Note: if the session expires or the user logs out, the auth token will no longer be valid and the user will have to start a new session/generate a new session ID. Below is a sample of how you can use the API with an auth token using Python. This example lists all of the organizations inside InsightCloudSec. # Script to list all organizations in DivvyCloud using an Auth Token import json import requests import getpass requests.packages.urllib3.disable_warnings() # verify=False throws warnings otherwise # Username & password username = '' password = '' # API URL base_url = '' # Param validation if not username: username = input('InsightCloudSec username: ') if not password: password = getpass.getpass('Password: ') else: password = password if not base_url: base_url = input('Base URL (EX: or): ') headers = { 'Content-Type': 'text/plain', 'Accept': 'application/json' } # Get auth token def get_token(): data = { 'username': username, 'password': password } print(data) response = requests.request( method = 'POST', url = base_url + '/v2/public/user/login', json = data, verify = False, headers = headers ) headers['x-auth-token'] = response.json().get('session_id') # Get Org info def get_org(): data = {} response = requests.get( url = base_url + '/v2/prototype/domain/organizations/detail/get', data = json.dumps(data), verify = False, headers = headers ) return response.json() # Execute functions get_token() org_info = get_org() print(org_info)
https://docs.divvycloud.com/reference/reference-getting-started
2022-06-25T08:12:36
CC-MAIN-2022-27
1656103034877.9
[array(['https://files.readme.io/d114658-parameters.png', 'parameters.png Parameters'], dtype=object) array(['https://files.readme.io/d114658-parameters.png', 'Click to close... Parameters'], dtype=object) array(['https://files.readme.io/384c121-examples.png', 'examples.png Examples'], dtype=object) array(['https://files.readme.io/384c121-examples.png', 'Click to close... Examples'], dtype=object) ]
docs.divvycloud.com
The AXI Chip2Chip core is characterized based on the benchmarking methodology described in the Vivado Design Suite User Guide: Designing with IP (UG896) [Ref 5] . Table: Maximum Frequencies for 7 Series Devices shows the results of the characterization runs for 7 series devices. IMPORTANT: Maximum frequencies for UltraScale ™ and Zynq®-7000 devices are expected to be similar to Kintex®-7 and Artix®-7 device numbers.
https://docs.xilinx.com/r/en-US/pg067-axi-chip2chip/Performance-Maximum-Frequencies
2022-06-25T07:50:26
CC-MAIN-2022-27
1656103034877.9
[]
docs.xilinx.com
Hard disks The Hard disks page contains settings related to the emulated machine’s fixed disks. Hard disk list All hard disks attached to the emulated system are listed, with the following information: Bus: storage bus the disk is attached to, as well as the disk’s bus channel or ID. These can be changed through the Bus and Channel/ID boxes below the list. File: path to the disk image file. C/H/S: disk size in cylinders, heads and sectors, respectively. MB: disk size in megabytes. Adding a new disk The New… button opens a new window allowing you to create an existing hard disk image file. File name: where to save the disk image file. See Hard disk images for a list of supported image formats. Cylinders/Heads/Sectors: CHS parameters for the disk image. These boxes control the Size (MB) box below. Size (MB): the disk image’s size in MB. This box controls the Cylinders, Heads and Sectors boxes above. There are limits to how big a hard disk image can be; see Hard disk size limits for more information. Bus: storage bus to attach the disk to. Channel/ID: where to attach the disk on the selected storage bus. On IDE disks, the first number corresponds to the IDE channel, and the second number corresponds to the Master/Slave position: On SCSI disks, the first number corresponds to the SCSI controller (starting at 0 instead of 1), and the second number is the SCSI ID within that controller: On MFM/RLL, XTA and ESDI disks, the second number is 0 for the first drive on the controller, and 1 for the second drive. Note If the disk is attached to a channel or controller that doesn’t exist, such as the tertiary IDE channel with no tertiary IDE controller present, it will be effectively disabled. Press the OK button to create the disk image file, or Cancel to close the window. Adding an existing disk The Existing… button opens a similar window to the New… button, except that it lets you select an existing disk image file. The CHS parameters are guessed from the image’s file size, or the file header if the image is of a format which contains a header. After selecting the image file and checking if the parameters are correct, select the Bus and Channel/ID for the hard disk and press OK to add it. Press Cancel to close the window. Removing a disk Select a disk on the list and press Remove to remove it.
https://86box.readthedocs.io/en/latest/settings/hdd.html
2022-06-25T08:30:04
CC-MAIN-2022-27
1656103034877.9
[]
86box.readthedocs.io
Announcing our new Clover device - Flex (2nd Generation) Clover is excited to announce that our latest device, the new and improved Flex (2nd Generation), launches in Canada on May 20, 2020. - 4G/LTE only Fingerprint reader - Yes See the complete list of specs in the documentation. Takeaways for developers - Order Flex (2nd generation) Dev Kits from cloverdevkit.com and EMV test cards ahead of time to ensure you have time available for testing your apps. - Use the emulator to test your apps if you need an alternative. - Submit your updated APKs for approval by May 15, 2020. See more information about testing Canadian payment flows in the documentation. The on May 18, 2020. Important - Dev Kits can only be shipped directly to US addresses. Developers outside the US must use a package forwarding service to ship Dev Kits to their final destination. - Due to COVID-19 delays, there may be a delay in delivering your Dev Kits. For questions and feedback, use [email protected] Updated almost 2 years ago
https://docs.clover.com/docs/announcing_our_new_clover_device_flex_2nd_generation
2022-06-25T07:43:04
CC-MAIN-2022-27
1656103034877.9
[array(['https://files.readme.io/6caf602-Flex2.jpeg', 'Flex2.jpeg'], dtype=object) array(['https://files.readme.io/6caf602-Flex2.jpeg', 'Click to close...'], dtype=object) ]
docs.clover.com
DateFilterControl.CustomDisplayText Event Allows you to specify the date picker text. Namespace: DevExpress.DashboardWin Assembly: DevExpress.Dashboard.v22.1.Win.dll Declaration public event EventHandler<CustomDisplayTextEventArgs> CustomDisplayText Public Event CustomDisplayText As EventHandler(Of CustomDisplayTextEventArgs) Event Data The CustomDisplayText event's data class is CustomDisplayTextEventArgs. The following properties provide information specific to this event: Remarks The CustomDisplayText event fires before a date picker caption is displayed and allows you to change it. Example This code snippet specifies the date picker button’s caption. using DevExpress.DashboardWin; using DevExpress.XtraEditors.Controls; using System; // ... dashboardViewer1.DashboardItemControlCreated += DashboardViewer1_DashboardItemControlCreated; // ... private void DashboardViewer1_DashboardItemControlCreated(object sender, DashboardItemControlEventArgs e) { if (e.DateFilterControl != null) { dateFilter.CustomDisplayText += DateFilter_CustomDisplayText; } } private void DateFilter_CustomDisplayText(object sender, CustomDisplayTextEventArgs e) { e.DisplayText = (e.Value is DateTime) ? string.Format("{0:d}", e.Value) : "Click for the Date Picker"; } Related GitHub Examples The following code snippet (auto-collected from DevExpress Examples) contains a reference to the CustomDisplayText event. Note The algorithm used to collect these code examples remains a work in progress. Accordingly, the links and snippets below may produce inaccurate results. If you encounter an issue with code examples below, please use the feedback form on this page to report the issue.
https://docs.devexpress.com/Dashboard/DevExpress.DashboardWin.DateFilterControl.CustomDisplayText
2022-06-25T07:21:09
CC-MAIN-2022-27
1656103034877.9
[]
docs.devexpress.com
mars.tensor.isfinite# - mars.tensor.isfinite(x, out=None, where=None, **kwargs)[source]# Test element-wise for finiteness (not infinity or not Not a Number). The result is returned as a boolean finite; otherwise the value is False (input is either positive infinity, negative infinity or Not a Number). For array input, the result is a boolean array with the same dimensions as the input and the values are True if the corresponding element of the input is finite; otherwise the values are False (element is either positive infinity, negative infinity or Not a Number). - Return type - Notes Not a Number, positive infinity and negative infinity are considered to be non-finite. Mars >>> import mars.tensor as mt >>> mt.isfinite(1).execute() True >>> mt.isfinite(0).execute() True >>> mt.isfinite(mt.nan).execute() False >>> mt.isfinite(mt.inf).execute() False >>> mt.isfinite(mt.NINF).execute() False >>> mt.isfinite([mt.log(-1.).execute(),1.,mt.log(0).execute()]).execute() array([False, True, False]) >>> x = mt.array([-mt.inf, 0., mt.inf]) >>> y = mt.array([2, 2, 2]) >>> mt.isfinite(x, y).execute() array([0, 1, 0]) >>> y.execute() array([0, 1, 0])
https://docs.pymars.org/en/latest/user_guide/tensor/generated/mars.tensor.isfinite.html
2022-06-25T08:38:56
CC-MAIN-2022-27
1656103034877.9
[]
docs.pymars.org
To retrieve data from MOG100, the following options are available: - If you have installed a SIM card in MOG100, it stores data continuously to the cloud service. Use Beacon Cloud to view the data. - If you are using NM10 or a third-party server, configure MOG100 to send data to your service and view the data there. - If you have a maintenance connection to MOG100 with your computer, data is stored on the computer. - If you have enabled logging in MOG100, the data is stored in the local memory in MOG100. Create a maintenance connection to transfer the data to your computer.
https://docs.vaisala.com/r/M212056EN-C/en-US/GUID-CDFC89E3-C9E6-4064-A057-62581586395C
2022-06-25T08:52:21
CC-MAIN-2022-27
1656103034877.9
[]
docs.vaisala.com
Machine The Machine page contains settings related to the emulated machine as a whole, such as the machine type, CPU type and amount of memory. Machine type / Machine Machine/motherboard model to emulate. The Machine box lists all available models for the machine class selected on the Machine type box. The Configure button opens a new window with settings specific to the machine’s onboard devices, such as the amount of installed video memory for an onboard video chip. CPU type / Speed Main processor to emulate. The Speed box lists all available speed grades for the processor family selected on the CPU type box. These boxes only list processor types and speed grades supported by the machine selected above. FPU Math co-processor to emulate. This box is not available if the processor selected above has an integrated co-processor or lacks support for an external one. Wait states Number of memory wait states to use on a 286- or 386-class processor. This box is not available if any other processor family is selected above. Memory Amount of RAM to give the emulated machine. The minimum and maximum allowed amounts of RAM will vary depending on the machine selected above. Dynamic Recompiler Enable the dynamic recompiler, which provides faster but less accurate CPU emulation. The recompiler is available as an option for 486-class processors, and is mandatory starting with the Pentium. Time synchronization Automatically copy your host system’s date and time over to the emulated machine’s hardware real-time clock. Synchronization is performed every time the emulated operating system reads the hardware clock to calibrate its own internal clock, which usually happens once on every boot. Disabled: do not perform time synchronization. The emulated machine’s date and time can be set through its operating system or BIOS setup utility. Time only ticks while the emulated machine is running. Enabled (local time): synchronize the time in your host system’s configured timezone. Use this option when emulating an operating system which stores local time in the hardware clock, such as DOS or Windows. Enabled (UTC): synchronize the time in Coordinated Universal Time (UTC). Use this option when emulating an operating system which stores UTC time in the hardware clock, such as Linux.
https://86box.readthedocs.io/en/latest/settings/machine.html
2022-06-25T08:29:19
CC-MAIN-2022-27
1656103034877.9
[]
86box.readthedocs.io
This page describes what's available in the latest releases of Mining Prep and Process Mining. In this release, you’ll notice a fresh, new look across Process Mining and Mining Prep. We’ve aligned the design of the header bar to be consistent with the entire Appian platform. We want you to feel right at home no matter which tool you are using from our robust suite of capabilities. Whether you are analyzing a process for the first time or using our workflow and automation capabilities to enhance your mined processes, you should find the new experience more consistent and fluid to use. On This Page
https://docs.appian.com/suite/help/22.2/pm-5.2/get_started/whats-new.html
2022-06-25T08:02:45
CC-MAIN-2022-27
1656103034877.9
[]
docs.appian.com
Introduction & Plugin Configuration With Twilio SMS Notification addon used for sending SMS notification to bidder.User bids on product, product get expire and user wins product.All these activities are notified by sending SMS notification to the user.So, usar finds out what's going on about auction product.This addon sends the following SMS notification to the bidder. - Bid notification to self - Outbid notification to outbid user - Auction won for winner - Ending soon for all bidders Requirements for SMS Notification Addon SMS Notification addon has pre-requisite.So first admin needs to followup pre-requisite of addon.The plugin has to be configured with Twilio.Admin should have an account on Twilio.Because addon only works with Twilio.Admin will get two twilio parameters that need to be entered in the addon setting. - Account SID - Auth Token How to configure Plugin With Twilio Step 1 - Go to Auctions -> License & Addon and activate SMS Notification addon. Step 2 - Go to Auctions -> Settings -> Addons and do settings. Collect Users mobile number during registration - Display Country and Mobile number on default Wordpress Register form - This option indicates that if admin enables this option then country name & mobile number fields will displays on default wordpress register form. - Display Country and Mobile number on WooCommerce My Account Page - This option indicates that the country name & mobile number will displays with registration form on my account page.First admin has to turn on the registration form from WooCommerce -> Settings -> Accounts and check option for it. This option will work only after the admin turns on registration form on my account page. When admin enables the checkbox, the country name and mobile number field will appear on my account page. Here user can register mobile number and country so that the user gets an SMS notification for bid activity. Twilio Connection Here the parameters of the Twilio account will be required.Admin will get the parameters by logging in to the Twilio account. - Account SID - Account SID can be found on the Twilio account's dashboard. - Auth Token - Auth Token can be found on the Twilio account's dashboard. - From Number - This is trial number of admin twilio account.This will be mentioned on account dashboard.So, all these parameters will be found on the Twilio dashboard. Send Test SMS This section is for test SMS Notification.First enter mobile number but this number has to be verified in twilio account.For that go to twilio dashboard and varify your number. Select country, enter mobile number and varify it.If you verify by call, the call will come.If you verify by text, OTP will come to your number.Enter the OTP and verify your number. once varified your number after that enter mobile number, write test message and click send button in setting.Enter country code before mobile number.You can see below that the message has arrived. Customer Notifications SMS This setting is for enabling differant bid notification and customize messages.The admin can send as many notifications to the bidder as he wants by enable it. - Enable Place Bid SMS - By enabling this option bidder will get SMS Notification.When user bids on a product, user will get sms notification for place bid.Here is an example of what kind of message the bidder will get."bid_value" will get the total bid amount and "product_id" will get id of bid product.Admin can customize this message. - Enable Outbid SMS - By enabling this option bidder will get SMS Notification for outbid.When one user bids the highest, other bidder will get outbid message.Message will be sent as below."bid_value" will get highest value.It also include product link where bidder may bid highest. - Enable Won SMS - By enabling this option bidder will get bid won message for auction product.User bids on a product, admin selects the winner and user wons a product.After such a process is done, the bidder gets won message.Below is a description of won message."product_id" will get id of product, "product_name" will get name of product and "this_pay_kink" will get product link on website.Click on link and there bidder can make payment. - Enable Ending Soon SMS - When minimum time left for auction to expire, ending soon message will be send to bidder.Below is a description of ending soon message.Admin can customize message from here. - Mention time left for the auction to close - If admin mentions 1 hour here, the message will be sent when there is 1 hour left for the product to expire. This message will go to all those who have bid on the product. - For more details, please refer - how to setup cron jobs
https://docs.auctionplugin.net/article/111-introduction
2022-06-25T07:32:18
CC-MAIN-2022-27
1656103034877.9
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/58806643dd8c8e484b24e64f/images/5f2036f404286306f8078053/file-b5mO5NNbx1.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/58806643dd8c8e484b24e64f/images/5f20385e2c7d3a10cbab6c9c/file-ffYaYBcfJ6.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/58806643dd8c8e484b24e64f/images/5f21226204286306f8078c01/file-uihr2bEXO3.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/58806643dd8c8e484b24e64f/images/5f21254b04286306f8078c1c/file-LLpdPxOCCx.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/58806643dd8c8e484b24e64f/images/5f2129b02c7d3a10cbab79d6/file-7G19EAORnv.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/58806643dd8c8e484b24e64f/images/5f2129ed04286306f8078c53/file-WK3bk1SQOa.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/58806643dd8c8e484b24e64f/images/5f2128b42c7d3a10cbab79c5/file-DvgM5BwUnJ.png', None], dtype=object) ]
docs.auctionplugin.net
3. Install Demo Content For those of you just starting out, we have created a demo content file that will help you see what you're doing as you set up your site. It includes similar content to the theme demo, such as: - sample posts - sample pages - sample categories - sample navigation menus NOTE: If you already have an established site or are migrating from another platform, please DO NOT INSTALL this content, and continue on to the next step. 1. Navigate to TOOLS > IMPORT. 2. You'll see the WordPress option at the bottom. Click the "Install Now" button. 3. Once it's done installing, you'll click Run Importer. 4. Choose the sample.xml file in the xml folder of your theme files (on your computer). Click Upload File and Import. 5. Assign all posts to your user profile (click the arrows), check the Download and Import File Attachments button, then click SUBMIT. Allow it a few moments to import everything and you'll be all set! Next - Add your Shop (optional)
https://docs.restored316.com/article/84-install-demo-content
2022-06-25T06:56:20
CC-MAIN-2022-27
1656103034877.9
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/57c34265903360342852ecfb/images/57e18c039033606c1955ae3a/file-MCBbGy6Ilr.jpg', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/57c34265903360342852ecfb/images/57e18cacc697910d0784c447/file-NKJgabrmYl.jpg', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/57c34265903360342852ecfb/images/57e18de99033606c1955ae57/file-STonLsdDUg.jpg', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/57c34265903360342852ecfb/images/57e19572c697910d0784c4e5/file-ssU4cbzgpk.jpg', None], dtype=object) ]
docs.restored316.com
setOptions Sets the options of the Grid. Use this method if you want to enable/disable a particular feature/option or to load the complete state obtained previously with the getOptions method. When setOptions is called, the Grid widget will be destroyed and recreated. If the widget is bound to remote data, a new read request will be made. There are a few important things to keep in mind when using getOptionsand setOptions. - calling setOptions()in a Grid event handler is not possible. - calling setOptions()in a function, which is related to the Grid's databinding mechanism may cause an endless loop. - JSON.stringify()cannot serialize function references (e.g. event handlers), so if stringification is used for the retrieved Grid state, all configuration fields, which represent function references, will be lost. You have two options to avoid this limitation: use a custom implementation to serialize JavaScript functions, or add the function references back to the deserialized configuration object before passing it to the setOptionsmethod. - When using the Grid MVC wrapper, any server templates will not be retrieved by the getOptionsmethod (e.g. toolbar or header templates with @<text></text>razor syntax). This is because the server templates are rendered server-side and do not have corresponding configuration options included in the JavaScript initialization statement that creates the Grid object client-side. As a result, the templates will be lost once the setOptions()method is invoked. There are two options to avoid the issue - use JavaScript initialization instead of an MVC wrapper, or add template configuration to the retrieved Grid state with the JavaScript equivalent syntax (e.g. headerTemplate and toolbar). Parameters options Object The configuration options to be set. Example - set sortable feature of the Grid to true <div id="grid"></div> <script> $("#grid").kendoGrid({ columns: [ { field: "name" }, { field: "age" } ], dataSource: [ { name: "Jane Doe", age: 30 }, { name: "John Doe", age: 33 } ] }); var grid = $("#grid").data("kendoGrid"); grid.setOptions({ sortable: true }); </script> When used for AngularJS, the $scope should be passed to the Grid options. By default, the Grid when initialized expects such logic. $scope.grid.setOptions($.extend({}, options, { $angular: [$scope] }));
https://docs.telerik.com/kendo-ui/api/javascript/ui/grid/methods/setoptions
2022-06-25T08:06:10
CC-MAIN-2022-27
1656103034877.9
[]
docs.telerik.com
Objects rendering content¶ IMAGE for the rendering of an image: lib.logo = IMAGE lib.logo { file = fileadmin/logo.gif file.width = 200 stdWrap.typolink.parameter = 1 } The result is an image based on file :file:`logo.gif` with width of 200 pixels and a link to the page 1. TEXT is for the rendering of standard text or the content of fields: lib.motto = TEXT lib.motto.value = Inspiring people to share lib.motto.wrap = <div class="highlight">|</div> FILES is used to retrieve information about one or more files and perform some rendering with it: lib.banner = FILES lib.banner { references { table = pages fieldName = media uid.data = page:uid } renderObj = IMAGE renderObj { file.import.data = file:current:uid file.treatIdAsReference = 1 file.width = 500 wrap = <div class="banner">|</div> } } This code will probably look pretty abstract to you right now. What it does is to reference the images that were related to a given page in the "media" field. It takes each of these images and resizes them to a maximum width of 500 pixels. Each image is wrapped in a `<div>` tag. FLUIDTEMPLATE renders a template with the template engine Fluid with variables and data that you define - as previously discussed in the “Insert content in a HTML template” chapter.
https://docs.typo3.org/m/typo3/tutorial-typoscript-in-45-minutes/11.5/en-us/TypoScriptObjects/RenderingContent/Index.html
2022-06-25T08:56:15
CC-MAIN-2022-27
1656103034877.9
[]
docs.typo3.org
ACS Clone Repo Tool¶ The “Clone Repo” tool is a tool that allows you to clone any http or https-hosted RPM repository to the local Integrated File System (IFS) of the target IBM i system. How do I launch the “Clone Repo” tool?¶ In Access Client Solutions, first access the Open Source Package Management tool (Tools->”Open Source Package Management”) The clone tool is available after signing onto a system with the open source package management tool (Utilities->”Clone Repo for Offline Use”) Interface options, explained¶ Additional Operations -> Create or Update Repository Definition¶ The YUM package manager only knows about repos that are defined in YUM’s repository list. The repository list is simply a set of .repo files in the /QOpenSys/etc/yum/repos.d/ Additional Operations -> Disable Repositories that Require Internet Access from the IBM i System¶ By default, YUM will fail any operations if it can’t read from all the configured repositories. This options disables Internet-requiring repos, so that YUM operations continue to work. Keep this option checked if your IBM i system can’t access the Internet. Serving up your internal repo via nginx using ACS¶ The ACS “Clone Repo” tool makes it easy to serve your cloned repo to other systems in your network. If you check the “create nginx configuration file” option, the following files are created for you: nginx.conf: Used by nginx. By default, it configures nginx to use 5 worker processes and listen on port 2055. Feel free to customize to suit your needs. startServer: Starts nginx as a background task in current subsystem. startServerBatch: Submits nginx to QHTTPSVR subsystem (feel free to customize to suit your needs). stopServer: Stops the nginx instance Of course, the next step is to configure other systems to point at your newly-created repo. That can be done on endpoint systems either by: installing yum-utilsand invoking yum-config-manager --add-repo <ip-address-where-hosted>(for instance, yum-config-manager --add-repo). creating a repository definition in /QOpenSys/etc/yum/repos.d. A repository definition is a small text file with basic information. Use ibm.repoas an example. To automate yum updates, one can use a job scheduler entry, specifying -y so the YUM tool doesn’t stop to ask for user confirmation. Example: ADDJOBSCDE JOB(YUMUP) CMD(QSH CMD('exec /QOpenSys/pkgs/bin/yum -y upgrade')) FRQ(*WEEKLY) SCDDAY(*ALL) This example does a daily upgrade, but note that no update will happen if the configured repo or repos have no changes. If the only configured repo is your private one, then this will not do anything until you update your repo. To summarize, the process of creating your own repository and hosting it for all your systems involves: Run the ACS Clone Repo Tool, checking the “Create nginx configuration file” box Start the nginx server by running the startServer/ startServerBatchscript Configure endpoint systems (optional) automate (A good practice might to to have a different repo for each class of systems, such as development, test, and production)
https://ibmi-oss-docs.readthedocs.io/en/latest/acscloner/README.html
2022-06-25T07:31:12
CC-MAIN-2022-27
1656103034877.9
[]
ibmi-oss-docs.readthedocs.io
NotifyCategory from panda3d.core import NotifyCategory - class NotifyCategory Bases: - __init__(param0: NotifyCategory) - property children Sequence[NotifyCategory] Returns the nth child Category of this particular Category. - get_child(i: int) NotifyCategory Returns the nth child Category of this particular Category. - is_on(severity: NotifySeverity) bool Returns true if messages of the indicated severity level ought to be reported for this Category. - is_spam() bool) ostream Begins a new message to this Category at the indicated severity level. If the indicated severity level is enabled, this writes a prefixing string to the Notify.out()stream and returns that. If the severity level is disabled, this returns Notify.null(). - static set_server_delta(delta: int) Sets a global delta (in seconds) between the local time and the server’s time, for the purpose of synchronizing the time stamps in the log messages of the client with that of a known server. - set_severity(severity: NotifySeverity) Sets the severity level of messages that will be reported from this Category. This allows any message of this severity level or higher. - property severity NotifySeverity Getter Setter Sets the severity level of messages that will be reported from this Category. This allows any message of this severity level or higher.
https://docs.panda3d.org/1.11/python/reference/panda3d.core.NotifyCategory
2022-06-25T07:03:21
CC-MAIN-2022-27
1656103034877.9
[]
docs.panda3d.org
P2 First Board This tutorial will guide you through some things that may help in making your first P2 base board design. This board includes basic features needed for P2 designs, including: - USB connector - RGB status LED - RESET and MODE buttons - Voltage regulator - Breakout pins for GPIO and ports Basic Design The P2 base board in this tutorial is about as simple as you can build. It does not have a fuel gauge or PMIC, and is powered by USB only, with no battery support. There will be other tutorials for more complex power supply designs. This is the Eagle board design for the USB SoM base board: It's a two-layer board so it is easy and inexpensive to manufacture, and you can work with it on the free version of Eagle CAD. The Eagle CAD design files can be downloaded as a zip file here. The files include: - P2PlugTestUSB.sch EagleCAD schematic. - P2PlugTestUSB.brd EagleCAD board layout file. This can also be submitted to OshPark. - P2PlugTestUSBV1.zip Gerber files. This is what you submit to JLCPCB. - P2PlugTestUSB.lbr Library file with all components in the schematic - P2PlugTestUSB.cam Configuration file for generating the Gerber files. and MODE lines have 10K pull-up resistors to 3V3. This design uses a C&K PTS645SH50SMTR92 which is a 6 Texas Instruments TPS62026DGQR 3.3V fixed voltage regulator. It supplies 600 mA at 3.3V from a 3.6V to 6V supply, which is perfect for USB power (5V). It requires 10 uF input and output capacitors and a 10 uH inductor. BoM (Bill of Materials) Assembly This is the board that I received from JLCPCB. I've also ordered many boards from OshPark. The P2 has a lot of pins and it's a pain to apply solder paste by hand, though it can be done. Using a solder stencil will make your life much easier. I ordered my stencil with my board from JLCPCB, but if you order a board from OshPark you can get the stencil separately from Osh Stencils. I used a single stencil for the P2 first battery board and P2 first USB board, since the battery board only adds additional components; this is why you can see green (solder mask) through some of the stencil holes. Since I used a 5 mil stainless steel stencil I frequently end up with too much solder paste on some of the tight pins. It's a good idea to clean this up; I use a small dental-style scraper tool for this._5<< I prefer to use low-temperature lead-free solder paste, in this case Sn42/Bi57/Ag1 with a melting point 137°C (278°F). This board uses 0603-sized components that are easily placed by hand. I use these two tools, but some prefer tweezers over forceps. This is my placement guide for the components: And the board with most of the components (except the P2) roughly placed. Reflow I used an inexpensive T962 reflow oven to build this board. It's not the most accurate and there are some hot and cold zones, but works fine here. . Testing You may want to first test the board by connecting a bench power supply to VUSB and GND at 5V. This is handy because you can set a reasonable current limit and see if the board is drawing current. This is optional, however, and you can always just throw caution to the wind and plug it into USB. If all goes well, the status LED should light up white and then go to blinking dark blue (listening mode). Try getting information about the board: particle identify And set Wi-Fi credentials so you can connect to the cloud: particle serial wifi Try entering other modes like DFU mode (blinking yellow). Try listing DFU devices and see if it shows up. dfu-util -l Celebrate making your first working P2 base board! 2022-05-02 (v2) - Expanded the tRestrict and bRestrict around the antenna area. For best results remove as much copper GND plane from this area as well, and avoid GND loops around the antenna if at all possible. 2021-05-04 - Initial version
https://docs.particle.io/hardware/wi-fi/p2-first-board/
2022-06-25T08:26:21
CC-MAIN-2022-27
1656103034877.9
[array(['/assets/images/p2-first-board/p2-custom.png', 'Board Image'], dtype=object) array(['/assets/images/p2-first-board/schematic.png', 'Schematic'], dtype=object) array(['/assets/images/p2-first-board/board-layout.png', 'Board Layout'], dtype=object) array(['/assets/images/p2-first-board/bare-board.jpeg', 'Board'], dtype=object) array(['/assets/images/p2-first-board/stencil.jpeg', 'Stencil'], dtype=object) array(['/assets/images/som-first-board/microscope.jpg', 'Microscope'], dtype=object) array(['/assets/images/som-first-board/solder-paste.jpg', 'Microscope'], dtype=object) array(['/assets/images/som-first-board/tools.jpg', 'Tools'], dtype=object) array(['/assets/images/p2-first-board/assembly.png', 'Assembly'], dtype=object) array(['/assets/images/p2-first-board/paste.jpeg', 'Most components placed'], dtype=object) array(['/assets/images/som-first-board/reflow.jpg', 'Reflow Oven'], dtype=object) ]
docs.particle.io
You can use Veracode for VS Code to scan either a single file or all files in a folder. Before you can scan C# projects, you must have the C# for Visual Studio Code extension installed. You can download this extension from the Visual Studio Code marketplace. The C# for Visual Studio Code extension enables C# compilation for Visual Studio Code. Veracode for VS Code uses the extension to generate binaries for Greenlight scans.
https://docs.veracode.com/r/c_scanning_with_vs_code
2022-06-25T08:58:38
CC-MAIN-2022-27
1656103034877.9
[]
docs.veracode.com
community.general.keyring lookup – grab secrets from the OS keyring.keyring. Synopsis Allows you to access data stored in the OS provided keyring/keychain. Requirements The below requirements are needed on the local controller node that executes this lookup. keyring (python library) Examples - name: output secrets to screen (BAD IDEA) ansible.builtin.debug: msg: "Password: {{item}}" with_community.general.keyring: - 'servicename username' - name: access mysql with password from keyring mysql_db: login_password={{lookup('community.general.keyring','mysql joe')}} login_user=joe Return Values Common return values are documented here, the following are the fields unique to this lookup: Collection links Issue Tracker Repository (Sources) Submit a bug report Request a feature Communication
https://docs.ansible.com/ansible/latest/collections/community/general/keyring_lookup.html
2022-06-25T07:02:57
CC-MAIN-2022-27
1656103034877.9
[]
docs.ansible.com
Dimension hierarchies provides full support for dimensional hierarchy modelling at dataset level, enabling smooth natural transition between granularity levels of data. Hierarch. Hierarchies may only be defined through columns that are classified as dimensions. If necessary, change the column attribute from Measure to Dimension. Alternatively, clone a column, change it to a dimension, and only then use it in defining a dimensional hierarchy. Just like with column names, the name of the dimensional hierarchy must be unique to the dataset. Columns that define a dimension hierarchy are retained as a reference, not a copy. Therefore, changes to the basic definition of the column (such as name, type, calculation) propagate into the dimensional hierarchy. To build out dimensional hierarchies, use 'drag and drop' action to add dimension columns, and to prioritize them. We recommend that you use the delete function to permanently remove items from a hierarchy. You can also move elements from one hierarchy to another; note that this action removes the dimension from the source hierarchy and adds it to the target hierarchy.
https://docs.cloudera.com/data-visualization/7/advanced-analytics-concepts/topics/viz-dimension-hierarchy.html
2022-06-25T07:03:55
CC-MAIN-2022-27
1656103034877.9
[]
docs.cloudera.com
.0 Description This method iterates through all feature flags and for each feature flag invokes Is Feature Enabled. If a feature is enabled, this method adds the feature’s key to the return list. Parameters The table below lists the required and optional parameters in PHP. 4 months ago
https://docs.developers.optimizely.com/experimentation/v3.1.0-full-stack/docs/get-enabled-features-php
2022-06-25T07:42:29
CC-MAIN-2022-27
1656103034877.9
[]
docs.developers.optimizely.com
The quickstarts provide step by step instructions for various common Duende IdentityServer scenarios. They start with the absolute basics and become more complex - it is recommended you do them in order. Every quickstart has a reference solution - you can find the code in the samples folder. The first thing you should do is install our templates: dotnet new --install Duende.IdentityServer.Templates They will be used as a starting point for the various tutorials.
https://docs.duendesoftware.com/identityserver/v6/quickstarts/0_overview/
2022-06-25T07:57:38
CC-MAIN-2022-27
1656103034877.9
[]
docs.duendesoftware.com
ironic.common.states module¶ Mapping of bare metal node states. Setting the node power_state is handled by the conductor’s power synchronization thread. Based on the power state retrieved from the driver for the node, the state is set to POWER_ON or POWER_OFF, accordingly. Should this fail, the power_state value is left unchanged, and the node is placed into maintenance mode. The power_state can also be set manually via the API. A failure to change the state leaves the current state unchanged. The node is NOT placed into maintenance mode in this case. - ironic.common.states.ACTIVE = 'active'¶ Node is successfully deployed and associated with an instance. - ironic.common.states.ADOPTFAIL = 'adopt failed'¶ Node failed to complete the adoption process. This state is the resulting state of a node that failed to complete adoption, potentially due to invalid or incompatible information being defined for the node. - ironic.common.states.ADOPTING = 'adopting'¶ Node is being adopted. This provision state is intended for use to move a node from MANAGEABLE to ACTIVE state to permit designation of nodes as being “managed” by Ironic, however “deployed” previously by external means. - ironic.common.states.AVAILABLE = 'available'¶ Node is available for use and scheduling. This state is replacing the NOSTATE state used prior to Kilo. - ironic.common.states.CLEANFAIL = 'clean failed'¶ Node failed cleaning. This requires operator intervention to resolve. - ironic.common.states.CLEANING = 'cleaning'¶ Node is being automatically cleaned to prepare it for provisioning. - ironic.common.states.CLEANWAIT = 'clean wait'¶ Node is waiting for a clean step to be finished. This will be the node’s provision_state while the node is waiting for the driver to finish a cleaning step. - ironic.common.states.DELETED = 'deleted'¶ Node tear down was successful. In Juno, target_provision_state was set to this value during node tear down. In Kilo, this will be a transitory value of provision_state, and never represented in target_provision_state. - ironic.common.states.DELETE_ALLOWED_STATES = ('manageable', 'enroll', 'adopt failed')¶ States in which node deletion is allowed. - ironic.common.states.DEPLOY = 'deploy'¶ Node is successfully deployed and associated with an instance. This is an alias for ACTIVE. - ironic.common.states.DEPLOYDONE = 'deploy complete'¶ Node was successfully deployed. This is mainly a target provision state used during deployment. A successfully deployed node should go to ACTIVE status. - ironic.common.states.DEPLOYING = 'deploying'¶ Node is ready to receive a deploy request, or is currently being deployed. A node will have its provision_state set to DEPLOYING briefly before it receives its initial deploy request. It will also move to this state from DEPLOYWAIT after the callback is triggered and deployment is continued (disk partitioning and image copying). - ironic.common.states.DEPLOYWAIT = 'wait call-back'¶ Node is waiting to be deployed. This will be the node provision_state while the node is waiting for the driver to finish deployment. - ironic.common.states.ENROLL = 'enroll'¶ Node is enrolled. This state indicates that Ironic is aware of a node, but is not managing it. - ironic.common.states.ERROR = 'error'¶ An error occurred during node processing. The last_error attribute of the node details should contain an error message. - ironic.common.states.FASTTRACK_LOOKUP_ALLOWED_STATES = frozenset({'available', 'clean wait', 'cleaning', 'deploying', 'enroll', 'inspect wait', 'inspecting', 'manageable', 'rescue wait', 'rescuing', 'wait call-back'})¶ States where API lookups are permitted with fast track enabled. - ironic.common.states.INSPECTING = 'inspecting'¶ Node is under inspection. This is the provision state used when inspection is started. A successfully inspected node shall transition to MANAGEABLE state. For asynchronous inspection, node shall transition to INSPECTWAIT state. - ironic.common.states.INSPECTWAIT = 'inspect wait'¶ Node is under inspection. This is the provision state used when an asynchronous inspection is in progress. A successfully inspected node shall transition to MANAGEABLE state. - ironic.common.states.LOOKUP_ALLOWED_STATES = frozenset({'clean wait', 'cleaning', 'deploying', 'inspect wait', 'inspecting', 'rescue wait', 'rescuing', 'wait call-back'})¶ States when API lookups are normally allowed for nodes. - ironic.common.states.MANAGEABLE = 'manageable'¶ Node is in a manageable state. This state indicates that Ironic has verified, at least once, that it had sufficient information to manage the hardware. While in this state, the node is not available for provisioning (it must be in the AVAILABLE state for that). - ironic.common.states.NOSTATE = None¶ No state information. This state is used with power_state to represent a lack of knowledge of power state, and in target_*_state fields when there is no target. - ironic.common.states.REBUILD = 'rebuild'¶ Node is to be rebuilt. This is not used as a state, but rather as a “verb” when changing the node’s provision_state via the REST API. - ironic.common.states.RESCUEWAIT = 'rescue wait'¶ Node is waiting on an external callback. This will be the node provision_state while the node is waiting for the driver to finish rescuing the node. - ironic.common.states.STABLE_STATES = ('enroll', 'manageable', 'available', 'active', 'error', 'rescue')¶ States that will not transition unless receiving a request. - ironic.common.states.STUCK_STATES_TREATED_AS_FAIL = ('deploying', 'cleaning', 'verifying', 'inspecting', 'adopting', 'rescuing', 'unrescuing', 'deleting')¶ States that cannot be resumed once a conductor dies. If a node gets stuck with one of these states for some reason (eg. conductor goes down when executing task), node will be moved to fail state. - ironic.common.states.UNDEPLOY = 'undeploy'¶ Node tear down process has started. This is an alias for DELETED. - ironic.common.states.UNRESCUING = 'unrescuing'¶ Node is being restored from rescue mode (to active state). - ironic.common.states.UNSTABLE_STATES = ('deploying', 'wait call-back', 'cleaning', 'clean wait', 'verifying', 'deleting', 'inspecting', 'inspect wait', 'adopting', 'rescuing', 'rescue wait', 'unrescuing')¶ States that can be changed without external request. - ironic.common.states.UPDATE_ALLOWED_STATES = ('deploy failed', 'inspecting', 'inspect failed', 'inspect wait', 'clean failed', 'error', 'verifying', 'adopt failed', 'rescue failed', 'unrescue failed')¶ Transitional states in which we allow updating a node. - ironic.common.states.VERBS = {'abort': 'abort', 'active': 'deploy', 'adopt': 'adopt', 'clean': 'clean', 'deleted': 'delete', 'deploy': 'deploy', 'inspect': 'inspect', 'manage': 'manage', 'provide': 'provide', 'rescue': 'rescue', 'undeploy': 'delete', 'unrescue': 'unrescue'}¶ Mapping of state-changing events that are PUT to the REST API - This is a mapping of target states which are PUT to the API, eg, PUT /v1/node/states/provision {‘target’: ‘active’} - The dict format is: {target string used by the API: internal verb} This provides a reference set of supported actions, and in the future may be used to support renaming these actions.
https://docs.openstack.org/ironic/latest/contributor/api/ironic.common.states.html
2022-06-25T08:06:50
CC-MAIN-2022-27
1656103034877.9
[]
docs.openstack.org
In the final step, you add a new Reconfigurable Module to the Shifter VSM. In the create_prom.tcl script, you can see that two black box modules have already been generated. These represent two new RMs that may have been created after the static design was deployed to the field. You modify the DFX Controller settings to access one of these RMs by assigning the size, address, properties and trigger conditions. - Shut down the Shift VSM so it can be modified. dfxc_shutdown_vsm vs_shift - Check the status of the first three RM IDs to see their register bank assignments. dfxc_show_rm_configuration vs_shift 0 dfxc_show_rm_configuration vs_shift 1 dfxc_show_rm_configuration vs_shift 2 Currently, RM ID 2 is not assigned to any partial bitstreams. This is the behavior as requested when the initial trigger mapping was done during core customization. - When the MCS file is created for the prom, it adds additional blanking RMs that are already loaded into the BPI flash. Use this sequence of commands to reassign the trigger mapping for slot 2 to point to the blanking Reconfigurable Module for vs_shift. dfxc_write_register vs_shift_rm_control2 0 This defines the settings for the RM_CONTROL register for slot 2. No shutdown, startup, or reset are required. Note how for the other two slots, the differing reset durations lead to different control values. dfxc_write_register vs_shift_rm_bs_index2 327684 This assigns a new bitstream reference for this RM ID. dfxc_write_register vs_shift_trigger2 2 This assigns the trigger mapping such that trigger index 2 retrieves RM 2. The RM_BS_INDEX register within the DFX Controller is 32 bits but is broken into two fields. UltraScale devices require clearing and partial bitstreams. These bitstreams are identified separately with unique IDs, but referenced together in this field. This value of 327684 converts to 0000000000000101_0000000000000100in binary. Or more simply, ID 5 for the upper 16 bits for the CLEAR_BS_INDEX and ID 4 for the lower 16 bits for the BS_INDEX. This assignment sets the clearing and partial bitstream identifiers at the same time. dfxc_show_rm_configuration vs_shift 2 This shows the current state of RM ID 2. Note the changes from the prior call to this command. - Complete the RM ID 2 customization by setting the bitstream details. dfxc_write_register vs_shift_bs_size4 375996 dfxc_write_register vs_shift_bs_address4 12935168 dfxc_write_register vs_shift_bs_size5 26036 dfxc_write_register vs_shift_bs_address5 13312000 - Restart the VSM and then issue trigger events to it using software, as there is no pushbutton assigned for slot 2. dfxc_restart_vsm_no_status vs_shift dfxc_send_sw_trigger vs_shift 2 Switch between values of 0,1, and 2 to reload different partial bitstreams. The blanking bitstream in slot 2 removes the shifter function, so no activity on the LEDs is seen. Note that this same sequence of events could not be performed for the Count VSM as it is currently configured, even knowing that the PROM image has a Count black box partial bitstream sitting at address 13338624 with a size of 274104. During DFX Controller customization, this VSM was selected to have only 2 RMs allocated, so expansion is not permitted.
https://docs.xilinx.com/r/2020.2-English/ug947-vivado-partial-reconfiguration-tutorial/Step-7-Modifying-the-DFX-Controller-in-the-FPGA
2022-06-25T07:14:37
CC-MAIN-2022-27
1656103034877.9
[]
docs.xilinx.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Remove-WAF2WebACL-Id <String>-LockToken <String>-Name <String>-Scope <Scope>-Select <String>-PassThru <SwitchParameter>-Force <SwitchParameter> ManagedByFirewallManageris false in the specified WebACL. Before deleting any web ACL, first disassociate it from all resources. ListDistributionsByWebACLId. For information, see ListDistributionsByWebACLId. UpdateDistribution. For information, see UpdateDistribution.
https://docs.aws.amazon.com/powershell/latest/reference/items/Remove-WAF2WebACL.html
2022-06-25T09:07:39
CC-MAIN-2022-27
1656103034877.9
[]
docs.aws.amazon.com
Setting up a sandbox account The Clover open platform provides tools for you to design and build solutions for Clover merchants. The Clover sandbox environment is a testbed where you can experiment with the platform capabilities. You can create test merchant accounts to manage merchant data and also to install and test your solutions for these test accounts. You can also create test merchants in other regions where you want to make your app available. IMPORTANT If you process credit cards, we highly recommend you secure your account with two-factor authentication to protect merchants and their data. NOTE Creating, updating, and deleting apps in the sandbox environment won't affect your published apps installed by Clover merchants using Clover's App Market. Solutions used by Clover merchants are built in the Clover production environment. To learn more about the different Clover developer environments and related accounts, see Developer Accounts. Create a sandbox developer account To create your sandbox developer account: - On the sandbox account creation page, enter your email address. - In the confirmation email you receive, follow the instructions to verify your email address. You are redirected to the Developer Dashboard to complete your account information. - Fill in the following information about your account: - Full Name: Your identification information in the sandbox environment - Public Developer Name: Either your developer name or company name. The developer name is displayed to users when viewing your app on the Clover App Market. - Create your Test Merchant: Your first test merchant. You can edit this information at a later point and also create more test merchants in the Developer Dashboard. Use the following table to complete the test merchant information. IMPORTANT If you are creating a test merchant for any region besides the US, be sure to select the correct currency for testing your app. Some regions, such as Canada, allow processing in more than one currency, and this setting cannot be changed once the test merchant is created. - After you have entered your account information, select Create Account. You now see the Developer Dashboard for your account in the sandbox environment. Sample inventory We’ve created a sample inventory file that will help you get started with your test merchant. - After you log on to your sandbox account, select your test merchant from the drop-down list in the header. - Select the Inventory app and import the sample inventory file. For more information, refer to Working with inventory. Clover platform status updates Sign up on Clover's Status page to stay informed about maintenance windows. Using this information you can plan your work so that you don't attempt to connect to Clover's resources when they will be less responsive than usual or down for maintenance. The URLs for all sandbox services now have dynamic IP addresses. Any developers using static IP addresses for connecting to sandbox services must support the URLs instead. Learn more about Clover environments. Updated 3 months ago
https://docs.clover.com/docs/setup-clover-sandbox-account
2022-06-25T07:13:32
CC-MAIN-2022-27
1656103034877.9
[]
docs.clover.com
Shared secrets is by far the most common technique for authenticating clients. From a security point of view they have some shortcomings The following snippet creates a shared secret. var secret = new Secret("good_high_entropy_secret".Sha256()); By default it is assumed that every shared secret is hashed either using SHA256 or SHA512. If you load from a data store, your IdentityServer would store the hashed version only, whereas the client needs access to the plain text version. You can either send the client id/secret combination as part of the POST body:: POST /connect/token Content-type: application/x-www-form-urlencoded client_id=client& client_secret=secret& grant_type=authorization_code& code=hdh922& redirect_uri= ..or as a basic authentication header:: POST /connect/token Content-type: application/x-www-form-urlencoded Authorization: Basic xxxxx client_id=client1& client_secret=secret& grant_type=authorization_code& code=hdh922& redirect_uri= You can use the IdentityModel client library to programmatically interact with the protocol endpoint from .NET code. using IdentityModel.Client; var client = new HttpClient(); var response = await client.RequestAuthorizationCodeTokenAsync(new AuthorizationCodeTokenRequest { Address = TokenEndpoint, ClientId = "client", ClientSecret = "secret", Code = "...", CodeVerifier = "...", RedirectUri = "" });
https://docs.duendesoftware.com/identityserver/v6/tokens/authentication/shared_secret/
2022-06-25T07:38:26
CC-MAIN-2022-27
1656103034877.9
[]
docs.duendesoftware.com
Create a Patch¶ Quick links: So you want to fix a bug or add a new feature to TYPO3? Great! Tip If you should encounter any problems or have questions, talk to us on in the #typo3-cms-coredev channel (see Slack). Step by Step Walkthrough¶ You should have a cloned Git repository with a working TYPO3 installation as described in setup. Especially the Git setup is required. Create an Issue on Forge More information: Report an Issue Every patch must have a matching issue on Forge, so create an issue now or submit a patch for an existing issue. Make your changes to the code, add documentation, tests This part is pretty straightforward. But be warned, there are still a few dark places deep inside the TYPO3 core dating back to the medieval times of PHP4. Yes, TYPO3 has been around for quite some time now. And there is ancient code we didn’t have to touch yet because it just works. Make sure to look at How to deprecate classes, methods, arguments and hooks in the TYPO3 core in the Appendix for information about to deprecate things if you need to make changes to the public API. For new features, breaking changes and deprecations, it is necessary to add information to the changelog. If you change SCSS, JavaScript or TypeScript files, you can build locally. Add Unit Tests or Functional Tests for new functionality, refine existing tests if necessary. Tests are important because they ensure that TYPO3 will behave consistently now and in the future. See Testing the core in TYPO3 Explained for more information about writing and running tests. Commit your changes Please make sure that you read the Commit Message rules for TYPO3 CMS in the Appendix. Your code will not be merged if it does not follow the commit message rules. Important The section Commit Message rules for TYPO3 CMS is a must-read. Read it. Follow it. For a bugfix, your commit message may look something like this: [BUGFIX] Subject line of max 52 chars Some descriptions with line length of max. 72 characters Resolves: #12346 Releases: main, 10.4 Only create one commit. Do not create a branch. Work on main. The commit-msg hook will do some sanity checks and add a line starting with Change-Id:. If you have activated the pre-commit hook it will loudly complain if something does not conform to the coding guidelines. In that case, use the runTests.sh script to to fix CGL issues. After you have created your commit, you can still make changes by amending to your commit: Tip Keep in mind that you can commit with –amend as often as you want, just make sure you keep the Change-Id:line intact. Push to Gerrit To submit the patch to Gerrit, issue the following command: If you have setup the default as described in Setting up Your Remote it is sufficient to use: In case you want to push a “Work in progress”, check out: Workflow - work in progress. If Gerrit accepts your push, it responds with the following messages: remote: New Changes: remote:<gerrit-id> remote: To ssh://<username>@review.typo3.org:29418/Packages/TYPO3.CMS.git * [new branch] HEAD -> refs/for/<release-branch> If you see an error, check out the Git Troubleshooting section. You can visit the link to to see your patch in Gerrit. If the automatically starting pre-merge build fails due to an error on Bamboo which isn’t caused by your patch (e.g. time out) you can restart it on Intercept. Advanced users / core team only: See cheat sheet: other branches for pushing to other branches. Optional: Advertise review on Slack channel Once your push to Gerrit goes through, you will receive a URL for your new change. If you are on Slack you can now advertise your new change in the #typo3-cms-coredev channel. You can get a preformatted line of your change to post in the channel by clicking the copy button next to the title in Gerrit: This is not something, you will do for every review. As a first contributor it is recommended to mention that you are new to the process. Now, it’s time to sit back and await feedback on your changes. The review team process dozens of requests each day, so expect a succinct response that is short and to the point. You will get notified by email, if there is activity on your patch in Gerrit (e.g. votes, comments, new patchsets, merge etc.). It is not unusual for a patch to get comments requesting changes. If that happens, please respond in a timely fashion and improve your review. If things are unclear, ask in the #typo3-cms-coredev channel on. Tip Look at the page Aliases & Git Aliases for some sample aliases which might help to simplify your workflow in the future. Helpful links¶ Next Steps¶ You will find some more information about the review process in the chapter Handle and Improve a Patch (Gerrit). The following pages are especially relevant for new contributors: - Tips for new contributors - Introduction to Gerrit describes the review tool Gerrit. - Find a review on Gerrit is helpful if you don’t know how to find your patch on Gerrit. - Gerrit works with up- and downvoting patches. A patch must get a specific number of upvotes before it can be merged. Code Review gives an introduction to how this works. - When you make additional changes to your patch, make sure you do not add another commit. Append to your original commit instead as described in Upload a new Patch Set. - Before starting to work on a new, unrelated patch you need to run the Cleanup tasks.
https://docs.typo3.org/m/typo3/guide-contributionworkflow/main/en-us/BugfixingAZ/Index.html
2022-06-25T07:41:41
CC-MAIN-2022-27
1656103034877.9
[]
docs.typo3.org
The Clocking Wizard is designed for users with any level of experience. The wizard automates the process of creating a clocking network by guiding you to the proper primitive configuration and allowing advanced users to override and manually set any attribute. Although the Clocking Wizard provides a fully verified clocking network, understanding the Xilinx® clocking primitives will aid you in making design trade-off decisions.
https://docs.xilinx.com/r/en-US/pg321-clocking-wizard/Recommended-Design-Experience
2022-06-25T07:48:33
CC-MAIN-2022-27
1656103034877.9
[]
docs.xilinx.com