Dataset columns: content (string, 0 to 557k chars), url (string, 16 to 1.78k chars), timestamp (timestamp[ms]), dump (string, 9 to 15 chars), segment (string, 13 to 17 chars), image_urls (string, 2 to 55.5k chars), netloc (string, 7 to 77 chars)
Provide' and 'botherder' groups so that those can be viewed separately. The rospy documentation similarly has a 'client-api' group that pulls together APIs for a Client API page. Every ROS name in your code must be documented. Names are very important in ROS because they are the API to nodes and services. They are also capable of being remapped on the command-line, so it is VERY IMPORTANT THAT YOU LIST NAMES AS THEY APPEAR IN THE CODE. It is also important that you write your code so that the names can be easily remapped.
http://docs.ros.org/en/jade/api/microstrain_3dm_gx5_45/html/api/index.html
2022-01-16T22:58:48
CC-MAIN-2022-05
1642320300244.42
[]
docs.ros.org
// Prints how the text is handled when the rendered contents
// are too large to fit in the given area.
function OnGUI() {
    Debug.Log(GUI.skin.button.clipping);
}

using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour {
    void OnGUI() {
        Debug.Log(GUI.skin.button.clipping);
    }
}
https://docs.unity3d.com/2017.3/Documentation/ScriptReference/GUIStyle-clipping.html
2022-01-16T21:38:55
CC-MAIN-2022-05
1642320300244.42
[]
docs.unity3d.com
ProgressPanel.AppearanceCaption Property
Contains appearance settings used to customize the appearance of the control’s ProgressPanel.Caption.
Namespace: DevExpress.XtraWaitForm
Assembly: DevExpress.XtraEditors.v19.1.dll
Declaration (C#):
[DXCategory("Appearance")]
public AppearanceObject AppearanceCaption { get; set; }
Declaration (VB):
<DXCategory("Appearance")>
Public Property AppearanceCaption As AppearanceObject
Property Value: AppearanceObject
Remarks
You can use the AppearanceCaption property to change the font settings and foreground color of the control’s ProgressPanel.Caption.
https://docs.devexpress.com/WindowsForms/DevExpress.XtraWaitForm.ProgressPanel.AppearanceCaption?v=19.1
2022-01-16T21:27:22
CC-MAIN-2022-05
1642320300244.42
[]
docs.devexpress.com
Containers Build Interface Overview Container assessment results provided by the CI/CD Jenkins plugin are shown in a dedicated space on the Builds tab of the “Containers” screen. Still need to install the plugin? See our Containers CI/CD Plugin page for instructions. View all builds On your “Containers” screen, click the Builds tab. A table displays all build jobs that have been configured and run with the container assessment plugin. Relevant data includes the latest image assessed, build system, build date, compliance status, and the number of vulnerabilities found. Click any of the compliance status tabs to filter the “Jobs” table to display only those corresponding records. View job details Click a link in the “Build Job” column to inspect that particular job in detail. The “Build Job Details” screen has the following features: - The results of the latest build will be shown with compliance statuses for each configured rule. - The “Build History” graph details several assessment data trends for your job. Use the provided dropdown to switch between available graphs. - The “Assessed Job Builds” table records all builds that have been run for your job. Click any of these rows to inspect the compliance results of individual rules. Build Compliance in Image Details The latest build results are also viewable from the “Image Details” screen of applicable container images. - On your Images tab, click the ID of any image denoted with a “B” symbol to inspect it. - Images that have been assessed with the plugin will feature an additional Build Compliance tab, along with the latest overall compliance result. Click this tab to inspect individual policy rules and their corresponding build actions. - If desired, click the name link of the build job to navigate directly to the relevant “Build Job Details” screen for this container image.
https://docs.rapid7.com/insightvm/containers-build-interface/
2022-01-16T21:58:59
CC-MAIN-2022-05
1642320300244.42
[]
docs.rapid7.com
Organizing Your Work In order to start your project on the right track, it is a good idea to organize your files and directories properly. In this section, you will learn tips to help organize your work: Naming Your Files To make it easier to remember what each project, layer, drawing or colour palette is, name your file with a relevant name representing the content of your project. Make sure to give it a name which easily identifies the project, such as JumpingFrog. That way, when you have several projects, you can locate specific ones more easily. We recommend that you name your scene using alphanumeric characters: a to z, 0 to 9 and underscore ( _ ). Avoid spaces in the file names. Try to provide the maximum amount of information for future identification, which helps ensure that you do not mix up elements and lose information. Here are some typical examples of project names: Saving Files and Structuring a Project When you create your project, you must save it somewhere. It is a good idea to create a folder on your hard drive to contain all the elements of your project. Keeping everything together in one place is always useful for retrieving scenes, sounds, artwork, or reference material. There are many ways to structure your project folder. Here is an example you could use to start. Before you save any elements in your production, you should decide on a project name, which should reflect the content or title. Use the name to create your main folder on your local hard drive, where you will save all your production elements. For example, the main directory could be called SuperTurtleAdventure_Project. Once your main folder is created, you should create a subfolder to contain the different categories of material you will store in it. For example, you could have categories such as:
https://docs.toonboom.com/help/toon-boom-studio-81/Content/TBS/User_Guide/002_Starting/005_H1_Organizing_Work.html
2022-01-16T23:04:15
CC-MAIN-2022-05
1642320300244.42
[array(['../../../Resources/Images/TBS/User_Guide/TBS-ProjectStructure.png', None], dtype=object) ]
docs.toonboom.com
Changelog¶ v6.0.0¶ Breaking Changes - Drop support of Python 3.5 because a required dependency (llvmlite) does not support it anymore. New Features - Setup consistent way for logging. (Logging) - Added downloader ( audiomate.corpus.io.CommonVoiceDownloader) for the Common Voice Corpora. - Add existence checks for reader ( audiomate.corpus.io.CorpusReader) to see if folder exists. - Add existence checks and a option for forcing redownload for downloader ( audiomate.corpus.io.CorpusDownloader). v5.2.0¶ New Features - Added reader ( audiomate.corpus.io.LibriSpeechReader) and downloader ( audiomate.corpus.io.LibriSpeechDownloader) for the LibriSpeech Dataset. v5.1.0¶ New Features - Added Downloader for SWC Corpus (( audiomate.corpus.io.SWCDownloader). - Updated SWC-Reader ( audiomate.corpus.io.SWCReader) with an own implementation, so no manual preprocessing is needed anymore. - Added conversion class ( audiomate.corpus.conversion.WavAudioFileConverter) to convert all files (or files that do not meet the requirements) of a corpus. - Added writer ( audiomate.corpus.io.NvidiaJasperWriter) for NVIDIA Jasper. - Create a consistent way to define invalid utterances of a dataset. Invalid utterance ids are defined in a json-file (e.g. audiomate/corpus/io/data/tuda/invalid_utterances.json). Those are loaded automatically in the base-reader and can be accessed in the concrete implementation. v5.0.0¶ Breaking Changes - Changed audiomate.corpus.validation.InvalidItemsResultto use it not only for Utterances, but also for Tracks for example. - Refactoring and addition of splitting functions in the audiomate.corpus.subset.Splitter. New Features - Added audiomate.corpus.validation.TrackReadValidatorto check for corrupt audio tracks/files. - Added reader ( audiomate.corpus.io.FluentSpeechReader) for the Fluent Speech Commands Dataset. - Added functions to check for contained tracks and issuers ( audiomate.corpus.CorpusView.contains_track(), audiomate.corpus.CorpusView.contains_issuer()). - Multiple options for controlling the behavior of the audiomate.corpus.io.KaldiWriter. - Added writer ( audiomate.corpus.io.Wav2LetterWriter) for the wav2letter engine. - Added module with functions to read/write sclite trn files ( audiomate.formats.trn). Fixes - Improved performance of Tuda-Reader ( audiomate.corpus.io.TudaReader). - Added wrapper for the `audioread.audio_open`function ( audiomate.utils.audioread) to cache available backends. This speeds up audioopen operations a lot. - Performance improvements, especially for importing utterances, merging, subviews. v4.0.1¶ Fixes - Fix audiomate.corpus.io.CommonVoiceReaderto use correct file-extension of the audio files. v4.0.0¶ Breaking Changes - For utterances and labels -1was used for representing that the end is the same as the end of the parent utterance/track. In order to prevent -1checks in different methods/places float('inf')is now used. This makes it easier to implement stuff like label overlapping. audiomate.annotations.LabelListis now backed by an interval-tree instead of a simple list. Therefore the labels have no fixed order anymore. The interval-tree provides functionality for operations like merging, splitting, finding overlaps with much lower code complexity. - Removed module audiomate.annotations.label_cleaning, since those methods are available on audiomate.annotations.LabelListdirectly. New Features - Added reader ( audiomate.corpus.io.RouenReader) and downloader ( audiomate.corpus.io.RouenDownloader) for the LITIS Rouen Audio scene dataset. 
- Added downloader ( audiomate.corpus.io.AEDDownloader) for the Acoustic Event Dataset. - [#69] Method to get labels within range: audiomate.annotations.LabelList.labels_in_range(). - [#68] Add convenience method to create Label-List with list of label values: audiomate.annotations.LabelList.with_label_values(). - [#61] Added function to split utterances of a corpus into multiple utterances with a maximal duration: audiomate.corpus.CorpusView.split_utterances_to_max_time(). - Add functions to check for overlap between labels: audiomate.annotations.Label.do_overlap()and audiomate.annotations.Label.overlap_duration(). - Add function to merge equal labels that overlap within a label-list: audiomate.annotations.LabelList.merge_overlapping_labels(). - Added reader ( audiomate.corpus.io.AudioMNISTReader) and downloader ( audiomate.corpus.io.AudioMNISTDownloader) for the AudioMNIST dataset. Fixes v3.0.0¶ Breaking Changes - Moved label-encoding to its own module ( audiomate.encoding). It now provides the processing of full corpora and store it in containers. - Moved audiomate.feeding.PartitioningFeatureIteratorto the audiomate.feedingmodule. - Added audiomate.containers.AudioContainerto store audio tracks in a single file. All container classes are now in a separate module audiomate.containers. - Corpus now contains Tracks not Files anymore. This makes it possible to different kinds of audio sources. Audio from a file is now included using audiomate.tracks.FileTrack. New is the audiomate.tracks.ContainerTrack, which reads data stored in a container. - The audiomate.corpus.io.DefaultReaderand the audiomate.corpus.io.DefaultWriternow load and store tracks, that are stored in a container. - All functionality regarding labels was moved to its own module audiomate.annotations. - The class audiomate.tracks.Utterancewas moved to the tracks module. New Features - Introducing the audiomate.feedingmodule. It provides different tools for accessing container data. Via a audiomate.feeding.Datasetdata can be accessed by indices. With a audiomate.feeding.DataIteratorone can easily iterate over data, such as frames. - Added processing steps for computing Onset-Strength ( audiomate.processing.pipeline.OnsetStrength)) and Tempogram ( audiomate.processing.pipeline.Tempogram)). - Introduced audiomate.corpus.validationmodule, that is used to validate a corpus. - Added reader ( audiomate.corpus.io.SWCReader) for the SWC corpus. But it only works for the prepared corpus. - Added function ( audiomate.corpus.utils.label_cleaning.merge_consecutive_labels_with_same_values()) for merging consecutive labels with the same value - Added downloader ( audiomate.corpus.io.GtzanDownloader) for the GTZAN Music/Speech. - Added audiomate.corpus.assets.Label.tokenized()to get a list of tokens from a label. It basically splits the value and trims whitespace. - Added methods on audiomate.corpus.CorpusView, audiomate.corpus.assets.Utteranceand audiomate.corpus.assets.LabelListto get a set of occurring tokens. - Added audiomate.encoding.TokenOrdinalEncoderto encode labels of an utterance by mapping every token of the label to a number. - Create container base class ( audiomate.corpus.assets.Container), that can be used to store arbitrary data per utterance. The audiomate.corpus.assets.FeatureContaineris now an extension of the container, that provides functionality especially for features. - Added functions to split utterances and label-lists into multiple parts. 
( audiomate.corpus.assets.Utterance.split(), audiomate.corpus.assets.LabelList.split()) - Added audiomate.processing.pipeline.AddContextto add context to frames, using previous and subsequent frames. - Added reader ( audiomate.corpus.io.MailabsReader) and downloader ( audiomate.corpus.io.MailabsDownloader) for the M-AILABS Speech Dataset. Fixes v2.0.0¶ Breaking Changes - Update various readers to use the correct label-list identifiers as defined in Data Mapping. New Features - Added downloader ( audiomate.corpus.io.TatoebaDownloader) and reader ( audiomate.corpus.io.TatoebaReader) for the Tatoeba platform. - Added downloader ( audiomate.corpus.io.CommonVoiceDownloader) and reader ( audiomate.corpus.io.CommonVoiceReader) for the Common Voice Corpus. - Added processing steps audiomate.processing.pipeline.AvgPooland audiomate.processing.pipeline.VarPoolfor computing average and variance over a given number of sequential frames. - Added downloader ( audiomate.corpus.io.MusanDownloader) for the Musan Corpus. - Added constants for common label-list identifiers/keys in audiomate.corpus. v1.0.0¶ Breaking Changes - The (pre)processing module has moved to audiomate.processing. It now supports online processing in chunks. For this purpose a pipeline step can require context. The pipeline automatically buffers data, until enough frames are ready. New Features - Added downloader ( audiomate.corpus.io.FreeSpokenDigitDownloader) and reader ( audiomate.corpus.io.FreeSpokenDigitReader) for the Free-Spoken-Digit-Dataset.
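The changelog above references the reader/downloader classes in audiomate.corpus.io. As a rough illustration of how those pieces fit together, here is a minimal Python sketch; the target directory is a placeholder, and constructor or argument details may differ between the audiomate versions listed above:
from audiomate.corpus import io

# Download a corpus with one of the downloaders mentioned in the changelog (v5.2.0).
downloader = io.LibriSpeechDownloader()
downloader.download('/data/librispeech')   # placeholder target path

# Load it with the matching reader and inspect the result.
reader = io.LibriSpeechReader()
corpus = reader.load('/data/librispeech')
print(corpus.num_utterances)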
https://audiomate.readthedocs.io/en/latest/notes/changelog.html
2022-01-16T22:52:39
CC-MAIN-2022-05
1642320300244.42
[]
audiomate.readthedocs.io
Integrations Taptica is a global end-to-end mobile advertising platform that helps the world’s top brands reach their most valuable users. We offer data-focused marketing solutions driving execution and powerful brand insight. In order to enable our Audience integration with Taptica, you’ll need to obtain your Taptica API Key from your Taptica account manager. When forwarding audience data to Taptica, mParticle will send the following identifiers: Was this page helpful?
https://docs.mparticle.com/integrations/taptica/
2020-10-20T00:13:47
CC-MAIN-2020-45
1603107867463.6
[]
docs.mparticle.com
Accessing and Managing Logs
Users can access a list of available log files for the selected environment using the Environment Card. These files can be downloaded through the UI, either from the Overview page or from the Environments page. Regardless of where it is opened, the same dialog appears and allows an individual log file to be downloaded.
Logs through API
In addition to downloading logs through the UI, logs are available through the API and the Command Line Interface. For example, to download the log files for a specific environment, the command would be something along the lines of:
$ aio cloudmanager:download-logs --programId 5 1884 author aemerror
The following command allows the tailing of logs:
$ aio cloudmanager:tail-log --programId 5 1884 author aemerror
In order to obtain the environment Id (1884 in this case) and the available service or log name options, you can use:
$ aio cloudmanager:list-environments
Environment Id  Name                      Type   Description
1884            FoundationInternal_dev    dev    Foundation Internal Dev environment
1884            FoundationInternal_stage  stage  Foundation Internal STAGE environment
1884            FoundationInternal_prod   prod   Foundation Internal Prod environment
$ aio cloudmanager:list-available-log-options 1884
Environment Id  Service     Name
1884            author      aemerror
1884            author      aemrequest
1884            author      aemaccess
1884            publish     aemerror
1884            publish     aemrequest
1884            publish     aemaccess
1884            dispatcher  httpderror
1884            dispatcher  aemdispatcher
1884            dispatcher  httpdaccess
While log downloads are available through both the UI and API, log tailing is API/CLI-only.
Additional Resources
Refer to the following additional resources to learn more about the Cloud Manager API and Adobe I/O CLI:
https://docs.adobe.com/content/help/en/experience-manager-cloud-service/implementing/using-cloud-manager/manage-logs.html
2020-10-20T01:53:30
CC-MAIN-2020-45
1603107867463.6
[array(['/content/dam/help/experience-manager-cloud-service.en/help/implementing/cloud-manager/assets/manage-logs1.png', None], dtype=object) array(['/content/dam/help/experience-manager-cloud-service.en/help/implementing/cloud-manager/assets/download-logs.png', None], dtype=object) array(['/content/dam/help/experience-manager-cloud-service.en/help/implementing/cloud-manager/assets/manage-logs3.png', None], dtype=object) ]
docs.adobe.com
AWSSupport-TroubleshootRDP Description The AWSSupport-TroubleshootRDP automation document allows the user to check or modify common settings on the target instance which may impact Remote Desktop Protocol (RDP) connections, such as the RDP port, Network Layer Authentication (NLA) and Windows Firewall profiles. Optionally, changes can be applied offline by stopping and starting the instance, if the user explicitly allows for offline remediation. By default, the document reads and outputs the values of the settings. Changes to the RDP settings, RDP service and Windows Firewall profiles should be carefully reviewed before running this document. Run this Automation (console) Document type Automation Owner Amazon Platforms Windows Parameters Action Type: String Valid values: CheckAll | FixAll | Custom Default: Custom Description: (Optional) [Custom] Use the values from Firewall, RDPServiceStartupType, RDPServiceAction, RDPPortAction, NLASettingAction and RemoteConnections to manage the settings. [CheckAll] Read the values of the settings without changing them. [FixAll] Restore RDP default settings, and disable the Windows Firewall. AllowOffline Type: String Valid values: True | False Default: False Description: (Optional) Fix only - Set it to true if you allow an offline RDP remediation in case the online troubleshooting fails, or the provided instance is not a managed instance. Note: For the offline remediation, SSM Automation stops the instance, and creates an AMI before attempting any operations.. Firewall Type: String Valid values: Check | Disable Default: Check Description: (Optional) Check or disable the Windows firewall (all profiles). InstanceId Type: String Description: (Required) The ID of the instance to troubleshoot the RDP settings of. NLASettingAction Type: String Valid values: Check | Disable Default: Check Description: (Optional) Check or disable Network Layer Authentication (NLA). RDPPortAction Type: String Valid values: Check | Modify Default: Check Description: (Optional) Check the current port used for RDP connections, or modify the RDP port back to 3389 and restart the service. RDPServiceAction Type: String Valid values: Check | Start | Restart | Force-Restart Default: Check Description: (Optional) Check, start, restart, or force-restart the RDP service (TermService). RDPServiceStartupType Type: String Valid values: Check | Auto Default: Check Description: (Optional) Check or set the RDP service to automatically start when Windows boots. RemoteConnections Type: String Valid values: Check | Enable Default: Check Description: (Optional) An action to perform on the fDenyTSConnections setting: Check, Enable. S3BucketName Type: String Description: (Optional) Offline only - S3 bucket name in your account where you want to upload the troubleshooting logs. Make sure the bucket policy does not grant unnecessary read/write permissions to parties that do not need access to the collected logs. SubnetId Type: String Default: SelectedInstanceSubnet Description: (Optional) Offline only - The subnet ID for the EC2Rescue instance used to perform the offline troubleshooting. If no subnet ID is specified, AWS Systems Manager Automation will create a new VPC. IMPORTANT: The subnet must be in the same Availability Zone as InstanceId, and it must allow access to the SSM endpoints. Required IAM permissions The AutomationAssumeRole requires the following actions to successfully run the Automation document. 
It is recommended that the EC2 instance receiving the command has an IAM role with the AmazonSSMManagedInstanceCore Amazon managed policy attached. For the online remediation, the user must have at least ssm:DescribeInstanceInformation, ssm:ExecuteAutomation and ssm:SendCommand to run the automation and send the command to the instance, plus ssm:GetAutomationExecution to be able to read the automation output. For the offline remediation, the user must have at least ssm:DescribeInstanceInformation, ssm:ExecuteAutomation, ec2:DescribeInstances, plus ssm:GetAutomationExecution to be able to read the automation output. AWSSupport-TroubleshootRDP calls AWSSupport-ExecuteEC2Rescue to perform the offline remediation - please review the permissions for AWSSupport-ExecuteEC2Rescue to ensure you can run the automation successfully. Document Steps aws:assertAwsResourceProperty - Check if the instance is a Windows Server instance aws:assertAwsResourceProperty - Check if the instance is a managed instance (Online troubleshooting) If the instance is a managed instance, then: aws:assertAwsResourceProperty - Check the provided Action value (Online check) If the Action = CheckAll, then: aws:runPowerShellScript - Runs the PowerShell script to get the Windows Firewall profiles status. aws:executeAutomation - Calls AWSSupport-ManageWindowsService to get the RDP service status. aws:executeAutomation - Calls AWSSupport-ManageRDPSettings to get the RDP settings. (Online fix) If the Action = FixAll, then: aws:runPowerShellScript - Runs the PowerShell script to disable all Windows Firewall profiles. aws:executeAutomation - Calls AWSSupport-ManageWindowsService to start the RDP service. aws:executeAutomation - Calls AWSSupport-ManageRDPSettings to enable remote connections and disable NLA. (Online management) If the Action = Custom, then: aws:runPowerShellScript - Runs the PowerShell script to manage the Windows Firewall profiles. aws:executeAutomation - Calls AWSSupport-ManageWindowsService to manage the RDP service. aws:executeAutomation - Calls AWSSupport-ManageRDPSettings to manage the RDP settings. (Offline remediation) If the instance is not a managed instance then: aws:assertAwsResourceProperty - Assert AllowOffline = True aws:assertAwsResourceProperty - Assert Action = FixAll aws:assertAwsResourceProperty - Assert the value of SubnetId (Use the provided instance's subnet) If SubnetId is SELECTED_INSTANCE_SUBNET aws:executeAwsApi - Retrieve the current instance's subnet. aws:executeAutomation - Run AWSSupport-ExecuteEC2Rescue with provided instance's subnet. (Use the provided custom subnet) If SubnetId is not SELECTED_INSTANCE_SUBNET aws:executeAutomation - Run AWSSupport-ExecuteEC2Rescue with provided SubnetId value. Outputs manageFirewallProfiles.Output manageRDPServiceSettings.Output manageRDPSettings.Output checkFirewallProfiles.Output checkRDPServiceSettings.Output checkRDPSettings.Output disableFirewallProfiles.Output restoreDefaultRDPServiceSettings.Output restoreDefaultRDPSettings.Output troubleshootRDPOffline.Output troubleshootRDPOfflineWithSubnetId.Output
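Although the runbook above is typically launched from the Systems Manager console, the same execution can be started programmatically. The following is a hedged boto3 sketch; the instance ID is a placeholder, and the parameter values simply mirror the defaults documented above:
import boto3

ssm = boto3.client("ssm")

# Start the AWSSupport-TroubleshootRDP automation in read-only mode (Action=CheckAll).
response = ssm.start_automation_execution(
    DocumentName="AWSSupport-TroubleshootRDP",
    Parameters={
        "InstanceId": ["i-0123456789abcdef0"],  # placeholder instance ID
        "Action": ["CheckAll"],                 # read the settings without changing them
    },
)

# Poll the execution status; the outputs listed above appear on the finished execution.
execution_id = response["AutomationExecutionId"]
status = ssm.get_automation_execution(AutomationExecutionId=execution_id)
print(status["AutomationExecution"]["AutomationExecutionStatus"])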
https://docs.aws.amazon.com/systems-manager/latest/userguide/automation-awssupport-troubleshootrdp.html
2020-10-20T00:48:36
CC-MAIN-2020-45
1603107867463.6
[]
docs.aws.amazon.com
Overview
The Eligibility App is a powerful tool for hospitals and healthcare providers to identify details regarding a patient's coverage, improve their accounts receivable, and help patients understand their payment responsibilities. The app communicates with insurance companies to view the eligibility status of a patient's coverage and detailed benefit information.
Overview
- Part of your process - Insert our Eligibility module at any point(s) of your scheduling, appointment verification, and RCM processes.
- Detailed insured's profile - Gain deep, instant insights into various types of coverage, deductibles, out-of-pocket maximums, and other payer-specific policy data.
Additional Information:
- Built on Microsoft Power Platform, leveraging workflow automation, powerful reporting and visual dashboards.
- Data is FHIR aligned, offering ultimate interoperability.
- Integrated with Dynamics Health 365 for a full Practice Management solution.
https://docs.chorus.cloud/docs/Eligibility/EligibilityOverview
2020-10-20T00:41:52
CC-MAIN-2020-45
1603107867463.6
[]
docs.chorus.cloud
Configure GPU scheduling and isolation
You can configure GPU scheduling and isolation on your cluster. Currently only Nvidia GPUs are supported in YARN.
- The YARN NodeManager must be installed with the Nvidia drivers.
- In Cloudera Manager, select the YARN service.
- Click the Configuration tab.
- Select the NodeManager Default Group check-box.
- In the configuration tab, search for Enable GPU Usage and define the GPU devices that are managed by YARN in one of the following ways:
  - Use the default value, auto, for auto-detection of all GPU devices. In this case all GPU devices are managed by YARN.
  - Manually define the GPU devices that are managed by YARN. For more information about how to define these GPU devices manually, see Using GPU on YARN.
- Search for NodeManager GPU Detection Executable and define the location of nvidia-smi. By default, this property has no value, which means that YARN checks the following paths to find nvidia-smi:
  - /usr/bin
  - /bin
  - /usr/local/nvidia/bin
- Click Save, and then restart all the cluster components that require a restart.
If the NodeManager fails to start, the following error is displayed:
INFO gpu.GpuDiscoverer (GpuDiscoverer.java:initialize(240)) - Trying to discover GPU information ...
WARN gpu.GpuDiscoverer (GpuDiscoverer.java:initialize(247)) - Failed to discover GPU information from system, exception message: ExitCodeException exitCode=12: continue...
Fix the error by exporting LD_LIBRARY_PATH in yarn-env.sh using the following command:
export LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64:$LD_LIBRARY_PATH
https://docs.cloudera.com/runtime/7.2.2/yarn-allocate-resources/topics/yarn-configuring-gpu-scheduling-and-isolation.html
2020-10-19T23:35:26
CC-MAIN-2020-45
1603107867463.6
[]
docs.cloudera.com
Postie is direct mail for digital marketers. Manage and easily deploy your direct mail campaigns, build knowledge, and collect better results. In order to enable our integration with Postie, you’ll need your API Key, available on the API Setup page in Postie under Integrations. Postie will not accept data more than 24 hours old. mParticle forwards email addresses to Postie. mParticle will forward the following event types to Postie: Was this page helpful?
https://docs.mparticle.com/integrations/postie/
2020-10-20T00:15:02
CC-MAIN-2020-45
1603107867463.6
[]
docs.mparticle.com
Automatic geocoding
What is geocoding?
Geocoding is the computational process of transforming a physical address description to a location on the Earth's surface. Reverse geocoding, on the other hand, converts geographic coordinates to a description of a location, usually the name of a place or an addressable location. - Wikipedia.
Posterno has a built-in geocoder that automatically triggers when certain tasks are performed on your website.
Resources usage: Please note that geocoding makes use of your provider's data and is therefore tied to your account's limits. Make sure you keep an eye on your costs.
Data collected:
- Street number
- Street name
- City
- State
- Post code
All geocoded data is stored in your database within the post meta table with the key geocoded_data.
The "admin" geocoder: When a listing has its coordinates changed from within the admin panel, Posterno automatically queries the maps API of your chosen provider (Google Maps, etc.) and retrieves all the information related to the address of the coordinates. The geocoder triggers only when coordinates change.
Where is the data used? Posterno does not display this data publicly on the frontend. Currently the geocoder is useful only when displaying geocoded data within listings schemas.
https://docs.posterno.com/article/557-automatic-geocoding
2020-10-20T00:49:36
CC-MAIN-2020-45
1603107867463.6
[]
docs.posterno.com
Our general privacy policy and app-specific privacy policy, which you can find below, apply to Smart Attachments app. Please, review both prior to using the app. StiltSoft Smart Attachments JIRA); JIRA Project ID (as a hashed, anonymized value); JIRA Server ID (as a hashed, anonymized value for identification of unique JIRA instances) - Licensed User Tier in JIRA Project Type Ratio (ratio of JIRA project types which the Add-on is enabled for, applicable for JIRA 7 only) Bulk Count (count of attachments on which a bulk operation is performed) File Extensions (formats of loaded or edited file types; no file names are processed) Project Ratio (ratio of JIRA projects with the enabled Add-ons) Issue Types Ratio (ratio of categories with and without configured issue types within the same project) Access Restriction Ratio (ratio of categories with and without access restrictions within the same project) Add-on Status (Add-on status for the project when opening the related Add-on configuration section for the project). File Attaching Mechanism (how a file has been attached to the issue, for example, drag-n-drop, via the transition or Create Issue form) Bulk Operation Type (type of the used bulk operation) We also collect Anonymous Data about the following actions or events within Add-on: Category renaming Category removal Category editing Project configuration importing New document revision upload Attachment renaming Attachment to email sending Attachment to category distribution Attachment editing in MS Office - Usage of comments for discussing attachments - Operations on comments and comment threads The collection mode is on by default. Your JIRA administrator can opt out from the Anonymous Data collection. Why We Collect Anonymous Data Anonymous Data is used for: Analytical and statistical purposes; Understanding your interaction with the Add-On; Improvement and development of the Add-On or our other products. Cookies We use persistent cookies to recognize you when you use Add-on. Add-on Add-On Add-On, please, contact us at [email protected].
https://docs.stiltsoft.com/display/public/CATAT/Privacy+Policy
2020-10-20T00:44:06
CC-MAIN-2020-45
1603107867463.6
[]
docs.stiltsoft.com
ZendService\SlideShare
The ZendService\SlideShare component
In order to use the ZendService\SlideShare\SlideShare component, you must first set up a slideshare.net account. Once you have set up an account, you can begin using the ZendService\SlideShare\SlideShare component by creating a new instance of the ZendService\SlideShare\SlideShare object and providing these values as shown below.
The SlideShow object
All slide shows in the ZendService\SlideShare\SlideShare component are represented by this data class, which will be used frequently to browse or add new slide shows to or from the web service.
Retrieving a single slide show
The simplest usage of the ZendService\SlideShare\SlideShare component is the retrieval of a single slide show by the slide show ID provided by the slideshare.net application. This is done by calling the getSlideShow() method of a ZendService\SlideShare\SlideShare object and using the resulting ZendService\SlideShare\SlideShow object as shown.
Retrieving Groups of Slide Shows
https://zf2-docs.readthedocs.io/en/latest/modules/zendservice.slide-share.html
2020-10-20T00:25:40
CC-MAIN-2020-45
1603107867463.6
[]
zf2-docs.readthedocs.io
Encryption and Authentication using SSL
keytool -keystore kafka.server.keystore.jks -alias localhost -validity {validity} -genkey
You need to specify two parameters in the above command:
- keystore: the keystore file that stores the certificate. The keystore file contains the private key of the certificate; therefore, it needs to be kept safely.
keytool -keystore kafka.client.truststore.jks -alias CARoot -import -file ca-cert
keytool -keystore kafka.server.keystore.jks -alias CARoot -import -file ca-cert
- cert-signed: the signed certificate of the server.
https://docs.confluent.io/3.1.2/kafka/ssl.html
2020-10-20T00:16:05
CC-MAIN-2020-45
1603107867463.6
[]
docs.confluent.io
Challenge 4: Role of data¶ Artificial Intelligence (AI) techniques and tools are benefiting today from the enormous amount of personal and environmental data that is registered daily by IT systems. The quality and interoperability of this data are a determining factor for the possibility of applying new technologies. Among the main AI techniques that can be used to process such data, for example, is that of so-called supervised learning. In this case, the data must be “annotated” by humans who teach the machines how to interpret it. This operation is very onerous since it requires a conspicuous and complex amount of human work. In addition to the long time necessary to perform this annotation work, the discretion of the annotators could generate uneven datasets (i.e.: similar data annotated in a different way), weakening the operation of machines and propagating errors and biases [1]. The challenge associated with the role of data is therefore the creation of conditions, including organisational conditions, which allow Artificial Intelligence to use correctly created databases, where consistency, quality and intelligibility are guaranteed. In the Internet of Things field, one of the main challenges to be addressed is that the data collected by interconnected devices and sensors is different from that with which the scientific community of data-scientists has had to deal with in the past. In fact, the greatest successes that have been achieved in the AI field regard applications such as image processing, autonomous driving and web search that have been made possible thanks to the availability of large and relatively structured datasets, able to be used therefore in training machine learning algorithms. On the contrary, data coming from a multitude of connected devices can be fragmented, heterogeneous and distributed irregularly in space and time: a challenge of rare complexity for anyone who wants to analyse data in a structured manner. A second area of discussion is the management and research of data published on the web in the form of linked open data [2]. This data, which may regard both the institutional task of a public body (e.g. land registry or administrative data) as well as its operation (e.g. internal data) is made accessible and usable through open formats. While representing a mine of information, the data needs adequate tools to be exploited to its full potential. In particular, information retrieval [3] and filtering models and methods are needed based on semantic technologies and shared ontologies. This work, already envisaged by the Digital Administration Code (DAC) and launched within the scope of the activities of the Digital Team, will be part of the broader perspective of conceptual governance of public information assets. Regarding the huge data assets of the Public Administration, the challenge that AI technologies allow to face is that of transforming such data into widespread and shared knowledge, such as to make the Public Administration transparent to citizens and above all to itself, guaranteeing to citizens and administrators not only semantic access to information and interoperability of processes, but a better understanding of the relationship between state and citizen. Once the conditions for the proper functioning of the Artificial Intelligence methodologies have been created, one of the tasks of Public Administration will be to aggregate the data necessary to support process improvement. 
This could be achieved through the creation of an open platform for the collection, generation and management of certain types of data, directly related to Public Administration [4]. The decentralised use of public datasets, essential for the development of active participation practices (civic activism), in turn requires specific capabilities of governance of the socio-technical system of Public Administration. It is in fact essential that data quality is ensured at source, through the generalised adoption of guidelines and appropriate content standards. To achieve these ambitious objectives, there are many issues to be addressed, including some that have been appearing in the e-government plans of developed countries for many years. These include: - truthfulness and completeness of data; - data distribution and access methods; - design and definition of shared ontologies; - supervision of public dataset quality; - estimate of the economic value attributable to the data; - tools that allow citizens to monitor data production; - management and promotion of data access [5]; - regulation of data usage [6]. The last three items of the list just presented introduce a further issue for PA: making sure that anyone who wants to develop Artificial Intelligence solutions useful for citizens can have equal and non-discriminatory access to the necessary data. Footnotes
https://libro-bianco-ia.readthedocs.io/en/latest/doc/capitolo_3_sfida_4.html
2020-10-19T23:31:39
CC-MAIN-2020-45
1603107867463.6
[]
libro-bianco-ia.readthedocs.io
Upgrading BOSH Director on GCP Page last updated: This topic describes how to upgrade BOSH Director for Pivotal Cloud Foundry (PCF) on Google Cloud Platform (GCP). In this procedure, you create a new Ops Manager VM instance that hosts the new version of Ops Manager. To upgrade, you export your existing Ops Manager installation into this new VM. After you complete this procedure, follow the instructions in Upgrading PAS and Other Pivotal Cloud Foundry Products. Step 1: Locate the Pivotal Ops Manager Installation File Log in to the Pivotal Network, and click on Pivotal Cloud Foundry Operations Manager. From the Releases drop-down, select the release for your upgrade. existing deployment location. Step 2: Create a Private VM Image Log in to the GCP Console. In the left navigation panel, click Compute Engine, and select Images. Click Create Image. Complete the following fields: - Name: Enter a name that matches the naming convention of your existing Ops Manager image files. - Encryption: Leave Automatic (recommended) selected. - Source: Choose Cloud Storage file. - Cloud Storage file: Paste in the Google Cloud Storage filepath you copied from the PDF or YAML existing deployment. - Zone: Choose a zone from the region of your existing deployment. you initially installed Pivotal Cloud Foundry. See Set up an IAM Service Account. Allow HTTP traffic: Only select this checkbox if you selected it in your original Ops Manager VM configuration. Allow HTTPS traffic: Only select this checkbox if you selected it in your original Ops Manager VM configuration. Networking: Select the Networking tab, and perform the following steps: - For Network and Subnetwork, select the network and subnetwork you created when you initially deployed Pivotal Cloud Foundry. See Create a GCP Network with Subnet section of the Preparing to Deploy PCF on GCP topic. - For Network tags, enter any tags that you applied to your original Ops Manager. For example, if you used the pcf-opsmanagertag to apply the firewall rule you created in Create Firewall Rules for the Network, then apply the same tag to this Ops Manager VM. - For Internal IP, select Custom. In the Internal IP address field, enter a spare address located within the reserved IP range configured in your existing BOSH modify the entry that points a fully qualified domain name (FQDN) to the Ops Manager VM. Replace the original Ops Manager static IP address with the public IP address of the new Ops Manager VM, continue the upgrade instructions in Upgrading Pivotal Cloud Foundry topic. Later on, if you need to SSH into the Ops Manager VM to perform diagnostic troubleshooting, see SSH into Ops Manager.
https://docs.pivotal.io/pivotalcf/2-1/customizing/gcp-om-upgrade.html
2018-10-15T10:44:07
CC-MAIN-2018-43
1539583509170.2
[]
docs.pivotal.io
only Python 2, azure, and] or:
$ pip install toil[all]
Here's what each extra provides:
Building from Source
If developing with Toil, you will need to build from source. This allows changes you make to Toil to be reflected immediately in your runtime environment. First, clone the source:
$ git clone
$ cd toil
Then, create and activate a virtualenv:
$ virtualenv venv
$ . venv/bin/activate
From there, you can list all available Make targets by running make. First and foremost, we want to install Toil's build requirements. (These are additional packages that Toil needs to be tested and built, but not to be run.):
$ make prepare
Now, we can install Toil in development mode (such that changes to the source code will immediately affect the virtualenv):
$ make develop
Or, to install with support for all optional extras (see Installing Toil with Extra Features):
$ make develop extras=[aws,mesos,azure,google,encryption,cwl]
Or:
$ make develop extras=[all]
To build the docs, run make develop with all extras, followed by:
$ make docs
To run a quick batch of tests (this should take less than 30 minutes):
$ export TOIL_TEST_QUICK=True; make test
For more information on testing see Running Tests.
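Once the install (or make develop) step succeeds, a quick way to confirm the environment works is to run a tiny workflow. This is a minimal sketch based on Toil's documented quickstart pattern; the job store path and function name are illustrative and not part of the installation instructions above:
from toil.common import Toil
from toil.job import Job

def hello(job, name):
    # A single Toil job that simply returns a greeting.
    return "Hello, %s!" % name

if __name__ == "__main__":
    options = Job.Runner.getDefaultOptions("file:my-job-store")  # local file job store
    options.logLevel = "INFO"
    with Toil(options) as workflow:
        output = workflow.start(Job.wrapJobFn(hello, "Toil"))
    print(output)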
https://toil.readthedocs.io/en/3.15.0/gettingStarted/install.html
2018-10-15T11:17:16
CC-MAIN-2018-43
1539583509170.2
[]
toil.readthedocs.io
Beginning in ONTAP 9.4, you can copy the ONTAP software image from the NetApp Support Site to a local folder. This only applies to ONTAP 9.4 patch updates. For upgrades from ONTAP 9.3 or earlier, you must copy the ONTAP software image to an HTTP server.
http://docs.netapp.com/ontap-9/topic/com.netapp.doc.onc-sm-help-930/GUID-9502D313-01B1-4D89-A277-E0B4A99ED417.html
2018-10-15T11:44:47
CC-MAIN-2018-43
1539583509170.2
[]
docs.netapp.com
Accessing Cloud Data - 2018-07-12
- Improving Performance for S3A
- Working with Third-party S3-compatible Object Stores
- Troubleshooting S3
List of Figures
List of Tables
- 2.1. Cloud Storage Connectors
- 3.1. Authentication Options for Different Deployment Scenarios
- 3.2. S3Guard Configuration Parameters
- 3.3. S3A Fast Upload Configuration Options
- 3.4. S3A Upload Tuning Options
- 6.1. Overview of Configuring Access to Google Cloud Storage
- 6.2. Optional Properties Related to Google Cloud Storage
- 7.1. Improving General Performance
- 7.2. Accelerating ORC Reads in Hive
- 7.3. Accelerating ETL Jobs
https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/bk_cloud-data-access/content/index.html
2018-10-15T11:46:03
CC-MAIN-2018-43
1539583509170.2
[]
docs.hortonworks.com
UIElement.RenderSize Property
Definition
public : Size RenderSize { get; }
Size RenderSize();
public Size RenderSize { get; }
Public ReadOnly Property RenderSize As Size
https://docs.microsoft.com/en-us/uwp/api/windows.ui.xaml.uielement.rendersize
2018-10-15T11:07:32
CC-MAIN-2018-43
1539583509170.2
[]
docs.microsoft.com
What Is Windows Communication Foundation
Features of WCF
WCF includes the following set of features. For more information, see WCF Feature Details.
Interoperability
WCF implements modern industry standards for Web service interoperability. For more information about the supported standards, see Interoperability and Integration.
Security
Multiple Transports and Encodings
Messages can be sent on any of several built-in transport protocols and encodings. The most common protocol and encoding is to send text-encoded SOAP messages using HTTP.
Reliable and Queued Messages
WCF supports reliable message exchange using reliable sessions implemented over WS-ReliableMessaging and using MSMQ. For more information about reliable and queued messaging support in WCF, see Queues and Reliable Sessions.
Transactions
WCF also supports transactions using one of three transaction models: WS-AtomicTransactions, the APIs in the System.Transactions namespace, and the Microsoft Distributed Transaction Coordinator. For more information about transaction support in WCF, see Transactions.
Extensibility
The WCF architecture has a number of extensibility points. If extra capability is required, there are a number of entry points that allow you to customize the behavior of a service. For more information about available extensibility points, see Extending WCF.
WCF Integration with Other Microsoft Technologies
project type in Visual Studio 2012 or later.
The hosting features of the Windows Server AppFabric application server are specifically built for deploying and managing applications that use WCF for communication. The hosting features include rich tooling and configuration options specifically designed for WCF-enabled applications.
https://docs.microsoft.com/en-us/dotnet/framework/wcf/whats-wcf
2018-10-15T11:13:32
CC-MAIN-2018-43
1539583509170.2
[]
docs.microsoft.com
SPEC CPU 2006
The SPEC CPU 2006 benchmark is SPEC’s industry-standardized, CPU-intensive benchmark suite, stressing a system’s processor, memory subsystem and compiler. This benchmark suite includes the SPECint benchmarks and the SPECfp benchmarks. The SPECint 2006 benchmark contains 12 different benchmark tests and the SPECfp 2006 benchmark contains 19 different benchmark tests.
SPEC CPU 2006 is not always part of a Linux distribution. SPEC requires that users purchase a license and agree with their terms and conditions. For this test case, users must manually download cpu2006-1.2.iso from the SPEC website and save it under the yardstick/resources folder (e.g. /home/opnfv/repos/yardstick/yardstick/resources/cpu2006-1.2.iso). The SPEC CPU® 2006 benchmark is available for purchase via the SPEC order form.
file: spec_cpu.yaml (in the ‘samples’ directory)
benchmark_subset is set to int. SLA is not available in this test case.
Test can be configured with different:
- benchmark_subset - a subset of SPEC CPU2006 benchmarks to run;
- SPECint_benchmark - a SPECint benchmark to run;
- SPECfp_benchmark - a SPECfp benchmark to run;
- output_format - desired report format;
- runspec_config - SPEC CPU2006 config file provided to the runspec binary;
- runspec_iterations - the number of benchmark iterations to execute. For a reportable run, must be 3;
- runspec_tune - tuning to use (base, peak, or all). For a reportable run, must be either base or all. Reportable runs do base first, then (optionally) peak;
- runspec_size - size of input data to run (test, train, or ref). Reportable runs ensure that your binaries can produce correct results with the test and train workloads.
spec_cpu2006 ETSI-NFV-TST001
https://docs.opnfv.org/en/stable-fraser/submodules/yardstick/docs/testing/user/userguide/opnfv_yardstick_tc078.html
2018-10-15T11:18:35
CC-MAIN-2018-43
1539583509170.2
[]
docs.opnfv.org
Accessing Logs
Viewing logs from my container services
When you start your containers via docker-compose up, all of the defined services will start in the foreground. All log output to stdout/stderr within each container will be output to the console. Each entry will be prefixed with the name of the running container to identify the source of the log message. If log output is not coming directly to the console (for example, if you started your containers via docker-compose up -d to detach them), you can still SSH into the container and browse the file system for logs. Most common services will provide some log output in /var/log. Alternatively, another option is to run the docker logs <containername> command.
http://docs.outrigger.sh/common-tasks/accessing-logs/
2018-10-15T10:15:32
CC-MAIN-2018-43
1539583509170.2
[]
docs.outrigger.sh
Understanding Scaling Behavior
Concurrent executions refers to the number of executions of your function code that are happening at any given time. You can estimate the concurrent execution count, but it will differ depending on whether or not your Lambda function is processing events from a poll-based event source. If you create a Lambda function to process events from event sources that aren't poll-based (for example, Lambda can process every event from other sources, like Amazon S3 or API Gateway), each published event is a unit of work, processed in parallel, up to your account limits. Therefore, the number of events (or requests) these event sources publish influences the concurrency. You can use this formula to estimate your concurrent Lambda function invocations. The number of concurrent executions for poll-based event sources also depends on additional factors, as noted following:
Poll-based event sources that are stream-based
Amazon Kinesis Data Streams
Amazon DynamoDB
For Lambda functions that process Kinesis or DynamoDB streams, the number of shards is the unit of concurrency. If your stream has 100 active shards, there will be at most 100 Lambda function invocations running concurrently. This is because Lambda processes each shard's events in sequence. As the load increases, AWS Lambda automatically scales up polling activity until the number of concurrent function executions reaches 1000, the account concurrency limit, or the (optional) function concurrency limit, whichever is lower. Amazon Simple Queue Service supports an initial burst of 5 concurrent function invocations and increases concurrency by 60 concurrent invocations per minute.
Note: Account-level limits are impacted by other functions in the account, and per-function concurrency applies to all events sent to a function. For more information, see Managing Concurrency.
Request Rate
Request rate refers to the rate at which your Lambda function is invoked. For all services except the stream-based services, the request rate is the rate at which the event sources generate the events. For stream-based services, AWS Lambda calculates the request rate as follows:
request rate = number of concurrent executions / function duration
For example, if there are five active shards on a stream (that is, you have five Lambda functions running in parallel) and your Lambda function takes about two seconds, the request rate is 2.5 requests/second.
Automatic Scaling
AWS Lambda dynamically scales function execution in response to increased traffic, up to your concurrency limit. Under sustained load, your function's concurrency bursts to an initial level between 500 and 3000 concurrent executions that varies per region. After the initial burst, the function's capacity increases by an additional 500 concurrent executions each minute until either the load is accommodated, or the total concurrency of all functions in the region hits the limit.
Note: If your function is connected to a VPC, the Amazon VPC network interface limit can prevent it from scaling. For more information, see Configuring a Lambda Function to Access Resources in an Amazon VPC. To limit scaling, you can configure functions with reserved concurrency. For more information, see Managing Concurrency.
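The request-rate relationship for stream-based sources quoted above is simple enough to check numerically. Here is a minimal Python sketch of that arithmetic (the function name is illustrative):
def request_rate(concurrent_executions, function_duration_seconds):
    # request rate = number of concurrent executions / function duration
    return concurrent_executions / function_duration_seconds

# Example from the text: 5 active shards (5 concurrent invocations), ~2 seconds per invocation.
print(request_rate(5, 2.0))  # 2.5 requests/second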
https://docs.aws.amazon.com/lambda/latest/dg/scaling.html
2018-10-15T10:52:15
CC-MAIN-2018-43
1539583509170.2
[]
docs.aws.amazon.com
MemoryStream.CanRead Property
Gets a value indicating whether the current stream supports reading.
Namespace: System.IO
Assembly: mscorlib (in mscorlib.dll)
Syntax
'Declaration
Public Overrides ReadOnly Property CanRead As Boolean
public override bool CanRead { get; }
Property Value
Type: System.Boolean
true if the stream is open.
Remarks
If a class derived from Stream does not support reading, calls to the Read and ReadByte methods throw a NotSupportedException. If the stream is closed, this property returns false.
https://docs.microsoft.com/en-us/previous-versions/windows/silverlight/dotnet-windows-silverlight/0s6ks81s(v%3Dvs.95)
2018-10-15T11:18:22
CC-MAIN-2018-43
1539583509170.2
[]
docs.microsoft.com
Conference calls When you're on an active call in Gplus Adapter for Salesforce, you can perform an instant conference call with another contact. How do I start a conference call? To start an instant conference, just click Instant Conference and choose a contact or enter a phone number in Team Communicator. A green status indicator next to the agent or agent group indicates that there are agents available and you and the customer will not have to wait long for the conference to go through. If you like, you can provide some details about the call in the Notes field before you click Instant Conference. When the contact sees the incoming call, he or she will also see your notes in the call details area. Once the conference is established, you can see the other parties listed in the Participants section. You can remove a participant by clicking the arrow next to the party you want to remove and selecting Delete from Conference. You can also start a consultation and talk with the conference target before performing an instant conference. See Initiating a Consultation for details. Feedback Comment on this article:
https://docs.genesys.com/Documentation/PSAAS/latest/Supervisor/GPAInitiateConf
2018-10-15T11:21:21
CC-MAIN-2018-43
1539583509170.2
[]
docs.genesys.com
When the option to 'Add processes/functions to list' (selection by process name) is chosen from the Work with Export List Menu a screen similar to the following example is used to manipulate the export list: This facility allows you to build temporary lists of processes (and associated functions) known to LANSA and display them on the screen. From these displayed lists, processes and/or functions can be chosen for inclusion into your export list. These lists can either be built from all processes, or only from processes that have a certain generic name. When any process is included in a displayed list all its associated functions are also shown for possible selection. The process definition will be exported with its selected function/s regardless of whether it is selected or not. If a process is not selected with its associated functions it will run in interpretive mode after import. A displayed list may not fit on one screen. This is indicated by a '+' sign in the lower right of the screen. In such cases use the roll up and roll down keys to scroll backwards and forwards through the displayed list. The Sel column Processes and/or functions which are already included in the export list, are shown with an 'X', 'G' or 'F' beside them. The 'X', 'G' or "F' cannot be removed. Thus a process or function cannot be removed from the export list using this facility. Use the option 'Review/Delete Objects Already in the List' to do this. Enter an 'Y' beside a process or function from the displayed list to include it in the export list. Alternatively you may use the Generic function key to add the generic name into the export list. When a process is generically selected, all of its functions are included as well. Generic names are described in detail in Generic Object Names. Comp Form Beside the chosen process or function, enter 'Y' in this column to indicate that the process or function is to be exported in 'compiled' or 'ready to use' form. If this option is used the process or function will be usable as soon as it has been imported into the target system. If this option is not used the process or function definition will have to be 'compiled' on the target system after it has been imported. When using the Generic function the decision to export the process and function's compiled form is entered through a pop-up. The decision made is applied to all processes/functions that are selected by this generic name. Export Fields To automatically include in the export list all fields that are used by a function, enter a 'Y' in this column beside the chosen function. If the Include all fields is requested and the function is in a Web enabled process, then web visual components for the fields used in the function will also be added to the export list, provided the the target system is NOT an AS/400 and *NOWEBEXP flag is not set. The 'Export Fields' column is ignored when using the Generic function. Export Files An option is provided to automatically include in the export list all file definitions that are used by a chosen function. To do this enter a 'Y' in this column beside the chosen function. The 'Export Files' column is ignored when using the Generic function. The option to automatically choose fields and files used by a function will only perform correctly if the chosen function is currently compiled. It is strongly recommended that if a function is chosen for export that its associated process should also be chosen. 
Additionally, the ability to choose an individual function within a process should only be used in simple amendment situations. In all other cases select the process and all its associated functions. Web detail export Web details for Process and Functions will be exported provided the process is web enabled, the export target system is NOT an AS/400 and system flag *NOWEBEXP is not set in the system data area DC@OSVEROP. You can turn on or off the *NOWEBEXP flag via Include Web Details in Export in Export and Import settings. If web details are to exported for a process and *LW3 type system variables have been created for the process (eg *LW3PBGI_pppppppppp), these *LW3 system variables for the process will be added to the list. These *LW3 type system variables will not be added when using the Generic function. If the web details are to be exported and a browse list used in the function has been customized by use of *LW3 type system variables, these *LW3 system variables for the browse list will be added to the list. These *LW3 type system variables will not be added when using the Generic function. Show Long Names You can show the long name for a process or function in the list by placing the cursor on it and pressing F22. Refer to Long Names for more details. When you have completed using this facility, use the Cancel function key to return to the Work with Export List Menu.
https://docs.lansa.com/14/en/lansa010/content/lansa/ugub_50077.htm
2018-10-15T10:37:37
CC-MAIN-2018-43
1539583509170.2
[]
docs.lansa.com
Snapshot policies You can use System Manager to create and manage Snapshot policies in your storage system. More information Creating Snapshot policies You can create a Snapshot policy in System Manager to specify the maximum number of Snapshot copies that can be automatically created and the frequency of creating them. Editing Snapshot policies You can modify the details of an existing Snapshot policy, such as the schedule name, SnapMirror label, or the maximum number of Snapshot copies that are created by using the Edit Snapshot Policy dialog box in System Manager . Deleting Snapshot policies You can use System Manager to delete Snapshot policies. If you delete a Snapshot policy that is being used by one or more volumes, Snapshot copies of the volume or volumes are no longer created according to the deleted policy. About Snapshot policies When applied to a volume, a Snapshot policy specifies a schedule or schedules according to which Snapshot copies are created and specifies the maximum number of Snapshot copies that each schedule can create. A Snapshot policy can include up to five schedules. Snapshot Policies window You can use the Snapshot Policies window to manage Snapshot policy tasks, such as adding, editing, and deleting Snapshot policies. Parent topic: Managing data protection Part number: 215-12668_C0 August 2018
http://docs.netapp.com/ontap-9/topic/com.netapp.doc.onc-sm-help-930/GUID-7AC23366-6275-4652-AE51-32322E9054E2.html
2018-10-15T10:35:24
CC-MAIN-2018-43
1539583509170.2
[]
docs.netapp.com
Total disk space exceeds [%] What is this alert? This alert is raised when the percentage of segment host disk space in use exceeds the percentage specified in the alert rule. The master disk space is not included in the calculation. The alert is raised once a day until the percentage drops below the percentage in the alert rule. What to do This alert warns you so that you can add disk storage or free up storage in order to prevent a catastrophic disk full error that could interrupt Greenplum Database service. Here are some suggestions for freeing space on Greenplum Database hosts. - Archive and remove backup files - Archive and drop older partitions - Rotate OS and database log files - Drop unneeded external tables and their data files - Vacuum database tables and catalog tables
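If you want to approximate this check outside of the alerting system, the sketch below shows one way to poll segment hosts for disk usage from a script. It is only an illustration: the host names, data directory path, and threshold are assumptions rather than values taken from this page, and it relies on passwordless ssh and POSIX df being available on the segment hosts. The master host is deliberately left out, matching the behaviour described above.

import subprocess

SEGMENT_HOSTS = ["sdw1", "sdw2", "sdw3"]   # hypothetical segment host names
DATA_DIR = "/data/primary"                 # hypothetical segment data directory
THRESHOLD_PCT = 80                         # mirror the percentage in your alert rule


def usage_pct(host: str, path: str) -> float:
    """Return the use% of the filesystem holding *path* on *host* (df column 5)."""
    last_line = subprocess.check_output(
        ["ssh", host, "df", "-P", path], text=True
    ).splitlines()[-1]
    return float(last_line.split()[4].rstrip("%"))


if __name__ == "__main__":
    for host in SEGMENT_HOSTS:
        pct = usage_pct(host, DATA_DIR)
        flag = "OVER THRESHOLD" if pct > THRESHOLD_PCT else "ok"
        print(f"{host}: {pct:.1f}% in use ({flag})")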
https://gpcc.docs.pivotal.io/480/help/alert-diskspace.html
2020-03-28T17:12:53
CC-MAIN-2020-16
1585370492125.18
[]
gpcc.docs.pivotal.io
Policy labels A policy label consists of a set of policies, other policy labels, and virtual server-specific policy banks. The App Firewall evaluates each policy bound to the policy label in order of priority. If the policy matches, it filters the connection as specified in the associated profile. Then it does whatever the Goto parameter specifies, which can be to terminate policy evaluation, go to the next policy, or go to the policy with the specified priority. If the Invoke parameter is set, it terminates processing of the current policy label and begins to process the specified policy label or virtual server. To create an App Firewall policy label by using the command line At the command prompt, type the following commands: add appfw policylabel <labelName> http_req save ns config Example The following example creates a policy label named policylbl1. add appfw policylabel policylbl1 http_req save ns config To bind a policy to a policy label by using the command line At the command prompt, type the following commands: bind appfw policylabel <labelName> <policyName> <priority> [<gotoPriorityExpression>] [-invoke (<labelType> <labelName>) ] save ns config Example The following example binds the policy policy1 to the policy label policylbl1 with a priority of 1. bind appfw policylabel policylbl1 policy1 1 save ns config To configure an App Firewall policy label by using the GUI Navigate to Security > Application Firewall > Policy Labels. In the details pane, do one of the following: - To add a new policy label, click Add. - To configure an existing policy label, select the policy label and then click Open. The Create App Firewall Policy Label or the Configure App Firewall Policy Label dialog box opens. The dialog boxes are nearly identical. If you are creating a new policy label, in the Create App Firewall Policy Label dialog box, type a name for your new policy label. The name can begin with a letter, number, or the underscore symbol, and can consist of one to 127 letters, numbers, and the hyphen (-), period (.), pound (#), space ( ), at (@), equals (=), colon (:), and underscore (_) symbols. Select **Insert Policy** to insert a new row and display a drop-down list with all existing App Firewall policies. Select the policy you want to bind to the policy label, or select New Policy to create a new policy and follow the instructions in To create and configure a policy by using the GUI. The policy that you selected or created is inserted into the list of globally bound App Firewall policies. Make any additional adjustments. - To modify the policy priority, click the field to enable it, and then type a new priority. You can also select Regenerate Priorities to renumber the priorities evenly. - To modify the policy expression, double-click that field to open the Configure App Firewall Policy dialog box. Repeat steps 5 through 7 to bind any additional App Firewall policies you want to the policy label. Click Create or OK, and then click Close. A message appears in the status bar, stating that you have successfully created or modified the policy label.
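When a label needs to carry many policies, it can be convenient to generate the bind commands rather than type them by hand. The sketch below simply emits the same CLI commands documented above for a made-up label and policy names; spacing the priorities by 10 leaves room to slot extra policies in later, similar to what Regenerate Priorities does in the GUI.

def policy_label_commands(label, policies, step=10):
    """Emit the documented CLI commands for one policy label."""
    cmds = [f"add appfw policylabel {label} http_req"]
    for position, policy in enumerate(policies, start=1):
        # Evenly spaced priorities: 10, 20, 30, ...
        cmds.append(f"bind appfw policylabel {label} {policy} {position * step}")
    cmds.append("save ns config")
    return cmds


if __name__ == "__main__":
    for cmd in policy_label_commands("policylbl1", ["policy1", "policy2", "policy3"]):
        print(cmd)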
https://docs.citrix.com/en-us/netscaler/11-1/application-firewall/policy-labels.html
2020-03-28T18:47:51
CC-MAIN-2020-16
1585370492125.18
[]
docs.citrix.com
Managing Cloudera Data Science Workbench Hosts This topic describes how to perform some common tasks related to managing Cloudera Data Science Workbench hosts. Customize Workload Scheduling. This means, Cloudera Data Science Workbench will use the following order of preference when scheduling non-GPU workloads (session, job, experiment, or model): Worker Hosts > Master Host > GPU-equipped Hosts | Labeled Auxiliary Hosts When selecting a host to schedule an engine, Cloudera Data Science Workbench will give first preference to unlabeled Worker hosts. If Workers are unavailable or at capacity, CDSW will then leverage the Master host. And finally, any GPU-equipped hosts OR labeled auxiliary hosts will be leveraged. GPU-equipped Hosts - Hosts equipped with GPUs will be labeled auxiliary by default so as to reserve them for GPU-intensive workloads. They do not need to be explicitly configured to be labeled. A GPU-equipped host and a labeled auxiliary host will be given equal priority when scheduling workloads. - Master Host - The Master host must not be labeled an auxiliary node. If you want to reserve the Master for running internal Cloudera Data Science Workbench application components, use the Reserve Master Host property. Labeling Auxiliary Hosts Before you proceed, make sure you have reviewed the guidelines on customizing workload scheduling in Cloudera Data Science Workbench. Depending on your deployment type, use one of the following sets of instructions to use this feature: CSD Deployments On CSD deployments, use the Auxiliary Nodes property in the CDSW service in Cloudera Manager to specify a comma-separated list of auxiliary hosts. - Log into the Cloudera Manager Admin Console. - Go to the CDSW service. - Click the Configuration tab. - Enter the hostnames that you want to label as auxiliary. - Click Save Changes. - Restart the CDSW service to have this change go into effect. Reserving the Master Host for Internal CDSW Components Cloudera Data Science Workbench allows you to reserve the master host for running internal application components and services such as Livelog, the PostgreSQL database, and so on, while user workloads run exclusively on worker hosts. By default, the master host runs both, user workloads as well as the application's internal services. However, depending on the size of your CDSW deployment and the number of workloads running at any given time, it's possible that user workloads might dominate resources on the master host. Enabling this feature will ensure that CDSW's application components always have access to the resources they need on the master host and are not adversely affected by user workloads. Depending on your deployment type, use one of the following sets of instructions to enable this feature: as follows: - Log into the Cloudera Manager Admin Console. - Go to the CDSW service. - Click the Configuration tab. - Search for the following property: Reserve Master Host. Select the checkbox to enable it. - Click Save Changes. - Restart the CDSW service to have this change go into effect. Migrating a Deployment to a New Set of Hosts Adding a Worker Host Using Cloudera Manager - Log in to the Cloudera Manager Admin Console. - Add a new host to your cluster. Make sure this is a gateway host and you are not running any services on this host. - Assign the HDFS, YARN, and Spark 2 gateway roles to the new host. For instructions, refer the Cloudera Manager documentation at Adding a Role Instance. - Go to the Cloudera Data Science Workbench service. 
- Click the Instances tab. - Click Add Role Instances. - Assign the Worker and Docker Daemon roles to the new host. Click Continue. - Review your changes and click Continue. The wizard finishes by performing any actions necessary to add the new role instances. Do not start the new roles at this point. You must run the Prepare Node command as described in the next steps before the roles are started. - The new host must have the following packages installed on it. You must either manually install these packages now, or allow Cloudera Manager to install them in the next step. If you choose the latter, make sure that Cloudera Manager has the permission needed to install the required packages. To do so, go to the Cloudera Data Science Workbench service and click Configuration. Search for the Install Required Packages property and make sure it is enabled. - Click Instances and select the new host. From the list of available actions, select the Prepare Node command to install the required packages on the new node. - On the Instances page, select the new role instances and start them. Using Packages On an RPM deployment, the procedure to add a worker host to an existing deployment is the same as that required when you first install Cloudera Data Science Workbench on a worker. For instructions, see Installing Cloudera Data Science Workbench on a Worker Host. Removing a Worker Host Using Cloudera Manager - Log into the Cloudera Manager Admin Console. - Click the Instances tab. - Select the Docker Daemon and Worker roles on the host to be removed from Cloudera Data Science Workbench. - Select Stop to confirm the action. Click Close when the process is complete. - On the Instances page, re-select the Docker Daemon and Worker roles that were stopped in the previous step. - Select Delete to confirm the action. Changing the Domain Name Cloudera Data Science Workbench allows you to change the domain of the web console. Using Cloudera Manager - Log into the Cloudera Manager Admin Console. - Go to the Cloudera Data Science Workbench service. - Click the Configuration tab. - Search for the Cloudera Data Science Workbench Domain property and modify the value to reflect the new domain. - Click Save Changes. - Restart the Cloudera Data Science Workbench service to have the changes go into effect.
https://docs.cloudera.com/documentation/data-science-workbench/1-7-x/topics/cdsw_manage_hosts.html
2020-03-28T18:53:18
CC-MAIN-2020-16
1585370492125.18
[]
docs.cloudera.com
build the documentation locally, install Sphinx: $ python -m pip install Sphinx ...\> py -m pip install Sphinx Then from the docs directory, build the HTML: $ make html ...\> make.bat html To get started contributing, you’ll want to read the reStructuredText reference. Your locally-built documentation will be themed differently than the documentation at docs.djangoproject.com. This is OK! If your changes look good on your local machine, they’ll look good on the website. How the documentation is organized¶ The documentation is organized into several categories: Tutorials take the reader by the hand through a series of steps to create something. The important thing in a tutorial is to help the reader achieve something useful, preferably as early as possible, in order to give them confidence. Explain the nature of the problem we’re solving, so that the reader understands what we’re trying to achieve. Don’t feel that you need to begin with explanations of how things work - what matters is what the reader does, not what you explain. It can be helpful to refer back to what you’ve done and explain afterwards. Topic guides aim to explain a concept or subject at a fairly high level. Link to reference material rather than repeat it. Use examples and don’t be reluctant to explain things that seem very basic to you - it might be the explanation someone else needs. Providing background context helps a newcomer connect the topic to things that they already know. Reference guides contain technical reference for APIs. They describe the functioning of Django’s internal machinery and instruct in its use. Keep reference material tightly focused on the subject. Assume that the reader already understands the basic concepts involved but needs to know or be reminded of how Django does it. Reference guides aren’t the place for general explanation. If you find yourself explaining basic concepts, you may want to move that material to a topic guide. How-to guides are recipes that take the reader through steps in key subjects. What matters most in a how-to guide is what a user wants to achieve. A how-to should always be result-oriented rather than focused on internal details of how things are implemented. Django-specific markup¶ Besides Sphinx’s built-in markup, Django’s docs define some extra description units. Minimizing images¶ Optimize image compression where possible. For PNG files, use OptiPNG and AdvanceCOMP’s advpng: $ cd docs $ optipng -o7 -zm1-9 -i0 -strip all `find . -type f -not -path "./_build/*" -name "*.png"` $ advpng -z4 `find . -type f -not -path "./_build/*" -name "*.png"` This is based on OptiPNG version 0.7.5. Older versions may complain about the --strip all option being lossy. For example, the documentation for the ADMINS setting explains that each item in the list should be a tuple of (Full name, email address). Example:: [('John', '[email protected]'), ('Mary', '[email protected]')] Note that Django will email *all* of these people whenever an error happens. See :doc:`/howto/error-reporting` for more information. The .. setting:: ADMINS directive marks up the following header as the “canonical” target for the setting ADMINS. This means any time I talk about ADMINS, I can reference it using :setting:`ADMINS`. That’s basically how everything fits together. Spelling check¶ Before you commit your docs, it’s a good idea to run the spelling checker. You’ll need to install a couple packages first: - pyenchant (which requires enchant) - sphinxcontrib-spelling.
https://docs.djangoproject.com/en/3.0/internals/contributing/writing-documentation/
2020-03-28T17:29:24
CC-MAIN-2020-16
1585370492125.18
[]
docs.djangoproject.com
Themes Contents - 1 Creating New Themes - 1.1 Number / Names of the Fields - 1.2 Notes - 1.3 Let's start a new theme - 1.4 The Fields - 1.4.1 Theme Name - 1.4.2 Background - 1.4.3 Alternating Colour #1 - 1.4.4 Alternating Colour #2 - 1.4.5 Link Colour - 1.4.6 Border Colour - 1.4.7 Header Colour - 1.4.8 Header Text Color - 1.4.9 Top Table Color - 1.4.10 Category Color - 1.4.11 Category Text Color - 1.4.12 Table Text Color - 1.4.13 Text Color - 1.4.14 Border Width - 1.4.15 Table Width - 1.4.16 Table Spacing - 1.4.17 Font - 1.4.18 Big Font - 1.4.19 Board Logo URL - 1.4.20 Image Directory - 1.4.21 Smilie Directory - 1.5 Conclusion - 2 Installing Addon Themes Creating New Themes This is a brief outline on how to create a new theme in XMB forums. This is offered up as an explanation of the different fields required in the theme creation dialog only. After familiarizing yourself with the different components you will be ready to start the creation process. Practice and experience will be the final steps to creating good XMB themes. Number / Names of the Fields XMB themes are composed of 21 variables represented by 21 fields, in the Admin Panel / Themes - Details, that the user fills in with the the desired colors/sizes/images: 1 Name field 12 Color fields (3 of these fields can use images for the color variable) 2 Font size/type fields 2 Directory designations 1 Main site logo 3 HTML layout variables Notes - All FIELDS MUST BE FILLED - not doing so causes problems! - It's best not to alter the existing themes, leave those as is and start a new one. We'll be going over how to do that. While the existing themes can be re-installed it can be problematic if the deleted theme is the default theme - or if any members were using that theme. - Never use an apostrophe ( ' ) in the theme name!! Never! This causes all kinds of grief. You may want to "personalize" your theme by naming it "Handy Andy's Theme". Don't do it - find another way of showing the author/object of the theme. I've also seen some problems arise with other special characters used in the "Name" field so avoid trouble by keeping it simple. - When creating new themes, or modifying existing themes always use the "#" prefix with your color codes. Example: use #ff0000 not ff0000- the reason for this is that while most of you probably surf the web using Internet Explorer or similar browsers (browsers using the IE engine) you would never know that your themes are not being displayed in the "Gecko" browsers (Netscape, Mozilla, Firebird). Without getting too technical .... IE is much more forgiving of the omission of the "#" while the Gecko browsers won't tolerate the same omission. And while the majority of your members may use Interent Explorer... who wants that rogue Mozilla user, who happens by, thinking you must be devoid of any creative talents - If you go to the Administration Panel / Themes page and click on any of the existing theme's "Details" link you will see how that theme is constructed. It's a good idea to do this to familiarize yourself with the layout and theme components. Let's start a new theme Go to Administration Panel / Themes Here you will see a list of all of the installed themes for your forum. At the bottom of the Theme list is a "New Theme" link - clicking this link will take you to a blank theme page that you will fill out with the colors/images/specifications of your choice. Let's go over the fields one by one. The Fields Theme Name This is where you will title your new theme. 
Please remember that you should never use an apostrophe in the theme name (See Notes above). Background (Enter a hex code or an image name): This is the background color or a tiling image that will be used on all pages. If you are confused as to exactly what this means use the "Details" of an existing theme to see how the background is used. Here are some guidelines for the use of tiling images as the background: - Don't use a background image that is too big. The people that visit your page don't want to wait forever for a 200K graphic just to fill the background. Many paint programs offer the means for optimizing your images within the "Save" dialog. Generally .jpg and .gif images are the best for the web. - Keep the background unobtrusive. Most web page background images should be muted, so that the text of the page can be read. - Use contrast wisely. Make sure that the text of your web page, especially links, contrast with the background image. Choose colors so that they coordinate rather than clash. - Avoid unsightly artifacts. Use tiling images that don't show visible seams or weird dithering effect. TIP - The web is a great resource for locating tilable background images. Use and search for "web backgrounds" or similar keywords. Alternating Colour #1 These are the colors that will be alternated in the forums. The best way to understand how this works is to use the "Details" link for one of the existing themes and observe how the 2 colors are used. Alternating Colour #2 These are the colors that will be alternated in the forums. The best way to understand how this works is to use the "Details" link for one of the existing themes and observe how the 2 colors are used. Link Colour This sets the color all of your links will be. This includes the forum titles, navigation links, Who's Online links, etc.... This requires a color hex code; i.e. #ff0000 (red) Border Colour This sets the color of the border that will surround your forum tables. This is dependent on Field #14 for the border width. Header Colour This is the color of the "headers" i.e. The bar across the the forums list on the index page (the one that says "Forum: Topics: Posts: Last Post: ) and the header menu bar where your main links are located. There are some other instances of the "header" color, again I suggest using the "Details" of an existing theme to familiarize yourself with where this color will occur. Header Text Color This sets the color of the text used in the headers. Make sure that you use a color that will be visible with Field# 7. Top Table Color (Enter a hex code or an image name): - The top table is the area where the main logo is as well as the "Last Active", "New u2u Message", and "Login / Logout links" - you may also use an image here as a background instead of a color. Category Color (Enter a hex code or an image name): - the category bars are the bars above the forums when you create them in the Administration Panel / Forums page. You can also use an image for these. Category Text Color This sets the color of the text used in the category bars. Table Text Color This set the text for "forum descriptions, Who's Online key, text in the news ticker, Stats titles, Member List numbers, Status, Date, , etc... Text Color This sets the color of the "Posts Subjects, navagation titles, Copyright text, Last Active and Welcome text, etc.... Border Width This sets the size of the border measured in pixels. 1 being the "thinnest" and increasing proportionately with the higher number. 
Table Width This is the over width of the forum tables - or - how wide your forums will expand across the screen. This can be set in percentage (100%) or in pixels (600). Table Spacing This is an html table value and controls the amount of spacing between the individal cells in each table. Font This field allow you to enter a font family. An example of a font family would be "Verdana" or "Comic Sans". A note about this - whatever font you specify here will need to be installed on the user's PC or it will not display correctly - if the user does not have the font that you specify their system will substitute their "default" Windows font for the one you have in that field. So with that in mind it's best to keep the "Font" simple. Most systems have the Verdana or Arial font installed so it's a good choice here. Big Font A bit of a misleading name - this is the "size" designation of the text, including the links. There are few places that are not affected by this field's value; like the "Last Post" column of each forum is one example of an exception. This field will accept the desired font size in px (pixels) or pt (points) an example of each would be: 12px or 7pt. Board Logo URL This is the main forum logo, the banner located in the upper left corner of the header. If you have a custom logo you will enter the file name (example: forumlogo.gif) here. Only the file name as the directory is hard coded as the theme's image folder in the forum's "image" directory. You'll notice that this has the additional instructions that read: [(Enter a hex code, a flash movie or an image name): To use a Flash movie as avatar, separate its url, width, and height by commas - like so: """" Image Directory Each theme must have a directory in which the images for that theme are stored. If you create a new theme you will need to either: use an existing theme's image folder for the images or, if you are using "custom" images for this new theme, create a new image folder to store the "custom" images. Whichever method you choose to use you must specify the image folder here. If you do have custom images you want to use, then you need to create new image folder on your server - and this folder must be within the forum's image directory. It must be in with the other theme folders - this is important as Field #20 is set to read from this location only. So if you are creating a new theme called "Neo" then you may want to name the new image directory "neo" as well. This is where all of the new theme's images will be uploaded. NOTE: It is a good policy to always use lower case letters for all images and folders - some browsers (albeit the older ones) do not do well with upper case letters for file names. Smilie Directory This is the path to the smilies folder. If you want to create a new set of smilies and have them in a different folder - you would enter this new set's folder name here. Otherwise this should be set to the default smilie folder (images/smilies). See the How To on "Adding New / Additional Smilies" for info on using multiple smilie sets. Conclusion That concludes the rundown on the 21 fields required to complete a new theme. I hope this helps to explain the process a little better than the available documentation and gets you on your way to creating some great themes. :) If you find a descrepancy in the above "How To" please send me a u2u and I'll review and correct if necessary. 
Thanks Daf Installing Addon Themes - Download the theme you like and unpack it using your favorite unpacker program such as Winzip or Winrar etc. - Copy the extracted complete folder of the images etc to the forums sub-folder called "images" by ftp. - Go to "Administration Panel" and click "Themes". Scroll down to the "Import Theme:" area and then click the browse button. Browse to the theme you downloaded to your hard drive and select the theme file which should already be called something like this example: ""XMB+Corporate.xmb"" - Click the button called "Import Theme into XMB" - Go to your "Control Panel" and "Edit Profile" and select the new theme from the drop down menu under Edit Profile - Options: Theme: Save your change.
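Because a single bad value (a stray apostrophe in the name, or a hex code without the "#") can cause grief that is hard to trace, it can help to sanity-check a theme definition before typing it into the Admin Panel. The sketch below applies the rules described earlier on this page; the dictionary keys are just labels chosen for the example, not names XMB itself uses.

import re

HEX_RE = re.compile(r"^#[0-9A-Fa-f]{6}$")


def check_theme(theme):
    """Return a list of problems found in a theme definition dictionary."""
    problems = []
    if "'" in theme.get("name", ""):
        problems.append("Theme name contains an apostrophe - never use one!")
    for field in ("background", "link_colour", "border_colour", "header_colour"):
        value = theme.get(field, "")
        # Colour fields may also hold an image name; only flag values that
        # look like a hex code but are missing the '#' prefix.
        if not value.startswith("#") and HEX_RE.match("#" + value):
            problems.append(f"{field} looks like a hex code but is missing the '#' prefix")
    size = theme.get("big_font", "")
    if size and not (size.endswith("px") or size.endswith("pt")):
        problems.append("Big Font should end in px or pt, e.g. 12px or 7pt")
    return problems


if __name__ == "__main__":
    sample = {"name": "Neo", "link_colour": "ff0000", "big_font": "12px"}
    for problem in check_theme(sample):
        print(problem)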
https://docs.xmbforum2.com/index.php?title=Themes
2020-03-28T18:39:13
CC-MAIN-2020-16
1585370492125.18
[]
docs.xmbforum2.com
Help for diagnosing and resolving any Alfresco Community Edition issues that you might encounter. For additional help, refer to the following:
- Alfresco Hub
- Admin Tools in Alfresco Share to view various installation and setup information
This section covers the following troubleshooting topics:
- When working with Community Edition from the source code, it is helpful to debug an instance running on a standard application server.
- WebDAV: diagnose and resolve issues that might arise when configuring WebDAV.
- OpenLDAP tips: use these tips when working with OpenLDAP.
- Active Directory tips
- If you use the Web Services API on a Windows client and frequently see errors such as java.net.BindException: Address already in use: connect in the client application, you might need to tune the client operating system parameters so that it can handle a higher rate of outbound TCP connections.
- Troubleshooting IMAP: use this information to troubleshoot IMAP problems.
- Troubleshooting schema-related problems: the Schema Difference Tool provides a way of identifying and troubleshooting problems in database schemas.
Parent topic: Alfresco Community Edition 201911 GA
https://docs.alfresco.com/community/concepts/ch-troubleshoot.html
2020-01-17T16:14:29
CC-MAIN-2020-05
1579250589861.0
[]
docs.alfresco.com
Archived Release Notes
Release Notes for v0.9.38
Features
- Added support for viewing and editing RedisJSON values in the Browser tool
- Added support for using RedisJSON commands (JSON.*) in the CLI tool
Release Notes for v0.9.34
Role-Based Access Control Support
When RBAC is enabled, admin users can control the permissions for other users. There are instance-specific permissions and global permissions. Instance permissions can be broadly classified into read and write permissions for each tool.
Release Notes for v0.9.34.1
Bug Fixes
- Added special handling for desktop mode in RBAC
- Enable GA only in production
- Removed full server reloads from the application
- Improved logging for memory analysis
Release Notes for v0.9.34.2
Fixes
- Fixed a long-existing caching bug
- Compress static files
- Committed previously missed migration files for RBAC models
Release Notes for v0.9.35
Memory Analysis Bug Fixes
- Fixed manual exit() call. Added proper error handling.
- Fixed RDB parsing bug
- Added support for streams
Memory Analysis Eviction Reports
- Added reports on keys based on LFU/LRU data extracted from rdb.
Role Based Access Control UI Improvements
- New permissions are created automatically on instance or user addition.
- Replaced multiselect widget on Role admin panel page with checkboxes for permission fields.
Release Notes for v0.9.33
Cluster Support Improvements
- Ask for seed nodes when adding a cluster
- Added 2 cluster management alerts: Seed nodes don't agree on cluster configuration; All seed nodes are down
- Fixed "flickering" issue after a slave node is deleted.
Minor Bug Fixes
- Skip already processed key in rename bulk ops
- CSS fix for update version dialog
- Handle case of unlimited instances for adding sentinel masters
https://docs.redislabs.com/latest/ri/release-notes/archive/
2020-01-17T17:15:05
CC-MAIN-2020-05
1579250589861.0
[]
docs.redislabs.com
2. API descriptors¶ The React front-end has access to mapping coordinates reflecting product categories, countries and indicators. See project root static_assets: - final_countryTree_exiovisuals.csv - final_productTree_exiovisuals.csv - mod_indicators.csv These mapping coordinates are not only used to render tree selectables, but also to transmit the global id’s of the product categories, countries and indicators over the websocket channel. In turn the back-end handles these messages to perform calculations and store results. For example all countries and all products in the world represent the global id [1]. The indicator [1] represents product output. For further reference see the mapping CSV files. API routing: - API URL Websockets: <domain-ip>/ramascene/ - API URL AJAX: <domain-ip>/ajaxhandling/ - Interface format: JSON 2.1. Default calculations¶ The following queries denote the communication between front-end and back-end for performing default calculations. Interface descriptors [websocket message to back-end]: → to back-end complete payload example: Interface descriptors [websocket messages from back-end]: → from back-end complete response example: If the websocket message job_status is set to “completed”, the front-end can perform a POST request for results via Ajax containing the job_id named as ‘TaskID’. For example in the above websocket response we see that job_id is 176, the Ajax POST request is ‘TaskID:176’. Interface descriptors [AJAX response]: → from back-end complete response example: An important aspect is that in the current version the back-end expects the websocket message to contain a single value for indicator and year. Additionally if the query selection contains “GeoMap” the “nodesReg” descriptor can be an array of multiple elements denoting multiple countries, while the “nodesSec” descriptor can only have a single indicator. On the other hand if the query selection contains “TreeMap” the “nodesSec” descriptor can be an array of multiple elements denoting multiple products, while the “nodesReg” descriptor can only have a single indicator. 2.2. Modelling calculations¶ The following table denotes the communication between front-end and back-end for modelling calculations. Modelling is applied on existing default query selections. The technological change is a single value denoting a percentage. See below for a full query example: → from back-end complete response example: Multiple model selections can be added, however a user can only specify a single-selection per “product”, “originReg”, “consumedBy”, “consumedReg” in the array for this version of the application. The websocket response contains the added model details specified by name.
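As a rough illustration of the flow described above — send a selection over the websocket, wait for job_status to become "completed", then POST the job id as TaskID — the sketch below uses the websockets and requests libraries. The payload and response tables from this page are not reproduced here, so every key name inside querySelection (and the top-level action key) should be treated as an assumption; only nodesReg, nodesSec, job_status, job_id and TaskID are taken from the text above.

import asyncio
import json

import requests
import websockets

DOMAIN = "localhost:8000"   # placeholder for <domain-ip>


async def run_default_query():
    async with websockets.connect(f"ws://{DOMAIN}/ramascene/") as ws:
        await ws.send(json.dumps({
            "action": "default",            # assumed key/value
            "querySelection": {             # assumed structure
                "nodesReg": [1],            # global id 1 = all countries
                "nodesSec": [1],            # global id 1 = all products
                "indicator": [1],           # single indicator (1 = product output)
                "year": [2011],             # single year
                "vizType": "TreeMap",
            },
        }))
        while True:
            message = json.loads(await ws.recv())
            if message.get("job_status") == "completed":
                return message["job_id"]


if __name__ == "__main__":
    task_id = asyncio.run(run_default_query())
    # AJAX POST of the job id, named 'TaskID', to retrieve the results.
    result = requests.post(f"http://{DOMAIN}/ajaxhandling/", data={"TaskID": task_id})
    print(result.json())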
https://rama-scene.readthedocs.io/en/latest/API-descriptors.html
2020-01-17T16:28:16
CC-MAIN-2020-05
1579250589861.0
[]
rama-scene.readthedocs.io
Track core playback on Android This documentation covers tracking in version 2.x of the SDK. If you are implementing a 1.x version of the SDK, you can download the 1.x Developers Guide for Android here: Download SDKs - Initial tracking setup: Identify when the user triggers the intention of playback (the user clicks play and/or autoplay is on) and create a MediaObject instance, using the StreamType and MediaType constants: MediaHeartbeat.createMediaObject(<MEDIA_NAME>, <MEDIA_ID>, <MEDIA_LENGTH>, <STREAM_TYPE>, <MEDIA_TYPE>); - Attach metadata: Optionally attach standard and/or custom metadata objects to the tracking session through context data variables. - Standard metadata: Attaching the standard metadata object to the media object is optional. - Custom metadata: Create a dictionary for the custom variables and populate it with the data for this media. For example: HashMap<String, String> mediaMetadata = new HashMap<String, String>(); mediaMetadata.put("isUserLoggedIn", "false"); mediaMetadata.put("tvStation", "Sample TV Station"); mediaMetadata.put("programmer", "Sample programmer"); - Track the intention to start playback: To begin tracking a media session, call trackSessionStart on the Media Heartbeat instance. For example: public void onVideoLoad(Observable observable, Object data) { _heartbeat.trackSessionStart(mediaInfo, mediaMetadata); } The second value is the custom media metadata object name that you created in step 2. trackSessionStart tracks the user intention of playback, not the beginning of the playback. This API is used to load the media data/metadata and to estimate the time-to-start QoS metric (the time duration between trackSessionStart and trackPlay). If you are not using custom media metadata, simply send an empty object for the second argument in trackSessionStart. - Track the actual start of playback: Identify the event from the media player for the beginning of the media playback, where the first frame of the media is rendered on the screen, and call trackPlay: // Video is rendered on the screen) and call trackPlay. public void onVideoPlay(Observable observable, Object data) { _heartbeat.trackPlay(); } - Track the completion of playback: Identify the event from the media player for the completion of the media playback, where the user has watched the content until the end, and call trackComplete: public void onVideoComplete(Observable observable, Object data) { _heartbeat.trackComplete(); } - Track the end of the session: Identify the event from the media player for the unloading/closing of the media playback, where the user closes the media and/or the media is completed and has been unloaded, and call trackSessionEnd: // Closes the media and/or the media completed and unloaded, // and call trackSessionEnd(). public void onMainVideoUnload(Observable observable, Object data) { _heartbeat.trackSessionEnd(); } trackSessionEnd marks the end of a media tracking session. If the session was successfully watched to completion, where the user watched the content until the end, ensure that trackComplete is called before trackSessionEnd. Any other track* API call is ignored after trackSessionEnd, except for trackSessionStart for a new media tracking session. - Track all possible pause scenarios: Identify the event from the media player for media pause and call trackPause: public void onVideoPause(Observable observable, Object data) { _heartbeat.trackPause(); } Pause scenarios: identify any scenario in which the video player will pause, so that playback later resumes from the point of interruption.
- Identify the event from the player for media play and/or media resume from pause and call trackPlay. // trackPlay() public void onVideoPlay(Observable observable, Object data) { _heartbeat.trackPlay(); } This may be the same event source that was used in Step 4. Ensure that each trackPause() API call is paired with a following trackPlay() API call when the media playback resumes. See the following for additional information on tracking core playback: - Sample player included with the Android SDK for a complete tracking example.
https://docs.adobe.com/content/help/en/media-analytics/using/sdk-implement/track-av-playback/track-core/track-core-android.html
2020-01-17T17:00:13
CC-MAIN-2020-05
1579250589861.0
[]
docs.adobe.com
Launch this Stack Bitnami Moodle Stack for Oracle Cloud Infrastructure Classic Moodle is an open source online Learning Management System (LMS) widely used at universities, schools, and corporations worldwide. It is modular and highly adaptable to any type of online learning. Need more help? Find below detailed instructions for solving complex issues.
https://docs.bitnami.com/oracle/apps/moodle/
2020-01-17T15:41:50
CC-MAIN-2020-05
1579250589861.0
[]
docs.bitnami.com
Playbill font family The idea of having an Egyptian face whose horizontals were thicker than the verticals seems to have surfaced only a few years after the Battle of Waterloo. Designs of this kind were given names like Italienne or, later in England, French Antique. In 1938, the Stephenson Blake type foundry revived this look, with the appropriately named Playbill, and others followed. It seems proper to start with the first and best known. Playbill Licensing and redistribution info - Font redistribution FAQ for Windows - License Microsoft fonts for enterprises, web developers, for hardware & software redistribution or server installations Products that supply this font Feedback
https://docs.microsoft.com/en-us/typography/font-list/playbill
2020-01-17T17:51:11
CC-MAIN-2020-05
1579250589861.0
[array(['images/playbill.png', 'Playbill'], dtype=object)]
docs.microsoft.com
If you want to use container images not found in the Red Hat Container Catalog, you can use other arbitrary container images in your Azure Red Hat OpenShift instance, for example those found on the Docker Hub. For Azure Red Hat OpenShift-specific guidelines on running containers using an arbitrarily assigned user ID, see Support Arbitrary User IDs in the Creating Images guide.
https://docs.openshift.com/aro/using_images/other_images/other_container_images.html
2020-01-17T15:48:18
CC-MAIN-2020-05
1579250589861.0
[]
docs.openshift.com
About event sinks Information about the current status of the Pexip Infinity platform and any conference instances currently in progress can normally be obtained using the Management status API. However, in deployments with high levels of live system monitoring activity, such as those managed by service providers, frequent polling of the Management Node via the API can cause performance issues. To avoid this you can configure the system to automatically send details of every participant and conference management event to an external service known as an event sink. When an event occurs, the Conferencing Node sends this information using a simple POST of JSON data to the nominated event sink server. What information is sent? You cannot change what information is sent to an event sink. The schema of the event data can be viewed from the administration interface, and the same information can be downloaded as a Swagger or OpenAPI JSON document. You can also use a test site to view live event data being sent. Note that conference start and end event times are from the Conferencing Node's perspective and not the Management Node's perspective. For a complete description of the information that is sent, including examples, see Event sink API. Configuring event sinks Each system location can be configured with one or more event sinks. The same event sink can be used for more than one system location. Event sinks are added, edited and deleted from the administration interface.
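A minimal event sink receiver only needs to accept the POSTed JSON documents. The sketch below uses the Python standard library to log whatever a Conferencing Node sends; the port and listening address are arbitrary choices for the example, and a production sink would add authentication, TLS and persistent storage.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class EventSinkHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        try:
            event = json.loads(body)
        except ValueError:
            event = {"raw": body.decode("utf-8", "replace")}
        print(json.dumps(event, indent=2))   # replace with real storage/processing
        self.send_response(200)
        self.end_headers()


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), EventSinkHandler).serve_forever()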
https://docs.pexip.com/admin/event_sink.htm
2020-01-17T17:38:18
CC-MAIN-2020-05
1579250589861.0
[]
docs.pexip.com
Abstract¶ The goal of this PEP is to provide a standard layout and meta-data for Python distributions, so that all tools creating and installing distributions are inter-operable. To achieve this goal this PEP proposes a new format for describing meta-data and layout of the distribution archive. Rationale¶ There are a number of problems currently in Python packaging. - Lack of a standard cross tool layout for distributions. - Multiple locations where the same meta-data is defined. - Ability to build all types of projects. Standard Layout¶ Right now there are a number of competing standards for what is contained inside of a distribution archive. distutils and setuptools share an idiom of using a setup.py, distutils2 uses a setup.cfg, and bento uses a bento.info. This is further compounded by the fact that due to the executable nature of the distutils/setuptools standard setup.py distutils2 and bento can bootstrap themselves using code located inside of setup.py. Meta-data¶ Currently meta-data can be located in one of a minimum of two locations. PKG-INFO and setup.py. It can also be located inside of setup.cfg, bento.info, and any other location that a packager might wish (again due to the executable nature of setup.py). Custom Compilation¶ A number of projects have had to work around or monkeypatch distutils because of assumptions that distutils makes about how to compile a project were wrong. This includes projects that want to cross compile [1] and projects with complex compiler dependencies such as Numpy [2]. Further more there have been serious doubts raised by some that any generic compilation step would be able to cover all needs [3] [4]. Standard Layout¶ All Python distributions are gzip archived containing a dist.json file as well as any source or binary files that should be included as part of the distribution. Source Distribution¶ A source distribution is defined as a distribution that does not include any sort of precompiled files. A source distribution MUST contain a dist.json and all source files, Python or otherwise, that this distribution contains. Binary Distribution¶ A binary distribution is defined as package that does not require any sort of compilation step to complete. A binary distribution MUST contain a dist.json as well as one or more directories containing a compiled distribution. dist.json¶ dist.json is a JSON file containing all the meta-data for this distribution. It must be a valid JSON file and cannot be a JavaScript object literal. Fields¶ name¶ The most important things in your dist.json are the name and version fields. The name and version together form an identifier that is assumed to be completely unique. Changes to the distribution should come along with changes to the version. The name is what your thing is called. version¶ The most important things in your dist.json are the name and version fields. The name and version together form an identifier that is assumed to be completely unique. Changes to the distribution should come along with changes to the version. The version must be in the format specified in PEP 386 [7] description¶ [#rest]. For programs that work with the metadata, supporting markup is optional; programs can also display the contents of the field as-is. This means that authors should be conservative in the markup they use. keywords¶ A list of additional keywords to be used to assist searching for the distribution in a larger catalog. It should be a list of strings. 
Example: { "keywords": ["dog", "puppy", "voting election"], } maintainer¶ A string or dictionary representing the current maintainer of the distribution, see People Fields for more information. This field SHOULD be omitted if it is the same as the author. contributors¶ A list of additional contributors for the distribution. Each item in the list must either be a string or a dictionary, see People Fields for more information. uris¶ A dictionary of Label: URI for this project. Each label is limited to 32 characters in length. Example: { "uris": { "Home Page": "", "Bug Tracker": "" } } license¶ Text indicating the license covering the distribution where the license is not a selection from the “License” Trove classifiers. See classifiers below. This field may also be used to specify a particular version of a license which is named via the Classifier field, or to indicate a variation or exception to such a license. classifiers¶ A List of strings where each item represents a distinct classifier for this distribution. Classifiers are described in PEP 301 [6]. Example: { "classifiers": [ "Development Status :: 4 - Beta", "Environment :: Console (Text Based)" ] } platform¶ A Platform specification describing an operating system supported by the distribution which is not listed in the “Operating System” Trove classifiers. requires_python¶ This field specifies the Python version(s) that the distribution is guaranteed to be compatible with. Version numbers must be in the format specified in Version Specifiers. People Fields¶ The author, and maintainer fields, and the contributors field list items each accept either a string or a dictionary. The dictionary is a mapping of name, url, like this: { "name": "Monty Python", "email": "[email protected]", "url": "" } Any of the fields may be omitted where they are unknown. Additionally they may be specified using a string in the format of Name <email> (url). An example: Monty Python <[email protected]> () Version Specifiers¶ Version specifiers are a series of conditional operators and version numbers, separated by commas. Conditional operators must be one of “<”, “>”, “<=”, “>=”, “==”, ”!=” and “~>”. The “~>” is a special case which can be pronounced as “approximately greater than”. When this is used it signifies that the the version should be greater than or equal to the specified version within the same release series. For example, if “~>2.5.2” is the specifier, then any version matching 2.5.x will be accepted where x is >= 2. Any number of conditional operators can be specified, e.g. the string “>1.0, !=1.3.4, <2.0” is a legal version declaration. The comma (”,”) is equivalent to the and operator. Each version number must be in the format specified in PEP 386 [7]. Notice that some projects might omit the ”.0” prefix.
https://python-distribution-specification.readthedocs.io/en/latest/
2020-01-17T17:04:29
CC-MAIN-2020-05
1579250589861.0
[]
python-distribution-specification.readthedocs.io
9. Performance¶ 9.1. Specs of tested server¶ - Brand: Dell PowerEdge R320 - CPU: Intel Xeon E5-1410 v2 @ 2.80 GHz - Memory: 8 GiB DIMM DDR3 Synchronous 1600 MHz - Number of cores: 8 9.2. Load test setup¶ 9.2.1. websocket-tester¶ A custom websocket-tester is developed to simulate queries to the server. Multiple request are send using a simple loop. For more information on the exact queries see description on testing scenarios further down this page. 9.2.2. Celery flower¶ For assessing performance of calculations the library Celery Flower is used. Refer to the official docs for more information. To setup Flower on a remote server with Django you have to use the proxy server. For safety setup a htpasswd file in the nginx configuration. See example configuration here for nginx with Celery Flower (adjust to your needs) example_nginx_flower For RabbitMQ the management plugin has to be enabled: $ sudo rabbitmq-plugins enable rabbitmq_management An example to start flower (make sure you are in the project root): $ flower -A ramasceneMasterProject –port=5555 -Q modelling,calc_default –broker=amqp://guest:guest@localhost:5672// –broker_api= –url_prefix=flower In turn you can access flower via the web browser with <domain>/flower/. 9.2.3. Settings for Celery¶ The following settings are in place: - [Django settings] CELERY_WORKER_MAX_TASKS_PER_CHILD = 1 - [env. variable] OPENBLAS_NUM_THREADS=2 for default calculations - [env. variable] OPENBLAS_NUM_THREADS=5 for modelling calculations - [Celery] –concurrency 1 for default calculations - [Celery] –concurrency 1 for modelling calculations Each Celery queue has its own dedicated worker (1 worker for default and 1 worker for modelling) 9.2.4. Testing scenarios and simultaneous requests¶ The longest calculation route is the one with the selections TreeMap and Consumption view. For simplicity we use “value added” as the indicator coupled with “total” for product and countries. All scenarios use this selection. For modelling we select “totals” for all categories except “consumed by” which contains the “S: Agriculture, hunting and forestry” aggregate. A single technical change is set to an arbitrary value of 100. - Scenario A [Analytical]: 30 analytical request whereby 17 requests cover all years. and 13 request use the year 2011. - Scenario B [Modelling]: 30 modelling request whereby 17 requests cover all years. and 13 request use the year 2011. All requests do heavy calculations covering the modelling of intermediates. - Scenario C [Analytical + Modelling]: 15 requests over 15 different years for analytical and 15 request over 15 different years for modelling. Idle MEM use at point before load test: 572M 9.2.5. Results scenario A¶ - Max. time for a given task: 8.3 sec. - Total time for the last user to finish the task: 4 min. and 58 sec. - Highest detected MEM load: 2.87G (includes the idle MEM) Conclusion: The queued task in the right bottom plot show expected behaviour due to the concurrency set to 1. The time in queue for a given task is relatively long compared to the time take to do calculations. This was expected as the CPU use is limited coupled with no simultaneous requests. 9.2.6. Results scenario B¶ - Max. time for a given task: 48.97 sec. - Total time for the last user to finish the task: 22 min. 59 sec. - Highest detected MEM load: 3.49G (includes the idle MEM) - Execution time of the first task: 46.09 sec. Conclusion: CPU use is less limited for modelling and it can use 5 cores if needed, however that only speeds up execution time. 
The last user still has to wait considerable time as opposed to the analytical queries. The spikes in the two plots on the left show that there are no concurrent requests handled as set in the settings. 9.2.7. Results scenario C¶ - Max. time for a given analytical task: 9.48 sec. - Total time for the last user to finish the task for analytics: 3 min. 28 sec. - Max. time for a given modelling task: 49.36 sec. - Total time for the last user to finish the task for modelling: 11 min. 55 sec. - Highest detected MEM load: 6.44G (includes the idle MEM) Conclusion: As shown in the top left and bottom right graph both workers are active. The analytical queue depletes faster than the modelling queue, which is also expected and desired behaviour. The MEM load has increased as both workers use memory. 9.2.8. Final conclusion¶ Modelling has a significant impact on CPU use, in turn a limit is set on CPU considering the specs of the tested server. This limit results in relatively long waiting time for users doing modelling. To circumvent this either a server with more powerful specs is required or celery can be configured with workers on different machines. In both cases more CPU is required and optimally more memory. If more memory is in place, logically concurrency can be increased however new load tests have to be performed.
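For reference, the queue and worker arrangement described in the settings above can be sketched in a Celery configuration like the one below. The module path and task names are placeholders, not the project's real ones; only the queue names, the broker URL, the max-tasks-per-child value, the concurrency of 1 and the OPENBLAS_NUM_THREADS caps are taken from this page.

import os

# Cap BLAS threads before numpy is imported anywhere (2 for the default
# worker, 5 for the modelling worker).
os.environ.setdefault("OPENBLAS_NUM_THREADS", "2")

from celery import Celery

app = Celery("ramascene", broker="amqp://guest:guest@localhost:5672//")
app.conf.worker_max_tasks_per_child = 1   # CELERY_WORKER_MAX_TASKS_PER_CHILD = 1
app.conf.task_routes = {
    "ramascene.tasks.default_calculation": {"queue": "calc_default"},    # assumed task name
    "ramascene.tasks.modelling_calculation": {"queue": "modelling"},     # assumed task name
}

# One dedicated worker per queue, each with --concurrency 1, e.g.:
#   celery -A ramascene worker -Q calc_default --concurrency 1
#   celery -A ramascene worker -Q modelling --concurrency 1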
https://rama-scene.readthedocs.io/en/latest/performance.html
2020-01-17T17:11:52
CC-MAIN-2020-05
1579250589861.0
[array(['_images/scenarioA.png', '_images/scenarioA.png'], dtype=object) array(['_images/scenarioB.png', '_images/scenarioB.png'], dtype=object) array(['_images/scenarioC.png', '_images/scenarioC.png'], dtype=object)]
rama-scene.readthedocs.io
Materials for Developing Reading and Writing Skills in Mother Tongue: A Case Study of Lower Primary Schools in Tororo District Citation: ''The study was motivated by the current policy of teaching local languages in lower primary schools in Uganda, whose implementation started without adequate preparation of both the teachers and the necessary mother tongue teaching materials''. Publication Date: Sunday, October 3, 2010
http://docs.mak.ac.ug/books/materials-developing-reading-and-writing-skills-mother-tongue-case-study-lower-primary-schools
2020-01-17T16:27:03
CC-MAIN-2020-05
1579250589861.0
[]
docs.mak.ac.ug
Over the last four modules, you have populated your monitored inventory with some sample Elements, got to know the main areas of the Uptime Infrastructure Monitor UI, and learned about how the intersecting properties of Elements and Element Groups, service monitors and Service Groups, Users, and Views allows you to configure Uptime Infrastructure Monitor for every type of user in your organization. While doing these modules, you've hopefully used up enough time to allow some data collection cycles to happen, meaning there can be data in reports. This module consists of the following exercises: Generate a Resource Hot Spot Report - Click Reports, then click Resource Hot Spot in the left pane. - In the opening set of of options, click Last, then leave the selection at 1 Days. Because you presumably have only had these Elements monitored over the course of this Getting Started Guide, you do not have more than a days' worth of data to draw; however, feel free to increase the time frame if you have collected more data. - In the Report Options, let's Select All Options to also include any possible network-device issues. - The report allows you to define what constitutes a hot spot, and the default values are reasonable. In the hopes of having some "resource gluttons" appearing in your report, let's manufacture a crisis, and configure new, lower thresholds, as shown below: - CPU Used: 20% - Memory Used: 20% - In-Rate: 5% - Out-Rate: 5% Below the Report Options section are three sections that allow you to select what is to be included in the report. You can use any of the ways you've organized your inventory to select which Elements are included in the report: Element Groups, Views, and individual Elements. Note in the above screenshot that the Linux Servers View you created in the previous module, and the Production, Linux Servers and Windows Servers Element Groups you created in the module before that are available. - For simplicity, select All Groups from the List of Groups section (as shown in the image above), to include everything we have added to our monitored inventory. - Scroll to the bottom of the page to view the last two sections: Generate Now and Save Report: When configured to perfection, reports can be saved to be generated at a precise time, at a specific schedule, in various formats. Users also can save reports to their My Portal. Administrators and end-users can schedule reports for themselves, or as part of an agreement, deliver them to managers. Reports can also be generated in real time, to assist with diagnosis, or to fine-tune the configuration of a report. This example uses this process. - In the Generate Now section, click Print to Screen. Validation: Admire the Resource Hot Spot Report The results of the report depends on the activity and performance of your Elements, but hopefully there is enough activity for resource hot spots to be listed, such as in the following example: The opening Top Resource Consumers Summary lists Elements regardless of your configured thresholds; subsequent sections list any hot-spot Elements. Generate a Pre-configured Server Uptime Report When Uptime Infrastructure Monitor is first installed, a few broad-coverage, quick-value reports are created out of the box for the admin user. One of these is the Server Uptime report, which is ideal for all the ESX hosts and VMs that are managed by your VMware vCenter Server Element. - Click My Portal, then click the Saved Reports tab. 
- Note the report you generated in the last exercise is also in this list as a pre-configured report. One of the benefits of users saving reports to their respective My Portal Saved Reports lists is that they can generate them at any time, based on saved settings. Let's demonstrate how to live in the moment. Click the play icon to print the Server Uptime Report to screen. Validation: Review the Server Uptime Report The pre-configured options for this report include all of your Elements (by the report configuration, the Infrastructure Element Group, as well as its subgroups), and whether they met a target uptime threshold of 95%. This is reported for the last seven days. If you have completed all of this guide in the same sitting, unless you are very slow, you won't have a week's worth of data to display. Uptime Infrastructure Monitor reports with however much data it has collected, which in this case is likely a day's worth. The following example shows a full week of meeting up-time targets, with a modest number of outages: Now that you've touched on a couple of reports, let's go back to what is essentially a real-time status report, the Quick Snapshot. Revisit the Quick Snapshot pages In the first module, specifically the first track, you added a VMware vCenter Server to your monitored inventory. In the final exercise, you viewed the Quick Snapshot for both the vCenter Server Element and one of its VMs. Because the vCenter Server was just added, there was no data in the graphs. Because the graphs show the last 24 hours of activity, you only need to wait overnight to fully populate them, but even a handful of data-collection cycles can suffice. Let's revisit these pages. - Click Infrastructure. - Click the vCenter Server's gear icon, then in the pop-up menu, click Graph Performance to display its Quick Snapshot. In this example, there is a full day's worth of data displayed for the same vCenter Server, which comprises three datacenters. The top CPU and memory consumers are shown by cluster, ESX host, and resource pool; you should now see some ranked vCenter Server objects, accompanied by historical graphs. - Click Infrastructure to return to the main inventory view. Expand the Discovered Virtual Machines Infrastructure Group, and click the gear icon for any of the VMs (preferably the same one you selected back in the first module). In the pop-up menu, again, click Graph Performance to display that Element's Quick Snapshot. The key performance and resource metrics for the VM should now show some usage and baselines. Save
http://docs.uptimesoftware.com/display/UT/Generate+Reports
2020-01-17T17:20:25
CC-MAIN-2020-05
1579250589861.0
[]
docs.uptimesoftware.com
This step allows you to make an arbitrary REST call. You can define a full endpoint directly or use an endpoint defined by an administrator on your Alfresco Process Services server. You can supply parameters to the call directly in the URL or from process variables in forms, and you can extract properties from the JSON response into process variables for use in your process definition. The REST call step dialog contains four tabs that let you fully define the call. Name and Description are simple text fields that help you and others to identify the task in your task list. You define the URL for your REST call in this tab. - HTTP Method This is the method associated with the REST call. The default is GET, but you must select between GET, POST, PUT, and DELETE based on the documentation for your chosen API call. The example shown in the screenshot is using the api/enterprise/app-version REST call, which is documented as a GET call. - Base endpoint You select one from a list of endpoints that have been defined by your administrator. In the example, the endpoint for the local Alfresco Process Services server REST API has been chosen. - Rest URL Copy the URL fragment from your selected REST API call. In this example we are using api/enterprise/app-version. You may also choose to enter the full URL, especially for REST services that have not been defined by your administrator. This can be useful during development and prototyping cycles. In all cases, you can use the Test button to test your endpoint. - Form Field/Variables You can insert values previously submitted in any form (or variables) in your process definition, into the REST URL. The value will be inserted at the position of the cursor in the Rest URL field. Some REST calls require a JSON request body. You can add one or more JSON properties using this tab. For each property you define the name, property type, and value. The value can either be a fixed value, or you can select the value of a form field from a list of available form fields in your process definition. REST calls return a JSON response body. You can define one or more pairs of JSON response properties and process variables. When the step completes, each process variable will contain the value of the returned response property. You can use those values later in your process. In this example, the returned JSON property edition will be contained in the process variable activitiedition, which is a form field in a form used for displaying the edition string later in the process definition. For complex and nested POST request bodies, specify a JSON Template which is evaluated at run-time. The JSON editor provides syntax highlighting and will highlight any JSON syntax errors on the line number indicator.
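To make the request-body and response tabs more concrete, here is a minimal sketch of a nested JSON template of the kind described above. The property names and the ${...} placeholder syntax are illustrative assumptions rather than values taken from this page; check the expression syntax supported by your Alfresco Process Services version before relying on it.

{
  "customer": {
    "name": "${customerName}",
    "priority": "${requestPriority}"
  },
  "items": [
    { "sku": "${itemSku}", "quantity": 1 }
  ]
}

At run-time each placeholder would be resolved from a form field or process variable, and on the response side a returned property such as edition can be mapped into a process variable such as activitiedition for use later in the process.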
https://docs.alfresco.com/process-services1.6/topics/rest_call.html
2020-01-17T16:14:10
CC-MAIN-2020-05
1579250589861.0
[array(['http://wac.996C.edgecastcdn.net/80996C/docsalfrescocom/sites/docs.alfresco.com/files/public/images/docs/defaultprocess_services1_6/restcall.png', None], dtype=object) ]
docs.alfresco.com
Windows Server 2008 and Vista SP1 RTMs Today you should be able to download Windows Server 2008 from your MSDN subscription (you do have one, don't you?) or TechNet Plus subscription. This is timely; now we can tell people that they can go get the bits we are using at the road show! Hyper-V is still 180 days away (start counting down), but Visual Studio 2008 and Windows Server 2008 are out now, so go get them!
https://docs.microsoft.com/en-us/archive/blogs/darrylburling/windows-server-2008-and-vista-sp1-rtms
2020-01-17T17:50:01
CC-MAIN-2020-05
1579250589861.0
[]
docs.microsoft.com
Account: Adding a User To add a user to the cluster: - Go to: settings > team - Click . - Enter the name, email, and password of the new user and select the role to assign to the user. Select the type of user: How do I create an external user? - internal - Authenticates with RS - external - Authenticates with an external LDAP server To have a user authenticate with LDAP, you must have LDAP integration enabled. Then, create a user with the user type external. For the email alerts, click Edit and select the alerts that the user receives. You can select: How do I select email alerts? - Receive alerts for databases - The alerts that are enabled for the selected databases are sent to the user. You can either select all databases, or you can select Customize and select the individual databases to send alerts for. All databases includes existing and future databases. - Receive cluster alerts - The alerts that are enabled for the cluster in settings > alerts are sent to the user. Then, click Save. Click . To edit the name, password, role, or email alerts of a user, hover over the user and click . To change a user from internal to external, you must delete the user and re-add it. Resetting user passwords To reset a user password from the CLI, run: rladmin cluster reset_password <username> You are asked to enter and confirm the new password. User Account Security To make sure your user accounts are secured and not misused, RS supports enforcement of: - Password complexity - Password expiration - Account lock on failed attempts To enforce a more advanced password policy that meets your contractual and compliance requirements and your organizational policies, we recommend that you use LDAP integration with an external identity provider, such as Active Directory. Setting up local password complexity RS lets you enforce a password complexity profile that meets most organizational needs. The password complexity profile is defined by: - At least 8 characters - At least one uppercase character - At least one lowercase character - At least one number (not first or last character) - At least one special character (not first or last character) - Does not contain the User ID or reverse of the User ID - No more than 3 repeating characters To enforce the password complexity profile, run: curl -k -X PUT -v -H "cache-control: no-cache" -H "content-type: application/json" -u "<administrator-user-email>:<password>" -d '{"password_complexity":true}' https://<RS_server_address>:9443/v1/cluster Setting local user password expiration RS lets you enforce password expiration to meet your compliance and contractual requirements. To enforce an expiration of a local user's password after a specified number of days, run: curl -k -X PUT -v -H "cache-control: no-cache" -H "content-type: application/json" -u "<administrator_user>:<password>" -d '{"password_expiration_duration":<number_of_days>}' https://<RS_server_address>:9443/v1/cluster To disable password expiration, set the number of days to 0. Account lock on failed attempts To prevent unauthorized access to RS, you can enforce account lockout after a specified number of failed login attempts.
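Following the same cluster API pattern as the curl examples above, an account lockout policy could be set with a call like the sketch below. The field names used here (login_lockout_threshold, login_lockout_duration, login_lockout_counter_reset_after) and the example values are assumptions; verify them against the REST API reference for your RS version before use.

curl -k -X PUT -v -H "cache-control: no-cache" -H "content-type: application/json" -u "<administrator_user>:<password>" -d '{"login_lockout_threshold":5,"login_lockout_duration":1800,"login_lockout_counter_reset_after":900}' https://<RS_server_address>:9443/v1/cluster

In this sketch the account would lock after 5 consecutive failed attempts, stay locked for 30 minutes, and reset its failure counter 15 minutes after the last failed attempt.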
https://docs.redislabs.com/latest/rs/administering/designing-production/security/account-management/
2020-01-17T17:25:48
CC-MAIN-2020-05
1579250589861.0
[]
docs.redislabs.com
Forms - Can You Edit Live Projects? - Survey and Form Settings - How to Create Qualification Questions - How to Add and Edit Questions - How to Create a Consent Form - Question Quick Menu - Contact Form Element - Limit the Number of Answers a Respondent can Select - How to Insert a Link - How to Preview and Test Projects - How to Make a Question Required - How to Randomize the Answer Order - How to Add an Other Answer Option - How to Copy Questions - How to Remove Question Numbering - How to Move a Question - Why is My Question Locked? - How to White Label a Project - Will Deleting Questions Delete Results?
https://docs.shout.com/category/181-forms
2020-01-17T17:25:44
CC-MAIN-2020-05
1579250589861.0
[]
docs.shout.com
Scan Zones During Migration A single Tenable.sc scan zone migrates to one scanner group and one target group in Tenable.io: - The scanner group inherits the name of the scan zone. - The target group inherits the name of the scan zone and the list of hosts from the scan zone. Note: If the migration tool migrates a scan zone but a scanner group or target group with that name already exists in Tenable.io, the migration tool's handling depends on the group type: - scanner group — the migration tool does not create a scanner group. - target group — the migration tool merges the old target group with the new target group. Post-Migration Action Items Review your target group configurations in Tenable.io. For more information, see About Target Groups in the Tenable.io Vulnerability Management User Guide. After you link scanners to Tenable.io, add one or more scanners to each scanner group. For more information, see About Scanner Groups in the Tenable.io Vulnerability Management User Guide.
https://docs.tenable.com/migration/sc/Content/ScanZones.htm
2020-01-17T17:08:20
CC-MAIN-2020-05
1579250589861.0
[]
docs.tenable.com
Formatting Key-Value Pairs in DCS Calls When making a call, the DCS accepts key-value data in standard or serialized format. Review this section for information about how to format standard and serialized key-value data. Standard and Serialized Key-Value Pairs Delimiters and Separators for Serialized Key-Value Pairs With serialized key-value pairs, you must specify the markers that separate values within and between these variables. Audience Manager requires the following delimiters and separators:
https://docs.adobe.com/content/help/en/audience-manager/user-guide/api-and-sdk-code/dcs/dcs-api-reference/dcs-key-format.html
2020-01-17T16:18:14
CC-MAIN-2020-05
1579250589861.0
[]
docs.adobe.com
Standard Audit File for Tax (SAF-T) for Norway Important Dynamics 365 for Finance and Operations is now being licensed as Dynamics 365 Finance and Dynamics 365 Supply Chain Management. For more information about these licensing changes, see Dynamics 365 Licensing Update. This topic includes country-specific information about how to set up the Standard Audit File for Tax (SAF-T) for legal entities that have their primary address in Norway. Introduction Beginning January 2020, all companies in Norway are required by the Norwegian Tax Administration to provide SAF-T Financial data. This requirement is in accordance with version 1.4 of the documentation, which was published on July 8, 2019, and version 1.3 of the technical documentation, which was published on March 23, 2018, in the form of an XML report. The publication of these pieces of documentation coincided with version 1.1 of the "Norwegian SAF-T Financial data" XML Schema Definition (XSD) schema that was developed by the SAF-T Working group, Skatteetaten, and based on "OECD Standard Audit File - Taxation 2.00," which was modified on February 2, 2018. Overview To support the Norwegian SAF-T Financial data report, your Microsoft Dynamics 365 Finance application must be one of the following versions or later. When your Finance application version is suitable, import the following versions or later of these Electronic reporting (ER) configurations from Microsoft Dynamics Lifecycle Services (LCS). Import the latest versions of the configurations. The version description usually includes the number of the Microsoft Knowledge Base (KB) article that explains the changes that the configuration version introduced. Note After you've finished importing all the ER configurations from the preceding table, set the Default for model mapping option to Yes for the SAF-T Financial data model mapping configuration. For more information about how to download ER configurations from Microsoft Dynamics Lifecycle Services (LCS), see Download Electronic reporting configurations from Lifecycle Services. Setup To start to use the Norwegian SAF-T Financial data report in Finance, you must complete the following setup: - General ledger parameters: Set up the ER format on the General ledgers parameters page. - Sales tax codes: Associate sales tax codes with Norwegian standard value-added tax (VAT) tax codes. - Main accounts: Associate main accounts with Norwegian standard accounts. The following sections explain how to do each part of this setup. General ledger parameters - In Finance, go to General ledger > Ledger setup > General ledger parameters. - On the General ledger parameters page, on the Standard Audit File for Tax (SAF-T) tab, in the Standard Audit File for Tax (SAF-T) field, select SAF-T Format (NO). Sales tax codes As the documentation explains, in Norwegian SAF-T Financial data, sales tax codes that are used in Finance must be associated with Norwegian standard VAT tax codes (<StandardTaxCode>) for the purpose of SAF-T reporting. The Norwegian standard VAT tax codes are available at. To associate sales tax codes that are used in Finance with Norwegian standard VAT tax codes, follow these steps. In Finance, go to Tax > Indirect taxes > Sales tax > Sales tax codes. On the Sales tax code page, select the Sales tax code record, and then, on the Action Pane, on the Sales tax code tab, in the Sales tax code group, select External codes. 
On the External codes page, specify the Norwegian standard VAT tax codes that should be used for the selected sales tax code record for the purpose of SAF-T reporting. Main accounts As the documentation explains, in Norwegian SAF-T Financial data, main accounts that are used in Finance must be associated with Norwegian standard accounts for the purpose of SAF-T reporting. The Norwegian standard accounts are available at. To associate main accounts that are used in Finance with Norwegian standard accounts, follow these steps. - In Finance, go to General ledger > Chart of accounts > Accounts > Main accounts. - On the Main accounts page, select the Main account record, and then, on the Action Pane, select Edit. - On the General FastTab, in the Standard general ledger account section, in the Standard account field, select Standard account. You must define all the standard accounts on the Standard general ledger accounts page before you can select them for a main account. To quickly access the Standard general ledger accounts page from the Main accounts page, right-click the Standard account field, and then select View details. Generate the Norwegian SAF-T Financial data report To generate the Norwegian SAF-T Financial data report, follow these steps. In Finance, go to General ledger > Inquiries and reports > Standard Audit File for Tax (SAF-T) > Standard Audit File for Tax (SAF-T). In the dialog box for the report, in the From date and To date fields, specify the start and end dates of the period that you want to generate the report for. Select the check boxes for Customers, Vendors, and Financial dimensions to include all the records from the related tables on the report. If the Customers and Vendors check boxes are cleared, the report will include only those customers and vendors of your company that there were transactions for in the reporting period, and customers and vendors that have a non-zero balance. If the Financial dimensions check box is cleared, only those financial dimensions that were used in transactions during the reporting period will be reported in the <MasterFiles> node of the report. In the Personnel number field, select an employee to add the employee to the <UserID> node of the report. This node reports the ID of the user who generated the audit file. You can also apply filters for the Main accounts and General journal entry fields by using Records to include FastTab in the dialog box for the report. Report naming and splitting The documentation for Norwegian SAF-T Financial data requires the following naming structure for the XML reports that are generated: <SAF-T export type>_<organization number of the vendor that the data represents>_<date and time(yyyymmddhh24hmise>_<file number of total files>.xml Here is an example: SAF-T Financial_999999999_20160401235911_1_12.xml Here is an explanation of the parts of this file name: - SAF-T Financial states the SAF-T type of file. - 999999999 represents the organization number that belongs to the owner of the data. - 20160401235911 represents the date and time when the file was created. (A 24-hour clock is used for the time.) - 1_12 represents file 1 out of 12 total files in the export (that is, in the same selection). The volume of a single XML file must be less than 2 gigabytes (GB). Every individual XML file that is submitted must be validated against the schema. All <MasterFiles> nodes must be in the first file, and the associated transactions must be in the subsequent files (the number of these files is flexible). 
The following table shows a sample selection of one accounting year that has 12 periods. For each period, there is one file that contains transactions. There can be a maximum of 10 XML files in the same zip archive. In accordance with these requirements, the SAF-T Format (NO) ER format is implemented to automatically split the resulting report in XML format, based on the following assumptions: The maximum volume of the resulting XML report is 2,000,000 kilobytes (KB) (that is, 2 GB). All the XML files use the following naming structure: <SAF-T export type>_<organization number of the vendor that the data represents>_<date and time(yyyymmddhh24hmise> All the XML files are included in one zip archive. Each individual XML file is validated against the schema. After the report is generated, if more than one XML file is generated, the user must manually number the generated files in the zip archive by adding _<file number of total files> to the file names. The user must also make sure that there are no more than 10 XML files in the same zip archive. If there are more than 10 XML files in an archive, the user must manually split it into several archives, each of which has a maximum of 10 XML files. Feedback
https://docs.microsoft.com/en-us/dynamics365/finance/localizations/emea-nor-satndard-audit-file-for-tax
2020-01-17T16:19:08
CC-MAIN-2020-05
1579250589861.0
[array(['media/nor-saf-default-model-mapping.jpg', 'Default for model mapping option set to Yes'], dtype=object) array(['media/nor-saf-default-model-mapping.jpg', 'Upload and add button'], dtype=object) array(['media/nor-saf-gl-parameters.jpg', 'Standard Audit File for Tax (SAF-T) field on the General ledger parameters page'], dtype=object) array(['media/nor-saf-standard-main-accounts.jpg', 'Standard account field on the Main accounts page'], dtype=object) array(['media/nor-saf-standard-main-account-setup.jpg', 'View details command on the shortcut menu for the Standard account field'], dtype=object) ]
docs.microsoft.com
This topic displays help topics for the Package Management Cmdlets. - Find-Package: Finds software packages in available package sources. - Find-PackageProvider: Returns a list of Package Management package providers available for installation. - Get-Package: Returns a list of all software packages that have been installed by using Package Management. - Get-PackageProvider: Returns a list of package providers that are connected to Package Management. - Get-PackageSource: Gets a list of package sources that are registered for a package provider. - Import-PackageProvider: Adds Package Management package providers to the current session. - Install-Package: Installs one or more software packages. - Install-PackageProvider: Installs one or more Package Management package providers. - Register-PackageSource: Adds a package source for a specified package provider. - Save-Package: Saves packages to the local computer without installing them. - Set-PackageSource: Replaces a package source for a specified package provider. - Uninstall-Package: Uninstalls one or more software packages. - Unregister-PackageSource: Removes a registered package source.
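As a quick orientation, the sketch below strings several of these cmdlets together into a typical discover-install-remove flow. The package name is a placeholder, not a recommendation, and provider-specific parameters are omitted.

# List the providers and sources available in this session
Get-PackageProvider
Get-PackageSource
# Search for a package by name (placeholder name)
Find-Package -Name "ExamplePackage"
# Install it, then confirm Package Management recorded it
Install-Package -Name "ExamplePackage"
Get-Package -Name "ExamplePackage"
# Remove it when it is no longer needed
Uninstall-Package -Name "ExamplePackage"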
https://docs.microsoft.com/en-us/powershell/module/packagemanagement/?view=powershell-5.1
2020-01-17T17:01:50
CC-MAIN-2020-05
1579250589861.0
[]
docs.microsoft.com
RMsis requires its own independent database. This section contains instructions for setting up all the supported databases. Important note regarding Unicode support A word before you start your database configuration ... - Please make sure that you create a Unicode-compliant database, so that you will not need an explicit migration at any time in the future. - The suggested MySQL example in the next section creates a UTF-8 compliant database. - Regarding SQL Server, please refer to the Atlassian Guide for suggestions on configuring the character encoding. MySQL - Open a Unix terminal or a Windows command prompt, whichever is relevant to you, and log in to the MySQL database using the command-line client that is shipped with MySQL. Use "root" (administrator) credentials to log in. - Create a database to store RMsis data. Use any name for it. Example - "rmsis". - Create a database user and assign permissions to this user to access the database created above. - Exit MySQL $ mysql -uroot -ppassword mysql> create database rmsis character set utf8 ; mysql> grant all on rmsis.* to 'username'@'hostname' identified by 'new_user_password'; mysql> exit; Note (known issue): MySQL only supports 3-byte UTF-8 characters if the default 'utf8' character set is chosen. For 4-byte UTF-8 support, the 'utf8mb4' character set needs to be chosen in MySQL. However, the 'utf8mb4' character set is not supported by RMsis at present. This issue is logged as RMI-3474 in our CRM. - Due to this issue, you may experience unexpected behaviour and/or failure to access RMsis if a 4-byte UTF-8 character is entered in a field in RMsis. PostgreSQL - Create a database user which RMsis will connect as (e.g. rmsis dbuser). This user will be used to configure RMsis's connection to this database in subsequent steps. Create a database for RMsis (e.g. rmsis) with Unicode collation. CREATE DATABASE rmsis WITH ENCODING 'UNICODE'; Or from the command-line: $ createdb -E UNICODE rmsis - Ensure that the user has permissions to connect to the database, and to create and write to tables in the database. Microsoft SQL Server - Create a database for RMsis (e.g. rmsis). Note that the collation type must be case insensitive, e.g.: 'SQL_Latin1_General_CP437_CI_AI' is case insensitive. If you are using your server default, check the collation type of your server. - Create a database user which RMsis will connect as (e.g. rmsisuser). Note that rmsisuser should not be the database owner, but should be in the db_owner role. - Ensure that the user has permission to connect to the database, and create and populate tables in the default schema. - Ensure that TCP/IP is enabled on SQL Server and listening on the correct port (the port is 1433 for the default instance of SQL Server). Read the Microsoft documentation for information on how to enable a network protocol (TCP/IP) and how to configure SQL Server to listen on a specific port. - Ensure that SQL Server is operating in the appropriate authentication mode. By default, SQL Server operates in 'Windows Authentication Mode'. However, if the user is not associated with a trusted SQL connection, i.e. 'Microsoft SQL Server, Error: 18452' is received during RMsis startup, you must change the authentication mode to 'Mixed Authentication Mode'. Read the Microsoft documentation on authentication modes and changing the authentication mode to 'Mixed Authentication Mode'. - Turn off the SET NOCOUNT option.
Open SQL Server Management Studio and navigate to Tools -> Options -> Query Execution -> SQL Server -> Advanced. The following screenshot displays the configuration panel for this setting in MSSQL Server 2005/2008. Ensure that the SET NOCOUNT option is not selected: - You will also need to access the Server > Properties > Connections > Default Connections properties box and clear the no count option. Microsoft SQL Server Authentication Modes - Windows Authentication mode - If you want to connect RMsis with Microsoft SQL Server database using windows authentication mode, - then leave the username and password fields blank while configuring RMsis database during RMsis installation. - Make sure that the logged in user of windows machine (on which RMsis server is running or will run) has required credentials for RMsis database. - Download the SQL Server JDBC driver (v1.2.4) from JTDS and place the ntlmauth.dll (shipped with jtds driver) file in the system path - Mixed Authentication mode (SQL Server and Windows Authentication mode) - Choose this option, when - you wish to explicitly configure username and password for Microsoft SQL Server. - RMsis is running on Linux, and you wish to use SQL Server on a different node. - When this option is chosen, username and password are required during.RMsis configuration. Connecting RMsis to named instances in SQL Server When using named instances in SQL Server, the following configuration can be used in RMsis Database Configuration (after performing the actions specified in above sections) : - Specify Database Connection : External - Database Type : MS-SQL - Hostname : <Hostname or the IP address of the database> - Port : 1433 <TCP port number of the database server> - Database : <RMsis_Database_Name>;instance=<Instance_name> - Username : <Username to access the database> - Password : <Password to access the database> - The above settings can also be used when using named instances in SQL Server on dynamic port. Please make sure that SQL Server Browser Service is running if you are using dynamic port. Securely connect RMsis to a SQL Server database (Connecting to SQL Server database using SSL) - You can follow the steps mentioned below to securely connect RMsis to a SQL Server database : - Switch to the database configuration page RMsis Menu > RMsis Configuration > Database Configuration - Select MS-SQL (Microsoft Driver) from the drop down menu while selecting Database Type. - Fill appropriate values for other fields in the database configuration page (as mentioned above in this page) and click on TEST AND SAVE CONFIGURATION page. - Please note that it is necessary to configure RMsis using an unsecured connection first. - This will create and store necessary database configuration. - Additional parameters for secured connection can be added manually in later stage. - If the MS-SQL server does not support unsecured connection, the test will fail and settings will not be saved. - If this is the case, configure RMsis using internal database first. - This will create the <JIRA_HOME>/rmsis/conf/jdbc.properties file. - Replace the internal database details with MS-SQL database as explained in next steps. - Once the configuration is saved, stop JIRA. This will stop RMsis as well. Kill all processes related to JIRA / RMsis (they are typically named as Tomcat*, java). This is important because sometimes stopping JIRA does not terminate all the processes. Detailed description about killing RMsis is available at: Locate <JIRA_HOME>/rmsis/conf/jdbc.properties file. 
The contents of the file will be similar to: db.name=rmsisDbName jdbc.driverClassName=com.microsoft.sqlserver.jdbc.SQLServerDriver jdbc.url=jdbc:sqlserver://localhost:1433;databaseName=${db.name} jdbc.username=sa jdbc.password=sa123 hibernate.dialect=org.hibernate.dialect.SQLServer2005Dialect jdbc.dataSourceClassName=com.microsoft.sqlserver.jdbc.SQLServerDataSource Add the desired additional parameters to the jdbc.url. For example, in order to add HTTPS/ SSL, append ;encrypt=true;trustServerCertificate=true; to the existing URL. The updated file will be db.name=rmsisDbName jdbc.driverClassName=com.microsoft.sqlserver.jdbc.SQLServerDriver jdbc.url=jdbc:sqlserver://localhost:1433;databaseName=${db.name};encrypt=true;trustServerCertificate=true; jdbc.username=sa jdbc.password=sa123 hibernate.dialect=org.hibernate.dialect.SQLServer2005Dialect jdbc.dataSourceClassName=com.microsoft.sqlserver.jdbc.SQLServerDataSource Save the contents of <JIRA_HOME>/rmsis/conf/jdbc.properties file. - Restart JIRA. This will automatically start RMsis with the updated configuration. Connecting RMsis to Oracle Supported Version(s) Currently, RMsis supports Oracle 11G and later versions. Database Setup using Oracle - Ensure that you have a database instance available for RMsis (either create a new one or use an existing one). - Within that database instance, create a user which RMsis will connect as (e.g. rmsisuser). - create user <rmsisuser> identified by <user_password> default tablespace <tablespace_name> quota unlimited on <tablespace_name>; - Notes : - When you create a user in Oracle, Oracle will create a 'schema' automatically. - When you create a user, the tablespace for the table objects must be specified. - Ensure that the user has the following permissions: - grant connect to <rmsisuser>; - grant create table to <rmsisuser>; - grant create sequence to <rmsisuser>; - grant create trigger to <rmsisuser>; Database Configuration for Oracle
https://docs.optimizory.com/display/rmsis/Database+Setup
2020-01-17T16:03:28
CC-MAIN-2020-05
1579250589861.0
[]
docs.optimizory.com
RDB Native360 - Applicant Processing The idibu RDB Native 360 plugin is very versatile, and allows you to process applications in a number of different ways. To manage these options, simply go to the plugin's options panel (just as you would if you wanted to add a job board) and select the 'Applicant Processing' option. Here you will see all four statuses of the applicant's journey (Received, Progress, File, Rejected), and you will be able to determine what happens when the consultant hits a particular button on the Notifier or 'View Applicants' tab. Add to database Ticking this box will mean an applicant record is created in RDB when any given status is applied. Add Cover Letter to Notebook Adds the body of the email (cover letter) that was delivered with the application, to the applicant's notebook. Add Autoresponder to Notebook Here you can determine if you'd like any autoresponders to be stored in the candidate's notebook as well. Add to Review List Do you want the candidate to be added to the review list for this role? Select your preferences here. Review list status Finally, you can choose what review list status (if any) is assigned to a candidate when they are added to the review list. The list of possible review list statuses is dynamic and dependent on your RDB settings. Remember to hit the Save button after you've finished!
https://v2-docs.idibu.com/article/172-13-applicant-processing
2020-01-17T16:36:37
CC-MAIN-2020-05
1579250589861.0
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/564e9cb1c697910ae05f547d/images/5b8e482c2c7d3a03f89e6a21/file-PaLxGVqsY9.png', None], dtype=object) ]
v2-docs.idibu.com
(American Marines at the Chosin Reservoir in North Korea, December, 1950) reader alongside historical decision makers and the soldiers who carried out their orders. Whether Sides is writing about James Earl Ray and the assassination of Martin Luther King; the last survivors of the Bataan Death March; a biography of Kit Carson; or the late 19th-century voyage of the USS Jeannette into uncharted Arctic waters, he tells his stories with uncanny historical accuracy and incisive analysis. In his current effort, Sides conveys the authenticity and intensity of war on the Korean peninsula. His portrayal of the bravery of American soldiers is clear and unsettling as the realism of combat is laid bare for all to see. At times it is difficult to comprehend what these soldiers were able to overcome, and reading the book during the week of Veterans Day makes Sides' work all the more relevant. (Major-General Oliver Prince Smith) Sides integrates all the important historical figures into his narrative, including American Marines and members of the US Army. We meet the egotistical General Douglas MacArthur and his staff of sycophants and supplicants. MacArthur could carry out the Inchon landing against all odds, but this logistical miracle seemed only to fuel his insatiable need for further glory. Fed by men like General Ned Almond, whose main goal was to carry out MacArthur's wishes while sloughing off any advice or criticism from other planners, the only result could be the disaster that encompassed American soldiers at the Chosin Reservoir and along the Yalu River. Disregarding intelligence that contradicted his own staff's assessments, MacArthur and Almond pushed on, ignoring contrary opinions. President Harry Truman appears and seems to go along with MacArthur, particularly at the Wake Island Conference, until proof emerges that over 250,000 Chinese Communist soldiers have poured into North Korea from mid-October 1950 onward. (Major-General Edward "Ned" Almond) Perhaps Sides' most revealing portrait in explaining how American soldiers met disaster in the Chosin Reservoir region was his comparison of the views of Major-General Oliver Prince Smith, the Commander of the First Marine Division, a by-the-book Marine who described MacArthur as "a man with a solemn regard for his own divinity," and Major-General Edward "Ned" Almond, MacArthur's Chief of Staff. All Almond cared about was speed, disregarding the obstacles that Smith faced in planning MacArthur's assault on northern Korea. Smith was a deliberate and fastidious planner who resented Almond's constant goading. He felt that Almond strutted around (like MacArthur!) and made pronouncements based on minimal intelligence. Almond was a racist who downplayed the abilities of Hispanic American troops and thought very little of the fighting ability of the Chinese. For Almond's part, he viewed Smith as an impediment to his overall goal of carrying out MacArthur's wishes. He believed that Smith was overly concerned with planning minutiae, and that his deliberate approach detracted from his grand plans. (General Douglas MacArthur watching the Inchon Landing) Sides' portrayals of American soldiers and their character provide insights and a mirror for the reader into each person's abilities and their impact on their units, their individual bravery, and the success or failure of their unit, battalion, or company's mission.
Studies of Lee Bae-Suk, a Chinese-American who escaped North Korea as a teenager and enlisted in the Marines; Captain William Earl Barber, Commander of Company F, 2nd Battalion, and his role protecting the Toktong Pass, a key route to the Chosin Reservoir, and a student of Sun Tzu, as was Mao Zedong; the exploits of the Seventh Marines' Company E, known as "Easy," and its commander, First Lieutenant John Yancy, at Hill 1282; Lieutenant Chew-Een, who led the column to rescue Fox Company, encircled by Chinese troops; the heroism of the Jersey contingent of Private Kenneth Benson and Private Hector Cafferata, Jr. in Fox Company; and Lieutenant Thomas Hudner, who would earn the Congressional Medal of Honor for his bravery in his attempt to rescue Ensign Jesse Brown, who hailed from a Mississippi sharecropper's background to become the first African-American fighter pilot in the US Navy; are among the many portrayals that are eye-opening, as so many soldiers continued to fight on against all odds, despite wounds that would not have allowed most to even stand upright. (General Douglas MacArthur and President Harry Truman at their Wake Island meeting) Sides' description of combat is almost pure in and of itself, but completely unnerving. A prime example is the fight for Hill 1282 and the rescue attempt of Fox Company. The Chinese would attack American soldiers in human waves by the thousands, paying little or no attention to casualties as Marines repeatedly cut them down. The carnage and suffering are hard to comprehend, as is the bravery of US Marines fighting in subzero temperatures in the middle of the night to protect a small piece of geography in northern Korea against an enemy that, lacking communications, used the unnerving sounds of bugles, cymbals, whistles, and such to organize its attacks. Battles are seen through the eyes of the participants, and the will and desire of each man is on full display. Sides has written an excellent narrative military history, but on another level, he has produced a study that highlights the relationship between men in combat and how they rely upon each other for their survival. It is a book about heroes, the idiocy of war, and the incompetence of decision-making by people at the top who are willing to send men to their deaths, in many cases without batting an eye. The book reads like a novel, but it presents history as truth that cannot be denied or dismissed. (US soldiers retreat from the Chosin Reservoir, December, 1950)
https://docs-books.com/category/korean-war/
2020-01-17T17:03:01
CC-MAIN-2020-05
1579250589861.0
[]
docs-books.com
Use this information to modify the Share custom configuration file. To configure the Share application, you can use the custom configuration file named share-config-custom.xml. If you are overriding a configuration section, you must apply the replace="true" attribute to replace the existing Alfresco configuration. - Open the following file: tomcat/shared/classes/alfresco/web-extension/share-config-custom.xml - Uncomment any <config> items that you want to enable. - Add any <config> items that you want to include. - Save the edited file. - Restart Alfresco.
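As an illustration of the replace="true" behavior described above, a minimal share-config-custom.xml override might look like the sketch below. The DocumentLibrary section and its contents are only an example of the pattern; copy the real section you want to override from the default configuration and edit it there.

<alfresco-config>
   <!-- replace="true" discards the default configuration for this evaluator/condition pair -->
   <config evaluator="string-compare" condition="DocumentLibrary" replace="true">
      <tree>
         <!-- example setting only; keep the rest of the section you copied -->
         <evaluate-child-folders>false</evaluate-child-folders>
      </tree>
   </config>
</alfresco-config>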
https://docs.alfresco.com/4.2/tasks/share-customizing-custom-config-file.html
2020-01-17T16:34:46
CC-MAIN-2020-05
1579250589861.0
[]
docs.alfresco.com
Monitoring Introduction The monitoring section in the web menu is related to problem management and the status of your network. It is here that you will spend most of your time when using OP5 Monitor. In the monitoring section you can: - View host and service problems. - View performance graphs. - Execute service and host commands. - Show objects on maps. - Handle scheduled downtime. - View syslog messages. This chapter will give you information about the most commonly used parts of the monitoring section of OP5 Monitor.
https://docs.itrsgroup.com/docs/op5-monitor/7.5.0/topics/user-guide/monitoring/monitoring.htm
2020-01-17T16:33:43
CC-MAIN-2020-05
1579250589861.0
[]
docs.itrsgroup.com
Locking Pageable Code or Data Certain kernel-mode drivers, such as the serial and parallel drivers, do not have to be memory-resident unless the devices they manage are open. However, as long as there is an active connection or port, some part of the driver code that manages that port must be resident to service the device. When the port or connection is not being used, the driver code is not required. In contrast, a driver for a disk that contains system code, application code, or the system paging file must always be memory-resident because the driver constantly transfers data between its device and the system. A driver for a sporadically used device (such as a modem) can free system space when the device it manages is not active. If you place in a single section the code that must be resident to service an active device, and if your driver locks the code in memory while the device is being used, this section can be designated as pageable. When the driver's device is opened, the operating system brings the pageable section into memory and the driver locks it there until no longer needed. The system CD audio driver code uses this technique. Code for the driver is grouped into pageable sections according to the manufacturer of CD device. Certain brands might never be present on a given system. Also, even if a CD-ROM exists on a system, it might be accessed infrequently, so grouping code into pageable sections by CD type makes sure that code for devices that do not exist on a particular computer will never be loaded. However, when the device is accessed, the system loads the code for the appropriate CD device. Then the driver calls the MmLockPagableCodeSection routine, as described below, to lock its code into memory while its device is being used. To isolate the pageable code into a named section, mark it with the following compiler directive: #pragma alloc_text(PAGE*Xxx, *RoutineName) The name of a pageable code section must start with the four letters "PAGE" and can be followed by up to four characters (represented here as Xxx) to uniquely identify the section. The first four letters of the section name (that is, "PAGE") must be capitalized. The RoutineName identifies an entry point to be included in the pageable section. The shortest valid name for a pageable code section in a driver file is simply PAGE. For example, the pragma directive in the following code example identifies RdrCreateConnection as an entry point in a pageable code section named PAGE. #ifdef ALLOC_PRAGMA #pragma alloc_text(PAGE, RdrCreateConnection) #endif To make pageable driver code resident and locked in memory, a driver calls MmLockPagableCodeSection, passing an address (typically the entry point of a driver routine) that is in the pageable code section. MmLockPagableCodeSection locks in the whole contents of the section that contains the routine referenced in the call. In other words, this call makes every routine associated with the same PAGEXxx identifier resident and locked in memory. MmLockPagableCodeSection returns a handle to be used when unlocking the section (by calling the MmUnlockPagableImageSection routine) or when the driver must lock the section from additional locations in its code. A driver can also treat seldom-used data as pageable so that it, too, can be paged out until the device it supports is active. For example, the system mixer driver uses pageable data. The mixer device has no asynchronous I/O associated with it, so this driver can make its data pageable. 
The name of a pageable data section must start with the four letters "PAGE" and can be followed by up to four characters to uniquely identify the section. The first four letters of the section name (that is, "PAGE") must be capitalized. Avoid assigning identical names to code and data sections. To make source code more readable, driver developers typically assign the name PAGE to the pageable code section because this name is short and it might appear in numerous alloc_text pragma directives. Longer names are then assigned to any pageable data sections (for example, PAGEDATA for data_seg, PAGEBSS for bss_seg, and so on) that the driver might require. For example, the first two pragma directives in the following code example define two pageable data sections, PAGEDATA and PAGEBSS. PAGEDATA is declared using the data_seg pragma directive and contains initialized data. PAGEBSS is declared using the bss_seg pragma directive and contains uninitialized data. #pragma data_seg("PAGEDATA") #pragma bss_seg("PAGEBSS") INT Variable1 = 1; INT Variable2; CHAR Array1[64*1024] = { 0 }; CHAR Array2[64*1024]; #pragma data_seg() #pragma bss_seg() In this code example, Variable1 and Array1 are explicitly initialized and are therefore placed in the PAGEDATA section. Variable2 and Array2 are implicitly zero-initialized and are placed in the PAGEBSS section. Implicitly initializing global variables to zero reduces the size of the on-disk executable file and is preferred over explicit initialization to zero. Explicit zero-initialization should be avoided except in cases where it is required in order to place a variable in a specific data section. To make a data section memory-resident and lock it in memory, a driver calls MmLockPagableDataSection, passing a data item that appears in the pageable data section. MmLockPagableDataSection returns a handle to be used in subsequent locking or unlocking requests. To restore a locked section's pageable status, call MmUnlockPagableImageSection, passing the handle value returned by MmLockPagableCodeSection or MmLockPagableDataSection, as appropriate. A driver's Unload routine must call MmUnlockPagableImageSection to release each handle it has obtained for lockable code and data sections. Locking a section is an expensive operation because the memory manager must search its loaded module list before locking the pages into memory. If a driver locks a section from many locations in its code, it should use the more efficient MmLockPagableSectionByHandle after its initial call to MmLockPagableXxxSection. The handle passed to MmLockPagableSectionByHandle is the handle returned by the earlier call to MmLockPagableCodeSection or MmLockPagableDataSection. The memory manager maintains a count for each section handle and increments this count every time that a driver calls MmLockPagableXxx for that section. A call to MmUnlockPagableImageSection decrements the count. While the counter for any section handle is nonzero, that section remains locked in memory. The handle to a section is valid as long as its driver is loaded. Therefore, a driver should call MmLockPagableXxxSection only one time. If the driver requires additional locking calls, it should use MmLockPagableSectionByHandle. If a driver calls any MmLockPagableXxx routine for a section that is already locked, the memory manager increments the reference count for the section. If the section is paged out when the lock routine is called, the memory manager pages in the section and sets its reference count to one. 
Using this technique minimizes the driver's effect on system resources. When the driver runs, it can lock into memory the code and data that must be resident. When there are no outstanding I/O requests for its device, (that is, when the device is closed or if the device was never opened), the driver can unlock the same code or data, making it available to be paged out. However, after a driver has connected interrupts, any driver code that can be called during interrupt processing always must be memory resident. While some device drivers can be made pageable or locked into memory on demand, some core set of such a driver's code and data must be permanently resident in system space. Consider the following implementation guidelines for locking a code or data section. The primary use of the Mm(Un)LockXxx routines is to enable normally nonpaged code or data to be made pageable and brought in as nonpaged code or data. Drivers such as the serial driver and the parallel driver are good examples: if there are no open handles to a device such a driver manages, parts of code are not needed and can remain paged out. The redirector and server are also good examples of drivers that can use this technique. When there are no active connections, both of these components can be paged out. The whole pageable section is locked into memory. One section for code and one for data per driver is efficient. Many named, pageable sections are generally inefficient. Keep purely pageable sections and paged but locked-on-demand sections separate. Remember that MmLockPagableCodeSection and MmLockPagableDataSection should not be frequently called. These routines can cause heavy I/O activity when the memory manager loads the section. If a driver must lock a section from several locations in its code, it should use MmLockPagableSectionByHandle. Feedback
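To tie the pieces together, here is a compressed sketch of the lock-on-open / unlock-on-close pattern described above, using a hypothetical dispatch routine placed in the PAGE section. Synchronization around the handle and all real driver plumbing are omitted; this is an outline of the call sequence, not production code.

#include <ntddk.h>

NTSTATUS MyOpenHandler(PDEVICE_OBJECT DeviceObject, PIRP Irp);

#ifdef ALLOC_PRAGMA
#pragma alloc_text(PAGE, MyOpenHandler)      /* place the routine in the pageable PAGE section */
#endif

static PVOID g_PagedSectionHandle = NULL;    /* handle returned by the first lock call */

NTSTATUS MyOpenHandler(PDEVICE_OBJECT DeviceObject, PIRP Irp)
{
    UNREFERENCED_PARAMETER(DeviceObject);
    PAGED_CODE();                            /* checked builds assert we are at PASSIVE_LEVEL */

    if (g_PagedSectionHandle == NULL) {
        /* First open: bring the whole PAGE section in and lock it */
        g_PagedSectionHandle = MmLockPagableCodeSection((PVOID)MyOpenHandler);
    } else {
        /* Later opens: cheaper lock by handle */
        MmLockPagableSectionByHandle(g_PagedSectionHandle);
    }

    /* ... service the open request here ... */

    Irp->IoStatus.Status = STATUS_SUCCESS;
    IoCompleteRequest(Irp, IO_NO_INCREMENT);
    return STATUS_SUCCESS;
}

VOID MyCloseHandler(VOID)
{
    /* Restore pageable status once the device is no longer in use */
    if (g_PagedSectionHandle != NULL) {
        MmUnlockPagableImageSection(g_PagedSectionHandle);
    }
}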
https://docs.microsoft.com/en-us/windows-hardware/drivers/kernel/locking-pageable-code-or-data
2020-01-17T17:24:12
CC-MAIN-2020-05
1579250589861.0
[]
docs.microsoft.com
How to Move a Question The Drag Handle appears when you click, or hover over, a question in your survey, form, or quiz. This feature does exactly what it says on the tin, allowing you to move the corresponding question to any other place in your project. If dragging isn’t quite your style, two arrows are also provided above and below the Drag Handle. Clicking either of these will push your question in the direction of the arrow, one place at a time. Moving a Question - Hover over a question you wish to move - Click and hold the Drag Handle - Move the question OR - Hover over a question you wish to move - Click the ‘Up’ or ‘Down’ arrows
https://docs.shout.com/article/26-how-to-move-a-question
2020-01-17T17:27:43
CC-MAIN-2020-05
1579250589861.0
[]
docs.shout.com
Delete threats and anomalies Threats and anomalies can be deleted in Splunk UBA by users with admin privileges. User risk scores are generated based on the anomalies and threats linked to the user. If you choose to delete threats and anomalies, you will affect these scores. When a threat is deleted, Splunk UBA remembers the specific combination of anomalies contributing to the threat, and does not generate the threat again when the same combination of anomalies is encountered in the future. All existing user risk scores that were based on the deleted threat are adjusted. You can also delete a threat if you realize that it is not a threat. In some cases, anomalies may be created that generate a threat but upon further investigation, the threat does not represent a real threat in your environment. For example: - If a department-wide password expiration, rather than brute force attack attempts, led to abnormal numbers of login failures and threat creation. - If atypical location behavior was observed for a user because someone is working remotely for a week Sometimes, anomalies may be generated and upon investigation, deemed to have low value. The following examples represent situations where anomalies would be expected and thus have less value than cases where anomalies would not be expected: - If you have a penetration tester on your network, the tester's behavior can create anomalies that do not indicate a real threat to your environment. - If one employee takes on additional job roles to cover another employee's vacation or leave, the employee's out-of-the-ordinary behaviors can generate anomalies. - An employee works remotely temporarily from an area where your company has no offices. You can delete these anomalies to prevent them from generating threats, and also to affect a desired change in user risk scores. You can restore and view deleted anomalies, if they were deleted by accident or based on investigation details that are no longer accurate. After you delete anomalies, threats created by those anomalies can change or disappear. Similarly, after restoring deleted anomalies, new threats can be created or existing threats can change. User risk scores are also directly affected by deleting or restoring anomalies. See Splunk UBA adjusts threats after you take action on anomalies. Delete threats in Splunk UBA To delete a threat in Splunk UBA, perform the following tasks: - Open the Threat Details for the threat. - Select an Action of Not a Threat. - Select a reason and optionally enter some comments about why you are deleting this threat. - Click OK to delete the threat. Deleting a threat removes it from Splunk UBA. When selecting a reason for deleting the threat, if you chose to whitelist entities involved in the threat (for example, whitelist one or more IPs or domains), the respective whitelist gets updated. This can affect any models that look for whitelisted IPs. The audit logs in Splunk UBA are updated when a threat is deleted. Delete anomalies in Splunk UBA Deleting anomalies does not affect the data science models. There are two ways to delete anomalies. - Move anomalies to the trash and potentially restore them at a later date. - Permanently delete anomalies. Move anomalies to the trash To move a single anomaly to the trash, perform the following tasks: - Open the Anomaly Details for the anomaly that you would like to delete. - Click Delete. - Select Move to Trash. - Click OK to confirm that you want to send the anomaly to the trash. 
Move multiple anomalies from the anomalies table to the anomalies trash. - Move to Trash. - Click OK to confirm that you want to delete the anomalies. Permanently delete anomalies To permanently delete a single anomaly, perform the following tasks: - Open the Anomaly Details for the anomaly that you would like to delete. - Click Delete. - Select Delete Permanently. - Click OK to confirm that you want to delete the anomaly permanently. Permanently delete multiple anomalies from the anomalies table. After you delete an anomaly in this way, you cannot restore it. - Delete Permanently. - Click OK to confirm that you want to delete the anomalies. View and restore deleted anomalies Review anomalies sent to the trash and restore anomalies sent to the trash in error from the Anomalies Trash view of the anomalies table. - Select Explore > Anomalies. - Select Actions > View Anomalies Trash. - To restore all anomalies previously sent to the trash, click Actions > Restore Anomalies. - To restore a selection of the anomalies previously sent to the trash, apply additional filters then click Actions > Restore Anomalies. - To restore a single anomaly sent to the trash, click the name to open the Anomaly Details view and click Restore from that view. If necessary, you can review the IDs of permanently deleted anomalies in the /ruleengine/realtimeruleexecutor.log log file. If you export anomalies to another system, such as Splunk Enterprise Security, an analyst can open a link to a deleted anomaly or an anomaly in the trash. You can still view and restore anomalies that have been sent to the trash, but you cannot review anomalies that have been permanently deleted. Following a link to a permanently deleted anomaly displays an error of "The requested anomaly could not be found." Splunk UBA cleans up old anomalies in the trash The AnomalyPurger process runs daily after midnight and removes all anomalies in the trash more than 90 days old. - Configure the persistence.anomalies.trashed.maintain.days property to retain anomalies in the trash for more or fewer than 90 days. - When the process runs, batches of 300K anomalies are removed from the trash until all anomalies in the trash are removed. Configure the persistence.anomalies.trashed.del.limit property to change the batch size as desired. Limits for anomaly actions in Splunk UBA Splunk UBA defines the following limits when taking action on anomalies, such as changing the score, moving to or removing from a watchlist, deleting anomalies, restoring anomalies from the trash, or any anomaly action rules affecting existing anomalies: - In 10 and 20 node clusters, you can perform a single anomaly action that includes up to 200K anomalies - In clusters of 7 nodes or fewer, you can perform a single anomaly action that includes up to 100K anomalies This documentation applies to the following versions of Splunk® User Behavior Analytics: 4.3.1, 4.3.2, 4.3.3, 4.3.4, 5.0.0
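As a sketch only, both purge settings are plain key=value properties. The override file location shown here (/etc/caspida/local/conf/uba-site.properties) and the need to sync the configuration and restart services afterward are assumptions about a typical deployment; confirm them against your Splunk UBA administration documentation before making changes.

# assumed override file: /etc/caspida/local/conf/uba-site.properties
# keep trashed anomalies for 30 days instead of the default 90
persistence.anomalies.trashed.maintain.days=30
# purge trashed anomalies in batches of 100K instead of 300K
persistence.anomalies.trashed.del.limit=100000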
https://docs.splunk.com/Documentation/UBA/5.0.0/User/Delete
2020-01-17T15:56:28
CC-MAIN-2020-05
1579250589861.0
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Project Panorama Integration
Read more about the integration on the add-on page. Version 1.0 of the integration with Project Panorama associates Sprout Invoices' project records with the project records that are created via PSP. Before starting, you will need the latest versions of Project Panorama, Sprout Invoices, and this add-on installed and activated.
1) Create a Sprout Invoices project
You can learn more about Sprout Invoices' projects here.
2) Assign your SI project within Project Panorama
A new meta box is added to your projects page when creating a new PSP project. Shown below is how you can select a Sprout Invoices project to associate. That's really it, since the next steps are things you may have already done, or that happen by default.
3) Assign your invoices and estimates to an SI project
This is something you should already be doing if you have been using projects. Afterward, you'll see some new information added to your SI project admin screens.
https://docs.sproutinvoices.com/article/140-project-panorama-integration
2020-01-17T17:24:14
CC-MAIN-2020-05
1579250589861.0
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/56b29689c69791436156527b/images/570d72ce9033602796674f7d/file-zyLbcE3sHZ.gif', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/56b29689c69791436156527b/images/570d737ac697911a6f03874a/file-WnBWCUdo1L.gif', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/56b29689c69791436156527b/images/570d73f49033602796674f83/file-NE8VGoQZtG.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/56b29689c69791436156527b/images/570d741f9033602796674f85/file-NkFPItUW4l.png', None], dtype=object) ]
docs.sproutinvoices.com
Table of Contents Product Index
Give AlphaKini a boost towards the future. This texture set includes:
- Sci-Patches: a minimalistic style that is great to match any other sci-fi set, adding some robust style.
- She-fi: rough metal and some kind of alien alloy make this set perfect for the 24th-century kickass action girl, 'kini style of course. Use it over a hexagon-plated catsuit for a more all-purpose style.
- Tau Ceti: the perfect match for Tau Ceti; have fun mixing and matching the Tau Ceti armor with AlphaKini. It comes in the three Tau Ceti colors.
http://docs.daz3d.com/doku.php/public/read_me/index/21650/start
2020-01-17T15:52:51
CC-MAIN-2020-05
1579250589861.0
[]
docs.daz3d.com
To build the extension you need to install the ICU library; version 4.0.0 or newer is required. As of PHP 7.4.0, ICU 50.1 or newer is required. This extension is bundled with PHP as of PHP version 5.3.0. Alternatively, the PECL version of this extension may be used with all PHP versions greater than 5.2.0 (5.2.4+ recommended).
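As a quick sanity check (not part of the original requirements text), you can confirm which ICU version your PHP build is linked against. The constants below are provided by the intl extension itself, and the configure flag is only relevant when compiling PHP from source:

<?php
// Requires the intl extension to be loaded.
echo 'ICU version: ' . INTL_ICU_VERSION . PHP_EOL;        // e.g. "50.1.2"
echo 'ICU data version: ' . INTL_ICU_DATA_VERSION . PHP_EOL;
// When building PHP from source, enable the extension with:
//   ./configure --enable-intl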
http://docs.php.net/manual/en/intl.requirements.php
2020-01-17T16:23:50
CC-MAIN-2020-05
1579250589861.0
[]
docs.php.net
Add the Launch Embed Code
This lesson introduces two of the main concepts of Launch:
- Environments
- Embed Codes
New properties are created with three environments:
- Development
- Staging
- Production
These environments correspond to the typical stages in the code development and release process. If desired, you can add additional Development environments, which is common on larger teams with multiple developers working on different projects at the same time. The embed code is a <script> tag that you put on the pages of your site. This lesson shows how to implement the asynchronous embed code of your Launch property's Development environment.
Objectives
At the end of this lesson, you will be able to:
- Obtain the embed code for your Launch property
- Understand the difference between a Development, Staging, and Production environment
- Add a Launch embed code to an HTML document
- Explain the optimal location of the Launch embed code in relation to other code in the <head> of an HTML document
Copy the embed code
From the property Overview screen, click on the Environments tab to go to the environments page. Note that Development, Staging, and Production environments have already been created for you. Development, Staging, and Production environments correspond to the typical environments in the code development and release process. Code is first written by developers in a Development environment. When they have completed their work, they send it to a Staging environment for QA and other teams to review. After the QA and other teams are satisfied, the code is published to the Production environment, which is the public-facing environment that your visitors experience when they come to your website. Launch permits additional Development environments, which is useful in large organizations where multiple developers work on different projects at the same time. These are the only environments needed to complete the tutorial. Environments allow you to have different working versions of your Launch libraries hosted at different URLs, so you can safely add new features and make them available to the right users (such as developers, QA engineers, the public, and so on) at the right time.
- In the Development row, click the Install icon to open the modal. Launch defaults to the asynchronous embed codes.
- Click the Copy icon to copy the installation code to your clipboard.
- Click Close to close the modal.
Implement the embed code in the <head> of the sample HTML page
The embed code should be implemented in the <head> element of all HTML pages that share the property. You might have one or several template files that control the <head> globally across the site, making it a straightforward process to add Launch. If you haven't already, download the sample HTML page: right-click on this link and click Save Link As. Then, open the page in a code editor. Replace the existing embed code on or around line 34 with the one on your clipboard and save the page. Next, open the page in a web browser. If you are loading the page using the file:// protocol, the protocol-relative embed code URL will not load locally, so you may need to prepend https: to it. You may also see a 404 error for the embed code request at this point; this is expected, because you haven't built a library in this Launch environment yet.
Launch implementation best practices
Review some of the Launch implementation best practices that are demonstrated in the sample page:
- Data Layer :
- Adobe strongly recommends creating a digital data layer; see Customer Experience Digital Data Layer 1.0.
- To maximize what you can do in Target, Customer Attributes, and Analytics, define your data layer before the Launch install code.
- JavaScript helper libraries : If you already have a library like jQuery implemented in the <head> of your pages, load it before Launch in order to leverage its syntax in Launch and Target.
- HTML5 doctype : The HTML5 doctype is required for Target implementations.
- preconnect and dns-prefetch : Use preconnect and dns-prefetch to improve the page load time.
- Pre-hiding snippet for asynchronous Target implementations : To manage content flicker when Target is deployed via asynchronous Launch embed codes, you should hardcode a pre-hiding snippet on your pages before the Launch embed codes. This is discussed in the Target tutorial.
Here is a summary of what these best practices look like in the suggested order. Placeholders are in ALL CAPS.
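The snippet that originally followed this summary was not captured here, so the following is a reconstruction sketch based only on the best practices listed above; the data layer object, library path, pre-hiding snippet, and the exact Launch embed URL are placeholders/assumptions:

<!doctype html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>PAGE TITLE</title>
    <link rel="preconnect" href="//assets.adobedtm.com">
    <link rel="dns-prefetch" href="//assets.adobedtm.com">
    <!-- 1. Data layer, defined before the Launch install code -->
    <script>
        var digitalData = { page: { pageInfo: { pageName: "PAGE NAME" } } };
    </script>
    <!-- 2. JavaScript helper libraries (e.g. jQuery), loaded before Launch -->
    <script src="PATH/TO/jquery.min.js"></script>
    <!-- 3. Pre-hiding snippet for asynchronous Target implementations (hardcoded) -->
    <!-- PRE-HIDING SNIPPET GOES HERE -->
    <!-- 4. Launch embed code (asynchronous), copied from your Development environment -->
    <script src="//assets.adobedtm.com/LAUNCH-ENV-SPECIFIC-PATH.min.js" async></script>
</head>
<body>
    PAGE CONTENT
</body>
</html>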
https://docs.adobe.com/content/help/en/launch/using/implement/configure/implement-the-launch-install-code.html
2020-01-17T15:52:57
CC-MAIN-2020-05
1579250589861.0
[array(['/content/dam/help/launch.en/help/assets/launch-environments.png', None], dtype=object) array(['/content/dam/help/launch.en/help/assets/launch-copyinstallcode.png', None], dtype=object) array(['/content/dam/help/launch.en/help/assets/samplepage-404.png', None], dtype=object) ]
docs.adobe.com
Extensibility in U-SQL Big Data Applications Thu, 01 Dec 2016 10:00:00 GMT Learn how to use U-SQL extensibility mechanisms to process a variety of different data ranging from JSON to image data, along with guidance on how to add your own operators. Making Big Data Batch Analytics Easier Using U-SQL Fri, 01 Jan 2016 10:00:00 GMT Michael Rys introduces the new Big Data language U-SQL with its combination of SQL and C# semantics and demonstrates its power for solving batch-oriented analytics problems using a sample dataset. He also shows how to use Visual Studio to speed up development.
https://docs.microsoft.com/en-us%5Carchive%5Cmsdn-magazine%5Cauthors%5CMichael_Rys
2020-01-17T17:31:38
CC-MAIN-2020-05
1579250589861.0
[]
docs.microsoft.com
Contributing & Support¶ LOOT is very much a community project, and contributions from its users are very welcome, whether they be metadata, translations, code or anything else. The best way to contribute is to make changes yourself at GitHub! It’s the fastest way to get changes you want applied, and you’ll get your name automatically immortalised in our credits. If you encounter an issue with LOOT, check the Frequently Asked Questions page in case a solution is available there. Otherwise, general discussion and support takes place in LOOT’s official forum thread, which is linked to on LOOT’s homepage. If you want to submit metadata, the easiest way to do so is to add the metadata to your own LOOT install, then use the Copy Metadata feature to easily get it in a form that you can then edit into a masterlist on GitHub or post in the official forum threads. Information on dirty plugins is very welcome, but for such information to be useful we require at least the filename and the CRC of the dirty plugin. The CRC may be calculated using Wrye Bash or 7-Zip, with other sources being unverified as correct. In the case of 7-Zip, the “CRC checksum for data” is the one required. Any other information, such as the ITM record and deleted reference counts, is very welcome.
https://loot.readthedocs.io/en/0.13.5/app/contributing.html
2020-01-17T15:53:56
CC-MAIN-2020-05
1579250589861.0
[]
loot.readthedocs.io
Namespace: Amazon.Glacier.Model Assembly: AWSSDK.dll Version: (assembly version) The SetDataRetrievalPolicyRequest type exposes the following members. .NET Framework: Supported in: 4.5, 4.0, 3.5 .NET for Windows Store apps: Supported in: Windows 8.1, Windows 8 .NET for Windows Phone: Supported in: Windows Phone 8.1
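The member table from the original reference page was not captured above. As a rough, hedged illustration of how this request type is typically used with the .NET SDK: the property and type names below mirror the underlying Glacier SetDataRetrievalPolicy API, but treat them as assumptions and verify against the current SDK reference.

using System.Collections.Generic;
using Amazon.Glacier;
using Amazon.Glacier.Model;

// Sketch: cap Glacier data retrievals at roughly 1 GB per hour for this account.
var client = new AmazonGlacierClient();
var request = new SetDataRetrievalPolicyRequest
{
    AccountId = "-",  // "-" refers to the account that owns the credentials in use
    Policy = new DataRetrievalPolicy
    {
        Rules = new List<DataRetrievalRule>
        {
            new DataRetrievalRule
            {
                Strategy = "BytesPerHour",   // other strategies: "FreeTier", "None"
                BytesPerHour = 1073741824L   // only meaningful with the BytesPerHour strategy
            }
        }
    }
};
var response = client.SetDataRetrievalPolicy(request);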
http://docs.aws.amazon.com/sdkfornet/latest/apidocs/items/TGlacierSetDataRetrievalPolicyRequestNET45.html
2017-09-19T15:38:30
CC-MAIN-2017-39
1505818685850.32
[]
docs.aws.amazon.com
When you are working on a new test case class, you might want to begin by writing empty test methods such as: public function testSomething() { } to keep track of the tests that you have to write. The problem with empty test methods is that they are interpreted as a success by the PHPUnit framework. This misinterpretation leads to the test reports being useless -- you cannot see whether a test is actually successful or just not yet implemented. Calling $this->fail() in the unimplemented test method does not help either, since then the test will be interpreted as a failure. This would be just as wrong as interpreting an unimplemented test as a success. If we think of a successful test as a green light and a test failure as a red light, we need an additional yellow light to mark a test as being incomplete or not yet implemented. PHPUnit_Framework_IncompleteTest is a marker interface for marking an exception that is raised by a test method as the result of the test being incomplete or currently not implemented. PHPUnit_Framework_IncompleteTestError is the standard implementation of this interface. Example 7.1 shows a test case class, SampleTest, that contains one test method, testSomething(). By calling the convenience method markTestIncomplete() (which automatically raises an PHPUnit_Framework_IncompleteTestError exception) in the test method, we mark the test as being incomplete. Example 7.1: Marking a test as incomplete <?php use PHPUnit\Framework\TestCase; class SampleTest extends TestCase { public function testSomething() { // Optional: Test anything here, if you want. $this->assertTrue(true, 'This should already work.'); // Stop here and mark this test as incomplete. $this->markTestIncomplete( 'This test has not been implemented yet.' ); } } ?> An incomplete test is denoted by an I in the output of the PHPUnit command-line test runner, as shown in the following example: phpunit --verbose SampleTest PHPUnit 6.2.0 by Sebastian Bergmann and contributors. I Time: 0 seconds, Memory: 3.95Mb There was 1 incomplete test: 1) SampleTest::testSomething This test has not been implemented yet. /home/sb/SampleTest.php:12 OK, but incomplete or skipped tests! Tests: 1, Assertions: 1, Incomplete: 1. Table 7.1 shows the API for marking tests as incomplete. Table 7.1. API for Incomplete Tests Not all tests can be run in every environment. Consider, for instance, a database abstraction layer that has several drivers for the different database systems it supports. The tests for the MySQL driver can of course only be run if a MySQL server is available. Example 7.2 shows a test case class, DatabaseTest, that contains one test method, testConnection(). In the test case class' setUp() template method we check whether the MySQLi extension is available and use the markTestSkipped() method to skip the test if it is not. Example 7.2: Skipping a test <?php use PHPUnit\Framework\TestCase; class DatabaseTest extends TestCase { protected function setUp() { if (!extension_loaded('mysqli')) { $this->markTestSkipped( 'The MySQLi extension is not available.' ); } } public function testConnection() { // ... } } ?> A test that has been skipped is denoted by an S in the output of the PHPUnit command-line test runner, as shown in the following example: phpunit --verbose DatabaseTest PHPUnit 6.2.0 by Sebastian Bergmann and contributors. S Time: 0 seconds, Memory: 3.95Mb There was 1 skipped test: 1) DatabaseTest::testConnection The MySQLi extension is not available. 
/home/sb/DatabaseTest.php:9 OK, but incomplete or skipped tests! Tests: 1, Assertions: 0, Skipped: 1. Table 7.2 shows the API for skipping tests. Table 7.2. API for Skipping Tests In addition to the above methods it is also possible to use the @requires annotation to express common preconditions for a test case. Table 7.3. Possible @requires usages Example 7.3: Skipping test cases using @requires <?php use PHPUnit\Framework\TestCase; /** * @requires extension mysqli */ class DatabaseTest extends TestCase { /** * @requires PHP 5.3 */ public function testConnection() { // Test requires the mysqli extension and PHP >= 5.3 } // ... All other tests require the mysqli extension } ?> If you are using syntax that doesn't compile with a certain PHP Version look into the xml configuration for version dependent includes in the section called “Test Suites” © 2005–2017 Sebastian Bergmann Licensed under the Creative Commons Attribution 3.0 Unported License.
http://docs.w3cub.com/phpunit~6/incomplete-and-skipped-tests/
2017-09-19T15:19:47
CC-MAIN-2017-39
1505818685850.32
[]
docs.w3cub.com
pg_locks
The pg_locks view provides access to information about the locks held by open transactions within Greenplum Database. There are several distinct types of lockable objects: whole relations (such as tables), individual pages of relations, individual tuples of relations, transaction IDs, and general database objects. Also, the right to extend a relation is represented as a separate lockable object.
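As a quick illustration (not part of the reference description itself), a query along the following lines is a common way to inspect current locks; the column names used here follow the standard pg_locks definition:

-- Show which objects are locked, by which backend, and whether each lock was granted
SELECT locktype,
       relation::regclass AS relation,
       transactionid,
       pid,
       mode,
       granted
FROM pg_locks
ORDER BY granted, pid;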
http://gpdb.docs.pivotal.io/43120/ref_guide/system_catalogs/pg_locks.html
2017-09-19T15:25:35
CC-MAIN-2017-39
1505818685850.32
[array(['/images/icon_gpdb.png', None], dtype=object)]
gpdb.docs.pivotal.io
Elixir provides excellent interoperability with Erlang libraries. In fact, Elixir discourages simply wrapping Erlang libraries in favor of directly interfacing with Erlang code. In this section we will present some of the most common and useful Erlang functionality that is not found in Elixir. As you grow more proficient in Elixir, you may want to explore the Erlang STDLIB Reference Manual in more detail. The built-in Elixir String module handles binaries that are UTF-8 encoded. The binary module is useful when you are dealing with binary data that is not necessarily UTF-8 encoded. iex> String.to_charlist "Ø" [216] iex> :binary.bin_to_list "Ø" [195, 152] The above example shows the difference; the String module returns Unicode codepoints, while :binary deals with raw data bytes. Elixir does not contain a function similar to printf found in C and other languages. Luckily, the Erlang standard library functions :io.format/2 and :io_lib.format/2 may be used. The first formats to terminal output, while the second formats to an iolist. The format specifiers differ from printf, refer to the Erlang documentation for details. iex> :io.format("Pi is approximately given by:~10.3f~n", [:math.pi]) Pi is approximately given by: 3.142 :ok iex> to_string :io_lib.format("Pi is approximately given by:~10.3f~n", [:math.pi]) "Pi is approximately given by: 3.142\n" Also note that Erlang’s formatting functions require special attention to Unicode handling. The crypto module contains hashing functions, digital signatures, encryption and more: iex> Base.encode16(:crypto.hash(:sha256, "Elixir")) "3315715A7A3AD57428298676C5AE465DADA38D951BDFAC9348A8A31E9C7401CB" The :crypto module is not part of the Erlang standard library, but is included with the Erlang distribution. This means you must list :crypto in your project’s applications list whenever you use it. To do this, edit your mix.exs file to include: def application do [extra_applications: [:crypto]] end The digraph module (as well as digraph_utils) contains functions for dealing with directed graphs built of vertices and edges. After constructing the graph, the algorithms in there will help finding for instance the shortest path between two vertices, or loops in the graph. Given three vertices, find the shortest path from the first to the last. iex> digraph = :digraph.new() iex> coords = [{0.0, 0.0}, {1.0, 0.0}, {1.0, 1.0}] iex> [v0, v1, v2] = (for c <- coords, do: :digraph.add_vertex(digraph, c)) iex> :digraph.add_edge(digraph, v0, v1) iex> :digraph.add_edge(digraph, v1, v2) iex> :digraph.get_short_path(digraph, v0, v2) [{0.0, 0.0}, {1.0, 0.0}, {1.0, 1.0}] Note that the functions in :digraph alter the graph structure in-place, this is possible because they are implemented as ETS tables, explained next. The modules ets and dets handle storage of large data structures in memory or on disk respectively. ETS lets you create a table containing tuples. By default, ETS tables are protected, which means only the owner process may write to the table but any other process can read. ETS has some functionality to be used as a simple database, a key-value store or as a cache mechanism. The functions in the ets module will modify the state of the table as a side-effect. 
iex> table = :ets.new(:ets_test, []) # Store as tuples with {name, population} iex> :ets.insert(table, {"China", 1_374_000_000}) iex> :ets.insert(table, {"India", 1_284_000_000}) iex> :ets.insert(table, {"USA", 322_000_000}) iex> :ets.i(table) <1 > {<<"India">>,1284000000} <2 > {<<"USA">>,322000000} <3 > {<<"China">>,1374000000} The math module contains common mathematical operations covering trigonometry, exponential, and logarithmic functions. iex> angle_45_deg = :math.pi() * 45.0 / 180.0 iex> :math.sin(angle_45_deg) 0.7071067811865475 iex> :math.exp(55.0) 7.694785265142018e23 iex> :math.log(7.694785265142018e23) 55.0 The queue is a data structure that implements (double-ended) FIFO (first-in first-out) queues efficiently: iex> q = :queue.new iex> q = :queue.in("A", q) iex> q = :queue.in("B", q) iex> {value, q} = :queue.out(q) iex> value {:value, "A"} iex> {value, q} = :queue.out(q) iex> value {:value, "B"} iex> {value, q} = :queue.out(q) iex> value :empty rand has functions for returning random values and setting the random seed. iex> :rand.uniform() 0.8175669086010815 iex> _ = :rand.seed(:exs1024, {123, 123534, 345345}) iex> :rand.uniform() 0.5820506340260994 iex> :rand.uniform(6) 6 The zip module lets you read and write ZIP files to and from disk or memory, as well as extracting file information. This code counts the number of files in a ZIP file: iex> :zip.foldl(fn _, _, _, acc -> acc + 1 end, 0, :binary.bin_to_list("file.zip")) {:ok, 633} The zlib module deals with data compression in zlib format, as found in the gzip command. iex> song = " ...> Mary had a little lamb, ...> His fleece was white as snow, ...> And everywhere that Mary went, ...> The lamb was sure to go." iex> compressed = :zlib.compress(song) iex> byte_size song 110 iex> byte_size compressed 99 iex> :zlib.uncompress(compressed) "\nMary had a little lamb,\nHis fleece was white as snow,\nAnd everywhere that Mary went,\nThe lamb was sure to go." © 2012–2017 Plataformatec Licensed under the Apache License, Version 2.0.
http://docs.w3cub.com/elixir~1.5/erlang-libraries/
2017-09-19T15:22:08
CC-MAIN-2017-39
1505818685850.32
[]
docs.w3cub.com
While we strive to give you perfect uptime, like other complex applications RethinkDB is not immune to crashing. Here are some tips on how to recover from a crash, how to submit a bug report, and how to maximize availability. You may be able to check if the kernel’s out-of-memory killer is responsible for the crash by checking the system message buffer: sudo dmesg | grep oom This may show you messages similar to this: rethinkdb invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0 [<ffffffff8111d272>] ? oom_kill_process+0x82/0x2a0 If this is the case, you may be able to avoid crashes by changing RethinkDB’s cache size. For information on in-memory caches, how to check their current size, and how to change them, read Understanding RethinkDB memory requirements. The log file’s location is dependent on your system configuration and how you started RethinkDB. If you started rethinkdb on a terminal rather than from a startup script, it will log to the rethinkdb_data directory. By default it will write to log_file but this may be overridden with the --log-file startup option. If your Linux system uses systemd, use journalctl to view the log: journalctl -u rethinkdb@<instance> If you installed RethinkDB through a package manager on a system that does not use systemd, then you may have to check where it’s configured to log. It’s very likely this will be in the /var/log/ directory (i.e., /var/log/rethinkdb). The log may give you information as to what caused the crash. If it doesn’t appear to be a memory issue and the log provides no clue, you can try asking for support on our official IRC channel, #rethinkdb on freenode or our Google Group. If your problem is a crash that we’ve seen before—or our users have—this may get you a quick answer. We use Github for issue tracking:. If you want to report a suspected bug in RethinkDB, open an issue there. The most important things for you to provide for us are: The full output from rethinkdb --version, something like: rethinkdb 1.13.3 (CLANG 5.1 (clang-503.0.40)) The full output from uname -a, something like: Darwin rethink.local 13.3.0 Darwin Kernel Version 13.3.0: Tue Jun 3 21:27:35 PDT 2014; root:xnu-2422.110.17~1/RELEASE_X86_64 x86_64 The backtrace from the crash, if it’s available in the logs. Other things that might be helpful to us, if you have them: rethinkdb._debug_table_statustable (a “hidden” table in the rethinkdbsystem database) rethinkdbon startup In the Data Explorer, the following command will output the contents of all the configuration/status tables and the most recent 50 lines of the logs table: r.expr(["current_issues", "jobs", "stats", "server_config", "server_status", "table_config", "table_status", "db_config", "cluster_config"]).map( [r.row, r.db('rethinkdb').table(r.row).coerceTo('array')] ).coerceTo('object').merge( {logs: r.db('rethinkdb').table('logs').limit(50).coerceTo('array')} ) RethinkDB supports replication of data: every table in a database can be replicated as many times as you have servers in a cluster. Setting up replication is a simple operation with the web interface or the command line tool. For details, read Sharding and replication. RethinkDB does not have fully automatic failover (yet), but if a server in a cluster crashes it can be manually removed from the cluster. In most cases, RethinkDB will recover from such a situation automatically. For information on this process, read Failover. © RethinkDB contributors Licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License.
http://docs.w3cub.com/rethinkdb~java/docs/crashes/
2017-09-19T15:21:58
CC-MAIN-2017-39
1505818685850.32
[]
docs.w3cub.com
This article explains the visual elements of the RadComboBox control. Name Description Textbox Could also be referred to as "box", "input", etc. This is where the search text is typed. Toggle Button Positioned next to the textbox, it is used to toggle the dropdown containing the combobox items. Dropdown List Could also be referred to as "popup" or "dropdown". The combobox items that match the filter criteria (or all items if there isn't a search string provided) are displayed in this list which appears upon typing in the textbox or clicking/tapping the toggle button. List item Could also be referred to as "item", "entry", "option", etc.. Represents a single data item from the control's datasource. The value visible in the item comes from the field in the datasource, assigned as a dataTextField. If this field is not assigned, the text value is the string representation of the object to which the item is bound.
http://docs.telerik.com/help/windows-8-html/combobox-visual-structure.html
2017-09-19T15:30:19
CC-MAIN-2017-39
1505818685850.32
[]
docs.telerik.com
What Is the AWS Serverless Application Repository? The AWS Serverless Application Repository makes it easy for developers and enterprises to quickly find, deploy, and publish serverless applications in the AWS Cloud. For more information about serverless applications, see Serverless Computing and Applications You can easily publish applications, sharing them publicly with the community at large, or privately within your team or across your organization. To publish a serverless application (or app), you can use the AWS Management Console, the AWS SAM command line interface (AWS SAM CLI), or AWS SDKs to upload your code. Along with your code, you upload a simple manifest file, also known as an AWS Serverless Application Model (AWS SAM) template. For more information about AWS SAM, see the AWS Serverless Application Model Developer Guide. The AWS Serverless Application Repository is deeply integrated with the AWS Lambda console. This integration means. In this guide, you can learn about the two ways to work with the AWS Serverless Application Repository: Publishing Applications – Configure and upload applications to make them available to other developers, and publish new versions of applications. Deploying Applications – Browse for applications and view information about them, including source code and readme files. Also install, configure, and deploy applications of your choosing. Next Steps For a tutorial about publishing a sample application to the AWS Serverless Application Repository, see Quick Start: Publishing Applications. For instructions about deploying applications from the AWS Serverless Application Repository, see How to Deploy Applications.
https://docs.amazonaws.cn/en_us/serverlessrepo/latest/devguide/what-is-serverlessrepo.html
2021-01-15T20:58:42
CC-MAIN-2021-04
1610703496947.2
[]
docs.amazonaws.cn
You are viewing the RapidMiner Studio documentation for version 9.5 - Check here for latest version Create Threshold (RapidMiner Studio Core)
This operator creates a user-defined threshold that, together with the Apply Threshold operator, turns soft (confidence-based) predictions into crisp predictions: examples whose prediction confidence for the selected class is above the threshold are assigned 'positive' predictions and all other examples are assigned 'negative' predictions.
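A small illustrative example of how such a threshold behaves (numbers invented purely for illustration; exact boundary handling may differ in the operator itself):

threshold = 0.7 on the confidence of the 'positive' class

confidence(positive) = 0.85  ->  prediction = positive   (0.85 > 0.7)
confidence(positive) = 0.55  ->  prediction = negative   (0.55 <= 0.7, even though it exceeds the default 0.5 cut-off)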
https://docs.rapidminer.com/9.5/studio/operators/scoring/confidences/create_threshold.html
2021-01-15T21:05:30
CC-MAIN-2021-04
1610703496947.2
[]
docs.rapidminer.com
You are viewing the RapidMiner Studio documentation for version 9.6 - Check here for latest version Weight by Deviation (RapidMiner Studio Core)
Synopsis
This operator calculates the relevance of attributes of the given ExampleSet based on the (normalized) standard deviation of the attributes.
Description
The Weight by Deviation operator calculates the weight of attributes with respect to the label attribute based on the (normalized) standard deviation of the attributes. The higher the weight of an attribute, the more relevant it is considered. The standard deviations can be normalized by the average, minimum, or maximum of the attribute. Please note that this operator can only be applied on ExampleSets with a numerical label. The standard deviation is a measure of how spread out numbers are. The formula is simple: it is the square root of the variance.
- normalize: This parameter indicates if the standard deviation should be divided by the minimum, maximum, or average of the attribute. Range: selection
Tutorial Processes
Calculating the attribute weights of the Polynomial data set
The 'Polynomial' data set is loaded using the Retrieve operator. A breakpoint is inserted here so that you can have a look at the ExampleSet. You can also see the standard deviation of all attributes in the 'Statistics' column in the Meta Data View. The Weight by Deviation operator is applied on this ExampleSet to calculate the weights of the attributes. The normalize weights parameter is set to false, thus the weights will not be normalized. The sort weights parameter is set to true and the sort direction parameter is set to 'ascending', thus the results will be in ascending order of the weights. You can verify this by viewing the results of this process in the Results Workspace. You can also see that these weights are exactly the same as the standard deviations of the attributes.
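For reference (added here, not part of the captured page), the standard deviation of an attribute with values x_1, ..., x_N is, in its population form,

\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_i - \bar{x}\right)^2}

where \bar{x} is the attribute's mean; some implementations use the sample form with N - 1 in the denominator instead.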
https://docs.rapidminer.com/9.6/studio/operators/modeling/feature_weights/weight_by_deviation.html
2021-01-15T21:19:57
CC-MAIN-2021-04
1610703496947.2
[]
docs.rapidminer.com
Click the Filters toggle to show or hide the Filters section. After you select a cluster, the All Hosts page displays with a list of the hosts filtered by the cluster name.
https://docs.cloudera.com/cloudera-manager/7.2.6/managing-clusters/topics/cm-status.html
2021-01-15T21:40:43
CC-MAIN-2021-04
1610703496947.2
[]
docs.cloudera.com
0138 Anonymous General information - Submitterʼs details - This submission has been made anonymously About the submission - Please select the industry your submission is in relation to. If required, you may select multiple industries. - Other Services - Do you have a particular regional interest? If required, you may select multiple regions. - National - New South Wales. Supporting businesses to attract the right candidates for their vacant positions that they may not have the time, resource or ability to do so themselves helps them to achieve their goals and contribute to a strong economy. Placing Recruitment consultant onto the STSOL will weaken the supply of good strong recruitment consultants coming to Australia and being able to stay long term with no opportunity of PR to assist long term plans of recruitment business owners. I believe this move will significantly reduce the effectiveness of the recruitment industry to support Australian businesses in achieving their goals of growth, prosperity and stability.
https://docs.employment.gov.au/0138-anonymous
2021-01-15T21:03:19
CC-MAIN-2021-04
1610703496947.2
[]
docs.employment.gov.au
Ontology bounty program provides an easy and accessible channel to the community to contribute anything that is developed or discovered by member of the community. The following libraries have been created and are maintained by the community. A set of guides and tutorials on dApp development using the Ontology framework. A Python library that can be used to implement an anonymous credential scheme. An extension designed by Matus Zamborsky to develop, test, and deploy smart contracts on the Ontology blockchain.
https://docs.ont.io/community/community-libraries
2021-01-15T20:56:50
CC-MAIN-2021-04
1610703496947.2
[]
docs.ont.io
You are viewing the RapidMiner Studio documentation for version 9.5 - Check here for latest version Cluster Distance Performance (RapidMiner Studio Core)
Synopsis
This operator is used for performance evaluation of centroid-based clustering methods. This operator delivers a list of performance criteria values based on cluster centroids. Please notice that empty clusters will be ignored in the calculation of the Davies-Bouldin index.
Tutorial Processes
Evaluating the clustering performance of the K-Medoids operator
The 'Ripley-Set' has two real attributes: 'att1' and 'att2'. The K-Medoids operator is applied on this data set with default values for all parameters. A breakpoint is inserted at this step so that you can have a look at the results of the K-Medoids operator. You can see that two new attributes are created by the K-Medoids operator. The id attribute is created to distinguish examples clearly. The cluster attribute is created to show which cluster the examples belong to. As parameter k was set to 2, only two clusters are possible. That is why each example is assigned to either 'cluster_0' or 'cluster_1'.
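For reference (added here; the formula is not part of the captured page), the Davies-Bouldin index mentioned above is commonly defined, for k clusters with centroids c_i and average within-cluster distances \sigma_i, as

DB = \frac{1}{k}\sum_{i=1}^{k} \max_{j \neq i} \frac{\sigma_i + \sigma_j}{d(c_i, c_j)}

where d(c_i, c_j) is the distance between centroids; lower values indicate more compact, better-separated clusters.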
https://docs.rapidminer.com/9.5/studio/operators/validation/performance/segmentation/cluster_distance_performance.html
2021-01-15T21:18:54
CC-MAIN-2021-04
1610703496947.2
[]
docs.rapidminer.com
Upgrading a module to be compatible with SilverStripe 4 This guide will help you upgrade a SilverStripe 3 module to be compatible with SilverStripe 4. You should be familiar with Upgrading a project to SilverStripe 4 before reading this guide. The process for upgrading a SilverStripe module is very similar to the process for Upgrading a SilverStripe project. This guide focuses on highlighting ways in which upgrading a module differs from upgrading a regular project. Improving the upgrade experience of your users with a ".upgrade.yml" file Making your module compatible with SilverStripe 4 is only one part of the process. As a module maintainer, you also want to provide a good upgrade experience for your users. Your module can integrate with the SilverStripe upgrader just like the SilverStripe core modules. Your SilverStripe 4 module should ship with a .upgrade.yml file. This file is read by the upgrader and will define new APIs introduced by the upgraded version of your module. Each step in this guide details what entry you should add to your module's .upgrade.yml file. Step 0 - Branching off your project You'll want to run your module upgrade on a dedicated development branch. While it's possible to upgrade a module from within a SilverStripe project, it's usually cleaner and easier to clone your module and work directly on it. # We're assuming that the default branch of your module is the latest SS3-compatible branch git clone [email protected]:example-user/silverstripe-example-module.git cd silverstripe-example-module git checkout -b pulls/ss4-upgrade git push origin pulls/ss4-upgrade --set-upstream If you're planning to keep supporting the SilverStripe 3 version of your module, consider creating a dedicated SilverStripe 3 branch. To require the development branch of your module in a SilverStripe 4 project, you can use composer and prefix the name of your branch with dev-. composer require example-user/silverstripe-example-module dev-pulls/ss4-upgrade If the development branch is hosted on a different Git remote than the one used to publish your module, you'll need to add a VCS entry to your test project's composer.json file. { "name": "example-user/test-project", "type": "project", "require": { "example-user/silverstripe-example-module": "dev-pulls/ss4-upgrade" }, "repositories": [ { "type": "vcs", "url": "[email protected]:alternative-user/silverstripe-example-module.git" } ] } You will not be able to install your development branch in a SilverStripe 4 project until you've adjusted your module's dependencies.
{ "name": "example-user/silverstripe-example-module", - "type": "silverstripe-module", + "type": "silverstripe-vendormodule", "require": { + "silverstripe/vendor-plugin": "^1", + "silverstripe/framework": "^3" } } Prefer specific modules over recipes When upgrading a project, it is recommended to require recipes rather than modules. However, when upgrading a module, you want to limit the number of additional packages that gets installed along with your module. You should target specific packages that your module depends on. For example, let's say your module adds a ModelAdmin to the SilverStripe administration area without interacting with the CMS directly. In this scenario, the main module you need is silverstripe/admin which contains the ModelAdmin class and related administration functionality. If you update your composer.json file to require silverstripe/recipe-cms, you'll force your users to install a lot of modules they may not need like silverstripe/cms, silverstripe/campaign-admin, silverstripe/asset-admin, silverstripe/versioned-admin. Avoid rigid constraints Choose constraints based on the minimum version of SilverStripe 4 you are planning on supporting and allow your module to work with future releases. For example, if your module requires an API that got introduced with the 4.1 release of silverstripe/framework, then that's the version you should target. You should use the caret symbol ( ^) over the tilde ( ~) so your module works with more recent releases. In this scenario, your constraint should look like "silverstripe/framework": "^4.1". Avoid tracking unnecessary files If you run composer commands from your module's folder, a lock file will be created and dependencies will be installed in a vendor folder. You may also get project-files and public-files entries added under the extra key in your composer.json. While these changes may be useful for testing, they should not be part of the final release of your module. Finalising the module's dependency upgrade You should commit the changes to your module's composer.json and push them to your remote branch. By this point, your module should be installable in a test SilverStripe 4 project. It will be installed under the vendor directory (e.g.: vendor/example-user/silverstripe-example-module). However, it will throw exceptions if you try to run it. From this point, you can either work from a test project or you can keep working directly on your module. Step 2 - Update your environment configuration As a module maintainer, you shouldn't be shipping any environment file with your module. So there's no need for you to run the upgrader environment command. If your module requires environment variables, you should update your documentation accordingly, but otherwise you can move on to the next step. Step 3 - Namespacing your module Namespacing your module is mandatory to get it working with SilverStripe 4. You can use the add-namespace upgrader command to achieve this. # If you are working from a test project, you need to specify the `--root-dir` parameter upgrade add-namespace --root-dir vendor/example-user/silverstripe-example-module \ "ExampleUser\\SilverstripeExampleModule" \ vendor/example-user/silverstripe-example-module/code/ # If you are working directly from the module, you can ommit `--root-dir` parameter upgrade add-namespace "ExampleUser\\SilverstripeExampleModule" code/ If your module codebase is structured in folders, you can use the --psr4 and --recursive flag to quickly namespace your entire module in one command. 
This command will recursively go through the code directory and namespace all files based on their position relative to code. upgrade add-namespace --recursive --psr4 "ExampleUser\\SilverstripeExampleModule" code/ Configuring autoloading You need to update your composer.json file with an autoload entry, so composer knows what folder maps to what namespace. You can do this manually: { "name": "example-user/silverstripe-example-module", "type": "silverstripe-vendormodule", "require": { "silverstripe/framework": "^4", "silverstripe/vendor-plugin": "^1" - } + }, + "autoload": { + "psr-4": { + "ExampleUser\\SilverstripeExampleModule\\": "code/", + "ExampleUser\\SilverstripeExampleModule\\Tests\\": "tests/" + } + } } Alternatively, you can use the --autoload parameter when calling add-namespace to do this for you. upgrade add-namespace --recursive --psr4 --autoload "ExampleUser\\SilverstripeExampleModule" code/ upgrade add-namespace --recursive --psr4 --autoload "ExampleUser\\SilverstripeExampleModule\\Tests" tests Learn more about configuring autoloading in your composer.json file. Preparing your .upgrade.yml file The add-namespace command will create a .upgrade.yml file that maps your old class names to their new namespaced equivalent. This will be used by the upgrade command in the next step. Depending on the nature of your module, you may have some class names that map to other common names. When the upgrade command runs, it will try to substitute any occurrence of the old name with the namespaced one. This can lead to accidental substitution. For example, let's say you have a Link class in your module. In many projects, the word Link will be used for other purposes like a field label or property names. You can manually update your .upgrade.yml file to define a renameWarnings section. This will prompt users upgrading to confirm each substitution. mappings: # Prompt user before replacing references to Link Link: ExampleUser\SilverstripeExampleModule\Model\Link # No prompt when replacing references to ExampleModuleController ExampleModuleController: ExampleUser\SilverstripeExampleModule\Controller renameWarnings: - Link Make sure to commit this file and to ship it along with your upgraded module. This will allow your users to update references to your module's classes if they use the upgrader on their project. Finalising your namespaced module By this point: - all your classes should be inside a namespace - your composer.json file should have an autoload definition - you should have a .upgrade.yml file. However, your codebase is still referencing SilverStripe classes by their old non-namespaced names. Commit your changes before proceeding to the next step.
If you are planning on making changes to your own module's API, take a minute to define those changes in your .upgrade.yml: - this will help you with updating your own codebase - your users will be warned when using your module's deprecated APIs. You can define warnings for deprecated APIs along with a message. If there's a one-to-one equivalent for the deprecated API, you can also define a replacement. e.g.: warnings: classes: 'ExampleUser\SilverstripeExampleModule\Controller': message: 'This warning message will be displayed to your users' url: '' methods: 'ExampleUser\SilverstripeExampleModule\AmazingClass::deprecatedMethod()': message: 'Replace with a different method' replacement: 'newBetterMethod' props: 'ExampleUser\SilverstripeExampleModule\AmazingClass->oldProperty': message: 'Replace with a different property' replacement: 'newProperty' When you are done updating your .upgrade.yml file, you can run the inspect command to search for deprecated APIs. # If upgrading from inside a test project upgrade-code inspect --root-dir vendor/example-user/silverstripe-example-module \ vendor/example-user/silverstripe-example-module/code/ # If upgrading the module directly upgrade-code inspect code/ Step 6 - Update your entry point Module do not have an entry point. So there's nothing to do here. Step 7 - Update project structure This step is optional. We recommend renaming code to src. This is only a convention and will not affect how your module will be executed. If you do rename this directory, do not forget to update your autoload configuration in your composer.json file. Step 8 - Switch to public web-root The public web root does not directly affect module. So you can skip this step. Step 9 - Move away from hardcoded paths for referencing static assets While SilverStripe 4 projects can get away with directly referencing static assets under some conditions, modules must dynamically expose their static assets. This is necessary to move modules to the vendor folder and to enable the public web root. Exposing your module's static assets You'll need to update your module's composer.json file with an extra.expose key. { "name": "example-user/silverstripe-example-module", "type": "silverstripe-vendormodule", "require": { "silverstripe/framework": "^4", "silverstripe/vendor-plugin": "^1" }, "autoload": { "psr-4": { "ExampleUser\\SilverstripeExampleModule\\": "code/" } }, "autoload-dev": { "psr-4": { "ExampleUser\\SilverstripeExampleModule\\Tests\\": "tests/" } - } + }, + "extra": { + "expose": [ + "images", + "styles", + "javascript" + ] + } } Referencing static assets This process is essentially the same for projects and modules. The only difference is that module static asset paths must be prefix with the module's name as defined in their composer.json file. <?php - Requirements::css('silverstripe-example-module/styles/admin.css'); + Requirements::css('example-user/silverstripe-example-module: styles/admin.css'); $pathToImage = - 'silverstripe-example-module/images/logo.png'; + ModuleResourceLoader::singleton()->resolveURL('example-user/silverstripe-example-module: images/logo.png'); Step 10 - Update database class references Just like projects, your module must define class names remapping for every DataObject child. 
SilverStripe\ORM\DatabaseAdmin: classname_value_remapping: ExampleModuleDummyDataObject: ExampleUser\SilverstripeExampleModule\Models\DummyDataObject On the first dev/build after a successful upgrade, the ClassName field on each DataObject table will be substituted with the namespaced classname. Extra steps You've been through all the steps covered in the regular project upgrade guide. These 2 additional steps might not be necessary. Create migration tasks Depending on the nature of your module, you might need to perform additional tasks to complete the upgrade process. For example, the framework module ships with a file migration task that converts files from the old SilverStripe 3 structure to the new structure required by SilverStripe 4. Extend BuildTask and create your own migration task if your module requires post-upgrade work. Document this clearly for your users so they know they need to run the task after they're done upgrading their project. Keep updating your .upgrade.yml file The upgrader can be run on projects that have already been upgraded to SilverStripe 4. As you introduce new APIs and deprecate old ones, you can keep updating your .upgrade.yml file to make it easy for your users to keep their code up to date. If you do another major release of your module aimed at SilverStripe 4, you can use all the tools in the upgrader to make the upgrade process seamless for your users.
https://docs.silverstripe.org/en/4/upgrading/upgrading_module/
2021-01-15T19:54:02
CC-MAIN-2021-04
1610703496947.2
[]
docs.silverstripe.org
Converts the measure of the specified angle from degrees to radians. radians( angle_in_degrees ) angle_in_degrees: (Decimal) An angle measure that will be converted into radians. Returns: Decimal. This function can only be used for values between 0 and 2π (~6.283). You can experiment with this function in the test box below. radians(180) returns 3.141592653589793
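For reference (not shown on the captured page), the conversion the function performs is the standard one:

\text{radians}(d) = d \times \frac{\pi}{180}, \qquad \text{so } \text{radians}(180) = 180 \times \frac{\pi}{180} = \pi \approx 3.141592653589793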
https://docs.appian.com/suite/help/18.3/fnc_trigonometry_radians.html
2021-01-15T21:43:20
CC-MAIN-2021-04
1610703496947.2
[]
docs.appian.com
To enable Push Notifications and In-App Messaging for an app, simply enable the Messaging feature within Kumulos, upload your APNS and/or FCM certificates and then integrate an SDK into your app project. As soon as you publish the update to the app stores you will then be able to send push notifications and in-app messages to your users. If you are an agency managing an app for a client, you can send messages on their behalf as part of a retention and engagement service you deliver for them, or you can let them do this from their Client Portal showing your brand.
Enable Messaging
Create a Client
Kumulos groups your Apps under Clients, usually the name of the company whose app you are building or optimizing. To add a new client, click the primary action button from your agency console. Fill in the name of the client. If you are managing an app for someone else, you can enable the Client Portal so they can send push notifications and in-app messages to their users. Click "Save" when done. You will now be redirected to the client dashboard.
Add an App
Now, you can add an app for that client by clicking the primary action button. Fill in the name of the App, and optionally, add a brief description and upload an icon. Click "Save" when done. You will now be redirected to the dashboard for that app where you can enable Push Notifications.
Enable Messaging
To start a 30-day free trial, either select Messaging from the left menu or click the Start button next to Messaging on the App Dashboard. You will now see some more information about the Messaging feature. Click Enable when prompted. You will be asked to confirm that you wish to begin a 30-day free trial. Click Yes, proceed when prompted. Your 30-day trial of messaging will begin.
Configure Gateways
In order to send messages to users of your app, you must configure one or more messaging gateways. For mobile push notifications, this will likely involve configuring the Apple Push Notification Service (APNS) and/or Firebase Cloud Messaging (FCM), uploading your push certificates to Kumulos and then integrating the SDK. For in-app messaging, you simply need to integrate the SDK. There is no additional configuration required for web push notifications. Click CONFIGURE NOW or expand 'Messaging' in the left menu and select 'Configuration'. You will now see the Messaging Configuration screen where you can add your APNS and/or FCM credentials and download the appropriate SDKs for your app. Click the cog icon next to the platform you would like to configure to open the dialog where you can enter the required information to send push notifications to iOS devices via APNS and/or Android devices via FCM.
Safari Configuration
To send web notifications to desktop Safari, you'll need to create a separate Website Push Identifier Certificate in the Apple Developer Member Center. Please see our Web SDK Integration Guide for full details of how to create this certificate. Once you have this certificate, expand 'Messaging' in the left menu, select 'Configuration' and click the cog next to the Safari icon and upload your certificate along with your site URL and icon.
FCM Configuration
In order to enable push notifications for Android with Kumulos, you'll need to set up an FCM project and configure push for your app. These steps are shown in the following video guide. Enabling Push Notifications for Android
HCM Configuration
To configure push notifications for Huawei devices via Huawei Cloud Messaging (HCM), expand 'Messaging' in the left menu, select 'Configuration' and click the cog next to the Huawei icon.
You will also need to enter the App ID and App Secret from the Huawei Developer Console. When integrating an SDK, ensure you also complete the additional steps to add Huawei Mobile Services dependencies, files, plugins and manifest entries to your project. Download and Integrate an SDK¶ You can now download the appropriate SDK(s) for your mobile app by selecting an SDK from the list at the bottom of the Configuration screen. Follow the integration guide for that SDK to initialize the Kumulos client in your app project. Please note that you will need the API Key and Secret Key shown on the App Dashboard to do this. For web push notifications, you will need the Website & PWA SDK, which is distributed through a CDN or NPM. In addition to your API Key and Secret Key shown on the App Dashboard, you will also need the VAPID public key from the code snippets web push configuration dialog. Please see the Website & PWA integration guide for more details.! Reviewing your Configuration¶ To review your messaging gateway configuration, expand 'Messaging' in the left menu and select 'Configuration'. This will show how many installs are subscribed to receive push notifications and how many and users have opted in to receive in-app messages, broken down by platform. Viewing the Error Log¶ The Error Log will show any errors Kumulos has encountered sending push notifications to the native push gateways such as APNS or FCM. You can see the gateway where the error occurred, status code, the error message itself and when it happened. This can be help debug any problems. If you cannot see the Error Log, you may need to update to the latest APNS/FCM APIs. Common Errors¶ Here are some common errors along with details of how to resolve them. However, if you continue to experience any problems, please don't hesitate to contact support who are standing by to help! Device Token not for Topic¶ The DeviceTokenNotForTopic error from APNS usually means that the bundle id in the APNS certificate uploaded to Kumulos is different from the bundle id in the push token received from the install that the push was to be sent to. Please check that your APNS certificate is a Sandbox & Production certificate and that the bundle id matches that of your XCode project as shown in this video. Firebase Cloud Messaging API has not been used in project.¶ This error can sometimes be received from FCM depending how old your Firebase project is and where in the Google developer console you enabled FCM. Click on the link shown alongside this error in the Kumulos Error Log to go to your Google developer console and enable the Firebase Cloud Messaging API. Updating your Configuration¶ The 'Push Notifications' widget is where you can reconfigure the native push gateways. This will show an amber, warning alert if the push certificate for APNS is due to expire in under two weeks. This will show a red critical alert if the certificate has expired! In other words, if you cannot now send notifications to your iOS app! These alerts will be reflected in the App Dashboard. Click on the cog to see the expiry date of the certificate and upload a new one. Add users to the Testers Channel¶ The Testers channel is a special channel that you can use to send test messages to internal users from your team or your client, before sending the message to your wider audience. You can add users to the testers channel from the Install Explorer. Select the app and then click on the 'Installs' tab. 
This will show the most recent installs of your app (platform, device model, user identifier and install id). If you know your user identifier or install id, type this into the search box and click 'Lookup' to find your install. If you do not know your user identifier or install id, then simply uninstall and reinstall the app on your phone - this will create a new install. Once you have done this, click the 'Refresh' button and your install should be listed at the top. Expand the details to verify it is your device (model, location, timezone etc). Once you have found your install, click on the 'Push' tab and toggle the switch next to the 'Testers (System Channel)' to add yourself to the channel. You can also send a test push to verify this is indeed your device. Now, whenever you are planning a new campaign, you can use the 'Test Send' button to send the push notification or in-app message to everyone in the Testers channel, to see how the message will appear on your own device, before sending to your audience.
Configure Web Push Prompts
To configure when and where to prompt website visitors to subscribe to web push notifications, expand 'Messaging' in the left menu, select 'Configuration' and then click the 'Add' button next to Push Prompts. You can add multiple prompts with different labels and appearances on different pages.
Show Prompt
First you need to select the event that should cause the prompt to show. This can be any analytics event tracked by your site (for example a product purchase event) or when a page is viewed. You can also match properties of the analytics event (for example only a product purchased event where value exceeds a given amount) or the paths for the pages where the prompt should be shown. To configure a prompt to show on some or all pages, select 'Page Viewed (system event)', with the 'Path' property and the 'In' operator. Under 'Includes' add the paths for the pages where the prompt should be shown. This can be the complete path or a wildcard. For example:
- To show the prompt on every page, enter *, click 'Create Option *' then 'Add'
- To show the prompt on every page where the path starts with blog or news:
- Enter /blog/* and click 'Create Option /blog/*'
- Enter /news/* and click 'Create Option /news/*'
- Click 'Add'
You can have multiple path options in each 'Includes', in which case the prompt will be shown if the page matches one path or another path option, for example where the path starts with blog or news. You can also 'Add' multiple 'Include' filters, in which case the prompt will only be shown if the event includes every filter (i.e. the event must match filter one and filter two etc), for example when a product is purchased from category 'merchandise' with value greater than $100 USD. Please note if you include two paths like this then the prompt will only be shown if the page matches path one and path two. If this is not desired, click the 'x' icon to remove the filter. If you do not want the prompt to show immediately on the page, toggle 'Delay showing' and enter how many seconds to wait before showing the prompt.
Labels
Under Labels, you can customize the text that will be shown when the user hovers over the prompt.
Appearance
Expand 'Appearance' to customize the layout and color scheme for your prompt. You can position the prompt in the bottom left or bottom right corner and change the background and foreground colors. As you do this, you can see a preview of how the prompt will look.
Permission Request Overlay¶ To boost subscription rates, you can show some interstitial content whilst the browser is requesting push permission. Toggle 'Push Request Overlay' and enter a Heading and Body for the overlay. Choose an Image¶ Click CHOOSE IMAGE to select an image for the overlay. To use an icon you have previously uploaded, simply select the icon and click the tick primary action button. You can either scroll through your library or filter by the tags you have added previously. Colors¶ You can customize the colors of the mask that will be overlaid on the page whilst the browser is requesting push permission, the background color of the overlay and the foreground color for text by clicking the appropriate color picker icon. This will open the color picker. Click 'Confirm' once you have selected the desired color. Links¶ You can add links to your privacy policy, for example, by clicking 'ADD LINK'. Enter a label and the URL for the link. If you want to add another link, click 'ADD LINK' again. If you want to remove a link, click the trashcan icon next to it. Click 'Save' to add the prompt. The SDK will now show this prompt within one hour. To force the SDK to update its prompt configuration sooner (during testing for example), simply visit the page, remove the promptUpdated key from local storage (in Chrome Developer Tools, go to the 'Application' tab, under 'Storage' expand 'Indexed DB' and 'kumulos') and then reload the page. Edit or Delete a Prompt¶ To edit or delete a prompt, expand 'Messaging' in the left menu, select 'Configuration', expand the context menu next to the prompt and select edit or delete as appropriate. Again, the SDK will update its prompt configuration within one hour. Customize Web Push Icon¶ To customize the icon displayed with web push notifications, expand 'Messaging' in the left menu, select 'Configuration' and click the settings cog next to the web icon. You can either enter the URL to the icon or use your media library. Click CHOOSE/UPLOAD ICON to select the icon for your web push notifications. To use an icon you have previously uploaded, simply select the icon and click the tick primary action button. You can either scroll through your library or filter by the tags you have added previously. Click CONFIGURE when done. You can override this default icon when sending a notification. That's it - you're all set to start sending notifications to your subscribers! Read on to learn about your messaging dashboard, how to use segments, channels, geofences and beacons to target your audience, how to send push notifications and in-app messages, and how to create automation rules to automatically send notifications on a trigger such as a device entering or exiting a geofence.
https://docs.kumulos.com/messaging/getting-started/
2021-01-15T20:55:25
CC-MAIN-2021-04
1610703496947.2
[array(['/assets/console/clients/console.png', 'Add a Client'], dtype=object) array(['/assets/console/clients/add-client.png', 'Add a Client'], dtype=object) array(['/assets/console/apps/add-app.png', 'Create an App'], dtype=object) array(['/assets/messaging/getting-started/start.png', 'Start using messaging'], dtype=object) array(['/assets/messaging/getting-started/enable.png', 'Enable messaging'], dtype=object) array(['/assets/messaging/getting-started/confirm.png', 'Confirm start trial'], dtype=object) array(['/assets/messaging/getting-started/configure-dashboard.png', 'Configure Messaging Dashboard'], dtype=object) array(['/assets/messaging/getting-started/messaging-configuration-start.png', 'Messaging Configuration Screen'], dtype=object) array(['/assets/messaging/getting-started/push-configure-dialog.png', 'Configure Push Dialog'], dtype=object) array(['/assets/messaging/getting-started/download-sdk.png', 'Download SDK'], dtype=object) array(['/assets/integration/web-push-sdk-cdn.png', 'Web Push SDK Code Snippet for CDN'], dtype=object) array(['/assets/console/apps/install-push.png', 'Recent installs'], dtype=object) array(['/assets/messaging/getting-started/messaging-configuration.png', 'Messaging Configuration'], dtype=object) array(['/assets/messaging/getting-started/push-error-log.png', 'Error Log'], dtype=object) array(['/assets/messaging/getting-started/push-certificate-expiring.png', 'APNS Certificate Expired'], dtype=object) array(['/assets/messaging/getting-started/push-certificate-expired.png', 'APNS Certificate Expired'], dtype=object) array(['/assets/messaging/getting-started//push-certificate-expiry-date.png', 'APNS Certificate Expiry Date'], dtype=object) array(['/assets/messaging/audience/channels/channels-list.png', 'Testers Channel'], dtype=object) array(['/assets/messaging/getting-started/lookup-install.png', 'Lookup Install'], dtype=object) array(['/assets/messaging/getting-started/new-install.png', 'New Install'], dtype=object) array(['/assets/messaging/getting-started/add-install-to-testers-channel.png', 'Add Install to Testers Channel'], dtype=object) array(['/assets/messaging/getting-started/test-send.png', 'Test Send'], dtype=object) array(['/assets/messaging/getting-started/web-push-prompt-add.png', 'Add Prompt'], dtype=object) array(['/assets/messaging/getting-started/web-push-prompt-show.png', 'Show Prompt'], dtype=object) array(['/assets/messaging/getting-started/web-push-prompt-property-or.png', 'Using or operator'], dtype=object) array(['/assets/messaging/getting-started/web-push-prompt-property-and.png', 'Using and operator'], dtype=object) array(['/assets/messaging/getting-started/web-push-prompt-label.png', 'Customize tool-tip'], dtype=object) array(['/assets/messaging/getting-started/web-push-prompt-appearance.png', 'Customize tool-tip'], dtype=object) array(['/assets/messaging/getting-started/web-push-permission-request-overlay-config.png', 'Permission Request Overlay Config'], dtype=object) array(['/assets/messaging/getting-started/web-push-permission-request-overlay.png', 'Permission Request Overlay'], dtype=object) array(['/assets/messaging/getting-started/web-push-prompt-edit-delete.png', 'Add Prompt'], dtype=object) array(['/assets/messaging/getting-started/web-push-icon.png', 'Customize Icon'], dtype=object) ]
docs.kumulos.com
Save Data to a JSON File¶ Primitives¶ Saving primitives to a JSON file is pretty straightforward using GSON: gson.toJson(123.45, new FileWriter(filePath)); Custom Objects¶ public class User { private int id; private String name; private transient String nationality; } Fields marked transient (such as nationality above) are excluded when the object is serialized. Using GsonBuilder¶ GsonBuilder lets you configure the Gson instance, for example to enable pretty printing, before writing the file.
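Most of the surrounding code on this page did not survive extraction, so here is a self-contained sketch of the same idea. It is not the original article's listing; the file name and the User constructor are placeholder assumptions, but the Gson and GsonBuilder calls shown are the standard API.

import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import java.io.FileWriter;
import java.io.IOException;
import java.io.Writer;

public class SaveJsonExample {
    static class User {
        private int id;
        private String name;
        private transient String nationality; // skipped by Gson during serialization

        User(int id, String name, String nationality) {
            this.id = id;
            this.name = name;
            this.nationality = nationality;
        }
    }

    public static void main(String[] args) throws IOException {
        // Pretty printing is optional; new Gson() works just as well.
        Gson gson = new GsonBuilder().setPrettyPrinting().create();

        User user = new User(1, "Norman Lewis", "English");

        // try-with-resources ensures the writer is flushed and closed,
        // otherwise the file may end up empty.
        try (Writer writer = new FileWriter("user.json")) {
            gson.toJson(user, writer);
        }
        // user.json now contains the id and name fields; nationality is omitted.
    }
}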
https://jse.readthedocs.io/en/latest/utils/gson/saveJsonFile.html
2021-01-15T21:01:38
CC-MAIN-2021-04
1610703496947.2
[]
jse.readthedocs.io
Selecting objects - In the Display menu, click Selection mode. Alternatively, you can press Ctrl + M. - Draw a marquee on the vertex of the object that you want to select. The outline of the selected object changes to a dotted line.
http://docs.teamtad.com/doku.php/selecting_objects
2021-01-15T20:33:00
CC-MAIN-2021-04
1610703496947.2
[]
docs.teamtad.com
JIRA SERVER This article describes how to sync issue links between JIRA Server instances. Source Instance Send over the issue links Outgoing sync replica.issueLinks = issue.issueLinks Destination Instance Add the script below in the Incoming sync of the Destination instance to receive issue links from the Source Instance. Incoming sync issue.issueLinks = replica.issueLinks Attention! You should separately synchronize all issues that are linked to a specific issue to keep the association between them on the Destination side. Example Sync between projects DEV and TEST; issue DEV-1 is linked to issue DEV-2; sync issue DEV-1 to TEST-1; TEST-1 is still not associated with any other issue; sync issue DEV-2 to TEST-2; as a result, you will get a link between the remote issues on the TEST project (TEST-1 and TEST-2) - The default issue links synchronization behavior does not include merging issue links from the source and destination issues. Once the issue has been synchronized, it will have the same links as the source issue. - If you sync an issue via multiple connections, issue links will be overwritten. In case you don't want the source issue links to be overridden if the remote side doesn't have any issue links, we recommend using the external script IssueLinks.groovy. See it in action See Also - Syncing issue links on Jira Server using an external script IssueLinks.groovy - Example script on how to sync issue links on Jira Cloud - How to display the remote issue link in a custom field - Exalate for Jira on-premise: Syncing issue links to a custom field with Jira Issue Link API - Links
https://docs.idalko.com/exalate/display/ED/How+to+synchronize+issue+links+on+Jira+Server
2021-01-15T20:14:07
CC-MAIN-2021-04
1610703496947.2
[]
docs.idalko.com
This guide walks through setting up the Kubernetes Ingress Controller using Kong Enterprise. This architecture is described in detail in this doc. We assume that we start from scratch and you don't have Kong Enterprise deployed. For the sake of simplicity, we will deploy Kong Enterprise and its database in Kubernetes itself. You can safely run them outside Kubernetes as well. Prerequisites Before we can deploy the Kubernetes Ingress Controller with Kong Enterprise, we need to satisfy the following prerequisites: - Kong Enterprise License secret - Kong Enterprise Docker registry access - Kong Enterprise bootstrap password In order to create these secrets, let's provision the kong namespace first: $ kubectl create namespace kong namespace/kong created Kong Enterprise License secret Kong Enterprise requires a valid license to run. As part of signing up for Kong Enterprise, you should have received a license file. Save the license file temporarily to disk; it will be used to create the license secret in the kong namespace. Kong Enterprise bootstrap password Next, we need to create a secret containing the password that we will use to log in to Kong Manager. Please replace cloudnative with a random password of your choice and note it down. $ kubectl create secret generic kong-enterprise-superuser-password -n kong --from-literal=password=cloudnative secret/kong-enterprise-superuser-password created Once these are created, we are ready to deploy the Kong Enterprise Ingress Controller. Install $ kubectl apply -f It takes a little while to bootstrap the database. Once bootstrapped, you should see the Kubernetes Ingress Controller running with Kong Enterprise as its core: $ kubectl get pods -n kong NAME READY STATUS RESTARTS AGE ingress-kong-548b9cff98-n44zj 2/2 Running 0 21s kong-migrations-pzrzz 0/1 Completed 0 4m3s postgres-0 1/1 Running 0 4m3s You can also see the kong-proxy service: $ kubectl get services -n kong NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kong-admin LoadBalancer 10.63.255.85 34.83.95.105 80:30574/TCP 4m35s kong-manager LoadBalancer 10.63.247.16 34.83.242.237 80:31045/TCP 4m34s kong-proxy LoadBalancer 10.63.242.31 35.230.122.13 80:32006/TCP,443:32007/TCP 4m34s kong-validation-webhook ClusterIP 10.63.240.154 <none> 443/TCP 4m34s postgres ClusterIP 10.63.241.104 <none> 5432/TCP 4m34s Note: Depending on the Kubernetes distribution you are using, you might or might not see an external IP assigned to the three LoadBalancer type services. Please see your provider's guide on obtaining an IP address for a Kubernetes Service of type LoadBalancer. If you are running Minikube, you will not get an external IP address. Setup Kong Manager Next, browse to the IP address or host of the kong-manager service (in our case, the external IP shown in the services table above). Kong Manager should load in your browser. Try logging in to the Manager with the username kong_admin and the password you supplied in the prerequisites; it should fail. The reason is that we've not yet told Kong Manager where it can find the Admin API. Let's set that up.
We will take the external IP address of the kong-admin service and set the environment variable KONG_ADMIN_API_URI: KONG_ADMIN_IP=$(kubectl get svc -n kong kong-admin --output=jsonpath='{.status.loadBalancer.ingress[0].ip}') kubectl patch deployment -n kong ingress-kong -p "{\"spec\": { \"template\" : { \"spec\" : {\"containers\":[{\"name\":\"proxy\",\"env\": [{ \"name\" : \"KONG_ADMIN_API_URI\", \"value\": \"${KONG_ADMIN_IP}\" }]}]}}}}" It will take a few minutes to roll out the updated deployment and, once the new ingress-kong pod is up and running, you should be able to log into the Kong Manager UI. As you follow along with other guides on how to use your newly deployed Kubernetes Ingress Controller, you will be able to browse Kong Manager and see changes reflected in the UI as Kong's configuration changes. Using Kong for Kubernetes with Kong Enterprise Let's set up an environment variable to hold the IP address of the kong-proxy service: $ export PROXY_IP=$(kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}" service -n kong kong-proxy) Once you've installed Kong for Kubernetes Enterprise, please follow our getting started tutorial to learn more. Customizing by use-case The deployment in this guide is a starting point for using the Ingress Controller. Based on your existing architecture, it will require custom work to make sure it meets all of your requirements. In this guide, three load balancers are deployed, one for each of the Kong Proxy, Kong Admin and Kong Manager services. It is possible, and recommended, to instead have a single load balancer and then use DNS names and Ingress resources to expose the Admin and Manager services outside the cluster.
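As a side note on the KONG_ADMIN_API_URI step above: if you would rather not hand-craft the JSON patch, kubectl's set env subcommand achieves the same environment change. This is a sketch rather than part of the official guide, and it assumes the container is named proxy, as in the patch shown earlier.

# Resolve the external IP of the kong-admin service (same as above)
KONG_ADMIN_IP=$(kubectl get svc -n kong kong-admin \
  --output=jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Set KONG_ADMIN_API_URI on the 'proxy' container of the ingress-kong deployment
kubectl set env deployment/ingress-kong -n kong -c proxy \
  KONG_ADMIN_API_URI="${KONG_ADMIN_IP}"

# Watch the rollout complete
kubectl rollout status deployment/ingress-kong -n kong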
https://docs.konghq.com/kubernetes-ingress-controller/1.0.x/deployment/kong-enterprise/
2021-01-15T21:48:10
CC-MAIN-2021-04
1610703496947.2
[]
docs.konghq.com
Step 2: Create a Simple Application Server Stack - Chef 11 A basic application server stack consists of a single application server instance with a public IP address to receive user requests. Application code and any related files are stored in a separate repository and deployed from there to the server. The following diagram illustrates such a stack. The stack has the following components: A layer, which represents a group of instances and specifies how they are to be configured. The layer in this example represents a group of PHP App Server instances. An instance, which represents an Amazon EC2 instance. In this case, the instance is configured to run a PHP app server. Layers can have any number of instances. AWS OpsWorks Stacks also supports several other app servers. For more information, see Application Server Layers. An app, which contains the information required to install an application on the application server. The code is stored in a remote repository, such as a Git repository or an Amazon S3 bucket. The following sections describe how to use the AWS OpsWorks Stacks console to create the stack and deploy the application. You can also use an AWS CloudFormation template to provision a stack. For an example template that provisions the stack described in this topic, see AWS OpsWorks Snippets.
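For readers who prefer the CloudFormation route mentioned above, the outline below sketches how the three components (stack, layer, app) map onto CloudFormation resources. It is not the template from AWS OpsWorks Snippets; the ARNs, repository URL, and property selection are illustrative assumptions, so compare against the official snippet before deploying anything.

Resources:
  PhpStack:
    Type: AWS::OpsWorks::Stack
    Properties:
      Name: SimplePHPApp
      ServiceRoleArn: arn:aws:iam::123456789012:role/aws-opsworks-service-role          # placeholder
      DefaultInstanceProfileArn: arn:aws:iam::123456789012:instance-profile/aws-opsworks-ec2-role  # placeholder

  PhpLayer:
    Type: AWS::OpsWorks::Layer
    Properties:
      StackId: !Ref PhpStack
      Type: php-app
      Name: PHP App Server
      Shortname: php-app
      EnableAutoHealing: true
      AutoAssignElasticIps: false
      AutoAssignPublicIps: true

  PhpApp:
    Type: AWS::OpsWorks::App
    Properties:
      StackId: !Ref PhpStack
      Name: SimplePHPApp
      Type: php
      AppSource:
        Type: git
        Url: https://github.com/example/simple-php-app.git   # placeholder repository; substitute your own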
https://docs.aws.amazon.com/opsworks/latest/userguide/gettingstarted-simple.html
2018-08-14T14:47:11
CC-MAIN-2018-34
1534221209040.29
[array(['images/php_walkthrough_arch_2.png', None], dtype=object)]
docs.aws.amazon.com
Queries Data Manipulation Language (DML) is the vocabulary used to retrieve and work with data in SQL Server 2017 and SQL Database. Most of these statements also work in SQL Data Warehouse and Parallel Data Warehouse (PDW); review each individual statement for details. Use these statements to add, modify, query, or remove data from a SQL Server database. In This Section The following table lists the DML statements that SQL Server uses. The following table lists the clauses that are used in multiple DML statements or clauses.
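The statement and clause tables themselves did not survive extraction, so as a quick illustration only, here are the four DML statements you will reach for most often, run against a hypothetical dbo.Employees table (the table and column names are made up for the example):

-- Query rows
SELECT EmployeeID, FirstName, LastName
FROM dbo.Employees
WHERE Department = N'Sales';

-- Add a row
INSERT INTO dbo.Employees (FirstName, LastName, Department)
VALUES (N'Avery', N'Kim', N'Sales');

-- Modify existing rows
UPDATE dbo.Employees
SET Department = N'Marketing'
WHERE EmployeeID = 42;

-- Remove rows
DELETE FROM dbo.Employees
WHERE EmployeeID = 42;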
https://docs.microsoft.com/en-us/sql/t-sql/queries/queries?view=sql-server-2017
2018-08-14T13:21:09
CC-MAIN-2018-34
1534221209040.29
[array(['../../includes/media/yes.png?view=sql-server-2017', 'yes'], dtype=object) array(['../../includes/media/yes.png?view=sql-server-2017', 'yes'], dtype=object) array(['../../includes/media/yes.png?view=sql-server-2017', 'yes'], dtype=object) array(['../../includes/media/yes.png?view=sql-server-2017', 'yes'], dtype=object) ]
docs.microsoft.com
Timeline.SpeedRatio Property Definition C++/CX: public : double SpeedRatio { get; set; } C++/WinRT: double SpeedRatio(); void SpeedRatio(double speedratio); C#: public double SpeedRatio { get; set; } Visual Basic: Public ReadWrite Property SpeedRatio As double XAML: <timeline SpeedRatio="double"/> Property Value double A finite value greater than 0 that specifies the rate at which time progresses for this timeline, relative to the speed of the timeline's parent. If this timeline is a root timeline, specifies the default timeline speed. The value is expressed as a factor where 1 represents normal speed, 2 is double speed, 0.5 is half speed, and so on. The default value is 1.
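As a usage illustration (not part of the reference page itself): SpeedRatio can be set on any Timeline-derived animation, and the element names below are invented for the example. With SpeedRatio="2", this two-second animation effectively completes in one second.

<Storyboard x:
  <!-- DoubleAnimation derives from Timeline, so it inherits SpeedRatio -->
  <DoubleAnimation Storyboard.TargetName="FadeTarget"
                   Storyboard.TargetProperty="Opacity"
                   From="0" To="1"
                   Duration="0:0:2"
                   SpeedRatio="2" />
</Storyboard>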
https://docs.microsoft.com/en-us/uwp/api/windows.ui.xaml.media.animation.timeline.speedratio
2018-08-14T13:17:58
CC-MAIN-2018-34
1534221209040.29
[]
docs.microsoft.com
Workflow engine operation order The workflow engine runs in a predefined order relative to business rules and database operations. It caches commonly used published workflows to improve performance. The "Run after bus. rules run" workflow property defines whether a workflow is Default or Deferred. The diagram below shows the workflow engine order of operations and when Default and Deferred flows are executed. For a more general overview of engine operation order, see Execution order of scripts and engines. Figure 1. Workflow engine order diagram Workflow caching The workflow engine caches commonly used published workflows to improve performance. Caching significantly reduces the number of database queries per workflow. By default, the engine caches up to 300 unique workflow versions. Caching very large workflows may reduce this number as the cache size cannot exceed the Java Virtual Machine (JVM) heap size. To change the maximum number of cached workflow versions, navigate to Workflow > Administration > Properties and modify the value of the "The max number of models that will be concurrently held in the LRU cache" (glide.workflow.model.cache.max) property. You must restart the instance to apply this change.
https://docs.servicenow.com/bundle/kingston-servicenow-platform/page/administer/using-workflows/concept/c_WorkflowEngineOperationOrder.html
2018-08-14T13:52:14
CC-MAIN-2018-34
1534221209040.29
[]
docs.servicenow.com
- Embed Splunk dashboard elements in third party software Build a report or a dashboard and then IFrame it into a different Web application. Requires some knowledge of HTML and Splunk's view system, as well as a customized login or SSO. - Customize event display Change the way events display in Splunk, for example build a Twitter-like interface to display events as they stream into Splunk in real time. Requires some knowledge of HTML, Splunk's view system, and possibly JavaScript. After making customizations you may also need to expire related cached items in client browsers, allowing new static content to load.
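A rough sketch of the IFrame approach mentioned above, not taken from the Splunk documentation itself: the host, port, app and view names are placeholders, and the default Splunk Web URL layout (/en-US/app/<app>/<view>) is assumed. As the bullet notes, the embedding page still needs a Splunk session, so pair this with SSO or a customized login.

<!-- Embed an existing Splunk dashboard view inside another web application -->
<iframe src="http://splunk-host:8000/en-US/app/search/my_dashboard"
        width="100%" height="600" frameborder="0"
        title="Splunk dashboard">
</iframe>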
http://docs.splunk.com/Documentation/Splunk/4.3.1/Developer/CustomizationOptions
2018-08-14T13:58:46
CC-MAIN-2018-34
1534221209040.29
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Encoding Overview The original Unicode plane covers the characters needed for most modern languages, along with punctuation marks and a few emoji; additional characters, such as musical symbols and most emojis, sit beyond it. Many of these additional characters are mapped beyond the original plane using an extension mechanism called "surrogate pairs." Over 128,000 code points have already been assigned characters; the rest have been set aside for future use. Unicode also provides Private Use Areas of over 137,000 locations available to applications for user-defined characters, which typically are rare ideographs representing names of people or places, historical scripts, or constructed languages. For all their advantages, Unicode Standards are far from a panacea for internationalization. The code-point positions of Unicode elements do not imply a sort order and the organization is often unrelated to language usage. To be useful, the mapping of Unicode points to fonts that can display them must either be handled by the application or the platform. The Common Locale Data Repository (CLDR) provides regional format and sorting information but requires that either the platform or a service manage that information. Several of these topics are covered in more depth in the "Localizability Overview" and in the description of Unicode's BiDi Algorithm. Figure 1: Precomposed and composite glyphs
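To make the surrogate-pair mechanism concrete, here is a small illustrative Java snippet (not from the original article) showing that a character outside the original 16-bit plane occupies two UTF-16 code units but a single Unicode code point:

public class SurrogatePairDemo {
    public static void main(String[] args) {
        // U+1F600 (grinning face) lies beyond the original plane, so in UTF-16
        // it is encoded as a surrogate pair: U+D83D followed by U+DE00.
        String emoji = "\uD83D\uDE00";

        System.out.println(emoji.length());                          // 2 -- UTF-16 code units
        System.out.println(emoji.codePointCount(0, emoji.length())); // 1 -- Unicode code point
        System.out.printf("U+%X%n", emoji.codePointAt(0));           // U+1F600

        // The two code units are a high surrogate and a low surrogate.
        System.out.println(Character.isHighSurrogate(emoji.charAt(0))); // true
        System.out.println(Character.isLowSurrogate(emoji.charAt(1)));  // true
    }
}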
https://docs.microsoft.com/en-us/globalization/encoding/encoding-overview
2018-08-14T13:41:03
CC-MAIN-2018-34
1534221209040.29
[array(['/en-us/media/hubs/globalization/IC855516.png', 'Example Glyphs Example Glyphs'], dtype=object)]
docs.microsoft.com
Details and advanced features¶ This is an account of slightly less common Hypothesis features that you don't need to get started but will nevertheless make your life easier. Making assumptions¶ If you're running 200 examples per test (the default) and it turns out 150 of those examples don't match your needs, that's a lot of wasted time. The way Hypothesis handles this is to let you specify things which you assume to be true. This lets you abort a test in a way that marks the example as bad rather than failing the test. Hypothesis will use this information to try to avoid similar examples in future. For example, suppose we had the following test:

from hypothesis import given
from hypothesis.strategies import floats

@given(floats())
def test_negation_is_self_inverse(x):
    assert x == -(-x)

Running this gives us: Falsifying example: test_negation_is_self_inverse(x=float('nan')) NaN is not equal to itself, so this fails, but NaN isn't a case we care about here, so we exclude it with assume:

from hypothesis import given, assume
from hypothesis.strategies import floats
from math import isnan

@given(floats())
def test_negation_is_self_inverse_for_non_nan(x):
    assume(not isnan(x))
    assert x == -(-x)

And this passes without a problem. assume throws an exception which terminates the test when provided with a false argument. It's essentially an assert, except that the exception it throws is one that Hypothesis identifies as meaning that this is a bad example, not a failing test. In order to avoid the easy trap where you assume a lot more than you intended, Hypothesis will fail a test when it can't find enough examples passing the assumption. If we'd written:

from hypothesis import given, assume
from hypothesis.strategies import floats

@given(floats())
def test_negation_is_self_inverse_for_non_nan(x):
    assume(False)
    assert x == -(-x)

Then on running we'd have got the exception: Unsatisfiable: Unable to satisfy assumptions of hypothesis test_negation_is_self_inverse_for_non_nan. Only 0 examples found after 0.0791318 seconds. Where possible it is better to build the constraint into the strategy itself rather than filtering with assume: integers_in_range(1, 1000) is a lot better than assume(1 <= x <= 1000), but assume will take you a long way if you can't. Settings¶ Hypothesis tries to have good defaults for its behaviour, but sometimes that's not enough and you need to tweak it. The mechanism for doing this is the Settings object. You can pass this to a @given invocation as follows:

from hypothesis import given, Settings

@given(integers(), settings=Settings(max_examples=500))
def test_this_thoroughly(x):
    pass

This uses a Settings object which causes the test to receive a much larger set of examples than normal. There is a Settings.default object. This is a Settings object you can use directly, and additionally any changes to the default object will be picked up as the defaults for newly created Settings objects.

>>> from hypothesis import Settings
>>> s = Settings()
>>> s.max_examples
200
>>> Settings.default.max_examples = 100
>>> t = Settings()
>>> t.max_examples
100
>>> s.max_examples
200

You can also override the default locally by using a settings object as a context manager:

>>> with Settings(max_examples=150):
...     print(Settings().max_examples)
...
150

As well as max_examples there are a variety of other settings you can use. help(Settings) in an interactive environment will give you a full list of them. Seeing intermediate result¶ To see what's going on while Hypothesis runs your tests, you can turn up the verbosity setting. This works with both find and @given. (The following examples are somewhat manually truncated because the results of verbose output are, well, verbose, but they should convey the idea).
>>> from hypothesis import find, Settings, Verbosity
>>> from hypothesis.strategies import lists, booleans
>>> find(lists(booleans()), any, settings=Settings(verbosity=Verbosity.verbose))
Found satisfying example [True, True, ...
Shrunk example to [False, False, False, True, ...
Shrunk example to [False, False, True, False, False, ...
Shrunk example to [False, True, False, True, True, ...
Shrunk example to [True, True, True]
Shrunk example to [True, True]
Shrunk example to [True]
[True]
>>> from hypothesis import given
>>> from hypothesis.strategies import integers
>>> Settings.default.verbosity = Verbosity.verbose
>>> @given(integers())
... def test_foo(x):
...     assert x > 0
...
>>> test_foo()
Trying example: test_foo(x=-565872324465712963891750807252490657219)
Traceback (most recent call last):
    ...
    File "<stdin>", line 3, in test_foo
AssertionError
Trying example: test_foo(x=565872324465712963891750807252490657219)
Trying example: test_foo(x=0)
Traceback (most recent call last):
    ...
    File "<stdin>", line 3, in test_foo
AssertionError
Falsifying example: test_foo(x=0)
Traceback (most recent call last):
    ...
AssertionError

Defining strategies¶ Strategies are created using the functions exposed in the hypothesis.strategies module:

>>> from hypothesis.strategies import integers
>>> integers()
RandomGeometricIntStrategy() | WideRangeIntStrategy()
>>> integers(min_value=0)
IntegersFromStrategy(0)
>>> integers(min_value=0, max_value=10)
BoundedIntStrategy(0, 10)

If you want to see exactly what a strategy produces you can ask for an example:

>>> integers(min_value=0, max_value=10).example()
7

Many strategies are built out of other strategies. For example, if you want to define a tuple you need to say what goes in each element:

>>> from hypothesis.strategies import tuples
>>> tuples(integers(), integers()).example()
(-1953, 85733644253897814191482551773726674360154905303788466954)

Further details are available in a separate document. The gory details of given parameters¶ The @given decorator may be used to specify which arguments of a function should be parametrized over. You can use either positional or keyword arguments. For example, the following are valid uses:

@given(x=integers(), y=integers())
def d(x, **kwargs):
    pass

class SomeTest(TestCase):
    @given(integers())
    def test_a_thing(self, x):
        pass

The following are not:

@given(integers(), integers(), integers())
def e(x, y):
    pass

@given(x=integers())
def f(x, y):
    pass

@given()
def f(x, y):
    pass

The rules for determining what are valid uses of given are as follows: - Arguments passed as keyword arguments must cover the right hand side of the argument list. That is, if you provide an argument as a keyword you must also provide everything to the right of it. - Positional arguments fill up from the right, starting from the first argument not covered by a keyword argument. (Note: Mixing keyword and positional arguments is supported but deprecated as its semantics are highly confusing and difficult to support. You'll get a warning if you do). - If the function has variable keywords, additional arguments will be added corresponding to any keyword arguments passed. These will be to the right of the normal argument list in an arbitrary order. - varargs are forbidden on functions used with @given. If you don't have kwargs then the function returned by @given will have the same argspec (i.e. same arguments, keyword arguments, etc) as the original but with different defaults. The reason for the "filling up from the right" behaviour is so that using @given with instance methods works: self will be passed to the function as normal and not be parametrized over. Custom function execution¶ Hypothesis provides you with a hook that lets you control how it runs examples.
This lets you do things like set up and tear down around each example, run examples in a subprocess, transform coroutine tests into normal tests, etc., for example by defining a setup_example hook or providing your own executor. Methods of a BasicStrategy however will typically be called whenever the strategy needs to produce or manipulate values. This may happen inside your executor or outside. This is why they have a "Warning you have no control over the lifecycle of these values" attached. Fork before each test¶ An obstacle you can run into if you want to use Hypothesis to test native code is that your C code segfaults, or fails a C level assertion, and it causes the whole process to exit hard and Hypothesis just cries a little and doesn't know what is going on, so can't minimize an example for you. The solution to this is to run your tests in a subprocess. The process can die as messily as it likes and Hypothesis will be sitting happily in the controlling process unaffected by the crash. Hypothesis provides a custom executor for this:

from hypothesis.testrunners.forking import ForkingTestCase

class TestForking(ForkingTestCase):

    @given(integers())
    def test_handles_abnormal_exit(self, i):
        os._exit(1)

    @given(integers())
    def test_normal_exceptions_work_too(self, i):
        assert False

Exceptions that occur in the child process will be seamlessly passed back to the parent. Abnormal exits that do not throw an exception in the child process will be turned into an AbnormalExit exception. There are currently some limitations to this approach: - Exceptions which are not pickleable will be turned into abnormal exits. - Tracebacks from exceptions are not properly recreated in the parent process. - Code called in the child process will not be recorded by coverage. - This is only supported on platforms with os.fork. e.g. it will not work on Windows. Some of these limitations should be resolvable in time. Using Hypothesis to find values¶ You can use Hypothesis's data exploration features to find values satisfying some predicate. If no such value can be found, you get an error instead:

>>> from hypothesis import find
>>> from hypothesis.strategies import integers
>>> find(integers(), lambda x: False)
Traceback (most recent call last):
    ...
hypothesis.errors.NoSuchExample: No examples of conditition lambda x: <unknown>
>>> from hypothesis.strategies import booleans
>>> find(booleans(), lambda x: False)
Traceback (most recent call last):
    ...
hypothesis.errors.DefinitelyNoSuchExample: No examples of conditition lambda x: <unknown> (all 2 considered)

(The "lambda x: unknown" is because Hypothesis can't retrieve the source code of lambdas from the interactive python console. It gives a better error message most of the time which contains the actual condition) The reason for the two different types of errors is that there are only a small number of booleans, so it is feasible for Hypothesis to enumerate all of them and simply check that your condition is never true. Providing explicit examples¶ You can explicitly ask Hypothesis to try a particular example as follows:

from hypothesis import given, example
from hypothesis.strategies import text

@given(text())
@example("Hello world")
@example(x="Some very long string")
def test_some_code(x):
    assert True

Hypothesis will run all examples you've asked for first. If any of them fail it will not go on to look for more examples. This can be useful both because it's easier to share and version examples in source code than it is to share the example database, and it can also allow you to feed specific examples that Hypothesis is unlikely to figure out on its own. Explicit examples also work with TestCase-based tests, so things like the following will also work:

from unittest import TestCase
from hypothesis import given, example
from hypothesis.strategies import text

class TestThings(TestCase):

    @given(text())
    @example("Hello world")
    @example(x="Some very long string")
    def test_some_code(self, x):
        assert True
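The custom-execution discussion above lost its inline example during extraction, so here is a rough sketch of the per-example set up and tear down idea. The hook names follow the setup_example reference above, but their exact signatures vary between Hypothesis versions, so treat this as illustrative rather than as the API of the exact release this page documents.

from unittest import TestCase
from hypothesis import given
from hypothesis.strategies import integers

class TestWithPerExampleSetup(TestCase):

    def setup_example(self, *args):
        # Called before each generated example.
        self.scratch = []

    def teardown_example(self, *args):
        # Called after each generated example, even a failing one.
        self.scratch = None

    @given(integers())
    def test_collects_into_fresh_list(self, x):
        self.scratch.append(x)
        assert self.scratch == [x]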
https://hypothesis.readthedocs.io/en/1.5.0/details.html
2018-08-14T13:28:00
CC-MAIN-2018-34
1534221209040.29
[]
hypothesis.readthedocs.io
Build requirements In general, for building i2pd you need several things: - a compiler with C++11 support (for example: gcc >= 4.7, clang) - boost >= 1.49 - openssl library - zlib library (openssl already depends on it) Optional tools: - cmake >= 2.8 (or 3.3+ if you want to use precompiled headers on Windows) - miniupnp library (for UPnP support) - websocketpp (for the websocket UI)
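As a rough illustration only, not part of the official requirements list: on a Debian or Ubuntu system the packages below cover the items above, and a typical CMake build then looks like this. The package names and build-directory layout are assumptions on my part; the dedicated build pages in these docs are authoritative.

# Install the build requirements (Debian/Ubuntu package names)
sudo apt-get install build-essential cmake \
    libboost-all-dev libssl-dev zlib1g-dev libminiupnpc-dev

# Configure and build with CMake (directory layout assumed)
cd i2pd/build
cmake .
make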
https://i2pd.readthedocs.io/en/latest/devs/building/requirements/
2018-08-14T14:21:49
CC-MAIN-2018-34
1534221209040.29
[]
i2pd.readthedocs.io
Tutorial: Amazon EC2 Spot Instances Overview Spot Instances allow you to bid on unused Amazon Elastic Compute Cloud (Amazon EC2) capacity and run the acquired instances for as long as your bid exceeds the current Spot Price, giving you another option for obtaining more compute capacity. Spot Instances can significantly lower your Amazon EC2 costs for batch processing, scientific research, image processing, video encoding, data and web crawling, financial analysis, and testing. Additionally, Spot Instances give you access to large amounts of additional capacity in situations where the need for that capacity is not urgent. It's important to note: if the Spot Price goes above your bid and your instance is terminated by Amazon EC2, you will not be charged for any partial hour of usage. This tutorial shows how to use the AWS SDK for Java to do the following. Submit a Spot Request Determine when the Spot Request becomes fulfilled Cancel the Spot Request Terminate associated instances Prerequisites To use this tutorial you must have the AWS SDK for Java installed, as well as having met its basic installation prerequisites. See Set up the AWS SDK for Java for more information. Step 1: Setting Up Your Credentials To begin using this code sample, you need to set up AWS credentials. See Set up AWS Credentials and Region for Development for instructions on how to do that. Note We recommend that you use the credentials of an IAM user to provide these values. For more information, see Sign Up for AWS and Create an IAM User. Now that you have configured your settings, you can get started using the code in the example. Step 2: Setting Up a Security Group A security group acts as a firewall that controls the traffic allowed in and out of a group of instances. By default, an instance is started without any security group, which means that all incoming IP traffic, on any TCP port, will be denied. So, before submitting our Spot Request, we will set up a security group that allows the necessary network traffic. For the purposes of this tutorial, we will create a new security group called "GettingStartedGroup" that allows Secure Shell (SSH) traffic from the IP address where you are running your application. To set up a new security group, you need to include or run the following code sample that sets up the security group programmatically. After we create an AmazonEC2 client object, we create a CreateSecurityGroupRequest object with the name "GettingStartedGroup" and a description for the security group. Then we call the ec2.createSecurityGroup API to create the group. To enable access to the group, we create an ipPermission object with the IP address range set to the CIDR representation of the subnet for the local computer; the "/10" suffix on the IP address indicates the subnet for the specified IP address. We also configure the ipPermission object with the TCP protocol and port 22 (SSH). The final step is to call ec2.authorizeSecurityGroupIngress with the name of our security group and the ipPermission object.

// Create the AmazonEC2 client so we can call various APIs.
AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

// Create a new security group.
try {
    CreateSecurityGroupRequest securityGroupRequest = new CreateSecurityGroupRequest("GettingStartedGroup", "Getting Started Security Group");
    ec2.createSecurityGroup(securityGroupRequest);
} catch (AmazonServiceException ase) {
    // Likely this means that the group is already created, so ignore.
    System.out.println(ase.getMessage());
}

String ipAddr = "0.0.0.0/0";

// Get the IP of the current host, so that we can limit the Security
// Group by default to the ip range associated with your subnet.
try {
    InetAddress addr = InetAddress.getLocalHost();

    // Get IP Address
    ipAddr = addr.getHostAddress()+"/10";
} catch (UnknownHostException e) {
}

// Create a range that you would like to populate.
ArrayList<String> ipRanges = new ArrayList<String>();
ipRanges.add(ipAddr);

// Open up port 22 for TCP traffic to the associated IP
// from above (e.g. ssh traffic).
ArrayList<IpPermission> ipPermissions = new ArrayList<IpPermission>();
IpPermission ipPermission = new IpPermission();
ipPermission.setIpProtocol("tcp");
ipPermission.setFromPort(new Integer(22));
ipPermission.setToPort(new Integer(22));
ipPermission.setIpRanges(ipRanges);
ipPermissions.add(ipPermission);

try {
    // Authorize the ports to be used.
    AuthorizeSecurityGroupIngressRequest ingressRequest = new AuthorizeSecurityGroupIngressRequest("GettingStartedGroup", ipPermissions);
    ec2.authorizeSecurityGroupIngress(ingressRequest);
} catch (AmazonServiceException ase) {
    // Ignore because this likely means the zone has
    // already been authorized.
    System.out.println(ase.getMessage());
}

Note that you only need to run this application once to create a new security group. You can also create the security group using the AWS Toolkit for Eclipse. See Managing Security Groups from AWS Explorer for more information. Step 3: Submitting Your Spot Request To submit a Spot request, you first need to determine the instance type, Amazon Machine Image (AMI), and maximum bid price you want to use. You must also include the security group we configured previously, so that you can log into the instance if desired. There are several instance types to choose from; go to Amazon EC2 Instance Types for a complete list. For this tutorial, we will use t1.micro, the cheapest instance type available. Next, we will determine the type of AMI we would like to use. We'll use ami-a9d09ed1, the most up-to-date Amazon Linux AMI available when we wrote this tutorial. The latest AMI may change over time, but you can always determine the latest version AMI by following these steps: Open the Amazon EC2 console. Choose the Launch Instance button. The first window displays the AMIs available. The AMI ID is listed next to each AMI title. Alternatively, you can use the DescribeImages API, but leveraging that command is outside the scope of this tutorial. Finally, you need to decide on a maximum bid price. For this tutorial, we will bid the On-Demand price ($0.03) to maximize the chances that the bid will be fulfilled. You can determine the types of available instances and the On-Demand prices for instances by going to the Amazon EC2 Pricing page. To request a Spot Instance, you simply need to build your request with the parameters you chose earlier. We start by creating a RequestSpotInstancesRequest object. The request object requires the number of instances you want to start and the bid price. Additionally, you need to set the LaunchSpecification for the request, which includes the instance type, AMI ID, and security group you want to use. Once the request is populated, you call the requestSpotInstances method on the AmazonEC2Client object. The following example shows how to request a Spot Instance.

// Create the AmazonEC2 client so we can call various APIs.
AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

// Initializes a Spot Instance Request
RequestSpotInstancesRequest requestRequest = new RequestSpotInstancesRequest();

// Request 1 x t1.micro instance with a bid price of $0.03.
requestRequest.setSpotPrice("0.03");
requestRequest.setInstanceCount(Integer.valueOf(1));

// Setup the specifications of the launch. This includes the
// instance type (e.g. t1.micro) and the latest Amazon Linux
// AMI id available. Note, you should always use the latest
// Amazon Linux AMI id or another of your choosing.
LaunchSpecification launchSpecification = new LaunchSpecification();
launchSpecification.setImageId("ami-a9d09ed1");
launchSpecification.setInstanceType(InstanceType.T1Micro);

// Add the security group to the request.
ArrayList<String> securityGroups = new ArrayList<String>();
securityGroups.add("GettingStartedGroup");
launchSpecification.setSecurityGroups(securityGroups);

// Add the launch specifications to the request.
requestRequest.setLaunchSpecification(launchSpecification);

// Call the RequestSpotInstance API.
RequestSpotInstancesResult requestResult = ec2.requestSpotInstances(requestRequest);

Running this code will launch a new Spot Instance Request. There are other options you can use to configure your Spot Requests. To learn more, please visit Tutorial: Advanced Amazon EC2 Spot Request Management or the RequestSpotInstances class in the AWS SDK for Java API Reference. Note You will be charged for any Spot Instances that are actually launched, so make sure that you cancel any requests and terminate any instances you launch to reduce any associated fees. Step 4: Determining the State of Your Spot Request Next, we want to wait until the Spot request is no longer in the "open" state before proceeding to the last step. To determine its state, we poll the describeSpotInstanceRequests method with the request ID we want to monitor, which is embedded in the response to our requestSpotInstances request. The following example code shows how to gather request IDs from the requestSpotInstances response and use them to populate an ArrayList.

// Call the RequestSpotInstance API.
RequestSpotInstancesResult requestResult = ec2.requestSpotInstances(requestRequest);
List<SpotInstanceRequest> requestResponses = requestResult.getSpotInstanceRequests();

// Setup an arraylist to collect all of the request ids we want to
// watch hit the running state.
ArrayList<String> spotInstanceRequestIds = new ArrayList<String>();

// Add all of the request ids to the list, so we can determine when they
// hit the active state.
for (SpotInstanceRequest requestResponse : requestResponses) {
    System.out.println("Created Spot Request: "+requestResponse.getSpotInstanceRequestId());
    spotInstanceRequestIds.add(requestResponse.getSpotInstanceRequestId());
}

To monitor your request ID, call the describeSpotInstanceRequests method to determine the state of the request. Then loop until the request is not in the "open" state. Note that we monitor for a state of not "open", rather than a state of, say, "active", because the request can go straight to "closed" if there is a problem with your request arguments. The following code example provides the details of how to accomplish this task.

// Create a variable that will track whether there are any
// requests still in the open state.
boolean anyOpen;

do {
    // Create the describeRequest object with all of the request ids
    // we want to monitor.
    DescribeSpotInstanceRequestsRequest describeRequest = new DescribeSpotInstanceRequestsRequest();
    describeRequest.setSpotInstanceRequestIds(spotInstanceRequestIds);

    // Assume no requests are still open unless we find one below.
    anyOpen = false;
    try {
        // Retrieve the requests we want to monitor and check their state.
        for (SpotInstanceRequest describeResponse :
                ec2.describeSpotInstanceRequests(describeRequest).getSpotInstanceRequests()) {
            if (describeResponse.getState().equals("open")) {
                anyOpen = true;
                break;
            }
        }
    } catch (AmazonServiceException e) {
        // If the describe call fails, keep polling.
        anyOpen = true;
    }

    // Sleep for 60 seconds before polling again.
    try { Thread.sleep(60*1000); } catch (InterruptedException e) { }
} while (anyOpen);

After running this code, your Spot Instance Request will have completed or will have failed with an error that will be output to the screen. In either case, we can proceed to the next step to clean up any active requests and terminate any running instances.
Step 5: Cleaning Up Your Spot Requests and Instances Lastly, we need to clean up our requests and instances: cancel any outstanding Spot requests and terminate any instances that were launched, otherwise you will continue to be charged for them. The following code demonstrates how to cancel your requests.

try {
    // Cancel requests.
    CancelSpotInstanceRequestsRequest cancelRequest = new CancelSpotInstanceRequestsRequest(spotInstanceRequestIds);
    ec2.cancelSpotInstanceRequests(cancelRequest);
} catch (AmazonServiceException e) {
    // Write out any exceptions that may have occurred.
    System.out.println("Error cancelling instances");
    System.out.println("Caught Exception: " + e.getMessage());
    System.out.println("Response Status Code: " + e.getStatusCode());
    System.out.println("Error Code: " + e.getErrorCode());
    System.out.println("Request ID: " + e.getRequestId());
}

To terminate any outstanding instances, you will need the instance ID associated with the request that started them. The following code example takes our original code for monitoring the instances and adds an ArrayList in which we store the instance ID associated with the describeInstance response.

// Create a variable that will track whether there are any requests
// still in the open state.
boolean anyOpen;

// Initialize variables.
ArrayList<String> instanceIds = new ArrayList<String>();

do {
    // Create the describeRequest with all of the request ids we are monitoring.
    DescribeSpotInstanceRequestsRequest describeRequest = new DescribeSpotInstanceRequestsRequest();
    describeRequest.setSpotInstanceRequestIds(spotInstanceRequestIds);

    // Assume no requests are still open unless we find one below.
    anyOpen = false;
    try {
        for (SpotInstanceRequest describeResponse :
                ec2.describeSpotInstanceRequests(describeRequest).getSpotInstanceRequests()) {
            if (describeResponse.getState().equals("open")) {
                anyOpen = true;
                break;
            }
            // Add the instance id to the list we will
            // eventually terminate.
            instanceIds.add(describeResponse.getInstanceId());
        }
    } catch (AmazonServiceException e) {
        // If the describe call fails, keep polling.
        anyOpen = true;
    }

    // Sleep for 60 seconds before polling again.
    try { Thread.sleep(60*1000); } catch (InterruptedException e) { }
} while (anyOpen);

Using the instance IDs stored in the ArrayList, terminate any running instances using the following code snippet.

try {
    // Terminate instances.
    TerminateInstancesRequest terminateRequest = new TerminateInstancesRequest(instanceIds);
    ec2.terminateInstances(terminateRequest);
} catch (AmazonServiceException e) {
    // Write out any exceptions that may have occurred.
    System.out.println("Error terminating instances");
    System.out.println("Caught Exception: " + e.getMessage());
    System.out.println("Response Status Code: " + e.getStatusCode());
    System.out.println("Error Code: " + e.getErrorCode());
    System.out.println("Request ID: " + e.getRequestId());
}

Bringing It All Together To bring this all together, we provide a more object-oriented approach that combines the preceding steps we showed: initializing the EC2 client, submitting the Spot request, determining when the Spot requests are no longer in the open state, and cleaning up any lingering Spot requests and associated instances. We create a class called Requests that performs these actions. We also create a GettingStartedApp class, which has a main method where we perform the high level function calls. Specifically, we initialize the Requests object described previously. We submit the Spot Instance request. Then we wait for the Spot request to reach the "Active" state. Finally, we clean up the requests and instances. The complete source code for this example can be viewed or downloaded at GitHub. Congratulations! You have just completed the getting started tutorial for developing Spot Instance software with the AWS SDK for Java. Next Steps Proceed with Tutorial: Advanced Amazon EC2 Spot Request Management.
https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/tutorial-spot-instances-java.html
2018-08-14T14:48:15
CC-MAIN-2018-34
1534221209040.29
[]
docs.aws.amazon.com
Release Notes (in [Tungsten Replicator 2.1 Manual]). Issues: CONT-1380 For more information, see Connector Change User as Ping (in [Tungsten Replicator 5.0 Manual]). The following changes have been made to Continuent Tungsten (in [Continuent Tungsten 4.0 Manual]) and should be taken into account when deploying or updating to this release. Installation and Deployment Due to a bug within the Drizzle JDBC driver when communicating with MySQL, using the optimizeRowEvents option could lead to significant memory usage and subsequent failure. To alleviate the problem, see Drizzle JDBC Issue 38 for more information. Issues: CONT-1115 The tungsten_provision_thl (in [Continuent Tungsten 4.0 Manual]) command would not use the user specified --java-file-encoding (in [Continuent Tungsten 4.0 Manual]) setting, which could lead to data corruption during provisioning. Issues: CONT-1479 A master replicator could fail to finish extracting a fragmented transaction if disconnected during processing. Issues: CONT-1163 A slave replicator could fail to come ONLINE (in [Tungsten Replicator 2.1 Manual]) if the last THL file is empty. Issues: CONT-1164 Binary data contained within an SQL variable and inserted into a table would not be converted correctly during replication. Issues: CONT-1412 The replicator incorrectly assigns LOAD DATA statements to the #UNKNOWN shard. This can happen when the entire statement length is above 200 characters. Issues: CONT-1431 In some situations, statements that would be unsafe for parallel execution were not serializing into a single threaded execution properly during the applier phase of the target connection. Issues: CONT-1489 CSV files generated during batch loading into data warehouses would be created within a directory structure within /tmp. On long-running replicators, automated processes that clean up /tmp (in [Tungsten Replicator 2.1 Manual]) could remove these files; CSV files are now stored within the installation directory. For example, for the service alpha, CSV files for the first active applier channel will be stored in /opt/continuent/tmp/staging/alpha/staging0. Issues: CONT-1500 The pkey (in [Tungsten Replicator 2.1 Manual]) filter could force table metadata to be updated when the update was not required. Issues: CONT-1162 When using the dropcolumn (in [Continuent Tungsten 4.0 Manual]) filter in combination with the colnames (in [Tungsten Replicator 2.1 Manual]) filter, an issue could arise where differences in the incoming schema and target schema could result in incorrect SQL statements. The solution is to reconfigure the colnames (in [Tungsten Replicator 2.1 Manual]) filter on the slave not to extract the schema information from the database but instead to use the incoming data from the source database and the translated THL. Issues: CONT-1495
http://docs.continuent.com/release-notes/release-notes-ct-4-0-3.html
2018-08-14T13:44:09
CC-MAIN-2018-34
1534221209040.29
[]
docs.continuent.com
Platform alerts in Monitoring Splunk Enterprise.
http://docs.splunk.com/Documentation/Splunk/7.1.0/Admin/LicenseUsageReportViewexamples
2018-08-14T13:58:28
CC-MAIN-2018-34
1534221209040.29
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com