Columns: content (string, 0-557k chars) | url (string, 16-1.78k chars) | timestamp (timestamp[ms]) | dump (string, 9-15 chars) | segment (string, 13-17 chars) | image_urls (string, 2-55.5k chars) | netloc (string, 7-77 chars)
Add Custom Log Entries and Customize the Default Tracer Behavior. Tracing Methods: The following table lists the most important methods of the Tracing class. Custom Tracing: you can add custom tracing code in any of the following places: - in the constructor of your platform-agnostic module located in the Module.cs (Module.vb) file; - in the constructor of your application located in the WinApplication.cs (WinApplication.vb) or WebApplication.cs (WebApplication.vb) file; - in the Main method of the WinForms application located in the Program.cs (Program.vb) file, before the WinApplication.Start call; - in the Application_Start method of the ASP.NET Web Forms application located in the Global.asax.cs (Global.asax.vb) file, before the WebApplication.Start call.
https://docs.devexpress.com/eXpressAppFramework/112576/debugging-testing-and-error-handling/add-custom-log-entries-and-customize-the-default-tracer-behavior?v=21.2
2021-10-15T22:43:21
CC-MAIN-2021-43
1634323583087.95
[]
docs.devexpress.com
Search results: 5 resources - A scoping review of technology in education in LMICs: descriptive statistics and sample search results [internal document] (Working Paper No. 6). Haßler, B., McIntyre, N., Mitchell, J., … Kalifa Damani - 2020. Last update from database: 15/10/2021, 17:30 (UTC). Filter: Working paper (5)
https://docs.edtechhub.org/lib/?creator=%22Martin%2C+Kevin%22
2021-10-15T23:17:32
CC-MAIN-2021-43
1634323583087.95
[]
docs.edtechhub.org
Date: Sun, 24 Mar 2013 18:59:09 +0200 From: [email protected] To: [email protected] Subject: Re: Client Authentication Message-ID: <[email protected]> > How about refusing to > relay mail from addresses in a good DNSBL? Bad idea. Legitimate users connecting from dynamic IP-addresses are normal. DNSBLs list a dynamic IP-address permanently, or for a long time after a zombied Windows machine spewed spam from it. Some DNSBLs warn about that explicitly, for example: | Caution: Because ZEN includes the XBL and PBL lists, do not use ZEN | on smarthosts or SMTP AUTH outbound servers for your own customers | (or you risk blocking your own customers).
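As background for the discussion above, a DNSBL lookup reverses the octets of the client IP and queries the result as a hostname under the list's zone; a listed address resolves to an answer in 127.0.0.0/8, an unlisted one returns NXDOMAIN. The following is a minimal sketch, not part of the original thread; the zen.spamhaus.org zone name comes from the quoted caution text, and the return-code handling is simplified.

import socket

def dnsbl_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    """Return True if `ip` appears in the given DNSBL zone.

    The lookup name is the IP with its octets reversed, appended to the zone,
    e.g. 1.2.3.4 -> 4.3.2.1.zen.spamhaus.org. An A-record answer in
    127.0.0.0/8 means the address is listed; NXDOMAIN means it is not.
    """
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        answer = socket.gethostbyname(query)
    except socket.gaierror:
        return False  # NXDOMAIN: not listed
    return answer.startswith("127.")

if __name__ == "__main__":
    # 127.0.0.2 is the conventional "always listed" test entry for most DNSBLs.
    print(dnsbl_listed("127.0.0.2"))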
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=113784+0+/usr/local/www/mailindex/archive/2013/freebsd-questions/20130331.freebsd-questions
2021-10-15T23:10:22
CC-MAIN-2021-43
1634323583087.95
[]
docs.freebsd.org
New Relic's integrations include an integration for reporting your Azure Express Route data for Ports, Circuits, Peerings, Connections, and Gateways. The integration reports Express Route Port data, Express Route Circuit data, Express Route Peering data, Express Route Connection data, and Express Route Gateway data.
https://docs.newrelic.com/docs/integrations/microsoft-azure-integrations/azure-integrations-list/azure-express-route-monitoring-integration/
2021-10-16T00:31:41
CC-MAIN-2021-43
1634323583087.95
[]
docs.newrelic.com
Niryo_robot_poses_handlers
This package is in charge of dealing with transforms, workspace, grips and trajectories.
Poses handlers node - Description: The ROS Node is made of several services to deal with transforms, workspace, grips and trajectories. The namespace used is: /niryo_robot_poses_handlers/
Workspaces: A workspace is defined by 4 markers that form a rectangle. With the help of the robot’s calibration tip, the marker positions are learned. The camera returns poses (x, y, yaw) relative to the workspace. We can then infer the absolute object pose in robot coordinates.
Grips: When we know the object pose in robot coordinates, we can’t directly send this pose to the robot because we specify the target pose of the tool_link and not of the actual TCP (tool center point). Therefore we introduce the notion of grip. Each end effector has its own grip that specifies where to place the robot with respect to the object. Currently, the notion of grip is not part of the python/tcp/blockly interface because it would add an extra layer of complexity that is not really necessary for the moment. Therefore we have a default grip for all tools that is selected automatically based on the current tool id. However, everything is ready if you want to define custom grips, e.g. for custom tools or for custom grip positions.
The vision pick loop:
- Camera detects object relative to markers and sends x_rel, y_rel, yaw_rel.
- The object is placed on the workspace, revealing the object pose in robot coordinates x, y, z, roll, pitch, yaw.
- The grip is applied on the absolute object pose and gives the pose the robot should move to.
Dependencies - Poses handlers
Services & messages files - Poses handlers:
GetPose (Service): string name --- int32 status string message niryo_robot_poses_handlers/NiryoPose pose
GetTargetPose (Service): string GRIP_AUTO = auto string workspace string grip int32 tool_id # Used if grip_name = GRIP_AUTO float32 height_offset float32 x_rel float32 y_rel float32 yaw_rel --- int32 status string message niryo_robot_msgs/RobotState target_pose
GetTrajectory (Service): string name --- int32 status string message geometry_msgs/Pose[] list_poses
GetWorkspaceRatio (Service): string workspace --- int32 status string message float32 ratio # width/height
GetWorkspaceRobotPoses (Service): string name --- int32 status string message niryo_robot_msgs/RobotState[] poses
ManagePose (Service): int32 cmd int32 SAVE = 1 int32 DELETE = -1 niryo_robot_poses_handlers/NiryoPose pose --- int32 status string message
ManageTrajectory (Service): int32 cmd int32 SAVE = 1 int32 DELETE = -1 string name string description geometry_msgs/Pose[] poses --- int32 status string message
ManageWorkspace (Service): int32 SAVE = 1 int32 SAVE_WITH_POINTS = 2 int32 DELETE = -1 int32 cmd niryo_robot_poses_handlers/Workspace workspace --- int32 status string message
NiryoPose (Message): string name string description float64[] joints geometry_msgs/Point position niryo_robot_msgs/RPY rpy geometry_msgs/Quaternion orientation
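To illustrate how these services are used, here is a minimal rospy sketch (not taken from the Niryo sources) that calls GetTargetPose to turn a camera-relative detection into an absolute robot pose. The service path /niryo_robot_poses_handlers/get_target_pose, the importability of the service classes from niryo_robot_poses_handlers.srv, and the workspace name and tool id are assumptions based on the namespace and definitions listed above.

#!/usr/bin/env python
# Sketch only: resolve an absolute robot pose from a camera-relative detection.
import rospy
from niryo_robot_poses_handlers.srv import GetTargetPose, GetTargetPoseRequest

rospy.init_node('vision_pick_example')

# Assumed service path, built from the /niryo_robot_poses_handlers/ namespace above.
service_name = '/niryo_robot_poses_handlers/get_target_pose'
rospy.wait_for_service(service_name)
get_target_pose = rospy.ServiceProxy(service_name, GetTargetPose)

# Relative pose as returned by the camera (x_rel, y_rel in workspace units, yaw_rel in radians).
response = get_target_pose(
    workspace='my_workspace',                 # hypothetical workspace name
    grip=GetTargetPoseRequest.GRIP_AUTO,      # let the default grip be chosen from the tool id
    tool_id=11,                               # hypothetical tool id
    height_offset=0.0,
    x_rel=0.5,
    y_rel=0.5,
    yaw_rel=0.0,
)

if response.status >= 0:
    rospy.loginfo('Target pose: %s', response.target_pose)
else:
    rospy.logwarn('GetTargetPose failed: %s', response.message)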
https://docs.niryo.com/dev/ros/v3.1.2/en/source/ros/niryo_robot_poses_handlers.html
2021-10-15T23:47:05
CC-MAIN-2021-43
1634323583087.95
[]
docs.niryo.com
To customize the bootstrap.xml file for deploying the AWS CFT in your production environment, you must copy the following NAT policy rule into your configuration file. You can find the NAT rule and address objects in the bootstrap.xml file in the GitHub repository. Document: VM-Series Deployment Guide - NAT Policy Rule and Address Objects in the Auto Scaling Template. Last Updated: May 1, 2020
https://docs.paloaltonetworks.com/vm-series/7-1/vm-series-deployment/set-up-the-vm-series-firewall-in-aws/nat-policy-rule-and-address-objects-in-the-auto-scaling-template.html
2021-10-15T23:48:37
CC-MAIN-2021-43
1634323583087.95
[]
docs.paloaltonetworks.com
- Services: Application Performance Monitoring - Release Date: Oct. 13, 2021 This release includes the following new features: - Apdex Configuration: You can now configure Apdex thresholds to measure application performance and user satisfaction based on your specific requirements. For more information, see Create an APM Domain Using the Console. - Synthetic Monitoring Features: - Enable Network Collection: You can now select the Enable Network Collection check box when creating a monitor to enable data collection about network performance at the time a monitor is executed. - Run Once: You can now select the Run Once check box when creating a monitor to ensure that the monitor runs only once. If required, you can edit the monitor at a later time to deselect the Run Once option and select the Interval Between Runs check box and specify a value. For more information, see Use Synthetic Monitoring.
https://docs.public.oneportal.content.oci.oraclecloud.com/en-us/iaas/releasenotes/changes/03d199f1-d376-4b90-ae6e-974c0d955164/
2021-10-16T00:32:35
CC-MAIN-2021-43
1634323583087.95
[]
docs.public.oneportal.content.oci.oraclecloud.com
Update: SentryOne Document is now SolarWinds Database Mapper (DMR). See the Database Mapper product page to learn more about features and licensing. Want to explore Database Mapper? An interactive demo environment is available without any signup requirements. With SolarWinds Database Mapper, you can schedule snapshots to occur at a time of your choosing. This can be useful for automating the snapshot process, and to continually update documentation for your data sources at consistent intervals. For example, you may want to schedule a snapshot weekly for a dev server to document the changes that occur on a weekly basis. Additional Information: For information about: - Manually starting a snapshot, and generating documentation for the first time, see the Generating Documentation article - Creating a solution, and adding solution items, see the Solutions and Solution items articles To begin scheduling a snapshot for your selected remote agent, complete the following steps: 1. Open the Solutions dashboard, and select the Configure Snapshot button for the desired solution to open the Snapshot Configuration window. Copy the first snapshot command. 2. Open the Database Mapper install directory, select SHIFT+ Right Click to open the context menu, and then select the Open PowerShell window here option. Note: In this example, the install directory is C:\Program Files (x86)\SentryOne\SentryOne Document Remote Agent. Database Mapper uses this by default. 3. Paste and execute the copied command. Go back to Database Mapper, then copy the second snapshot command. Note: You only need to copy the argument. You don't need to copy the whole statement including the .exe. 4. Open Windows Task Scheduler and select Action > Create basic Task to open the Create Basic Task Wizard. 5. Enter a name and description. Select Next to continue. 6. Select a task starting point and select Next. Configure your trigger, then select Next to continue. 7. Select Start a program for the action, then select Next. Select Browse to open remote agent install directory, then select the SentryOneCommandLine.exe. Paste the previously copied Snapshot Configuration command in the Arguments box. Select Next to continue. 8. Select Finish to save your task. Success: Your Database Mapper Snapshot is now scheduled!
https://docs.sentryone.com/help/sentryone-document-schedule-snapshots
2021-10-15T23:42:02
CC-MAIN-2021-43
1634323583087.95
[array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5e2868fa6e121c0f60bf6684/n/sentryone-document-install-directory-open-powershell-window-1944.png', 'SentryOne Document File Directory Open Powershell window Version 2020.128 SentryOne Document File Directory Open Powershell window'], dtype=object) array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5e286915ec161c7a647af680/n/sentryone-document-task-scheduler-create-basic-task-1944.png', 'SentryOne Document Create Basic Task Version 2020.128 SentryOne Document Create Basic Task'], dtype=object) array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5e2869e7ec161cc1647af656/n/sentryone-document-create-basic-task-enter-name-1944.png', 'SentryOne Document Basic Task Wizard Enter name Version 2020.128 SentryOne Document Basic Task Wizard Enter name'], dtype=object) array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5e286b678e121cd86cdded16/n/sentryone-document-create-basic-task-finish-1944.png', 'SentryOne Document Basic Task Wizard select Finish Version 2020.128 SentryOne Document Basic Task Wizard select Finish'], dtype=object) array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5e286b6fec161c20687af623/n/sentryone-document-task-scheduler-scheduled-snapshot-1944.png', 'SentryOne Document Snapshot Scheduled Version 2020.128 SentryOne Document Snapshot Scheduled'], dtype=object) ]
docs.sentryone.com
Contents: As part of the GDPR efforts, Cooladata now enables you to anonymize personal data in your project. Since you control what data you send to Cooladata and what is stored, you are responsible for defining what is personal information and what is not. This feature allows you to hash properties marked as personal user information. Note that we automatically collect IPs, so if you need the IPs we have collected to be hashed before they are stored, you are responsible for using this functionality to set this up. Hashing IPs will not affect the geolocation enrichment we provide out-of-the-box. Keep in mind that this feature will only hash future data being sent to Cooladata and will not update historical data. Marking Properties as Personal Information - In Cooladata, go to the project menu -> Properties to see the project’s properties list. - In the Edit Property sidebar, check “anonymize data” to mark this property as data that should be hashed. Note that STRING type properties will be hashed using MD5 so the data will remain unique per value, but INTEGER, FLOAT and TIMESTAMP data types will be nullified. Setting up a project-wide condition for anonymizing data: You can also set up a project-wide condition so that only data that meets this condition will be hashed. Due to legal issues, this is done offline using a form. Please contact [email protected] or your CSM to set this up. Once you set up a condition, all properties with “anonymize data” checked will be hashed whenever the condition is true, even if the property is marked after the condition is set up.
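A small local illustration of the hashing rule described above (MD5 for STRING properties so equal values still map to the same opaque token, NULL for INTEGER, FLOAT and TIMESTAMP). This is a sketch of the effect only, not Cooladata code.

import hashlib

def anonymize_property(value, prop_type):
    """Sketch of the rule above: STRING values are MD5-hashed so equal values
    still collapse to the same (now opaque) token; numeric and timestamp
    values are nullified; other properties pass through unchanged."""
    if prop_type == "STRING":
        return hashlib.md5(str(value).encode("utf-8")).hexdigest()
    if prop_type in ("INTEGER", "FLOAT", "TIMESTAMP"):
        return None
    return value

# Example: the same IP always hashes to the same 32-character hex digest,
# so per-value counts survive anonymization.
print(anonymize_property("203.0.113.7", "STRING"))
print(anonymize_property(42, "INTEGER"))   # None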
https://docs.cooladata.com/anonymizing-personal-data/?pfstyle=wp
2021-10-15T22:48:24
CC-MAIN-2021-43
1634323583087.95
[array(['https://docs.cooladata.com/wp-content/uploads/2018/04/anonymize-data-1.png', None], dtype=object) ]
docs.cooladata.com
What is an Event? CoolaData collects user behavior data in the form of events. An event is an action performed on your site/app. Each event must be associated with the user who performed it, the time it happened, and the name of the event, and can include additional properties. For each event, you can manage the following meta data: - Event name: The event name as sent to CoolaData and used in queries. Can use any characters, including spaces and capital letters. Event names are not case sensitive, so ‘Login’ and ‘login’ will be treated as the same event. - Category: Logical association. Can be used in queries to query multiple events simultaneously. Can be modified at any time and does not affect data storage. - Include in path: The path is used for CoolaData’s behavioral functions. Events not in path will not be included in these functions. Disable this only for events you want to exclude from behavioral analysis. Please note the maximum amount of events per project is 1500 events. Tracking events CoolaData enables you to track your users’ behavior by sending events (comprised of properties). The following is an example of an event sent to CoolaData as a JSON: { "event_name": "deposit_server", "transaction_id": "63219", "session_os": "PC", "depositamount": "100", "pearlsamount": "0", "blackpearlsamount": "0" } The only mandatory field that you must enter in an event is event_name, (which in this example has the value “deposit_server”). An event name describes the event that occurred. You can specify any event name, for example: item_open, search_performed, game_level_3_achieved. All other properties are optional – add as many as you want, according to your needs. In addition to the properties that you include, each CoolaData Tracker (SDK) automatically enriches each event with various properties. The following shows the same event (as shown above) as it arrives at CoolaData, after a CoolaData tracker automatically added (enriched) the user ID and the event’s timestamp: { "user_id": "123456", "event_time_ts": "1467763537524", "event_name": "deposit_server", "transaction_id": "63219", "session_os": "PC", "depositamount": "100", "pearlsamount": "0", "blackpearlsamount": "0" } Best practice: - Create a spreadsheet listing the events to be sent to CoolaData, including the event name and a description of each event. - For each event, describe the properties to be included in that event, including its property name and a description. Remember – any property can be sent with any event. For example, we recommend creating a single button_clicked event that has a button_name property, rather than creating a separate event for each button clicked, such as: button_ A_clicked, button_ B_clicked, button_ C_clicked, … Mapping modes CoolaData provides two modes for controlling the automatic generation of events and properties. - Self-learned – This is the default option. This means that all event names and all their properties are accepted by CoolaData (and are not defined as invalid). After each event and its properties are automatically defined by CoolaData, all subsequent events must arrive with properties that have the same data type. Otherwise, they are defined as invalid and not entered into CoolaData. We recommend that initially you start working with CoolaData using this option. Afterwards, you can start using the Lock option, described below. - Locked: This option enables you to control which event names and event properties are accepted by CoolaData. 
This means that only previously defined events and properties are considered valid, whether they were Self-learned or defined by you. - New Events Are Invalid – New events are not accepted and are stored in an Invalid Events table which you can review. - New Properties Are Not Accepted – New properties in an event are not accepted into CoolaData. The event itself (including all its other properties) is accepted into CoolaData. However, the new properties are not accepted – they are simply dropped. - Different Data Types Are Not Accepted – Properties with a different data type than originally defined are not accepted into CoolaData. All these events are listed in an Invalid Events table with a description that you can review. To change the mapping mode: - Go to Project – Settings - Choose either Self-Learned or Lock mode - Click Apply to save the change Best Practice: Either control all the events and properties or start by collecting them and filter them later: - Full control: this option gives you complete control over which events and properties are valid in CoolaData, meaning all other events are not accepted by CoolaData and will appear in the Invalid Events list. - Change your project mapping mode to Lock. - Plan which events and properties will be considered valid. - Define the valid events. - Define the valid properties. - Collect and filter: with this option CoolaData automatically generates events and properties from incoming events. You then get rid of the unwanted ones and lock down (prevent) the generation of new events and properties. - By default, CoolaData operates in Self-learning mode. - Send a few samples of each kind of event to CoolaData. CoolaData will automatically define these events and their properties as valid. - Review all the automatically generated events and properties and then disable all unwanted events and delete all unwanted properties. - Contact your CoolaData customer success representative to change to Lock mode. - Notify all CoolaData integrators not to send the disabled events or the deleted properties because they will be sent to the Invalid Events list. Defining an Event When working in Lock mode, define your events using the events management interface. - Open Project – Events. - Click the Add button in the top right of the window to add a new event: - Fill in the window as follows and click the Save button. - Name (Mandatory): Describe the event that occurred in lowercase without spaces. For example, item_open, search_performed, game_level_3_achieved and so on. - Category: Select the value Other. - Active: Active is the default. If needed, deselect this option to specify that events with this Name (described above) are not loaded into CoolaData (Not Active). Non Active events are not listed in CoolaData’s Invalid Events list. - Include in Path: A path defines a specific sequence of events in the same user session. CoolaData provides various visualization options showing the path of events in a user session, such as a Funnel report or Cohort report. Only events for which this option is selected are included in these visualizations. For example, an event that you may not want to include is a keep_alive event that verifies the devices’ connection. Regardless of the value selected in the Include in Path field (meaning even if it is off), the event still affects the session_duration (which is the duration of the session, in milliseconds) and is still counted in the session_event_counter (which is the number of events in the session). 
Event Timestamps: CoolaData provides these options for specifying the timestamp of an event: - Device’s Timestamp (Default) – The CoolaData Tracker automatically adds the device’s timestamp. CoolaData automatically enriches each event with an event timestamp (named event_time_ts). This timestamp is determined by the device’s timeclock and indicates when the event happened, meaning when it was sent by your app using the CoolaData Tracker. CoolaData automatically stores and presents event timestamps in the UTC time standard. The CoolaData Tracker also automatically adds a timezone_offset property to each event specifying the device’s time zone offset from UTC time, in hours. For example, -2.5. - Your Own Timestamp (Optional) – You can add your own timestamp to the event, which overrides the device’s timestamp (described above). To specify your own timestamp for each event, simply add the event_time_ts property to the event’s JSON. Specify this timestamp in 13-digit format (milliseconds). CoolaData then uses this timestamp instead of your device’s (event_time_ts – described above). When using the JavaScript Tracker or the CoolaData REST API, in order to include your own timestamp, simply include the event_timestamp_epoch property (instead of event_time_ts – described above) in the event’s JSON. CoolaData then uses this timestamp, which is in the UTC time standard. This option is only relevant when CoolaData is not set to use the server timestamp. - CoolaData Server Timestamp (Optional) – CoolaData assigns its own server’s timestamp to all events. In general, the timeclocks of some devices may not be accurate. Upon request, CoolaData can configure your system so that CoolaData automatically assigns the timestamp of the CoolaData server to each event instead of the timestamp of the device that sent the event. To activate this feature, contact your CoolaData customer success representative. Benefit – The advantage of using this feature is to ensure that all the collected data is synchronized to the same clock, and that no events are rejected because of invalid dates. When this feature is activated, all timestamps that you send with the event are ignored and an accurate UTC timestamp is assigned instead. If you would like to use this feature and would also like to send your own timestamp, then add your own timestamp to each event in another property. For example, you could name it client_timestamp. CoolaData automatically performs a sanity check of all event timestamps. By default, an event is considered Invalid when its timestamp is more than 12 hours into the future or 30 days into the past. For example, future timestamps may occur when an end user’s clock is set incorrectly. These types of events are not stored in CoolaData and appear in the Invalid List with the following reason – Future event sent or Past event sent.
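As a concrete illustration of the timestamp rules above, here is a hedged sketch that builds an event payload with a 13-digit millisecond event_time_ts. Only the property names come from the examples above; the endpoint URL and API token are placeholders, not the real Cooladata REST details, and in practice the trackers/SDKs send events for you.

import json
import time
import urllib.request

# event_name is the only mandatory property; event_time_ts overrides the device
# timestamp and must be a 13-digit UTC epoch expressed in milliseconds.
event = {
    "event_name": "deposit_server",
    "event_time_ts": str(int(time.time() * 1000)),  # e.g. "1467763537524"
    "transaction_id": "63219",
    "session_os": "PC",
    "depositamount": "100",
}

# Placeholder endpoint and token for illustration only; take the real values from
# your Cooladata project settings or use a tracker/SDK instead.
req = urllib.request.Request(
    "https://api.cooladata.example/v1/track",   # hypothetical URL
    data=json.dumps({"events": [event]}).encode("utf-8"),
    headers={"Content-Type": "application/json", "Authorization": "Bearer <API_TOKEN>"},
)
# urllib.request.urlopen(req)  # uncomment to actually send the request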
https://docs.cooladata.com/events/
2021-10-15T23:26:25
CC-MAIN-2021-43
1634323583087.95
[]
docs.cooladata.com
Date: Thu, 7 Apr 2016 19:21:33 +0100 From: RW <[email protected]> To: [email protected] Subject: Re: building linux ports with x11/nvidia-driver-340 instead of x11/nvidia-driver Message-ID: <[email protected]> In-Reply-To: <20160407193035.123cc810@max-BSD> References: <[email protected]> <20160407193035.123cc810@max-BSD> On Thu, 7 Apr 2016 19:30:35 +0200 maxnix wrote: > On Thu, 7 Apr 2016 16:04:43 +0200 > Vikash Badal <vikashb. This makes it depend on x11/nvidia-driver, when the question was about making it depend on the x11/nvidia-driver-340 legacy driver port. As far as the port dependency installation is concerned, it shouldn't make any difference: NVIDIA_GL_RUN_DEPENDS=${LINUXBASE}/usr/lib/libGL.so.1:x11/nvidia-driver so if x11/nvidia-driver-340 has already installed libGL.so.1 then it won't try to install x11/nvidia-driver to satisfy the dependency. This used to work with pkg-tools; I'm not sure what happens with pkg. It'll probably complain, but it's worth a try.
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=243713+0+/usr/local/www/mailindex/archive/2016/freebsd-questions/20160410.freebsd-questions
2021-10-16T00:32:58
CC-MAIN-2021-43
1634323583087.95
[]
docs.freebsd.org
Azure Blockchain Development Kit This repository contains content and samples in a number of areas, including: - Connect - Connect various data producers and consumers to or from the blockchain - Integrate - Integrate legacy tools, systems and protocols - Accelerators - Deep dive into End-to-End examples, or solutions to common patterns. - DevOps for smart contracts - Bring traditional DevOps practices into a distributed application environment Repo Contents Sample Index Looking for a sample based on a particular service or technology (e.g., using SQL with Blockchain)? The index below will give you pointers to samples based on the technology/service featured. These are a select subset of the total samples available. Prerequisites Where noted, some of the samples in this development kit may need the following: samples using the Ethereum Logic App connector available on the Azure Marketplace require an Ethereum network with a public RPC endpoint. If you wish to use the Azure Blockchain Workbench with the Ethereum Logic App connectors you will need a public RPC endpoint. You may use an existing one, or create a new one. Feedback For general product feedback, please visit our forum. To request additional features or samples, visit our UserVoice site.
https://docs.microsoft.com/zh-tw/samples/azure-samples/blockchain-devkit/azure-blockchain-development-kit/
2021-10-16T00:56:43
CC-MAIN-2021-43
1634323583087.95
[]
docs.microsoft.com
Frequently Asked Questions Insight Agent version 3 became generally available on Wednesday, October 14th, 2020. See the following FAQs for some important points about the major changes in this release. What are the main improvements in version 3 of the Insight Agent? The Insight Agent uses a multi-process architecture which integrates diverse technologies as appropriate. Some of these solutions depend on the Python runtime, which we've now upgraded to version 3.8 as part of the new version release. This brings several benefits with it, such as upgrading the OpenSSL software library to version 1.1, leveraging other new libraries, and upgrading several dependencies. This also allows for performance improvements and security fixes. Upgrading to OpenSSL version 1.1 means that Insight Agent version 3 can now support Windows Surface Laptop 3 devices. Supporting these devices required us to update the OpenSSL version because Windows Surface Laptop 3 has an interoperability issue with OpenSSL version 1.0. Do I need to make any changes in my environment to use version 3 of the Insight Agent? No networking changes are required to ensure that the assets in your organization update to Insight Agent version 3. The update will be available automatically in the same manner as any other Insight Agent version update. This major version will start with version 3.0.1. Does version 3 support the same operating systems of prior Insight Agent versions? Insight Agent version 3 won’t be available to assets still running any of the following operating systems or lower (due to interoperability issues between these operating system versions and Python 3.8): - Windows Server 2003 - Windows XP - Windows Vista SP1 - Windows 7 SP0 Will these unsupported operating systems still be able to use their existing Insight Agent? Yes. The Insight Agent will continue to work on these operating systems, however it will do so on the latest 2.x release (Insight Agent version 2.7.22). Insight Agents on these systems will still continue to receive the benefits of the latest security content analysis, as this is applied to the data after it is collected on the Insight Platform.
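The version-3 changes above hinge on the bundled Python 3.8 runtime being linked against OpenSSL 1.1. As a quick, generic way to see which Python and OpenSSL versions a given Python runtime was built with (this is standard-library Python, not an Insight Agent API), you can run:

import ssl
import sys

# Prints the interpreter version and the OpenSSL release it is linked against,
# e.g. a 3.8.x interpreter paired with an "OpenSSL 1.1.1..." build string.
print(sys.version)
print(ssl.OPENSSL_VERSION)        # human-readable OpenSSL version string
print(ssl.OPENSSL_VERSION_INFO)   # tuple form, e.g. (1, 1, 1, ...)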
https://docs.rapid7.com/insight-agent/insight-agent-version-3-faqs/
2021-10-15T23:30:30
CC-MAIN-2021-43
1634323583087.95
[]
docs.rapid7.com
Auth0 Auth0 is an identity provider and an API-based data source. Its logs can produce CloudService documents. To set up Auth0, you’ll need to: - Configure Auth0 to send data to your Collector. - Set up the Auth0 event source in InsightIDR. - Verify the configuration works. Configure Auth0 to send data to your Collector To configure Auth0 for InsightIDR, sign in to Auth0 and take the following actions from the dashboard: Set up a machine-to-machine application: This will provide the credentials needed to access the Management API. For instructions, see "Create and Authorize Machine-to-Machine Applications for Management API" in the Auth0 documentation. Define token settings for the JSON Web Token: To authenticate to an API endpoint, you'll need a JSON Web Token with the required scopes. The default timeout for the token is around ten hours (36000 seconds). Since the data source fetches a new token on each run, you can safely reduce the timeout down to one hour or less. You can follow the steps listed in the Auth0 documentation to manage API Access Tokens. Authorize a Machine-to-Machine application: Select these scopes: read:logs, read:logs_users. You can follow the steps listed in the Auth0 documentation. Set up Auth0 in InsightIDR - From the left menu, go to Data Collection. - When the Data Collection page appears, click the Setup Event Source dropdown and choose Add Event Source. - From the Security Data section, click the Cloud Data icon. The Add Event Source panel will appear. - Choose your collector and Auth0 from the event source dropdown. - Name your event source. - Optionally choose to send unparsed data. - Specify the user domain that will use the access tokens you set up in the Before You Begin step. - Select a credential you set up in the Before You Begin step. - Click Save. Verify the configuration Complete the following steps to view your logs and ensure events are making it to the Collector: - Click Data Collection in the left menu of InsightIDR and navigate to the Event Sources tab. Find the new event source that was just created and click the View Raw Log button. If you see log messages in the box, then this shows that logs are flowing to the Collector. - Click Log Search in the left menu of InsightIDR. - Select the applicable Log Sets and the Log Names within them. The Log Name will be the name you gave to your event source. Auth0 logs flow into the Cloud Service Activity log.
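To make the token step above concrete, here is a hedged sketch of the standard Auth0 client-credentials request that a machine-to-machine application makes against the Management API audience. The tenant domain, client ID, and client secret are placeholders; InsightIDR performs this exchange for you once the credential is configured, so this is purely illustrative.

import json
import urllib.request

# Placeholders: your Auth0 tenant domain and the machine-to-machine app credentials.
DOMAIN = "your-tenant.auth0.com"
CLIENT_ID = "<client_id>"
CLIENT_SECRET = "<client_secret>"

payload = json.dumps({
    "grant_type": "client_credentials",
    "client_id": CLIENT_ID,
    "client_secret": CLIENT_SECRET,
    # The Management API audience; the issued token carries the scopes
    # authorized for the application (e.g. read:logs).
    "audience": f"https://{DOMAIN}/api/v2/",
}).encode("utf-8")

req = urllib.request.Request(
    f"https://{DOMAIN}/oauth/token",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    token = json.load(resp)

# access_token is the JSON Web Token; expires_in reflects the token lifetime
# (the roughly ten-hour default mentioned above, unless you shortened it).
print(token["access_token"][:20] + "...", token["expires_in"])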
https://docs.rapid7.com/insightidr/auth0/
2021-10-15T23:38:59
CC-MAIN-2021-43
1634323583087.95
[]
docs.rapid7.com
Manage metric rollup policies with configuration files If you have access to the configuration files for your deployment, you can manually configure metric rollup policies for your source metric indexes. See Roll up metrics data for faster search performance and increased storage capacity for a conceptual overview of metric rollup policies. You should have already identified or created a source metric index and one or more target metric indexes before you create a metric rollup policy configuration. These indexes must be discoverable on the search head. If you use distributed search you have to create stand-in indexes and set up data forwarding to enable metric rollup policies. See Index prerequisites for metric rollup policies. By default, the Splunk software gives metric rollup policies that you create with Splunk Web the context of the Search & Reporting app. If you are manually configuring your metric rollup policies, you can create metric rollup policies for other apps by adding a metric_rollups.conf file to into the etc/apps/<app-name>/local directory for the app and then putting the configuration for the rollup policy in that file. This file placement creates a metric rollup policy that is owned by "nobody" and shared to all users of the app. You cannot create a metric rollup policy that is private or owned by a specific user. Metric rollup policy configurations created in etc/users/<user-name>/<app-name> are ignored by the Splunk software. App-specific rollup policies generate the scheduled searches that populate their rollup summaries in a savedsearches.conf file in etc/apps/<app-name>/local. The app context of these saved searches is included in the names of their configuration stanzas, which double as their object name. The names of these searches fit the following syntax: _ss_mrollup_<source_index>_<span>_<target_index>_<app_name>. Metric rollup feature extensions that are not available in Splunk Web When you manage metric rollup policies through direct edits to configuration files, you can take advantage of optional feature extensions that are not yet available in Splunk Web. You can also manage this extended functionality through REST API operations on the metric rollup endpoints. See Metrics Catalog endpoint descriptions in the REST API Reference Manual. Specify a metric rollup policy stanza in metric_rollups.conf To configure a metric rollup policy you need to add a stanza to your metric_rollups.conf file. The configuration syntax for a metric rollup policy stanza is as follows: [index:<Metric Index Name>] defaultAggregation = <'#' separated list of aggregation functions> rollup.<summary number>.rollupIndex = <string Index name> rollup.<summary number>.span = <time range string> metricList = <comma-separated list of metrics> metricListType = <excluded/included> dimensionList = <comma-separated list of dimensions> dimensionListType = <excluded/included> aggregation.<metric_name> = <'#' separated list of aggregation functions> The following table defines these settings. It explains which settings are required and which are optional. Change the minimum span allowed for a rollup summarization search The rollup.<summary number>.span setting has a lower boundary that is determined by the minspanallowed limit for the [rollup] stanza in limits.conf. minspanallowed is set to 300 seconds, or 5 minutes, by default. If you provide a span for a rollup summarization search that is lower than minspanallowed, you will see an error message. 
This limit is meant to prevent you from setting up rollup summarization searches with a frequency that would likely lead to search concurrency problems, where scheduled searches fail to run when they should because there are too many searches running at once. However, if you need to change this limit, you can. Do not set minspanallowed to a value lower than 60 seconds.
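As a worked example of the stanza syntax and the app-local file placement described above, here is a small sketch that renders one concrete policy into etc/apps/<app-name>/local/metric_rollups.conf. The index names, span, metric list, and aggregation choices are illustrative values chosen to fit the documented template, not defaults taken from Splunk.

import os

# Illustrative values only: one source index, one rollup summary, one per-metric override.
splunk_home = os.environ.get("SPLUNK_HOME", "/opt/splunk")
app = "search"  # app context that will own the policy

stanza = """\
[index:my_metrics]
defaultAggregation = avg#max
rollup.1.rollupIndex = my_metrics_rollup
rollup.1.span = 1h
metricList = cpu.idle,mem.free
metricListType = included
aggregation.cpu.idle = min#max
"""

conf_path = os.path.join(splunk_home, "etc", "apps", app, "local", "metric_rollups.conf")
os.makedirs(os.path.dirname(conf_path), exist_ok=True)
with open(conf_path, "a") as f:   # append so existing policies in the file are kept
    f.write(stanza)
print("wrote rollup policy stanza to", conf_path)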
https://docs.splunk.com/Documentation/Splunk/8.0.5/Metrics/ManageMRollupConf
2021-10-16T00:37:03
CC-MAIN-2021-43
1634323583087.95
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
Curve Display Reference - Mode: Edit Mode - Panel When in Edit Mode, curves have special overlays to control how curves are displayed in the 3D Viewport. - Handles Toggles the option to display the Bézier handles. - Normals Toggles the display of the curve normals. - Normal Size Length of the axis that points in the direction of the normal.
https://docs.blender.org/manual/ja/dev/modeling/curves/curve_display.html
2021-10-15T23:03:49
CC-MAIN-2021-43
1634323583087.95
[]
docs.blender.org
Date: Thu, 12 Sep 1996 10:57:10 -0400 (EDT) From: "Christopher J. Michaels" <[email protected]> To: [email protected] Subject: Netscape? Message-ID: <Pine.GSO.3.93.960912105530.7418B-100000@destrier.acsu.buffalo.edu> Hi, I wouldn't ask this question if I didn't have to. Is there a netscape for FreeBSD and if so where? I looked in the ports collection and the netscape directories are relatively empty. And the sysinstall utility can't find them at all. Thanks -Christopher Michaels Student Consultant:645-3542 *************************************************************************** * E-Mail: [email protected] * * Homepage: * ***************************************************************************
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=609506+0+/usr/local/www/mailindex/archive/1996/freebsd-questions/19960908.freebsd-questions
2021-10-16T00:38:06
CC-MAIN-2021-43
1634323583087.95
[]
docs.freebsd.org
Meet the Azure Analysis Services team at upcoming user group meetings Come meet the Analysis Services team in person as they answer your questions on Analysis Services in Azure. Learn about the new service and features available. SQL Saturday Silicon Valley – April 22nd Microsoft Technology Center, 1065 La Avenida, Mountain View, CA Group Site Register Now Boston Power BI User Group – April 25th 6:30pm – 8:30pm MS Office 5 Wayside Road, Burlington, MA Group Site Register Now New York Power BI User Group – April 27th 6pm-8:30pm MS Office Times Square, NY Group Site Register Now Philadelphia Power BI User Group – May 1st 3pm-6pm MS Office Malvern, PA Group Site Register Now Philadelphia SQL User Group – May 2nd Group Site Register Now Portland Power BI User Group Meeting – late May CSG Pro Office 734 NW 14th Ave Portland OR 97209 Group Site Registration: Coming Soon! New to Azure Analysis Services? Find out how you can try Azure Analysis Services or learn how to create your first data model.
https://docs.microsoft.com/en-us/archive/blogs/analysisservices/meet-the-azure-analysis-services-team-at-upcoming-user-group-meetings
2021-10-15T23:14:16
CC-MAIN-2021-43
1634323583087.95
[]
docs.microsoft.com
Update: SentryOne Document is now SolarWinds Database Mapper (DMR). See the Database Mapper product page to learn more about features and licensing. Want to explore Database Mapper? An interactive demo environment is available without any signup requirements. Custom Metadata import Adding Custom Metadata import Additional Information: You need to add a Database Mapper Solution before adding a Custom Metadata Import Solution item. For more information about adding a solution, see the Configuring Solutions and Database Mapper Solutions articles. Add a Custom Metadata import: 1. Open your solution in the Solution Configuration Tool. 2. Select Add to add a new solution item. 3. Select Custom metadata import from the Source type drop-down list. 4. Enter the file paths for the three necessary files (object file, property file, and lineage file), then select OK to add the solution item. Note: Object files contain the names of the objects within the database. Property files contain information about the objects within the database. See the blog post in the Tutorial section below to learn more about these files. Additional Information: You need to take a snapshot of your Solution and Custom Metadata Import solution item before viewing any documentation or lineage. For more information about taking a snapshot, see the Generating Documentation and Scheduling a Snapshot articles. After adding a Solution with your Custom Metadata Import solution item, and taking a snapshot, you are ready to view your documentation. Additional Information: For more information about the Documentation tab in Database Mapper, see the Documentation article. After adding a solution with your Custom Metadata Import solution item, and taking a snapshot, you are ready to view your solution's lineage within the environment. Additional Information: For more information about the Lineage tab in Database Mapper, see the Lineage article. Tutorial: See the Manually Add a Metadata Source in Database Mapper blog post for a walkthrough of how you can use Custom Metadata Import to manually insert any metadata source.
https://docs.sentryone.com/help/document-custom-metadata-import
2021-10-15T23:35:18
CC-MAIN-2021-43
1634323583087.95
[array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/6088856d6e121c446b5a5c1a/n/dmr-database-mapper-open-solution-2021.png', 'Database Mapper Solution Configuration Tool Open Solution Version 2021.12 Database Mapper Solution Configuration Tool Open Solution'], dtype=object) array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/60888576ad121ccc78368213/n/dmr-database-mapper-add-solution-item-2021.png', 'Database Mapper Solution Configuration Tool Add Solution Item Version 2021.12 SentryOne Document Solution Configuration Tool Add Solution Item'], dtype=object) array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/60db7d33bc4539733b7b2430/n/dmr-database-mapper-add-solution-item-custom-metadata-import-202112.png', 'Database Mapper Add Solution Item Custom Metadata Import Version 2021.12 Database Mapper Add Solution Item Custom Metadata Import'], dtype=object) array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/60db7d3d0a572bbb5a7b24b6/n/dmr-database-mapper-add-solution-item-custom-metadata-import-ok-202112.png', 'Database Mapper Add Solution Item Custom Metadata Import Credentials Version 2021.12 Database Mapper Add Solution Item Custom Metadata Import Credentials'], dtype=object) ]
docs.sentryone.com
Inset Straight Skeleton: This add-on makes an inset inside the selected faces using the straight skeleton algorithm. Activation: Open Blender and go to Preferences then the Add-ons tab. Click Mesh then Inset Straight Skeleton to enable the script. Description: Enter mesh Edit Mode on a mesh object, and select one or more faces. - Scale - Percent Means that amounts are a percentage of the amount for a full inset. - Absolute Means that the amounts are in units. - Amount The distance to move the edges inward. - Height The distance to move the inset polygons upward. - Region If checked, treat all selected faces as a region to be inset, otherwise inset each face individually. - Quadrangulate Todo. Technical Details: The straight skeleton algorithm builds the kind of geometry used for ‘roofs’ with a fixed pitch over the selected region. Reference - Category: Mesh - Description: Make an inset inside selection using straight skeleton algorithm. - Location: 3D Viewport operator - File: mesh_inset folder - Author: Howard Trickey - License: GPL - Note: This add-on is bundled with Blender.
https://docs.blender.org/manual/en/2.93/addons/mesh/inset_straight_skeleton.html
2021-10-16T00:04:24
CC-MAIN-2021-43
1634323583087.95
[]
docs.blender.org
Accessible forms/workflows can assist users with visual and motor impairments. frevvo v6.1 can be used to build accessible forms/workflows that meet Section 508 and WCAG 2.0. For users requiring assistive technology, frevvo supports the following screen readers: When using JAWS, the latest version of IE (IE11 presently) is recommended for optimal results. To make a form/workflow accessible in the designers, check the Accessibility property on the form/workflow properties panel. This property is unchecked by default; when checked, it enables specific advanced accessibility behavior. For example, Required controls will include an asterisk in the label. frevvo, when configured for accessibility, works well with Wet Signature controls in forms/workflows.
https://docs.frevvo.com/d/display/frevvo91/Accessibility+and+Live+Forms
2021-10-15T23:57:14
CC-MAIN-2021-43
1634323583087.95
[]
docs.frevvo.com
Monitor AppSec Data with Dashboards. For many chart types, you can click into each data point to drill down to related data. Dashboard Permissions: You can only view and edit dashboard data associated with apps you have access to. - All Dashboards list - Access the list of all the dashboards created by users in your organization by clicking the All Dashboards button. - Dashboard card library - A dashboard is made of data visualization cards, which are all contained in this library. To better visualize associated data, you can interact directly with individual cards and drill down into the data. Drill-down data is available in the following chart types: - Donut card - Bar chart - Line graph - Heat map.
https://docs.rapid7.com/insightappsec/monitor-appsec-data-with-dashboards/
2021-10-16T00:41:10
CC-MAIN-2021-43
1634323583087.95
[array(['/api/docs/file/product-documentation__master/insightappsec/images/dashboard-menu-item.png', 'Dashboard in the navigation menu'], dtype=object) ]
docs.rapid7.com
The Conditions pane is displayed on the right side of the SQL Sentry client by default, and its scope is determined by items selected in the Navigator pane. The Conditions pane is used to configure actions in response to conditions being met as part of the SQL Sentry Alerting and Response System. Actions can be defined in response to certain conditions being met within your environment. Choose from a variety of actions, depending on which condition is being responded to. All conditions work on the principle of inheritance. This means that if you configure an action in response to a condition being met at the global level (All Targets), it automatically passes down to all applicable objects beneath it. Define global conditions for the most common issues across your environment once and have those passed down to every monitored server automatically. Further refine conditions at each level as needed. For a visual representation of how inheritance works within SQL Sentry, see the alerting and response system hierarchy diagram. Each condition that you configure in your environment will have an associated behavior. The behavior controls how the condition is carried out relative to any inherited conditions. There are three condition behaviors available: - Override inherited actions - Combine with inherited actions - Disabled For a complete explanation and example usage scenarios for each behavior, see the Action Behavior topic. View Menu The Conditions pane is displayed on the right side of the screen by default. If you don't see the Conditions pane, use the View menu (View > Conditions). Each condition type has an associated actions tab where actions can be configured in response to conditions being met. The following are types of conditions: Snoozing a Condition/Action Conditions/Actions can be snoozed or suppressed for a period of time by right-clicking on the condition/action in the Conditions pane and selecting one of the following options: Additionally, select whether the snooze affects just the hierarchical object that you selected (site, group, target, or instance) or if it affects all objects. Note: Conditions that have been snoozed have a gray background in the Conditions pane. Once a condition/action is snoozed, right-click on the snoozed object and select Unsnooze to unsnooze the object. Alternatively, unsnooze all snoozed conditions/actions through Tools > Unsnooze All.
https://docs.sentryone.com/help/conditions-pane
2021-10-16T00:29:36
CC-MAIN-2021-43
1634323583087.95
[array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5fc0fa488e121cd1221cd8c1/n/sentryone-conditions-pane-2020.png', 'SQL Sentry Conditions Pane Version 2021.12 SQL Sentry Conditions Pane'], dtype=object) array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5fc0fa1f8e121c23201cdb9a/n/sentryone-conditions-pane-snooze-alerts-context-menu-options-2020.png', 'SQL Sentry Conditions Pane Snooze Alerts Context Menu Options Version 2021.12 Conditions Pane Snooze Alerts Context Menu Options'], dtype=object) array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5fc0fa028e121c7e1c1ce302/n/sentryone-conditions-pane-unsnooze-object-example-2020.png', 'SQL Sentry Conditions Pane Unsooze Alerts Version 2021.12 Conditions Pane Unsooze Alerts'], dtype=object) array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/610d87dd3da5d8ff057b23f1/n/sql-sentry-tools-unsnooze-all-202112.png', 'SQL Sentry Tool Unsooze All menu option Version 2021.12 SQL Sentry Tools Unsnooze All menu option'], dtype=object) ]
docs.sentryone.com
Editor Overview Thank you for choosing the Telerik Editor for ASP.NET AJAX. The Editor is part of Telerik UI for ASP.NET AJAX, a professional grade UI library with 120+ components for building modern and feature-rich applications. To try it out sign up for a free 30-day trial. Key features - To create a basic RadEditor: - ensure you have a script manager on the page (use the <asp:ScriptManager> tag to declare one) - use the <telerik:RadEditor> tag to declare the editor and to set some of its properties Get started with the editor declaration and enabling some of its features:
<telerik:RadEditor runat="server" ID="RadEditor1">
<Content>
<b>Setting inline properties</b>
</Content>
</telerik:RadEditor>
https://docs.telerik.com/devtools/aspnet-ajax/controls/editor/overview
2021-10-16T00:05:00
CC-MAIN-2021-43
1634323583087.95
[array(['images/radeditorpreview.png', None], dtype=object)]
docs.telerik.com
25 Tips for Managing your team in Appcues Our Best Practices for Managing your team in Appcues guide is the best place to start and set a foundation for a successful Appcues launch within your organization and with cross-functional teams. If you’re looking for even more inspiration, we have 25 ideas and tips to help you get started, stay organized, and create the best Appcues experience for you (and your Users). - Have a plan: Internally, have a plan for deployment, updating, governance, and ownership. Use the Appcues Team DocTemplate to foster the right discussions and keep you and your team focused and organized. - Kickoff: Before kicking off Appcues, ensure you are including all the right parties and representatives from your organization to secure early adoption and understanding. - Leverage Permissions: Give all your teams access to the Appcues account and flow creation, or monitor by Editor, Publisher, and Administrator roles to balance access, flexibility, and oversight all at once. - Embrace Tags:. - Establish one "Appcues Lead" per team. This person should generally be responsible for all campaigns and flows for the team, managing their teammates, and ensuring their content plays well with others. In many organizations, this individual is part of the Product, CX, or Marketing team but choose the team and individual who best aligns with your overall goals. - Create a meeting cadence. Meeting weekly or biweekly to review new requests and interactions, and assessing the performance of your current flows prevents a backlog and ensures your User experience is on point. - Customer chair. Have a conflict about what to show? Use the ‘customer chair’ method as a way to invite that perspective to your planning and decision making. - Define your goals. This seems obvious but many fail to define the end goals they hope to achieve. For some, this might be clarifying the User Onboarding process, deflecting support tickets, or providing usage expansion tracks. Whatever you ultimately want to see as the outcome, define that and form a measurable metric around it. - North Star metrics. Dovetailing on the prior tip, using North Star metrics can help you and your team prioritize the singular metric that supports your business goals. - Define an approval process anchored in goals. Once you have defined your goals and measurable metrics, it’s a great idea to establish an approval process that aligns to those goals. For example, you may create an internal request form for new or updated Appcues flows and on this form, ask the requestor to define how this request will support and align to company goals. - Identify your tracking mechanism. There are many ways to track your flows and flow requests but it’s helpful to determine how your internal teams will stay organized early on. Many customers opt for Spreadsheets, Slack/Chat channels or project tracking systems like Asana, Clubhouse, or Jira. To see how one of our customers tracks their projects, check out this customer hangout session! - Utilize flow diagnostics. The Appcues Flow Doctor can help you identify why a flow isn't showing to an intended user(s) and provide options for fixing. When multiple teams are involved, it's important to have a clear picture on who you are displaying a flow to and why. - Train everyone! You may think only a handful of individuals within your organization need to use Appcues, however, we often see several teams and departments wanting to take advantage of the Appcues tool once it's implemented. 
To get ahead of this, we recommend training any and all teams and individuals who may want to take advantage of Appcues to ensure you are all starting off on the same foot. - Set design guidelines. Having an internally agreed upon design outline for things like fonts, colors, styling, etc. will help all team members stay on-brand and maintain a unified design experience for all end-users. - Establish copy guidelines. Similar to having a design guideline, you may also consider creating a brief, internal guideline on copy, phrasing, and terminology to ensure all flow creators are using clear and consistent phrasing. - Know your audience. Are your users less-than-tech savvy, on tight timelines, or need consultative assistance? Provide added layers of assistance or an option to contact someone on your team to get to the core of what your audience needs and wants. Understanding who your Flow is targeting and what they want to get out of their experience will help each time you create or update a flow. - Test drive. Appcues provides tools to review and share your flows with others on your team before they go live to an end-user. Take advantage of Preview Mode, Test Mode, Publishing to an internal environment, and/or Creating internal segments to get the full end-user experience before you go live. - Collect feedback on flows. Once flows are live, be sure to collect internal and external feedback on how users are interacting with them. You can also use Appcues' Flow Analytics page, to give you better insights into performance, engagement and success of your flows. - Quarterly strategy reviews. Outside of regular touch points on your flows, you may also find it helpful to revisit your product led strategy quarterly either after or alongside your regular business quarterly planning to see how and where Appcues can help drive your goals and outcomes. - Have an internal communication channel. Whether your organization uses Slack, Teams, or another chat platform, make communicating fast and easy by creating a dedicated Appcues channel for discussing, posting updates, and sharing details about your flows. - Use Launchpad to stay organized. Launchpad allows you to provide in-product tutorials and announcements in a compartmentalized area to avoid intruding on the user's experience. It also allows your users to replay a flow they've seen previously. This can be helpful for providing information from cross-functional teams with many messages without overloading your Users. - Snooze a flow. If you find that you need several flows to satisfy all of your teams and initiatives, you may also consider using ‘snooze a flow’ through the use of custom buttons and tracked events. This allows your end-users to come back to a flow at a later time if needed or if they want to explore other actions first. - Connect with professional services to consult on guidelines. Have a great idea but lack the time, knowledge, or resources to get this out the door? No sweat! Our in-house Professional Services team can consult or help you build your ideal user experience in Appcues! - Create a (required) naming convention for flows. Being able to easily read a flow's title and know what it is can increase your team's efficiency and keep your content organized. Consider creating a naming convention that all users will adopt to keep consistency across teams, such as [SURVEY]Usability task survey: Understand Analytics (Feb18) (Tristan). - Get some inspiration and Really Good UX. 
Looking for ideas or curious about how other organizations have created cohesive flows? Check out the RGUX space to get creative and aligned as a team!
https://docs.appcues.com/article/729-25-tips-for-managing-your-team-in-appcues
2021-05-06T01:17:12
CC-MAIN-2021-21
1620243988724.75
[]
docs.appcues.com
ListEditor.ModelApplying Event Occurs when customizations from the Application Model are applied to the List Editor’s control. Namespace: DevExpress.ExpressApp.Editors Assembly: DevExpress.ExpressApp.v19.1.dll Declaration public event EventHandler<CancelEventArgs> ModelApplying Public Event ModelApplying As EventHandler(Of CancelEventArgs) Event Data The ModelApplying event's data class is CancelEventArgs. The following properties provide information specific to this event: Remarks Handle this event to prohibit applying customizations from the Application Model to the List Editor’s control. For this purpose, set the event handler’s CancelEventArgs.Cancel parameter to true.
https://docs.devexpress.com/eXpressAppFramework/DevExpress.ExpressApp.Editors.ListEditor.ModelApplying?v=19.1
2021-05-06T01:44:02
CC-MAIN-2021-21
1620243988724.75
[]
docs.devexpress.com
LibreOffice » tools Predates sal - string functions, url manipulation, stream stuff, polygons, etc. Exact history is lost before Sept. 18th, 2000, but old source code comments show that part of the tools library dates back to at least April 1991. This directory will be removed in the near future (see tdf#39445 or tdf#63154).
https://docs.libreoffice.org/tools.html
2021-05-06T01:44:54
CC-MAIN-2021-21
1620243988724.75
[]
docs.libreoffice.org
Documentation From Genesys Documentation Genesys Technical Documentation Home to documentation and videos for Genesys products, solutions, and use cases Genesys Use Cases Use Cases are the way we visualize great experiences, discover business opportunities, and design solutions. Multicloud Deliver competitively superior customer experiences and digital transformation at any scale. Shared Find information for Genesys products, features, and functionality that span the platforms.
https://all.docs.genesys.com/index.php?title=Documentation&veaction=edit&section=1
2021-05-06T01:03:13
CC-MAIN-2021-21
1620243988724.75
[]
all.docs.genesys.com
SparkTime (community library) Summary A Spark Library for NTP time Example Build Testing Device OS Version: This table is generated from an automated build. Success only indicates that the code compiled successfully. Library Read Me This content is provided by the library maintainer and has not been validated or approved. SparkTime NTP Time Client Code for the Spark Core
https://docs.particle.io/cards/libraries/s/SparkTime/
2021-05-06T00:42:44
CC-MAIN-2021-21
1620243988724.75
[]
docs.particle.io
Makes a text field. This works just like GUI.TextField, but correctly responds to select all, copy, paste etc. in the editor, and it can have an optional label in front. Text field in an Editor Window.

using UnityEngine;
using UnityEditor;

// Changes the name of the selected Objects to the one typed in the text field
class EditorGUITextField : EditorWindow
{
    string objNames = "";

    [MenuItem("Examples/Bulk Name change")]
    static void Init()
    {
        var window = GetWindow<EditorGUITextField>();
        window.Show();
    }

    void OnGUI()
    {
        EditorGUI.DropShadowLabel(new Rect(0, 0, position.width, 20), "Select the objects to rename.");
        objNames = EditorGUI.TextField(new Rect(10, 25, position.width - 20, 20), "New Names:", objNames);
        if (Selection.activeTransform)
        {
            if (GUI.Button(new Rect(0, 50, position.width, 30), "Bulk rename!"))
            {
                foreach (Transform t in Selection.transforms)
                {
                    t.name = objNames;
                }
            }
        }
    }

    void OnInspectorUpdate()
    {
        Repaint();
    }
}
https://docs.unity3d.com/2020.1/Documentation/ScriptReference/EditorGUI.TextField.html
2021-05-06T00:53:50
CC-MAIN-2021-21
1620243988724.75
[]
docs.unity3d.com
(Emmett Till and Carolyn Bryant) At a time when the “Black Lives Matter” movement continues to gain momentum it is interesting to contemplate what the turning point was for the Civil Rights Movement. In his new book THE BLOOD OF EMMETT TILL, Timothy B. Tyson argues that the lynching of Emmett Till on August 28, 1955, by two white men in rural Mississippi was the tipping point. It appears their actions were in part motivated by the 1954 Supreme Court’s Brown v. the Board of Education of Topeka, KA decision outlawing “separate but equal,” a landmark case that lit a fire under white supremacists in the south. Shortly thereafter, Rosa Parks refused to move to the back of the bus in Birmingham, AL and events began to snowball. Tyson reexamines the murder of Till and explores what really happened that night. The author includes new material gained from his 2007 interview of Carolyn Bryant who was supposedly the victim of some sort of offensive behavior that violated Mississippi’s unwritten code that existed between whites and blacks. It seems that Bryant’s memory of what transpired after fifty years has changed, which makes it even more disconcerting in exploring the plight of Emmett Till. In her interview Bryant changed her story from the testimony given in the trial of her husband Roy Bryant, and brother-in-law, J.W. “Big” Miam who were accused of murdering Till. Her testimony “that Till grabbed her around the waist and uttered obscenities” was not true. Till did not grab her, but the all-white jury acquitted both men of the murder. Till a fourteen year old boy and his cousin, Wheeler Parker who lived in Chicago’s south side were visiting their uncle Reverend Moses Wright who was a sharecropper on the G.C. Plantation in the Mississippi Delta. Both boys were not ignorant of the mores of white-black relations in Mississippi, but what is key to the story is what actually happened when Till entered the Milam country store and interacted with Mrs. Bryant. That night Till was seized from Wright’s house by Roy Bryant and J.W. Milam and was severely beaten, shot in the head, and dumped in a river twelve miles from the murder scene. Tyson provides detailed accounts of August 28, 1955, the return of Till’s body to Chicago, the arrest and trial of the two men, the effect on American society, the burgeoning Civil Rights Movement, and the world wide reaction to the verdict which played into the hands of the Soviet Union in the heart of the Cold War. For Tyson, the key to the reaction to Till’s murder was the behavior and strategy pursued by his mother, Mamie Bradley. Once she learned of her son’s kidnapping she decided that “she would not go quietly” and began calling Chicago newspapers as she realized there were no officials in Mississippi she could appeal to. Sheriff Clarence Strider of Tallahatchie County was put in charge of the investigation despite the fact the murder occurred outside his jurisdiction. For Strider and other county officials the goal was to bury Till as soon as possible and let the situation blow over. Bradley refused to cooperate and demanded that her son be returned to Chicago for burial. Once that occurred Bradley’s only weapon to make sure her son’s death had meaning was his body. During the viewing and funeral she made sure that the casket was open so the public could learn the truth of how her son was tortured and then murdered, and learn what Mississippi “justice” was all about. 
Because of the new medium of television and newspaper photographs of the mutilated body the entire country was now a witness to the results of the lynching. (J.W. “Big” Milam, Roy Bryant and their wives) Tyson does an excellent job bringing the reader inside the courtroom and explaining why the two murderers were acquitted. He digs deep into Mississippi’s historical intolerance of African-Americans and how they should behave and be employed. Tyson reviews the plight of Black America through World War II and touches on the hope that returning black veterans who fought for democracy would be treated differently after the war. This did not occur nationwide, particularly in Mississippi. However, as the Civil Rights Movement shifted its strategy toward enforcing its voting rights and employing the economic weapon, Mississippians grew scared and became even more violent towards African-Americans, and with the Brown decision men like Bryant and Milam were exorcised to the point of lynching Till. (The mutilation of Emmett Till) Tyson presents a concise history of intimidation, violence, and murder that African-Americans confronted each day in Mississippi. As the NAACP grew and demands for voting rights and desegregation expanded the powers that be in Mississippi grew worried. They relied on people like Thomas Brady, a Mississippi Circuit Court Judge and occupant of a seat on the state’s Supreme Court to create the “Citizens Council Movement” to espouse the propaganda of race mixing and the threat to southern womanhood as the gospel of the white south. In fact, the defense in the Till trial leaned on the threat of southern womanhood in its argument that gained the acquittal. The fact that the trial itself took place only twenty days after the murder in of itself reflects the lack of proper investigation. Further, the threats and coercion to prevent witnesses from testifying is testimony to the lack of justice. In fact, a few who did testify for the prosecution, uprooted their lives in Mississippi and moved to Chicago for fear of retribution. (Carolyn Bryant, then and now) The person in this drama who should feel ashamed of themselves is Carolyn Bryant whose lies contributed to the acquittal of Till’s murderers. It is a shame that there is a statute of limitations for perjury because she was certainly guilty. Her show of “conscience,” for this reader is fifty years too late. Reading this book can only make one angry about America’s past and one would hope that race would no longer be a factor in our society. But in fact it is. We witnessed race baiting throughout the last presidential campaign and as a society we have not come to terms with the idea of “equal justice under the law.” Tyson’s book should be read in the context of history, but also as a vehicle to contemporary understanding. As Tyson aptly points out, the death of Emmett Till “was caused by the nature and history of America itself and by a social system that has changed over the decades, but not as much as we pretend.” (208) One wonders if the shooting of Michael Brown in Ferguson, MO will be as transformative an episode as the death of Emmett Till. (Emmett Till and Carolyn Bryant)
https://docs-books.com/2017/06/05/the-blood-of-emmett-till-by-timothy-b-tyson/
2021-05-06T00:58:06
CC-MAIN-2021-21
1620243988724.75
[]
docs-books.com
HTTP API (Developer) The Appcues API provides an HTTP endpoint for recording user events and profile information to the Appcues platform. These events and profile updates (or "user activity"), are used for targeting the right content to the right end users. Base URL The root URL for the Appcues API is Making Requests The user activity endpoint accepts POST requests, JSON-formatted, which must contain a Content-Type: application/json header. Submission URL Submissions must include: - your Appcues account ID (from the account page) - the end user's ID (the first parameter to your Appcues.identify()call). - A `Content-Type: application/json` header. Note: If the end user ID submitted does not exist in Appcues, a new user will be created with that ID. When complete, the URL should look like this:{account id}/users/{user id}/activity Request body The request body must be a JSON-formatted object, containing one or both of the following parameters: eventsArray of event objects, defined below. profile_updateObject containing arbitrary key-value data to update in the user's stored profile. request_id(Optional) Arbitrary string which can be used to identify this request. The request_id will be returned in the response. Sending events Each event you send is an object included in an events array. Each event object should contain the following parameters: nameEvent name. This is the main mechanism for grouping and targeting on specific events. Max 127 characters. timestampTime at which the event occurred, in either integer format (as Unix time) or string format (as ISO 8601 compliant date, e.g. 2016-01-13T13:05:22.000Z). attributesObject containing arbitrary key-value data describing details of the event. Optional "events": [ { "name": "clicked widget", "timestamp": 1452854179, "attributes": { "widget_color": "blue", "ab_group": "a" } }, { "name": "purchased a plan", "timestamp": 1452855000, "attributes": { "plan_tier": "Enterprise", } } ] Note on reserved event names: Appcues reserves events with names beginning with appcues: for internal use. Sending user profile properties Each profile update you send is a key value pair in the profile_update object, with no required parameters. "profile_update": { "team": "Product", "tier": "Beginner" } Example request Here is an example of a valid API request, including both events and profile updates, with the account ID 123 and user ID 456: curl \ -X POST -H "Content-Type: application/json" -d ' { "request_id": "abc123", "events": [ { "name": "clicked widget", "timestamp": 1452854179, "attributes": { "widget_color": "blue", "ab_group": "a" } } ], "profile_update": { "favorite team": "Red Sox", "favorite boat type": "Duck" } } ' Response format A successful user activity submission will result in a response with status code 200, and a response body including: request_idRequest ID for the submission (if provided). contentsAn array of content for which the user currently qualifies. {"request_id":"abc123","contents":[]} Error Handling The response codes that will be returned for each submission: - 202: Successful submission - 400: Update failed, likely malformed JSON - 502: Activity ingestion failed, please retry We recommend building retry logic into your data uploader to handle potential 502 responses.
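For teams not using curl, the same request can be issued from a short script. Here is a sketch in Python using the requests library; the base URL is an assumption (it is not spelled out above, so confirm it against the Base URL section), and the account and user IDs are placeholders:

import requests

ACCOUNT_ID = "123"                          # placeholder
USER_ID = "456"                             # placeholder
BASE_URL = "https://api.appcues.com/v1"     # assumed base URL -- verify before use

url = f"{BASE_URL}/accounts/{ACCOUNT_ID}/users/{USER_ID}/activity"
payload = {
    "request_id": "abc123",
    "events": [
        {
            "name": "clicked widget",
            "timestamp": 1452854179,
            "attributes": {"widget_color": "blue", "ab_group": "a"},
        }
    ],
    "profile_update": {"favorite team": "Red Sox"},
}

# json= serializes the body and sets the required Content-Type: application/json header.
response = requests.post(url, json=payload)
print(response.status_code, response.text)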
https://docs.appcues.com/article/254-http-api
2021-05-06T01:41:29
CC-MAIN-2021-21
1620243988724.75
[]
docs.appcues.com
Connecting with your customers just got a lot easier! With the My Biz Page you can create your very own business page and add your official business contact number, making it more convenient for your customers to reach you. Just follow the steps below to create and market your business through WhatsApp. - Login to your Vepaar account. - Click on the My Biz tab under the Dashboard. - Then fill in the information and create your own page for your business! The My Biz Page becomes the official page for your business, where you can place all the helpful information your customers need. You will also get a link, be able to upload a logo, allow your customers to place orders, and much more! Check out this awesome business page example! Excited? Create your Biz Page now!
https://docs.vepaar.com/support/solutions/articles/82000277936-my-biz-page
2021-05-06T00:40:02
CC-MAIN-2021-21
1620243988724.75
[array(['https://s3-ap-south-1.amazonaws.com/ind-cdn.freshdesk.com/data/helpdesk/attachments/production/82001698166/original/LbwrpyV_uanlh2EcuPhQplp9FHTUgiN6Rg.png?1609592231', None], dtype=object) array(['https://s3-ap-south-1.amazonaws.com/ind-cdn.freshdesk.com/data/helpdesk/attachments/production/82001698217/original/nWSvGMoKUyfMzg3r2FivZDRGedSZVax46A.png?1609592395', None], dtype=object) ]
docs.vepaar.com
RHMesh (community library) Summary Example Build Testing Device OS Version: This table is generated from an automated build. Success only indicates that the code compiled successfully. Library Read Me This content is provided by the library maintainer and has not been validated or approved. RHMesh A Particle library for RHMesh RHMesh library to your project and follow this simple example: #include "RHMesh.h" RHMesh rHMesh; void setup() { rHMesh.begin(); } void loop() { rHMesh.process(); } See the examples folder for more details. Documentation TODO: Describe RHMesh RHMesh_myname to add the library to a project on your machine or add the RHMesh
https://docs.particle.io/cards/libraries/r/RHMesh/
2021-05-06T00:41:56
CC-MAIN-2021-21
1620243988724.75
[]
docs.particle.io
HSVToRGB node¶ This documentation is for version 1.0 of HSVToRGB (net.sf.openfx.HSVToRGB). Description¶ Convert from HSV color model (hue, saturation, value, as defined by A. R. Smith in 1978) to linear RGB. H is in degrees, S and V are in the same units as RGB. No gamma correction is applied to RGB after conversion.
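For reference, the conversion itself can be sketched with Python's standard library; this is only an illustration of the math, not the node's actual implementation. colorsys expects hue in the 0-1 range, so the node's degree value is divided by 360:

import colorsys

h_degrees, s, v = 210.0, 0.5, 0.8                      # example HSV input, H in degrees
r, g, b = colorsys.hsv_to_rgb(h_degrees / 360.0, s, v)
print(r, g, b)   # RGB on the same 0-1 scale as S and V; no gamma correction applied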
https://natron.readthedocs.io/en/rb-2.3/plugins/net.sf.openfx.HSVToRGB.html
2021-05-06T01:22:57
CC-MAIN-2021-21
1620243988724.75
[]
natron.readthedocs.io
What’s New In Python 3.0¶ This. Common Stumbling Blocks¶ This section lists those few changes that are most likely to trip you up if you’re used to Python 2.5. Print Is A Function¶ The print() function, with keyword arguments to replace most of the special syntax of the old: - The print()function doesn’t support the “softspace” feature of the old print "A\n", "B"would write "A\nB\n"; but in Python 3.0, print("A\n", "B")writes "A\n B\n". - Initially, you’ll be finding yourself typing the old print xa lot in interactive mode. Time to retrain your fingers to type print(x)instead! - When using the 2to3source-to-source conversion tool, all print()function calls, so this is mostly a non-issue for larger projects. Views And Iterators Instead Of Lists¶ Some well-known APIs no longer return lists: dictmethodsloop . Ordering Comparisons¶ Python 3.0 has simplified the rules for ordering comparisons: - The ordering comparison operators ( <, <=, >=, >) raise a TypeError exception when the operands don’t have a meaningful natural ordering. Thus, expressions like 1 < '', 0 > Noneor len <= lenare no longer valid, and e.g. None < Noneraises TypeErrorinstead of returning False. A corollary is that sorting a heterogeneous list no longer makes sense – all the elements must be comparable to each other. Note that this does not apply to the ==and !=operators: objects of different incomparable types always compare unequal to each other. builtin.sorted()and list.sort()no longer accept the cmp argument providing a comparison function. Use the key argument instead. N.B. the key and reverse arguments are now “keyword-only”. - The cmp()function should be treated as gone, and the __cmp__()special method is no longer supported. Use __lt__()for sorting, __eq__()with __hash__(), and other rich comparisons as needed. (If you really need the cmp()functionality, you could use the expression (a > b) - (a < b)as the equivalent for cmp(a, b).) Integers¶ - PEP 237: Essentially, longrenamed to int. That is, there is only one built-in integral type, named int; but it behaves mostly like the old longtype. - PEP 238: An expression like 1/2returns a float. Use 1//2to get the truncating behavior. (The latter syntax has existed for years, at least since Python 2.2.) - The sys.maxintconstant was removed, since there is no longer a limit to the value of integers. However, sys.maxsizecan be used as an integer larger than any practical list or string index. It conforms to the implementation’s “natural” integer size and is typically the same as sys.maxintin previous releases on the same platform (assuming the same build options). - The repr()of a long integer doesn’t include the trailing Lanymore, so code that unconditionally strips that character will chop off the last digit instead. (Use str()instead.) - Octal literals are no longer of the form 0720; use 0o720instead. Text Vs. Data Instead Of Unicode Vs. 8-bit¶ Everything you thought you knew about binary data and Unicode has changed. - Python. To be prepared in Python 2.x, start using unicodefor all unencoded text, and strfor binary or encoded data only. Then the 2to3tool will do most of the work for you. - You can no longer use u"..."literals for Unicode text. However, you must use b"..."literals for binary data. - As the strand bytestypes cannot be mixed, you must always explicitly convert between them. Use str.encode()to go from strto bytes, and bytes.decode()to go from bytesto str. You can also use bytes(s, encoding=...)and str(b, encoding=...), respectively. 
- Like str, the bytestype is immutable. There is a separate mutable type to hold buffered binary data, bytearray. Nearly all APIs that accept bytesalso accept bytearray. The mutable API is based on collections.MutableSequence. - All backslashes in raw string literals are interpreted literally. This means that '\U'and '\u'escapes in raw strings are not treated specially. For example, r'\u20ac'is a string of 6 characters in Python 3.0, whereas in 2.6, ur'\u20ac'was the single “euro” character. (Of course, this change only affects raw string literals; the euro character is '\u20ac'in Python 3.0.) - The built-in basestringabstract type was removed. Use strinstead. The strand bytestypes don’t have functionality enough in common to warrant a shared base class. The 2to3tool (see below) replaces every occurrence of basestringwith str. - Files opened as text files (still the default mode for open()) always use an encoding to map between strings (in memory) and bytes (on disk). Binary files (opened with a bin the mode argument) always use bytes in memory. This means that if a file is opened using an incorrect mode or encoding, I/O will likely fail loudly, instead of silently producing incorrect data. It also means that even Unix users will have to specify the correct mode (text or binary) when opening a file. There is a platform-dependent default encoding, which on Unixy platforms can be set with the LANGenvironment variable (and sometimes also with some other platform-specific locale-related environment variables). In many cases, but not all, the system default is UTF-8; you should never count on this default. Any application reading or writing more than pure ASCII text should probably have a way to override the encoding. There is no longer any need for using the encoding-aware streams in the codecsmodule. -. - Filenames are passed to and returned from APIs as (Unicode) strings. This can present platform-specific problems because on some platforms filenames are arbitrary byte strings. (On the other hand, on Windows filenames are natively stored as Unicode.) As a work-around, most APIs (e.g. open()and many functions in the osmodule) that take filenames accept bytesobjects as well as strings, and a few APIs have a way to ask for a bytesreturn value. Thus, os.listdir()returns a list of bytesinstances if the argument is a bytesinstance, and os.getcwdb()returns the current working directory as a bytesinstance. Note that when os.listdir()returns a list of strings, filenames that cannot be decoded properly are omitted rather than raising UnicodeError. - Some system APIs like os.environand sys.argvcan also present problems when the bytes made available by the system is not interpretable using the default encoding. Setting the LANGvariable and rerunning the program is probably the best approach. - PEP 3138: The repr()of a string no longer escapes non-ASCII characters. It still escapes control characters and code points with non-printable status in the Unicode standard, however. - PEP 3120: The default source encoding is now UTF-8. - PEP 3131: Non-ASCII letters are now allowed in identifiers. (However, the standard library remains ASCII-only with the exception of contributor names in comments.) - The StringIOand cStringIOmodules are gone. Instead, import the iomodule and use io.StringIOor io.BytesIOfor text and data respectively. - See also the Unicode HOWTO, which was updated for Python 3.0. Overview Of Syntax Changes¶ This section gives a brief overview of every syntactic change in Python 3.0. 
New Syntax¶statement. Using nonlocal xyou can now assign directly to a variable in an outer (but non-global) scope. nonlocalis a new reserved word. PEP 3132: Extended Iterable Unpacking. You can now write things like a, b, *rest = some_sequence. And even *rest, a = stuff. The restobject 274or B, and there is a new corresponding built-in function, bytes(). Changed Syntax¶ PEP 3109 and PEP 3134: new raisestatement syntax: raise [expr [from expr]]. See below. asand withare now reserved words. (Since 2.6, actually.) True, False, and Noneare reserved words. (2.6 partially enforced the restrictions on Nonealready.) Change from exceptexc, var to exceptexc asvar..) Removed Syntax¶ - PEP 3113: Tuple parameter unpacking removed. You can no longer write def foo(a, (b, c)): .... Use def foo(a, b_c): b, c = b_cinstead. - Removed backticks (use repr()instead). - Removed <>(use !=instead). - Removed keyword: exec()is no longer a keyword; it remains as a function. (Fortunately the function syntax was also accepted in 2.x.) Also note that exec()no longer takes a stream argument; instead of exec(f)you can use exec(f.read()). - Integer literals no longer support a trailing lor L. - String literals no longer support a leading uor U. - The frommodule import *syntax is only allowed at the module level, no longer inside functions. - The only acceptable syntax for relative imports is from .[module] import name. All importforms not starting with .are interpreted as absolute imports. (PEP 328) - Classic classes are gone. Changes Already Present In Python 2.6¶. - PEP 343: The ‘with’ statement. The withstatement is now a standard feature and no longer needs to be imported from the __future__. Also check out Writing Context Managers and The contextlib module. - PEP 366: Explicit Relative Imports From a Main Module. This enhances the usefulness of the -moption when the referenced module lives in a package. - PEP 370: Per-user site-packages Directory. - PEP 371: The multiprocessing Package. - PEP 3101: Advanced String Formatting. Note: the 2.6 description mentions the format()method for both 8-bit and Unicode strings. In 3.0, only the strtype (text strings with Unicode support) supports this method; the bytestype does not. The plan is to eventually make this the only API for string formatting, and to start deprecating the %operator in Python 3.1. - PEP 3105: print As a Function. This is now a standard feature and no longer needs to be imported from __future__. More details were given above. - PEP 3110: Exception-Handling Changes. The exceptexc asvar syntax is now standard and exceptexc, var is no longer supported. (Of course, the asvar part is still optional.) - PEP 3112: Byte Literals. The b"..."string literal notation (and its variants like b'...', b"""...""", and br"...") now produces a literal of type bytes. - PEP 3116: New I/O Library. The iomodule is now the standard way of doing file I/O. The built-in open()function is now an alias for io.open()and has additional keyword arguments encoding, errors, newline and closefd. Also note that an invalid mode argument now raises ValueError, not IOError. The binary file object underlying a text file object can be accessed as f.buffer(but beware that the text object maintains a buffer of itself in order to speed up the encoding and decoding operations). - PEP 3118: Revised Buffer Protocol. The old builtin buffer()is now really gone; the new builtin memoryview()provides (mostly) similar functionality. - PEP 3119: Abstract Base Classes. 
The abcmodule and the ABCs defined in the collectionsmodule plays a somewhat more prominent role in the language now, and built-in collection types like dictand listconform to the collections.MutableMappingand collections.MutableSequenceABCs, respectively. - PEP 3127: Integer Literal Support and Syntax. As mentioned above, the new octal literal notation is the only one supported, and binary literals have been added. - PEP 3129: Class Decorators. - PEP 3141: A Type Hierarchy for Numbers. The numbersmodule is another new use of ABCs, defining Python’s “numeric tower”. Also note the new fractionsmodule which implements numbers.Rational. Library Changes¶ 4. Others were removed as a result of the removal of support for various platforms such as Irix, BeOS and Mac OS 9 (see PEP 11). Some modules were also selected for removal in Python 3.0 due to lack of use or because a better replacement exists. See PEP 3108 for an exhaustive list. The bsddb3package 8, or for various other reasons. Here’s the list: A common pattern in Python 2.x is to have one version of a module implemented in pure Python, with an optional accelerated version implemented as a C extension; for example, picklepair received this treatment. The profilemodule is on the list for 3.1. The StringIOmodule has been turned into a class in the iomodule. Some related modules have been grouped into packages, and usually the submodule names have been simplified. The resulting new packages are: dbm( anydbm, dbhash, dbm, dumbdbm, gdbm, whichdb). html( HTMLParser, htmlentitydefs). http( httplib, BaseHTTPServer, CGIHTTPServer, SimpleHTTPServer, Cookie, cookielib). tkinter(all Tkinter-related modules except turtle). The target audience of turtledoesn’t really care about tkinter. Also note that as of Python 2.6, the functionality of turtlehas been greatly enhanced. urllib( urllib, urllib2, urlparse, robotparse). xmlrpc( xmlrpclib, DocXMLRPCServer, SimpleXMLRPCServer). Some other changes to standard library modules, not covered by PEP 3108: - Killed sets. Use the built-in set()class. - Cleanup of the sysmodule: removed sys.exitfunc(), sys.exc_clear(), sys.exc_type, sys.exc_value, sys.exc_traceback. (Note that sys.last_typeetc. remain.) - Cleanup of the array.arraytype: the read()and write()methods are gone; use fromfile()and tofile()instead. Also, the 'c'typecode for array is gone – use either 'b'for bytes or 'u'for Unicode characters. - Cleanup of the operatormodule: removed sequenceIncludes()and isCallable(). - Cleanup of the threadmodule: acquire_lock()and release_lock()are gone; use acquire()and release()instead. - Cleanup of the randommodule: removed the jumpahead()API. - The newmodule is gone. - The functions os.tmpnam(), os.tempnam()and os.tmpfile()have been removed in favor of the tempfilemodule. - The tokenizemodule has been changed to work with bytes. The main entry point is now tokenize.tokenize(), instead of generate_tokens. string.lettersand its friends ( string.lowercaseand string.uppercase) are gone. Use string.ascii_lettersetc. instead. (The reason for the removal is that string.lettersand friends had locale-specific behavior, which is a bad idea for such attractively-named global “constants”.) - Renamed module __builtin__to builtins(removing the underscores, adding an ‘s’). The __builtins__variable found in most global namespaces is unchanged. To modify a builtin, you should use builtins, not __builtins__! 
PEP 3101: A New Approach To String Formatting¶ Changes To Exceptions¶ The APIs for raising and catching exception have been cleaned up and new powerful features added: PEP 352: All exceptions must be derived (directly or indirectly) from BaseException. This is the root of the exception hierarchy. This is not new as a recommendation, but the requirement to inherit from BaseExceptionis new. (Python 2.6 still allowed classic classes to be raised, and placed no restriction on what you can catch.) As a consequence, string exceptions are finally truly and utterly dead. Almost all exceptions should actually derive from Exception; BaseExceptionshould only be used as a base class for exceptions that should only be handled at the top level, such as SystemExitor KeyboardInterrupt. The recommended idiom for handling all exceptions except for this latter category is to use except Exception. StandardErrorwas removed. Exceptions no longer behave as sequences. Use the argsattributeinstead of except SomeException, variable. Moreover, the variable is explicitly deleted when the exceptblock is left. PEP 3134: Exception chaining. There are two cases: implicit chaining and explicit chaining. Implicit chaining happens when an exception is raised in an exceptor finallyhandleris now %1 is not a valid Win32 application. Strings now deal with non-English locales. Miscellaneous Other Changes¶ Operators And Special Methods¶ !=now returns the opposite of ==, unless ==returns NotImplemented. - The concept of “unbound methods” has been removed from the language. When referencing a method as a class attribute, you now get a plain function object. __getslice__(), __setslice__()and __delslice__()were killed. The syntax a[i:j]now translates to a.__getitem__(slice(i, j))(or __setitem__()or __delitem__(), when used as an assignment or deletion target, respectively). - PEP 3114: the standard next()method has been renamed to - The __oct__()and __hex__()special methods are removed – oct()and hex()use __index__()now to convert the argument to an integer. - Removed support for __members__and __methods__. - The function attributes named func_Xhave been renamed to use the __X__form, freeing up these names in the function attribute namespace for user-defined attributes. To wit, func_closure, func_code, func_defaults, func_dict, func_doc, func_globals, func_namewere renamed to __closure__, __code__, __defaults__, __dict__, __doc__, __globals__, __name__, respectively. __nonzero__()is now __bool__(). Builtins¶ - PEP 3135: New super(). You can now invoke super()without arguments and (assuming this is in a regular instance method defined inside a classstatement) the right class and instance will automatically be chosen. With arguments, the behavior of super()is unchanged. - PEP 3111: raw_input()was renamed to input(). That is, the new input()function reads a line from sys.stdinand returns it with the trailing newline stripped. It raises EOFErrorif the input is terminated prematurely. To get the old behavior of input(), use eval(input()). - A new built-in function next()was added to call the __next__()method on an object. - The round()function rounding strategy and return type have changed. Exact halfway cases are now rounded to the nearest even result instead of away from zero. (For example, round(2.5)now returns 2rather than 3.) round(x[, n])now delegates to x.__round__([n])instead of always returning a float. 
It generally returns an integer when called with a single argument and a value of the same type as xwhen called with two arguments. - Moved intern()to sys.intern(). - Removed: apply(). Instead of apply(f, args)use f(*args). - Removed callable(). Instead of callable(f)you can use isinstance(f, collections.Callable). The operator.isCallable()function is also gone. - Removed coerce(). This function no longer serves a purpose now that classic classes are gone. - Removed execfile(). Instead of execfile(fn)use exec(open(fn).read()). - Removed the filetype. Use open(). There are now several different kinds of streams that open can return in the iomodule. - Removed reduce(). Use functools.reduce()if you really need it; however, 99 percent of the time an explicit forloop is more readable. - Removed reload(). Use imp.reload(). - Removed. dict.has_key()– use the inoperator instead. Build and C API Changes¶ Due to time constraints, here is a very incomplete list of changes to the C API. - Support for several platforms was dropped, including but not limited to Mac OS 9, BeOS, RISCOS, Irix, and Tru64. - PEP 3118: New Buffer API. - PEP 3121: Extension Module Initialization & Finalization. - PEP 3123: Making PyObject_HEADconform to standard C. - No more C API support for restricted execution. PyNumber_Coerce(), PyNumber_CoerceEx(), PyMember_Get(), and PyMember_Set()C APIs are removed. - New C API PyImport_ImportModuleNoBlock(), works like PyImport_ImportModule()but won’t block on the import lock (returning an error instead). - Renamed the boolean conversion C-level slot and method: nb_nonzerois now nb_bool. - Removed METH_OLDARGSand WITH_CYCLE_GCfrom the C API. Performance¶! Porting To Python 3.0¶ For porting existing Python 2.5 or 2.6 source code to Python 3.0, the best strategy is the following: - (Prerequisite:) Start with excellent test coverage. - Port to Python 2.6. This should be no more work than the average port from Python 2.x to Python 2.(x+1). Make sure all your tests pass. - (Still using 2.6:) Turn on the -3command line switch. This enables warnings about features that will be removed (or change) in 3.0. Run your test suite again, and fix code that you get warnings about until there are no warnings left, and all your tests still pass. - Run the 2to3source-to-source translator over your source code tree. (See 2to3 - Automated Python 2 to 3 code translation for more on this tool.) Run the result of the translation under Python 3.0. Manually fix up any remaining issues, fixing problems until all tests pass again. It is not recommended to try to write source code that runs unchanged under both Python 2.6 and 3.0; you’d have to use a very contorted coding style, e.g. avoiding 2to3 translator again, rather than editing the 3.0 version of the source code. For porting C extensions to Python 3.0, please see Porting Extension Modules to Python 3.
http://docs.activestate.com/activepython/3.6/python/whatsnew/3.0.html
2018-08-14T11:28:03
CC-MAIN-2018-34
1534221209021.21
[]
docs.activestate.com
Release Notes for Buildbot v0.8.8¶ The following are the release notes for Buildbot v0.8.8 Buildbot v0.8.8 was released on August 22, 2013 Master¶ Features¶ - The MasterShellCommand step now correctly handles environment variables passed as list. - The master now poll the database for pending tasks when running buildbot in multi-master mode. - The algorithm to match build requests to slaves has been rewritten in pull request 615. The new algorithm automatically takes locks into account, and will not schedule a build only to have it wait on a lock. The algorithm also introduces a canStartBuild builder configuration option which can be used to prevent a build request being assigned to a slave. - buildbot stop and buildbot restart now accept --clean to stop or restart the master cleanly (allowing all running builds to complete first). - The IRC bot now supports clean shutdown and immediate shutdown by using the command 'shutdown'. To allow the command to function, you must provide allowShutdown=True. - CopyDirectory has been added. - BuildslaveChoiceParameter has been added to provide a way to explicitly choose a buildslave for a given build. - default.css now wraps preformatted text by default. - Slaves can now be paused through the web status. - The latent buildslave support is less buggy, thanks to pull request 646. - The treeStableTimer for AnyBranchScheduler now maintains separate timers for separate branches, codebases, projects, and repositories. - SVN has a new option preferLastChangedRev=True to use the last changed revision for got_revision - The build request DB connector method getBuildRequests can now filter by branch and repository. - A new SetProperty step has been added in buildbot.steps.master which can set a property directly without accessing the slave. - The new LogRenderable step logs Python objects, which can contain renderables, to the logfile. This is helpful for debugging property values during a build. - 'buildbot try' now has an additional --property option to set properties. Unlike the existing --properties option, this new option supports setting only a single property and therefore allows commas to be included in the property name and value. - The Git step has a new config option, which accepts a dict of git configuration options to pass to the low-level git commands. See Git for details. - In ShellCommand Shell properties are no longer hardcoded, but instead consult as default values via renderables. Renderable are used in favor of callables for syncAllBranches) Deprecations, Removals, and Non-Compatible Changes¶ - The split_file function for SVNPoller may now return a dictionary instead of a tuple. This allows it to add extra information about a change (such as project or repository). - The workdir build property has been renamed to builddir. This change accurately reflects its content; the term "workdir" means something different. workdir is currently still supported for backwards compatability, but will be removed eventually. - The Blocker step has been removed. - Several polling ChangeSources are now documented to take a pollInterval argument, step in buildbot.steps.shell has been renamed to SetPropertyFromCommand. - The EC2 and libvirt latent slaves have been moved to buildbot.buildslave.ec2 and buildbot.buildslave.libirt respectively. - Pre v0.8.7 versions of buildbot supported passing keyword arguments to buildbot.process.BuildFactory.addStep, but this was dropped. Support was added again, while still being deprecated, to ease transition. 
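As a sketch of the new canStartBuild builder option mentioned under Features above, a master.cfg fragment might look like the following. The callback's exact parameters should be checked against the 0.8.8 builder-configuration documentation — the argument names, builder name, and slave names here are assumptions:

from buildbot.config import BuilderConfig
from buildbot.process.factory import BuildFactory

factory = BuildFactory()   # build steps omitted for brevity

def can_start_build(builder, slavebuilder, buildrequest):
    # Illustrative policy: always allow. Return False (or a Deferred) to keep the
    # request queued instead of assigning it to this slave right now.
    return True

# c is the usual BuildmasterConfig dictionary defined at the top of master.cfg.
c['builders'] = [
    BuilderConfig(name='runtests',
                  slavenames=['slave1', 'slave2'],
                  factory=factory,
                  canStartBuild=can_start_build),
]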
Changes for Developers¶ - Added an optional build start callback to buildbot.status.status_gerrit.GerritStatusPush. This release includes the fix for bug #2536. - An optional startCB callback to GerritStatusPush can be used to send a message back to the committer. See the linked documentation for details. - ChoiceStringParameter has a new method getChoices that can be used to generate content dynamically for Force scheduler forms. Slave¶ Features¶ - The fix for Twisted bug #5079 is now applied on the slave side, too. This fixes a perspective broker memory leak in older versions of Twisted. This fix was added on the master in Buildbot-0.8.4 (see bug #1958). - The --nodaemon option to buildslave start now correctly prevents the slave from forking before running. Details¶ For a more detailed description of the changes made in this version, see the git log itself: git log v0.8.7..v0.8.8 Older Versions¶ Release notes for older versions of Buildbot are available in the master/docs/relnotes/ directory of the source tree.
http://docs.buildbot.net/0.8.9/relnotes/0.8.8.html
2018-08-14T11:31:05
CC-MAIN-2018-34
1534221209021.21
[]
docs.buildbot.net
Sparkling Water¶ What is Sparkling Water?. What are the advantages of using Sparkling Water compared with H2O? Sparkling Water contains the same features and functionality as H2O but provides a way to use H2O with Spark, a large-scale cluster framework. Sparkling Water is ideal for H2O users who need to manage large clusters for their data processing needs and want to transfer data from Spark to H2O (or vice versa). There is also a Python interface available to enable access to Sparkling Water directly from PySpark. How do I filter an H2OFrame using Sparkling Water? Filtering columns is easy: just remove the unnecessary columns or create a new H2OFrame from the columns you want to include ( Frame(String[] names, Vec[] vec)), then make the H2OFrame wrapper around it ( new H2OFrame(frame)). Filtering rows is a little bit harder. There are two ways: - Create an additional binary vector holding 1/0for the in/outsample (make sure to take this additional vector into account in your computations). This solution is quite cheap, since you do not duplicate data - just create a simple vector in a data walk. or - Create a new frame with the filtered rows. This is a harder task, since you have to copy data. For reference, look at the #deepSlice call on Frame ( H2OFrame) How do I save and load models using Sparkling Water? Models can be saved and loaded in Sparkling Water using the water.support.ModelSerializationSupport class. For example: #Test model val model = h2oModel #Export model on disk ModelSerializationSupport.exportModel(model, destinationURI) # Export the POJO model ModelSerializationSupport.exportPOJOModel(model, desinationURI) #Load the model from disk val loadedModel = ModelSerializationSupport.loadModel(pathToModel) Note that you can also specify type of model to be loaded: val loadedModel = ModelSerializationSupport.loadMode[TYPE_OF_MODEL]l(pathToModel) How do I inspect H2O using Flow while a droplet is running? If your droplet execution time is very short, add a simple sleep statement to your code: Thread.sleep(...) How do I change the memory size of the executors in a droplet? There are two ways to do this: - Change your default Spark setup in $SPARK_HOME/conf/spark-defaults.conf or - Pass --confvia spark-submit when you launch your droplet (e.g., $SPARK_HOME/bin/spark-submit --conf spark.executor.memory=4g --master $MASTER --class org.my.Droplet $TOPDIR/assembly/build/libs/droplet.jar I received the following error while running Sparkling Water using multiple nodes, but not when using a single node - what should I do? 
onExCompletion for water.parser.ParseDataset$MultiFileParseTask@31cd4150 water.DException$DistributedException: from /10.23.36.177:54321; by class water.parser.ParseDataset$MultiFileParseTask; class water.DException$DistributedException: from /10.23.36.177:54325; by class water.parser.ParseDataset$MultiFileParseTask; class water.DException$DistributedException: from /10.23.36.178:54325; by class water.parser.ParseDataset$MultiFileParseTask$DistributedParse; class java.lang.NullPointerException: null at water.persist.PersistManager.load(PersistManager.java:141) at water.Value.loadPersist(Value.java:226) at water.Value.memOrLoad(Value.java:123) at water.Value.get(Value.java:137) at water.fvec.Vec.chunkForChunkIdx(Vec.java:794) at water.fvec.ByteVec.chunkForChunkIdx(ByteVec.java:18) at water.fvec.ByteVec.chunkForChunkIdx(ByteVec.java:14) at water.MRTask.compute2(MRTask.java:426) at water.MRTask.compute2(MRTask.java:398) This error output displays if the input file is not present on all nodes. Because of the way that Sparkling Water distributes data, the input file is required on all nodes (including remote), not just the primary node. Make sure there is a copy of the input file on all the nodes, then try again. Are there any drawbacks to using Sparkling Water compared to standalone H2O? The version of H2O embedded in Sparkling Water is the same as the standalone version. How do I use Sparkling Water from the Spark shell? There are two methods: Use $SPARK_HOME/bin/spark-shell --packages ai.h2o:sparkling-water-core_2.11:2.1.12 or bin/sparkling-shell The software distribution provides example scripts in the examples/scripts directory: bin/sparkling-shell -i examples/scripts/chicagoCrimeSmallShell.script.scala For either method, initialize H2O as shown in the following example: import org.apache.spark.h2o._ val h2oContext = H2OContext.getOrCreate(spark) import h2oContext._ After successfully launching H2O, the following output displays: Sparkling Water Context: * number of executors: 3 * list of used executors: (executorId, host, port) ------------------------ (1,Michals-MBP.0xdata.loc,54325) (0,Michals-MBP.0xdata.loc,54321) (2,Michals-MBP.0xdata.loc,54323) ------------------------ Open H2O Flow in browser: (CMD + click in Mac OSX) How do I use H2O with Spark Submit? Spark Submit is for submitting self-contained applications. For more information, refer to the Spark documentation. First, initialize H2O: import org.apache.spark.h2o._ val h2oContext = new H2OContext(sc).start() The Sparkling Water distribution provides several examples of self-contained applications built with Sparkling Water. To run the examples: bin/run-example.sh ChicagoCrimeAppSmall The “magic” behind run-example.sh is a regular Spark Submit: $SPARK_HOME/bin/spark-submit ChicagoCrimeAppSmall --packages ai.h2o:sparkling-water-core_2.11:2.1.12 --packages ai.h2o:sparkling-water-examples_2.11:2.1.12 How do I use Sparkling Water with Databricks? Refer to Using H2O Sparking Water with Databricks for information on how to use Sparkling Water with Databricks. How do I develop applications with Sparkling Water? For a regular Spark application (a self-contained application as described in the Spark documentation), the app needs to initialize H2OServices via H2OContext: import org.apache.spark.h2o._ val h2oContext = new H2OContext(sc).start() For more information, refer to the Sparkling Water development documentation. How do I connect to Sparkling Water from R or Python? 
After starting H2OServices by starting H2OContext, point your client to the IP address and port number specified in H2OContext.
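For example, from Python you might connect with the h2o package, pointing it at the address printed in the H2OContext banner shown earlier (the IP and port below are illustrative — use the values your cluster reports):

import h2o

# Use the IP and port reported by H2OContext (e.g. one of the executors listed above).
h2o.connect(ip="10.0.0.12", port=54321)

In R, calling h2o.init with the same ip and port serves the same purpose.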
http://docs.h2o.ai/h2o/latest-stable/h2o-docs/faq/sparkling-water.html
2018-08-14T11:30:29
CC-MAIN-2018-34
1534221209021.21
[]
docs.h2o.ai
Entering rates for an Agent is identical to entering a Default Rate. You simply go to SETUP|RATES|DEFAULT RATES and click Add Single Rate. Then, give the rate a unique name for example, Agent1, and then write in the value of the rate for a daily, weekend, and weekly value. Once the Agent1 rate has been saved, you will use it when booking rooms for Agent1 and you will use it as the online rate for your Agent1 when they book from your website(s).
https://docs.bookingcenter.com/display/MYPMS/Special+Rates+for+Agents
2018-08-14T11:12:52
CC-MAIN-2018-34
1534221209021.21
[]
docs.bookingcenter.com
Performing garbage collection Perform garbage collection (GC) using the Perform GC option in Nodes administration of OpsCenter monitoring. Perform garbage collection (GC) using the Perform GC option in Nodes administration of OpsCenter monitoring. Performing GC forces the Java Virtual Machine (JVM) on the selected node to perform garbage collection. For more information, see the corresponding nodetool garbagecollect command. If OpsCenter role-based security is enabled, be sure that the permission for the Garbage Collection option in Node Operations is enabled for the appropriate user roles. Prerequisites Procedure - Click . - In the List view, select one or more nodes. - From the Other Actions menu, click Perform GC.Tip: The Perform GC option is also available from the Actions menu in the Node Details dialog.The Garbage Collect dialog appears and warns of a spike in latency. - Click Run GC.A message in the top banner indicates garbage collection is in progress. Click the Show Details link to view the progress in the Activities page General tab. The banner message indicates when the garbage collection is complete.
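The nodetool command referenced above is run directly against a node from the command line; the host, keyspace, and table names below are placeholders:

# Illustrative invocation of the command referenced above
nodetool -h 10.20.30.40 garbagecollect my_keyspace my_table

As with the OpsCenter option, expect a temporary increase in latency while the operation runs.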
https://docs.datastax.com/en/opscenter/6.5/opsc/online_help/garbageCollection.html
2018-08-14T11:09:25
CC-MAIN-2018-34
1534221209021.21
[]
docs.datastax.com
Transaction Functions Built-In Transaction Functions Datomic Cloud provides the following built-in transaction functions: :db/retractEntity The :db/retractEntity function takes an entity id as an argument. It retracts all the attribute values where the given id is either the entity or value, effectively retracting the entity's own data and any references to the entity as well. Entities that are components of the given entity are also recursively retracted. The following example transaction data retracts two entities, specifying one of the entities by id, and the other by a lookup ref. [[:db/retractEntity id-of-jane] [:db/retractEntity [:person/email "[email protected]"]]] :db/cas The :db/cas (compare-and-swap) function takes four arguments: an entity id, an attribute, an old value, and a new value. If the entity has the old value for attribute, then the new value will be asserted. Otherwise, the transaction will abort and throw an exception. You can use nil for the old value to specify that the new value should be asserted only if no value currently exists. The following example transaction data sets entity 42's :account/balance to 110, if and only if :account/balance is 100 at the time the transaction executes: [[:db/cas 42 :account/balance 100 110]] Classpath Transaction Functions You can use Ions to implement classpath transaction functions. This simple yet powerful facility enables: - atomic transformation functions in transactions - integrity checks and constraints - spec-based validators - and much more - your imagination is the limit! Full source code for the examples below is available in the ion-starter project. Creating a Transaction Function Transaction functions have the following characteristics: - Transaction functions must be pure functions, i.e. free of side effects. - A transaction function must expect to be passed a database value as its first argument. This is to allow transaction function to issue queries etc. - a transaction function must return transaction data i.e. data in the same form as expected under the :tx-datakey in transact. For example, the following transaction function finds the value of an attr of entity, and returns transaction data that adds amount to the value: )]])) Testing a Transaction Function You can test a transaction function interatively at the REPL or in a test suite. Given a database value, simply invoke the function: ;; get a connection from somewhere (def db (d/db conn)) (datomic.ion.starter/create-item db [:inv/sku "SKU-42"] :large :green :hat) => [#:inv{:sku [:inv/sku "SKU-42"], :color :green, :size :large, :type :hat}] Deploying a Transaction Function To deploy a transaction transaction. Calling a Transaction Function Calls to your transaction functions are just ordinary transaction lists, with the fully qualified name of the function followed by its arguments. You must leave out the first (database) argument, and Datomic will pass you the in-transaction value for the database automatically. For example: (def tx-data ['(datomic.ion.starter/create-item [:inv/sku "SKU-42"] :large :green :hat)] (d/transact conn {:tx-data tx-data})
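As a sketch of the deployment side: the function's fully qualified name is typically listed under the :allow key of the project's ion-config.edn before you push and deploy. The surrounding keys and file location shown here are assumptions — check them against the ion-starter project:

;; resources/datomic/ion-config.edn (sketch)
{:allow    [datomic.ion.starter/create-item]   ; expose the classpath transaction fn
 :lambdas  {}                                  ; illustrative; your own entry points go here
 :app-name "my-datomic-app"}                   ; placeholder application name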
https://docs.datomic.com/cloud/transactions/transaction-functions.html
2018-08-14T10:44:03
CC-MAIN-2018-34
1534221209021.21
[]
docs.datomic.com
Clothesline Installation Hills District Sydney? Are you in need of clotheslines or washing line supply and installation in Sydney's Hills District area? We have exactly what you need, including Hills Clothesline installation! We provide Clothesline Installation Hills District Sydney. Lifestyle Clotheslines facilitates the clotheslines installation of many clothesline models in the Hills District area. These clothesline types include: - Retracting and extending clothesline models - Foldown and folding frame clotheslines, either wall mounted or free standing models - Rotary style clotheslines, including the traditional Hills Hoist clotheslines Map of Hills District area covered for clothesline installations Suburbs in the Hills District area we service are: Our Top 7 Clothesline Recommendations for Hills District residents - Hills Supa Fold Duo - Durable folding frame clothesline with optional extras to suit individual needs - Hills Slim 6 Retractable - Popular wall or post mounted retractable clothes unit - Hills Portable 170 - Perfect portable washing line for all living situations - Brabantia Wallfix Clothesline - Unique wall mounted clothesline compliments any home setting - Hills Hoist Heritage 4 - Ideal option for those seeking to replace their dated 1960's clothes hoist - Hills Rotary 8 - Compact clothesline to handle the family's entire wash load - Instahanger - Innovative and space saving clothes drying system If you need any further assistance in choosing the right clothesline or washing line for your needs or for Clothesline Installation Hills District the Hills Shire Council website for any potential restrictions. - To learn more about our clothesline installation services in Sydney, visit this link. - To learn more about the Lifestyle Clotheslines installation process, then click here.
https://docs.lifestyleclotheslines.com.au/article/624-clothesline-installation-hills-district-sydney
2018-08-14T11:30:53
CC-MAIN-2018-34
1534221209021.21
[]
docs.lifestyleclotheslines.com.au
VAT ID Validation VAT ID Validation automatically calculates the required tax for B2B transactions that take place within the European Union (EU), based on the merchant and customer locale. Magento performs VAT ID validation using the web services of the European Commission server. VAT-related tax rules do not influence other tax rules, and do not prevent the application of other tax rules. Only one tax rule can be applied at a given time. - VAT is charged if the merchant and customer are located in the same EU country. - VAT is not charged if the merchant and customer are located in different EU countries, and both parties are EU-registered business entities. The store administrator creates more than one default customer group that can be automatically assigned to the customer during account creation, address creation or update, and checkout. The result is that different tax rules are used for intra-country (domestic) and intra-EU sales. Important: If you sell virtual or downloadable products, which by their nature do not require shipping, the VAT rate of a customer’s location country should be used for both intra-union and domestic sales. You must create additional individual tax rules for product tax classes that correspond to the virtual products. If VAT ID Validation is enabled, after registration each customer is proposed to enter the VAT ID number. However only those who are registered VAT customers are expected to fill this field. After a customer specifies the VAT number and other address fields, and chooses to save, the system saves the address and sends the VAT ID validation request to the European Commission server. According to the results of the validation, one of the default groups is assigned to a customer. This group can be changed if a customer or an administrator changes the VAT ID of the default address or changes the whole default address. The group can be temporarily changed (group change will be emulated) in some cases during one-page checkout. If enabled, you can override VAT ID Validation for individual customers by selecting the checkbox on the Customer Information page. If a customer’s VAT validation is performed during checkout, the VAT request identifier and VAT request date are saved in the Comments History section of the order. The system behavior concerned with the VAT ID validation and the customer group change during the checkout depends on how the Validate on Each Transaction and the Disable Automatic Group Change settings are configured. This section describes the implementation of the VAT ID Validation functionality for the checkout on the frontend. In case a customer uses Google Express Checkout, PayPal Express Checkout or another external checkout method, when the checkout is performed completely on the side of the external payment gateway, the Validate on Each Transaction setting cannot be applied. Thus the customer group cannot change during checkout.
https://docs.magento.com/m1/ee/user_guide/tax/vat-validation.html
2018-08-14T10:39:47
CC-MAIN-2018-34
1534221209021.21
[array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/vat-id-validation2.png', None], dtype=object)]
docs.magento.com
Designing Code for Reusability This content is no longer actively maintained. It is provided as is, for anyone who may still be using these technologies, with no warranties or claims of accuracy with regard to the most recent product version or service release. Once you've mapped out the overall design for your solution, spend some time thinking about how to structure your code within the solution. There's always more than one way to write the code, and with a little additional effort, you can maximize the reusability of your code. The work you do now to make your code reusable can save you and other developers time in the future, when you can reuse code rather than rewrite it. If you write code with the intention of reusing it in other scenarios, your code may also be easier to maintain because you'll tend to write smaller, more compact procedures that are simpler to modify. Whenever practical, write procedures that are atomic — that is, each procedure performs one task. Procedures written in this manner are easier to reuse. Adopt a set of coding standards and stick to them. Chapter 3, "Writing Solid Code," makes suggestions for standardizing your coding practices. Document your procedures and modules well by adding comments, so that you and other developers can easily figure out later what your code is doing. Group similar procedures or procedures that are called together in a module. Not only does this help you to organize your code functionally, but it can also enhance your solution's performance. By default, VBA compiles code on demand, meaning that when you run a procedure, VBA compiles that module and the modules containing any procedures that that procedure calls. Compiling fewer modules results in improved performance. (To further improve performance, be sure to save your code in a compiled state.) Develop a library of procedures that you can reuse. Use a code library tool to store procedures and code snippets so that they are immediately available when you need them. Office 2000 Developer includes such a tool, the Code Librarian, which is an add-in for the Visual Basic Editor. Create custom objects by encapsulating related code in class modules. You can add properties, methods, and events to custom objects, and organize them into custom hierarchical object models. Once you've built and tested a custom object, you can treat it as a "black box" — you and other programmers can use the object without thinking about the code that it contains. When you do need to maintain the code, you only need to modify it once, within the class module. Develop interfaces to extend custom objects. An interface provides a basic set of properties, methods, and events to any custom object that implements the interface. By using an interface, you can create sets of closely related, yet distinct, objects that share common code. If you have Visual Basic, you can create Automation (formerly OLE Automation) servers, which are .dll or .exe files that contain custom objects. The advantage to packaging custom objects in separate .dll or .exe files is that you can call them from any VBA project, just as you would call any Automation server, without having to include the code for the object in your project. For more information, see the documentation for Visual Basic 5.0 and Visual Basic 6.0. 
For more information about effective code design, see Chapter 3, "Writing Solid Code," Chapter 7, "Getting the Most Out of Visual Basic for Applications," Chapter 9, "Custom Classes and Objects," and Chapter 10, "The Windows API and Other Dynamic-Link Libraries."
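The "black box" idea behind class modules described above is language-agnostic. As a quick illustration only — written in Python rather than VBA, and not taken from the Office documentation — encapsulating related code behind a small class so callers never depend on its internals looks like this:

class Invoice:
    """A self-contained 'black box': callers use the interface, not the internals."""

    def __init__(self, customer):
        self.customer = customer
        self._lines = []  # internal detail hidden from callers

    def add_line(self, description, amount):
        self._lines.append((description, amount))

    def total(self):
        return sum(amount for _, amount in self._lines)

# Callers only touch the public interface; the implementation can change freely.
inv = Invoice("Contoso")
inv.add_line("Consulting", 1200.0)
inv.add_line("Travel", 300.0)
print(inv.total())  # 1500.0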
https://docs.microsoft.com/en-us/previous-versions/office/developer/office2000/aa140806(v=office.10)
2018-08-14T10:58:25
CC-MAIN-2018-34
1534221209021.21
[]
docs.microsoft.com
Planning and Installation This article provides general safety standards to adhere to when installing or connecting a vEdge 1000 router or its components. General Safety Standards Caution: Before removing or installing router modules and components, ensure that the router chassis is electrically connected to ground. Ensure that you attach an ESD grounding strap to an ESD point and place the other end of the strap around your bare wrist, making good skin contact. Failure to use an ESD grounding strap could result in damage to the router. Note: Some router components are hot-swappable and hot-insertable. You can remove and replace them without powering off or disconnecting power to the router. Do not, however, install the router or any of its components if they appear to be damaged. - Install your vEdge router. - Locate the emergency power-off switch in the room in which you are working. In case of an electrical accident, quickly turn off the power. - Disconnect power before installing or removing the router. - If an electrical accident occurs, use caution and immediately turn off power to the router. - Make sure that grounding surfaces are thoroughly cleaned and well-finished before grounding connections are made. - Do not work alone if hazardous conditions exist. - Always check that power is disconnected from a circuit. Never assume that it is disconnected. - Carefully inspect your work area for possible hazards, such as moist floors, worn-out power cords, ungrounded power extension cords, and missing safety grounds. - Operate the device within its marked electrical ratings and according to the product usage instructions. - To ensure that the router and the FRUs function safely and correctly, use the specified cables and connectors, and make certain they are in good condition.
https://sdwan-docs.cisco.com/Product_Documentation/vEdge_Routers/vEdge_1000_router/03Planning_and_Installation
2018-08-14T11:25:44
CC-MAIN-2018-34
1534221209021.21
[]
sdwan-docs.cisco.com
Edge Compute Walkthrough This guide provides a complete walkthrough for using Losant Edge Compute to read data from a piece of industrial equipment using Modbus and then report that data to the cloud. Introduction This system is made up of a kegerator with a Banner Temperature & Humidity Sensor installed inside it, a Banner Wireless Controller and a Raspberry Pi as the Edge Compute device. The Banner Wireless Controller reads sensor data from any number of remote sensors and then exposes that information to your local network over Modbus TCP. The Edge Compute device can then read that Modbus endpoint and report the data to the cloud. Banner is just an example used for this walkthrough. The same lessons apply to any Modbus TCP capable devices you may have. Register the Devices Edge Compute device types can report state for themselves or for any number of peripheral devices. This example includes one peripheral, which is the kegerator itself. The Raspberry Pi will be the Edge Compute device that's responsible for reading the Modbus endpoint. Using the Devices -> Add Device main application menu, first create the Edge Compute device. In this example, the device is named "Raspberry Pi Gateway", however you can name it anything you'd like. The only other required setting is to make the device type Edge Compute. This device doesn't need any attributes because the actual temperature and humidity information will be added to our peripheral device. Click the Create Device button to add this new device to your application. Next, create a second device for the Kegerator peripheral. In this example we named the device "Kegerator", but you can name it anything you'd like. Set the device type to Peripheral and set the gateway to our previously created Edge Compute device. This is primarily an access control setting, which restricts which gateways are allowed to report state on behalf of this peripheral. Lastly, add the two attributes for temperature and humidity. When done, click the Create Device button to add this device to your application. Create Access Key and Secret In order for the Raspberry Pi to communicate with Losant, it requires authentication details. Using the main Security application menu, create a new access key and secret for this device. We always recommend creating a dedicated access key for each device. With that said, select the Restricted To Specific Devices radio and choose your previously created Edge Compute device. After clicking Create Access Key, you'll be presented with a popup with your newly created access key and secret. Losant does not store the secret, so it's very important to copy/paste this somewhere safe. You'll be using this information later in this guide. Install Losant Edge Agent Losant Edge Compute works by deploying edge workflows to your devices. In order to receive, process, and manage these workflows on the device, the Losant Edge Agent ("Agent") must be installed. Follow the Agent Installation Guide for step-by-step instructions. Configure and Run the Edge Agent Once the Agent is installed, you can now run it with the access key and secret you created earlier. There are two ways to configure the Agent. You can either use a configuration file or use environment variables. Generally we recommend using a configuration file. Create a configuration file at /var/lib/losant-edge-agent/config.toml with the following contents. 
[logger]
out = '/data/losant-edge-agent-log.log'
level = 'verbose'

[gateway]
id = '<your-edge-compute-device-id>'
key = '<your-access-key>'
secret = '<your-access-secret>'

[store]
path = '/data/losant-edge-agent-store.db'

The Agent also stores data locally, so it's recommended to create a local folder and mount it inside the Docker container when you run it. This way, the data persists between restarts of the container.

sudo mkdir -p /var/lib/losant-edge-agent/data
sudo chmod a+rwx -R /var/lib/losant-edge-agent

These commands will create a data folder at /var/lib/losant-edge-agent/data and set the required permissions. You will only ever have to run these commands once. You can now run the Agent using the following Docker command.

docker run -d --restart always --name losant-edge-agent \
  -v /var/lib/losant-edge-agent/data:/data \
  -v /var/lib/losant-edge-agent/config.toml:/etc/losant/losant-edge-agent-config.toml \
  losant/edge-agent

If you open the Edge Compute device page prior to running this command, you'll see it log some connection information when the Agent is started. If nothing shows in the device log when the Agent is started, try inspecting the logs locally using the command below.

docker logs losant-edge-agent

You should see an output similar to the image below. If there are any errors, they will be displayed and should provide helpful information to debug any issues. If you need to fix or change any configuration fields, you can simply edit the config.toml file and run docker restart losant-edge-agent to apply the changes. Build the Workflow Now that you have the Agent installed and running, you can build and deploy an edge workflow to start reading Modbus information. You can create an edge workflow by clicking the Workflows -> Create Workflow main application menu. It's important to ensure the version you select for Minimum Agent Version is the same or below whatever you have installed on your edge devices. Losant continually updates the Agent with new capabilities. This setting ensures the workflows you're about to build will be supported by the Agent version you have installed. We always default this option to the latest Agent version, so if you just installed the Agent, you should be good to go. Once you click Create Workflow, you'll be presented with a blank edge workflow canvas. The workflow you'll create in this example will read two Modbus registers every minute, convert them from raw data to actual measurements, and then report that data to Losant's cloud. Start by first dragging a Timer Node, a Modbus Read Node, and a Debug Node onto the canvas. Connect each node together and then configure the Modbus Node to read your specific Modbus endpoint. The configuration below is for our Banner Wireless Controller, which is reading the temperature sensor in the kegerator. The Modbus Read Node has three primary configuration sections. The first is the endpoint itself, which on our network is located at the address 192.168.1.117 on port 502. You can also adjust the Unit ID or Endianness if your environment requires those to be changed. The next section includes the registers to read. This will change greatly depending on what piece of equipment you're reading from, but our Banner Wireless Controller is configured to expose the humidity on Holding Register 0, the temperature in Celsius on Holding Register 1, and the temperature in Fahrenheit on Holding Register 2. Each register has a required length field as well.
In this case, the data is stored in a single register, so the length is set to 1. As you read registers, an object is being built on the workflow payload with the value of each register stored on whatever you specified in the Result Key field. The last configuration section is the Destination Path. This is where the final result object, with each key specified by the registers, will be placed on the payload. So in this example, an object with three keys (humidity, tempC, and tempF) will be put on the payload at working.modbus. The value at each key will be an array equal to the length configured for that register. { "working" : { "modbus" : { "humidity" : [5859], "tempC" : [111], "tempF" : [855] } } } At this point, no data is being sent to the cloud, but this is a good time to deploy and test that this workflow is properly reading Modbus data. Click the Save button on the top right of the screen and then click the Deploy button. Losant keeps track of every version of every workflow deployed to Edge Compute devices. This means a workflow must first be versioned before it can be deployed. This popup provides a quick way to version this workflow, which defaults to the current date and time in UTC. You can change it to whatever you'd like. You next need to choose which Edge Compute devices to deploy this workflow version to. Choose the Edge Compute device you created earlier in this walkthrough. Click the Deploy Version button to schedule this deployment. Under normal conditions and if your device is currently connected, a workflow version will be successfully deployed to an Edge Compute device in a matter of a few seconds. You can monitor the process by clicking the Deployments tab on the menu along the right side of the screen. After the deploy completes, open the Debug tab and select your Edge Compute device from the dropdown. This will allow you to stream any debug information from that device to your browser. You can use this to verify that your Modbus read operations are working correctly. When you hover over a debug message, a notice will show up over the canvas area instructing you to switch versions to see the "node path". This is because you're currently viewing the develop version of the workflow, but you just created and deployed a different version to the edge device. The "node path" is a helpful visual indicator to know which path in the workflow was taken to reach the Debug Node that generated whatever message you're currently hovering over. Since workflows can change drastically between versions, you'll have to be viewing the same version that generated the debug message in order to highlight the path. If you want to check it out, you can switch versions by clicking the Versions menu on the right side of the screen. Since it looks like this workflow is successfully reading Modbus data, we can continue editing the workflow to translate the values and report them to the cloud. Information stored in Modbus registers is not always in a friendly format. Below is the table from the Banner temperature sensor that describes how the data is stored and how it translates to actual values. As you can see, we're getting a value between 0 and 10000 for humidity, and between -32768 and 32767 for Celsius and Fahrenheit. So in order to convert these to actual values, we need to divide humidity by 100 and divide the temperatures by 20. These values were found by dividing the raw range by the actual range. 
So for humidity, the raw range is 0 to 10000, which translates to an actual range of 0 to 100. 10000 divided by 100 gives us our conversion factor of 100. Fortunately, this is easily done with a Math Node. Add three expressions, one for each conversion, to the Math Node. 1. {{ working.modbus.humidity.[0] }} / 100 2. {{ working.modbus.tempC.[0] }} / 20 3. {{ working.modbus.tempF.[0] }} / 20 Each expression uses a template to reference the raw value and then divides them by the appropriate conversion factor. The result of each expression is then put back on the payload at state.humidity, state.tempC, and state.tempF. Once you save and deploy this workflow, you can see these converted values in the debug output. The last step is to report these converted values to our kegerator peripheral. Add a Device State Node to do this. Change the radio to Select a specific device and choose the peripheral device created earlier. In this example, the peripheral device is named "Kegerator". This device has two attributes, one for humidity and one for temperature. Set the humidity value to {{ state.humidity }} and set the temperature value to {{ state.tempF }}. In this example, the temperature in Celsius won't be reported to the cloud, however you could add additional nodes to perform more edge processing on the Celsius data if you'd like. After you save and deploy this workflow again, data is now being reported from your edge device to the cloud. You can verify this by opening the Debug tab on your peripheral device page. The device's debug tab allows you see recent state messages reported by this device. You can also view real-time activity by inspecting the Device Log, which is on the right side of every device tab. At this point, you've successfully deployed a working Edge Compute device that reads local Modbus data over TCP, performs some processing on that data, and then reports it to the cloud. You can now trigger a cloud workflow for further processing or alerting, build a Dashboard to visualize this data, or use Experiences to create an entire custom application.
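As a standalone illustration of the register-to-value conversion performed by the Math Node above (divide the raw humidity by 100 and the raw temperatures by 20), here is a short Python sketch. It is not Losant workflow code; the function name and input shape simply mirror the payload example shown earlier.

def convert_banner_registers(registers):
    """Convert raw Modbus holding-register values into engineering units.

    `registers` mirrors the payload shape shown above, e.g.
    {"humidity": [5859], "tempC": [111], "tempF": [855]}.
    """
    return {
        "humidity": registers["humidity"][0] / 100,  # 0-10000 raw -> 0-100 %RH
        "tempC": registers["tempC"][0] / 20,         # raw / 20 -> degrees Celsius
        "tempF": registers["tempF"][0] / 20,         # raw / 20 -> degrees Fahrenheit
    }

print(convert_banner_registers({"humidity": [5859], "tempC": [111], "tempF": [855]}))
# {'humidity': 58.59, 'tempC': 5.55, 'tempF': 42.75}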
http://docs.prerelease.losant.com/edge-compute/walkthrough/
2018-08-14T10:34:36
CC-MAIN-2018-34
1534221209021.21
[array(['../../images/edge-compute/walkthrough/edge-walkthrough-diagram.jpg', 'Edge Walkthrough Diagram Edge Walkthrough Diagram'], dtype=object) array(['../../images/edge-compute/walkthrough/add-device.png', 'Add Device Menu Add Device Menu'], dtype=object) array(['../../images/edge-compute/walkthrough/edge-compute-device.png', 'Edge Compute Device Edge Compute Device'], dtype=object) array(['../../images/edge-compute/walkthrough/peripheral-device.png', 'Peripheral Device Peripheral Device'], dtype=object) array(['../../images/edge-compute/walkthrough/security-menu.png', 'Security Menu Security Menu'], dtype=object) array(['../../images/edge-compute/walkthrough/add-access-key.png', 'Add Access Key Add Access Key'], dtype=object) array(['../../images/edge-compute/walkthrough/access-key-settings.png', 'Access Key Settings Access Key Settings'], dtype=object) array(['../../images/edge-compute/walkthrough/access-key-secret.png', 'Access Key and Secret Access Key and Secret'], dtype=object) array(['../../images/edge-compute/walkthrough/edge-agent-connection.png', 'Edge Agent Connection Edge Agent Connection'], dtype=object) array(['../../images/edge-compute/walkthrough/docker-logs.png', 'Docker Logs Docker Logs'], dtype=object) array(['../../images/edge-compute/walkthrough/create-workflow-menu.png', 'Create Workflow Menu Create Workflow Menu'], dtype=object) array(['../../images/edge-compute/walkthrough/create-edge-workflow.png', 'Create Edge Workflow Create Edge Workflow'], dtype=object) array(['../../images/edge-compute/walkthrough/edge-agent-version.png', 'Edge Agent Version Edge Agent Version'], dtype=object) array(['../../images/edge-compute/walkthrough/blank-edge-workflow.png', 'Blank Edge Workflow Blank Edge Workflow'], dtype=object) array(['../../images/edge-compute/walkthrough/modbus-node-config.png', 'Modbus Node Configuration Modbus Node Configuration'], dtype=object) array(['../../images/edge-compute/walkthrough/deploy-popup.png', 'Deploy Popup Deploy Popup'], dtype=object) array(['../../images/edge-compute/walkthrough/debug-output.png', 'Debug Output Debug Output'], dtype=object) array(['../../images/edge-compute/walkthrough/modbus-registers.png', 'Modbus Registers Modbus Registers'], dtype=object) array(['../../images/edge-compute/walkthrough/math-node.png', 'Math Node Math Node'], dtype=object) array(['../../images/edge-compute/walkthrough/device-state-node.png', 'Device State Node Device State Node'], dtype=object) array(['../../images/edge-compute/walkthrough/device-debug-tab.png', 'Device Debug Tab Device Debug Tab'], dtype=object) ]
docs.prerelease.losant.com
EUM Accounts, Licenses, and App Keys This page covers account and licensing information for End User Monitoring (EUM). View License Usage To view your EUM license consumption in the Controller, go to > License. Overages If your license does not allow overages, AppDynamics stops reporting metrics after your license limit has been reached. If your license does allow overages and your usage exceeds the limit, AppDynamics continues reporting EUM metrics and bills you for the overages at the unit rate stipulated by your license agreement. License Key If you have at least one EUM license type, you have an EUM Account, which has a name and a license key associated with it. This is the unique identifier that AppDynamics uses to associate EUM data with your account. You only need to know this information for troubleshooting purposes. The same key applies to all four EUM services. However, each product has its own types and metrics for allowed usage. EUM App Key This is the unique identifier that AppDynamics uses to associate end user data with specific EUM applications. Each EUM application will be associated with one EUM App Key. For your applications to be monitored, they will need to include the EUM App Key when reporting data. The EUM App Keys are also associated with an EUM account name and license key. Changing the license key does not affect existing EUM App Keys. License Optimization and Usage Periods For EUM license metric definitions and data retention, see License Entitlements and Restrictions.
https://docs.appdynamics.com/appd/22.x/latest/en/end-user-monitoring/eum-accounts-licenses-and-app-keys
2022-09-25T00:52:09
CC-MAIN-2022-40
1664030334332.96
[]
docs.appdynamics.com
eSign - Digital Document Signing is already enabled for each booking in MyPMS and is seamlessly integrated with BookingCenter Customized Letters. To implement the eSign Digital Signing Process, follow these steps. Read below for detailed step-by-step instructions. eSign - Setup: Set up your Default Signature Letters - Create Default eSign Letter eSign - Process: Learn how to send a Digital Signature Request Letter - Send eSign Request Letter eSign - Digital Signature Letter Storage: See where the signed digital letters are stored. - Digital Signature, Initial, Image, and Document Letter Storage eSign special functions: signatures, initials, image uploading, and document uploading. These are special options available for eSign letters (these options only work with eSign Booking Letters): the 'Signature' or 'Initial' option enables a Letter to be signed or initialed; the Image Place Holder lets a guest submit an image uploaded from a device (any computer, tablet, or smart phone can work; .jpg, .png, .jpeg, .gif formats only); the Upload Document option allows documents (we support .jpg, .png, .jpeg, .gif, .pdf, .docx, .rtf, .txt, .doc, and .tiff file formats only) to be uploaded from a device (any computer, tablet, or smart phone can work); and the Text Area allows a Guest to place 'free form' text into a box and have the text 'saved'. These merge elements are available only for eSign Letters; they will not work outside of the eSign process. If you want to use 2 different Self Check-in 'processes' for your Guests, or 2 eSign Letters for other purposes, consider a PMS Agent that can differentiate between bookings and thus require a unique eSign - Digital Document Signing process based on the 'Suppress Agent Rate' process. This adds an entirely different eSign - Digital Document Signing process and may be useful for certain businesses requiring distinct processes for automating different types of eSign processes. Please note that in order for Digital Signing to work with a Booking, it must have a unique link (RUID), which was added on August 7, 2018. Therefore, only Bookings created since August 7, 2018 have a RUID and are enabled for eSign. Bookings created prior to this date DO NOT have a RUID, and the Digital Signing function will not work for them. All new Bookings created will have the unique RUID and can use eSign - Digital Document Signing. Note that using EDIT on a booking will not create a RUID. Learn more about the Booking RUID
https://docs.bookingcenter.com/display/MYPMS/eSign+-+Digital+Document+Signing
2022-09-25T01:14:59
CC-MAIN-2022-40
1664030334332.96
[]
docs.bookingcenter.com
For example: registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2 (here, v3.11.59-2 is the tag). The following sections provide an overview and instructions for using image tags in the context of container images for working with OKD image streams and their tags. OKD provides the oc tag command, which is similar to the docker tag command, but operates on image streams instead of directly on images. Images evolve over time and their tags reflect this. Generally, an image tag always points to the latest image built. If there is too much information embedded in a tag name, like v2.0.1-may-2019, the tag points to just one revision of an image and is never updated. Using default image pruning options, such an image is never removed. In very large clusters, the schema of creating new tags for every revised image could eventually fill up the etcd datastore with excess tag metadata for images that are long outdated. If the tag is named v2, clusters can experience increasing resource usage caused by retaining old images. When you view OKD image stream definitions, you may notice they contain definitions of ImageStreamTag and references to DockerImage, but nothing related to ImageStreamImage. This is because the ImageStreamImage objects are automatically created in OKD when you import or tag an image into the image stream.
https://docs.okd.io/4.6/openshift_images/managing_images/tagging-images.html
2022-09-25T02:46:49
CC-MAIN-2022-40
1664030334332.96
[]
docs.okd.io
Qs-Pay Checkout What is Qs-Pay Checkout? Qs-Pay Checkout is a shopping cart developers can easily integrate into any website in minutes. With our Qs-Pay Dashboard you can add and customize checkout steps, fields, and styles. Succeeding with our product requires minimal development knowledge (HTML, CSS, JavaScript). Many non-technical merchants work with developers to build unique e-commerce experiences with Qs-Pay Checkout. How it works Unlike many e-commerce solutions, Qs-Pay Checkout lives on your site, in its HTML client-side code. You add the shopping cart to your site just by redirecting a subdomain with a DNS record from your domain registrar. You just need to provide a Product object—one with simple fields like name, price, description, etc.—to our SDK components. Usually, developers only need to use the Qs-Pay Add To Cart Button component. You manage orders within a user-friendly merchant dashboard. Unlike the cart, this dashboard doesn't live on your site—it's hosted on our own servers. You can access it at any time at this URL: What's next? Now that you know how Qs-Pay Checkout works, you're likely wondering: - How secure is Qs-Pay Checkout? - Well, it's secure by default. We use a secure protocol to communicate with our servers. - All communications are secured through HTTPS. - There is a fraud mechanism in place to prevent fraud. - We use a PCI-compliant payment gateway.
https://docs.qs-pay.com/qs-checkout/guide/introduction.html
2022-09-25T01:53:40
CC-MAIN-2022-40
1664030334332.96
[]
docs.qs-pay.com
An initiate-request extension (IRX) is used for initiating requests. It is of variable length, is not initialized by DBCHINI, and is allocated by the application. The DBCAREA Extension Pointer field points to the IRX; the IRX does not exist if the DBCAREA Extension Pointer field contains zero. The IRQ Ext provides support for parameterized SQL, which is used by the Preprocessor2. It also provides support for TDSP, including MultiTSR options. IRX Fields The IRX extension contains two logical sections: - Header - Element
https://docs.teradata.com/r/Teradata-Call-Level-Interface-Version-2-Reference-for-Workstation-Attached-Systems/October-2021/DBCAREA-Extensions/Initiate-request-Extensions/Usage-Notes
2022-09-25T01:39:31
CC-MAIN-2022-40
1664030334332.96
[]
docs.teradata.com
WSO2 IoT Server (IoTS) is the core of WSO2's Open IoT Platform. Take a look at the WSO2 IoT Server presentation, "Your foundation for the Internet of things", to expand your knowledge of IoTS. This is a great place for you to start if you are new to WSO2 IoTS. Deep Dive into WSO2 IoT Server To learn more about WSO2 IoTS, click a section below.
https://docs.wso2.com/pages/diffpages.action?originalId=59575812&pageId=61053244
2022-09-25T02:46:46
CC-MAIN-2022-40
1664030334332.96
[]
docs.wso2.com
The Vitis Analyzer must be used to view and analyze trace data. After the trace data has been captured either using XRT or XSDB, you should have all the data needed to open the Event Trace view in the Vitis Analyzer. Figure 1. Event Trace in Vitis Analyzer Open the run summary file with the Vitis Analyzer to view event trace data. An example of the XSDB flow is as follows: vitis_analyzer ./aie_trace_profile.run_summary An example of the XRT flow is as follows: vitis_analyzer ./xrt.run_summary Limitations - Due to limited resources, overruns can occur in the event trace. Refer to Using Multiple Event Trace Streams to configure the number of trace streams to minimize overruns. - For detailed event trace information, the --xlopt=0 option must be specified at compile time. If omitted, the default setting --xlopt=1 is used, which might cause functions to be inlined and limit debug capability. - Host code is required to invoke graph.end() to ensure that the XRT flow completes properly.
https://docs.xilinx.com/r/en-US/ug1076-ai-engine-environment/Viewing-and-Analyzing-the-Trace-Data-Using-Vitis-Analyzer
2022-09-25T02:33:51
CC-MAIN-2022-40
1664030334332.96
[]
docs.xilinx.com
ARM Nodes Image Caching How ARM base image caching works ARM-based nodes only: Intel-based nodes do not need to have the base image locally in order to run a VM. Quick navigation Overview | Free Space Management | Examples Overview In order to run an ARM-based VM on a specific node, the base image needs to be transferred to the node first. Once the image is transferred to the node, it is cached, so it can be reused for other VMs. This means that the initial deployment on a specific node takes longer, while all other deployments are significantly faster. Free Space Management Cached base images use disk space on the ARM-based node. As this space is limited, there might not be enough space to cache a new base image. If a new image needs to be cached and there is not enough space, the old base images are deleted until the new image can be cached. Orka looks for free disk space equal to twice the size of the image. This is needed because of the way VMs use the base image. This means that if a 90GB image is to be cached, the node needs to have at least 180GB of free space. If not, the old base images are deleted until there is enough space. Examples Consider a host with 1TB of disk space, of which roughly 900GB is free. Here are some examples showing how many images can be cached on such a host: - 5 90GB images can be cached on the host - 2 200GB images can be cached on the host - 1 290GB image can be cached on the host - 2 90GB images and 1 200GB image can be cached on the host - An image bigger than 450GB cannot be deployed at all
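The 2x free-space rule above is easy to express in code. Below is a minimal Python sketch of the check as described (the new image fits once enough old cached base images could be evicted to free twice its size); the function and variable names are illustrative and not part of Orka.

def can_cache(image_size_gb, free_gb, cached_image_sizes_gb):
    """Return True if a new base image can be cached on the node.

    Orka needs free space equal to twice the image size; older cached
    base images are deleted until that much space is available.
    """
    needed = 2 * image_size_gb
    reclaimable = free_gb + sum(cached_image_sizes_gb)  # all cached images could be evicted
    return reclaimable >= needed

# Host with roughly 900GB free and no cached images yet:
print(can_cache(90, 900, []))   # True  (needs 180GB)
print(can_cache(460, 900, []))  # False (needs 920GB, more than the host has)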
https://orkadocs.macstadium.com/docs/arm-base-image-caching
2022-09-25T01:51:35
CC-MAIN-2022-40
1664030334332.96
[]
orkadocs.macstadium.com
The telemetry endpoint allows you to send additional usage data to the CloudZero system to better allocate cloud spend. The most common use cases for telemetry are breaking down shared resources for allocation among teams, customers, features, and the like. The Telemetry endpoint supports a very flexible data model in which you can target any portion of your cloud spend and represent usage of the systems generating that spend by sending records saying that "named elements" consumed a certain amount of that infrastructure or cloud spend. CloudZero Telemetry API Endpoint API Authorization - You can obtain a CloudZero API Key from - Add an Authorization: <cloudzero-api-key> header to each request Limits - Each organization can send up to 1M records per day and 300K records per minute - Max request size: 10KB size limit per record and 5MB size limit per request Example For example, if you wanted to transmit the activity for the customer Hooli on January 25th, 2020, during the whole day, where your billing feature processed 250M records, you would send:

{
  "records": [
    {
      "timestamp": "2020-01-25T00:00:00Z",
      "granularity": "DAILY",
      "filter": {
        "custom:feature": ["billing"],
        "tag:environment": ["prod"],
        "tag:owner": ["frank", "sandy"]
      },
      "element-name": "<HOOLI-CUSTOMER-ID>",
      "telemetry-stream": "billing-records-processed",
      "value": "250000000"
    }
  ]
}

Filter Limitations - Each telemetry record can't have more than 5 filter keys, and each set of values can't have more than 20 values. - When multiple filter values are specified, those values are aggregated together. When multiple types of filter keys are present, those types of filters are combined using the intersection. - For example: { "account": ["1", "2"], "service": ["A"] } translates to WHERE (account = "1" OR account = "2") AND service = "A" - All records within a telemetry-stream must have the same set of keys defined in their filter.
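For illustration, here is a minimal Python sketch that sends one record like the example above. The endpoint path and environment-variable name are assumptions (the exact telemetry URL is not preserved in this excerpt); only the record fields and the Authorization header format come from the text.

import os
import requests

TELEMETRY_URL = "https://api.cloudzero.com/unit-cost/v1/telemetry"  # assumed path; check the docs

payload = {
    "records": [
        {
            "timestamp": "2020-01-25T00:00:00Z",
            "granularity": "DAILY",
            "filter": {"custom:feature": ["billing"], "tag:environment": ["prod"]},
            "element-name": "<HOOLI-CUSTOMER-ID>",
            "telemetry-stream": "billing-records-processed",
            "value": "250000000",
        }
    ]
}

resp = requests.post(
    TELEMETRY_URL,
    json=payload,
    headers={"Authorization": os.environ["CLOUDZERO_API_KEY"]},  # key from the CloudZero dashboard
    timeout=30,
)
resp.raise_for_status()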
https://docs.cloudzero.com/reference/telemetry
2022-09-25T01:30:41
CC-MAIN-2022-40
1664030334332.96
[]
docs.cloudzero.com
To use Publishing Markup, start by typing ‘##’ or ‘%%’ at the beginning of a new line. For MATLAB compatibility ‘%%%’ is treated the same way as ‘%%’. The lines following ‘##’ or ‘%%’ start with one of either ‘#’ or ‘%’ followed by at least one space. These lines are interpreted as a section. A section ends at the first line not starting with ‘#’ or ‘%’, or when the end of the document is reached. A section starting in the first line of the document, followed by another start of a section that might be empty, is interpreted as a document title and introduction text. See the example below for clarity:

%% Headline title
%
% Some *bold*, _italic_, or |monospaced| Text with
% a < link to GNU Octave>.
%%

# "Real" Octave commands to be evaluated
sombrero ()

## Octave comment style supported as well
#
# * Bulleted list item 1
# * Bulleted list item 2
#
# # Numbered list item 1
# # Numbered list item 2
https://docs.octave.org/v7.1.0/Using-Publishing-Markup-in-Script-Files.html
2022-09-25T02:15:22
CC-MAIN-2022-40
1664030334332.96
[]
docs.octave.org
Jawi Facts - Language: Jawi - Alternate names: Djaui, Towahi, Tohawi, Ewenu, Ewenyoon, Chowie, Djaoi, Djau, Dyao, Dyawi, Ewanji, Ewenyun, I:wanja, Jarrau, Tohau-i, Djawi - Language code: djw - Language family: Australian, Nyulnyulan (SIL classification) - Number of speakers: <3 The Jawi language is very close to Bardi these days, differing mostly in the use of some words; however, there is some evidence that it used to be more different, and the two varieties converged over the course of the 20th Century.
https://docs.verbix.com/EndangeredLanguages/Jawi
2022-09-25T01:10:33
CC-MAIN-2022-40
1664030334332.96
[]
docs.verbix.com
Integration Points ProL2TP offers several mechanisms to assist with integrating L2TP functions in a system. Hook scripts prol2tpd can execute optional user-provided scripts or applications on specific events. Scripts are run by a prol2tp-scriptd helper process. In busy systems, event scripts might be started in rapid succession. Scripts are serialised for each instance. Scripts are also scheduled to limit the maximum number running at a given time. See the prol2tpd man page for details. Event watching In active systems, it can be useful to quickly monitor activity to see when L2TP and PPP instances come and go. Command line utilities are provided to watch events from prol2tpd and propppd and print events to the terminal. L2TP events The prol2tpwatch utility listens for events from prol2tpd.

ubuntu@lns:~$ sudo prol2tpwatch
SESSION CREATED: handle=3 tid=2871686446 sid=2447900232 name='' profile='spp0' pw_profile='pp0'
SESSION UP: handle=3 tid=2871686446 sid=2447900232 peer_tid=65483 peer_sid=1487060649
SESSION DOWN: handle=3 tid=2871686446 sid=2447900232 peer_tid=65483 peer_sid=1487060649 result=3 error=0 message=''
SESSION DELETED: handle=3 tid=2871686446 sid=2447900232

See the prol2tpwatch man page for details. PPP events The propppwatch utility listens for events from propppd.

ubuntu@lns:~$ sudo propppwatch
PPP_CREATED_IND: handle=0x193c9c800000000 name='session-4'
PPP_UP_IND: handle=0x193c9c800000000 name='session-4' ifname='ppp0' user='lns' local=10.90.0.10 peer=10.90.0.11 unit=0 linktype=pppoltp local_id=0x8bf5b556 peer_id=0x3dc74aa7
PPP_DOWN_IND: handle=0x193c9c800000000 name='session-4' ifname='ppp0' user='lns' unit=0 linktype=pppoltp reason='User request' reason_code=0 local_id=0x8bf5b556 peer_id=0x3dc74aa7 bytes_in/out=0/0 packets_in/out=0/0 connect_time=3s
PPP_DESTROYED_IND: handle=0x193c9c800000000 name='session-4'

See the propppwatch man page for details. Extensions Since L2TP is an extensible protocol, we designed ProL2TP from the ground up so that the core L2TP protocol could be extended or tuned by plugins. If log_level is info or higher, you may see debug messages about these when prol2tpd starts up. There are several extension types: - AVP extensions - these add, remove or modify AVPs in outbound or inbound L2TP messages, or implement vendor-specific AVPs. AVP extensions can be selectively enabled per tunnel using the avp_extensions parameter. For RFC-compliant peers, no avp_extensions should be needed. - Pseudowire extensions - these implement the interface to the dataplane for each pseudowire type. ProL2TP is distributed with pseudowire extensions for ppp, ethernet and vlan pseudowire types using the Linux kernel. - Socket extensions - these implement interfaces to external components, e.g. Linux kernel. In future versions, an SDK will be available with which third parties can develop extensions. If you are interested in this, please contact your sales representative.
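If you want to consume these events programmatically rather than only watching them in a terminal, one lightweight approach is to run the watch utility as a subprocess and parse its output lines. The Python sketch below is illustrative only: it assumes the line format shown above, must run with sufficient privileges, and is not part of ProL2TP.

import subprocess

# Assumes prol2tpwatch prints lines like:
# "SESSION UP: handle=3 tid=2871686446 sid=2447900232 peer_tid=65483 peer_sid=1487060649"
proc = subprocess.Popen(["prol2tpwatch"], stdout=subprocess.PIPE, text=True)

for line in proc.stdout:
    line = line.strip()
    if ":" not in line:
        continue
    event, _, rest = line.partition(":")
    fields = dict(part.split("=", 1) for part in rest.split() if "=" in part)
    if event == "SESSION UP":
        print(f"session {fields.get('sid')} came up on tunnel {fields.get('tid')}")
    elif event == "SESSION DOWN":
        print(f"session {fields.get('sid')} went down (result={fields.get('result')})")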
https://docs.prol2tp.com/manual_integration-points.html
2022-09-25T01:04:16
CC-MAIN-2022-40
1664030334332.96
[]
docs.prol2tp.com
DROP FAILOVER GROUP Removes a failover group from the account. Usage Notes Only an account administrator (user with the ACCOUNTADMIN role) or the group owner (role with the OWNERSHIP privilege on the group) can execute this SQL command. A primary failover group can only be successfully dropped if no linked secondary failover groups exist. A database that is included in a failover group is not dropped when the failover group is dropped. If a secondary failover group is dropped, any database previously included in the group loses read-only protection and becomes writable. If the secondary failover group is re-created from the same primary failover group as before, the databases in the group are overwritten by the databases in the primary failover group during the first refresh. These databases are read-only. To retrieve the set of accounts in your organization that are enabled for replication, use SHOW REPLICATION ACCOUNTS. To retrieve the list of failover groups in your organization, use SHOW FAILOVER GROUPS.
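For context, the statement itself takes the form DROP FAILOVER GROUP <name> and must be executed by a role with the privileges described above. A minimal sketch using the Snowflake Python connector might look like the following; the connection parameters and group name are placeholders, not values from this page.

import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount",   # placeholder
    user="admin_user",           # placeholder
    password="********",
    role="ACCOUNTADMIN",         # or a role with OWNERSHIP on the failover group
)
try:
    # Group name is a placeholder; run this in the account that owns the group.
    conn.cursor().execute("DROP FAILOVER GROUP my_failover_group")
finally:
    conn.close()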
https://docs.snowflake.com/en/sql-reference/sql/drop-failover-group.html
2022-09-25T02:33:15
CC-MAIN-2022-40
1664030334332.96
[]
docs.snowflake.com
Instance How to make changes to application instances. This page is currently incomplete and more information will be provided at a later date. An instantiated application will have a corresponding instance object. This object contains metadata about the specific instance. If you want to learn more about the instance and the affiliated API, you can read the technical documentation under "API". Substatus As an app owner you can set a substatus for the instance; this gives the end user further information about the current condition of the instance. Substatus is displayed both in the Altinn message box and on the receipt page. The substatus is an object which can be set in the instance object. How this is done is described under API. Substatus is a simple object which contains label and description. These fields can either contain plain text, or a text key that refers to the application texts. It is worth noting that we do not support variables in these texts. In the message box, label is limited to 25 characters; if it contains more than 25 characters, only the first 22 characters will be used and "…" will be added to the end. Example of a substatus object:

{ "label": "some.label", "description": "Description in clear text" }

Below you see an example of how the substatus looks in the message box and in the receipt, where the substatus is set up in the following way:

{ "label": "Accepted", "description": "Your application has been accepted by the king." }

Substatus in message box Substatus in receipt Automatic deletion of drafts As an application owner, there may be cases where deleting a user's draft after a certain time is necessary. To achieve this, three steps are required: - The application needs to be configured to allow the service owner to delete instances - Identify which instances haven't been completed via a request to storage - Delete the instance via an exposed endpoint within the application Step 1: Configuring the application Service owners are not allowed to delete instances by default. To get the required permissions, a rule must be added to policy.xml, placed in App/config/authorization. The rule can be copied from our rule library. Step 2: Identify which instances are incomplete by sending a request to storage Storage exposes a set of query parameters which can be used to retrieve a set of instances. The example below retrieves all non-submitted instances of a given application that were instantiated on or before 30 September 2020. You can try query parameters for your service here.

HTTP GET{org}/{app}&created=lte:2020-09-30&process.currentTask=Task_1

Step 3: Delete instance via endpoint exposed in the application After identifying the instances that are to be deleted, you can send a call to the application to delete these, using the instance id (instanceOwner.partyId/instanceGuid).

HTTP DELETE{instanceOwner.partyId}/{instanceGuid}
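To make steps 2 and 3 concrete, here is a rough Python sketch that queries Storage for stale, non-submitted instances and then deletes them through the app's API. The base URLs, authentication header, and response shape are assumptions for illustration (the exact hosts were not preserved in the text above), so substitute your own environment's endpoints and token.

import requests

STORAGE_URL = "https://platform.example.com/storage/api/v1/instances"  # assumed base URL
APP_URL = "https://org.apps.example.com/org/app/instances"             # assumed base URL
HEADERS = {"Authorization": "Bearer <service-owner-token>"}            # assumed auth scheme

# Step 2: find non-submitted instances created on or before 2020-09-30 (query from the text).
resp = requests.get(
    STORAGE_URL,
    params={"appId": "org/app", "created": "lte:2020-09-30", "process.currentTask": "Task_1"},
    headers=HEADERS,
    timeout=30,
)
resp.raise_for_status()
instances = resp.json().get("instances", [])  # response shape assumed

# Step 3: delete each identified instance via the app's exposed endpoint.
for inst in instances:
    party_id, instance_guid = inst["id"].split("/")  # id format: instanceOwner.partyId/instanceGuid
    requests.delete(f"{APP_URL}/{party_id}/{instance_guid}", headers=HEADERS, timeout=30).raise_for_status()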
https://docs.altinn.studio/app/development/api/instance/
2022-09-25T02:43:13
CC-MAIN-2022-40
1664030334332.96
[array(['meldingsboks.png', 'Substatus in message box Substatus in message box'], dtype=object) array(['app.png', 'Substatus in receipt Substatus in receipt'], dtype=object) ]
docs.altinn.studio
Lab Portal: Database Connection Data Connection for a Database Step 1: Start off by navigating to . Once logged in you can set up a data connection by clicking on the Data Connections icon. Step 2: In the Data Connections page select the New Connection button in the top right-hand corner. Step 3: In the Connection Type field, make sure Database is selected. Step 4: The Connection Details page will contain the following information for the new connection. - Name: A unique friendly name used when connecting a Data Portal to the Data Connection - Description (optional): description of what the connection string is connecting to - Connection String: used by INTERJECT to connect to the specified server & database Step 5: After adding the required information, click on the Save button to create the new data connection The Database Data Connection is now ready to be used in a Data Portal. Testing the Connection String from within Excel Before setting up a Data Connection to a Database, you can verify that the connection can be established within Excel. Step 1: With Excel open, go to the INTERJECT Ribbon menu and click Advanced Menu (skip this step if already at the Advanced menu) Step 2: Click the System drop-down and select Check Connection Step 3: Paste the database connection string you will be using to configure the Data Connection into the text box. If the connection works, it will display a message such as the one below.
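The steps above verify the connection string from inside Excel; if you want to sanity-check a SQL Server connection string independently before configuring the Data Connection, a short Python check such as the one below also works. This is not part of INTERJECT, and the driver name and connection string are placeholders.

import pyodbc

# Placeholder connection string -- use the same one you plan to paste into the Data Connection.
conn_str = (
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=myserver.example.com;Database=MyDatabase;"
    "UID=my_user;PWD=my_password;"
)

try:
    conn = pyodbc.connect(conn_str, timeout=5)
    row = conn.cursor().execute("SELECT 1").fetchone()
    print("Connection OK:", row[0] == 1)
    conn.close()
except pyodbc.Error as exc:
    print("Connection failed:", exc)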
https://docs.gointerject.com/wPortal/L-Database-Connection.html
2022-09-25T01:10:14
CC-MAIN-2022-40
1664030334332.96
[array(['/images/Database/01.png', None], dtype=object) array(['/images/Database/02.png', None], dtype=object) array(['/images/Database/03.png', None], dtype=object) array(['/images/Database/04.png', None], dtype=object) array(['/images/Database/05.png', None], dtype=object) array(['/images/Database/06.png', None], dtype=object) array(['/images/Database/07.png', None], dtype=object) array(['/images/Database/08.png', None], dtype=object) array(['/images/Database/09.png', None], dtype=object)]
docs.gointerject.com
Menu Item: List All Categories From Joomla! Documentation Revision as of 23:41, 4 August 2022 by Franz.wohlkoenig (talk | contribs) Description[edit] The List All Categories menu item type is used to show Categories in a hierarchical list. Depending on the selected options for this layout, you can click on a category Title to show the articles in that category. How To Access[edit] To add a Menu Item: - Menus → [name of the menu] - click the New toolbar button. - click the Menu Item Type Select button. - select the Articles item. - select the List All Categories in an Article Category Tree item. To edit a Menu Item: - Menus → [name of the menu] - select a Title from the list Screenshot[edit] Form Fields[edit] - Title. The title that will display for this menu item. - Alias. The internal name of the menu item. Normally, you can leave this blank and Joomla will fill in a default value Title in lower case and with dashes instead of spaces. Learn more. Details[edit] Left Panel - Menu Item Type. The Menu Item Type selected when this menu item was created. This can be one of the core menu item types or a menu item type provided by an installed extension. - Select the Top Level Category. - Root: Include all article categories. - Otherwise, select the desired top-level category. All child categories of the selected category will show in the menu item. - Link. The system-generated link for this menu item. This field cannot be changed and is for information only. - Target Window. Select from the dropdown list. - Template Style. Select from the dropdown list. Right Panel - Menu. Shows which menu the link will appear in. Categories[edit] The Categories Options allow you to control various aspects of the menu item. Note: Options include "Use Global". If this is selected, the setting from the Articles: Options will be used. - Top Level Category Description. Show the description for the top-level category. - Alternative Description. Enter an description to override the category description for the menu item. - Subcategory Levels. Control how many levels of subcategories to show. - Empty Categories. Show categories that don't contain any articles or subcategories. - Subcategories Descriptions. Show the description for subcategories. - # Articles in Category. Show a count of the total number of articles in each category. Category[edit] The Options control the way that category information is shown in the menu item. - Category Title. Show the title of the category. - Category Description. Show the description for the category. - Category Image. Show the category image. - Subcategory Levels. Control how many levels of subcategories to show. - Empty Categories. Show categories that don't contain any articles or subcategories. - No Articles Message. Show a message "There are no articles in this category". - Subcategories Descriptions. Show the descriptions for subcategories. - # Articles in Category. Show a count of the total number of articles in each category. Blog Layout[edit] Blog Layout Options control the appearance of the category drill down if that results in a blog layout. Note: If there is a Category Blog menu item defined for a category, the drill down will navigate to that menu item and the options for that menu item will control the layout. The options here only take effect if there is no menu item for a category. - Leading Article Class. You can add any CSS class for your own styling ideas. Add a border on top with class boxed.For image position use for example image-left, image-right. 
Add image-alternate for alternate ordering of intro images. - Article Class. You can add any CSS class for your own styling ideas. Add a border on top with class boxed.For image position use for example image-left, image-right. Add image-alternate for alternate ordering of intro images. - #. - # Links. The number of Links to display in the Links area of the page. These links allow a User to link to additional Articles, if there are more Articles than can fit on the first page of the Blog Layout. - Include Subcategories. - None: Only articles from the current category will show. - All: All articles from the current category and all subcategories will show. - 1-5: All articles from the current category and subcategories up to and including that level will show. - Category Order. - No Order: Articles are ordered only by the Article Order, without regard to Category. - Title Alphabetical: Categories are displayed in alphabetical order (A to Z). - Title Reverse Alphabetical: Categories are displayed in reverse alphabetical order (Z to A). - Category Order: Categories are ordered according to the Order column entered in Articles: Categories. - Article Order. - Featured Articles Order: Articles are ordered according to the Order column entered in Articles: Featured. -. - Article Order: Articles are ordered according to the Order column entered in Articles. - Article Reverse Order: Articles are ordered reverse to the according of the Order column entered in Articles. - Date for Ordering. The date used when articles are sorted by date. - Created: Use the article created date. - Modified: Use the article modified date. - Published: Use the article start publishing date. List Layouts[edit] The Options control the appearance of the category drill-down page when that is presented as a Category List. - Display Select. Show the Display # control that allows the user to select the number of articles to show. - Filter Field. Show a text field in the Frontend where a user can filter the articles.Options in the Backend menu item edit. - Hide: Don't show a filter field. - Title: Filter on article title. - Author: Filter on the author's name. - Hits: Filter on the number of article hits. - Table Headings. Show a heading in the article list in the Frontend. - Date. Show a date in the list. - Hide: Don't show any date. - Created: Show the created date. - Modified: Show the date of the last modification. - Published: Show the start publishing date. - Date Format. Optional format string to control the format of the date. Quick Tip - Hits. Show the number of hits for articles. - Author. Show the name of the author. - # Articles to List. Number of articles shown in the list. [edit] The Shared Options apply for Shared Options in List, Blog and Featured unless they are changed by the menu settings. - Pagination. Pagination provides page links at the bottom of the page that allow the User to navigate to additional pages. These are needed if the Articles will not fit on one page. - Hide: Pagination links not shown. Note: Users will not be able to navigate to additional pages. - Show: Pagination links shown if needed. - Auto: Pagination links shown if needed. - Pagination Summary. Show the current page number and total pages (e.g., "Page 1 of 2") at the bottom of each page. Options[edit] The Options determine how the articles will show in the list menu item. Layout - Choose a Layout. Select from the dropdown list. - Title. Show the Article's Title. - Linked Titles. Show the title as a link to the article. - Intro Text. 
- Show: The Intro Text of the article will show when you drill down to the article. - Hide: Only the part of the article after the Read More break will show. Category - Category. Show the Article's Category Title. - Link Category. Show the title as a link to that Category.Note: You can set this to be either a blog or list layout with the Choose a Layout option in the Category Tab. - Parent Category. Show the Article's Parent Category Title. - Link Parent Category. Show the title as a link to that Category.Note: You can set this to be either a blog or list layout with the Choose a Layout option in the Category Tab. Author - Author. Show the author of the Article. - Link to Author's Contact Page. Show it as a link to a Contact layout for that author.Note: The author must be set up as a Contact. Also, a link will not show if there is an Author Alias value for the article. Date - Create Date. Show the Article's create date. - Modify Date. Show the Article's modify date. - Publish Date. Show the Article's start publishing date. Options - Navigation. Show a navigation link 'Prev' or 'Next' when you drill down to the article. - "Read More" Link. Show the Read More link to link from the part of the article before the Read More break to the rest of the Article. - Read More with Title. - Show: The article title is part of the Read More link. The link will be in the format "Read More: [article title]". - Hide: The link will be "Read more". - Hits. Show the number of times the article has been displayed by a user. - Unauthorised Links. - Yes: The Intro Text for restricted articles will show. Clicking on the Read more link will require users to log in to view the full article content. - No: Articles that the user is not authorised to view (based on the viewing access level for the article) will not show. Integration[edit] The Options determine whether a news feed will be available for the page and what information it will show. - RSS Feed Link. If set to Show, a Feed Link will show up as a feed icon in the address bar of most browsers. - Include in Feed. - Intro Text: Only the article's intro text will show in the feed. - Full Text: The entire text of the article will show in the feed. Common Options[edit] See Menus: New Item for help on fields common to all Menu Item types, including: Toolbar[edit] At the top of the page you will see the toolbar shown in the Screenshot above. -. Quick Tips[edit] - 5. - If you set up category titles as linkable, the user can drill down on the category. When they do, they will normally see either a Category List or Category Blog menu item, depending on which option is selected. If there is a pre-existing menu item for this category (for example, a Category Blog or blog in 2 places. - In Articles: Options you can set the default value for all categories. - In Category: Edit you can set a value for a specific category. If this is set, it overrides the default value. - Customise 'Date Format': You can use D M Y for day month year or d-m-y for a short version for example 17-08-05 (see PHP: Date - Examples). If left blank, it uses DATE_FORMAT_LC1 from your language file.
https://docs.joomla.org/index.php?title=Help4.x:Menu_Item:_List_All_Categories&oldid=945147
2022-09-25T03:26:46
CC-MAIN-2022-40
1664030334332.96
[]
docs.joomla.org
Overview Note This section contains information on upgrading Argo CD. Before upgrading please make sure to read the details about the breaking changes between Argo CD versions. Argo CD uses semver versioning and ensures the following rules: - The patch release does not introduce any breaking changes. So if you are upgrading from v1.5.1 to v1.5.3 there should be no special instructions to follow. - The minor release might introduce minor changes with a workaround. If you are upgrading from v1.3.0 to v1.5.2 please make sure to check upgrading details in both the v1.3 to v1.4 and v1.4 to v1.5 upgrading instructions. - The major release introduces backward incompatible behavior changes. It is recommended to take a backup of Argo CD settings using the disaster recovery guide. After reading the relevant notes about possible breaking changes introduced in an Argo CD version, use the following command to upgrade Argo CD. Make sure to replace <version> with the required version number: Non-HA: kubectl apply -n argocd -f<version>/manifests/install.yaml HA: kubectl apply -n argocd -f<version>/manifests/ha/install.yaml Warning Even though some releases require only an image change, it is still recommended to apply the whole manifests set. Manifest changes might include important parameter modifications and applying the whole set will protect you from introducing misconfiguration.
https://argo-cd.readthedocs.io/en/latest/operator-manual/upgrading/overview/
2022-09-25T02:47:25
CC-MAIN-2022-40
1664030334332.96
[]
argo-cd.readthedocs.io
System related slots for Hyperlambda This project contains "system slots" to be able to invoke system commands. More specifically the project contains the following slots. - [system.terminal.create] - Creates a new terminal process on the server - [system.terminal.write-line] - Writes a line/command to a previously created terminal process on the server - [system.terminal.destroy] - Destroys/kills a previously created terminal process on the server - [system.execute] - Executes the specified command, returning the result to the caller By combining the above slots with for instance the magic.lambda.sockets project, you can spawn off terminal/bash processes on your server, creating "virtual web based terminal sessions" on your server. To create a new terminal process, use something such as the following. system.terminal.create:my-terminal folder:/ /* * STDOUT callback, invoked when something is channeled over STDOUT. */ .stdOut /* * Do standard out stuff here, with the incoming [.arguments]/[cmd] command. */ log.info:x:@.arguments/*/cmd /* * STDERR callback, invoked when something is channeled over STDERR. */ .stdErr /* * Do standard error stuff here, with the incoming [.arguments]/[cmd] command. */ log.info:x:@.arguments/*/cmd The above [.stdOut] and [.stdErr] lambda objects are invoked when output data or error data is received from the process, allowing you to handle it any way you see fit. The [folder] argument is the default/initial folder to spawn the process in. All of these arguments are optional. The name or the value of the [system.terminal.create] invocation however is important. This becomes a unique reference for you, which you can later use to de-reference the instance, for instance when feeding the terminal lines of input using the [system.terminal.write-line] slot. To write a command line to an existing terminal window, such as the one created above, you can use something such as the following. system.terminal.write-line:my-terminal cmd:ls -l The above will execute the ls -l command in your previously created "my-terminal" instance, and invoke your [.stdOut] callback once for each line of output the command results in. To destroy the above created terminal, you can use something such as the following. system.terminal.destroy:my-terminal All terminal slots require a name to be able to uniquely identify which instance you wish to create, write to, or destroy. This allows you to create as many terminals as you wish on your server, restricted only by the memory on your system and/or your operating system of choice. The terminal slots work transparently for Windows, Linux and Mac OS X, except of course the commands you pass into them will differ depending upon your operating system. Notice - If you don't reference a terminal session for more than 30 minutes, the process will be automatically killed and disposed, and any future attempts to reference it will result in an error. This is to avoid having hanging processes on the server, in case a terminal process is started, and then something happens which disconnects the client, resulting in "hanging sessions". [system.execute] If you only want to execute a specific program in your system you can use [system.execute], and pass in the name of the command as a value, and any arguments as children, optionally applying a [structured] argument to signify if you want each line of output to be returned as a single node or not. Below is an example.
system.execute:ls structured:true .:-l The above will result in something such as the following. system.execute .:total 64 .:"-rw-r--r-- 1 thomashansen staff 495 9 Nov 10:37 Dockerfile" .:"-rw-r--r-- 1 thomashansen staff 1084 29 Oct 14:51 LICENSE" .:"-rw-r--r-- 1 thomashansen staff 604 29 Oct 14:51 Program.cs" .:"drwxr-xr-x 3 thomashansen staff 96 29 Oct 16:53 Properties" .:"-rw-r--r-- 1 thomashansen staff 3154 29 Oct 14:51 Startup.cs" .:"-rw-r--r-- 1 thomashansen staff 1458 11 Nov 10:57 appsettings.json" .:"-rw-r--r-- 1 thomashansen staff 650 9 Nov 08:26 backend.csproj" .:"drwxr-xr-x 3 thomashansen staff 96 9 Nov 10:23 bin" .:"-rw-r--r-- 1 thomashansen staff 700 9 Nov 07:29 dev_backend.csproj" .:"drwxr-xr-x 9 thomashansen staff 288 12 Nov 10:31 files" .:"drwxr-xr-x 9 thomashansen staff 288 9 Nov 19:05 obj" .:"drwxr-xr-x 6 thomashansen staff 192 29 Oct 14:51 slots" .:"-rw-r--r-- 1 thomashansen staff 1905 29 Oct 14:51 web.config" Notice - If you omit the [structured] argument, or set its value to "false", the result of the above invocation will be returned as a single string. Querying operating system version In addition to the above slots, this project also contains the following slots. - [system.os] - Returns a description of your operating system - [system.is-os] - Returns true if your underlying operating system is of the specified type The last slot above takes an argument such as "Windows", "OSX", "Linux", etc., and will return true if the operating system you are currently running on belongs to the specified family of operating systems. Below is example usage of both. system.is-os:OSX system.os Project website The source code for this repository can be found at github.com/polterguy/magic.lambda.system, and you can provide feedback, report bugs, etc. at the same place.
https://docs.aista.com/documentation/magic.lambda.system/
2022-09-25T02:34:02
CC-MAIN-2022-40
1664030334332.96
[]
docs.aista.com
Quickstart Before you call any Aspose REST API, you need to create an Aspose Cloud account and obtain a Client Id and Client Secret. Instructions to perform these steps are given below: Create Aspose Cloud account - Please visit the Aspose Cloud website. You will be redirected to the Aspose Single Sign On authentication service. - If you have a GitHub or Google account, simply Sign Up. Otherwise, click on the Create a new Account button and provide the required information. Congratulations! Your account has been successfully created and you can access the Aspose Cloud Dashboard. Obtain Client Id and Client Secret - Once logged in, go to the Applications view. - Create a new application and make sure you specify a default storage for it (if you have no storage, first create one of your choice). - After creating, your new app will have an auto-generated set of keys called Client Id and Client Secret. - Edit the application to see them; you may need to click on the Lock icon to unhide your Client Secret. Finally, you are ready to call Aspose REST APIs. Aspose Cloud SDKs It is recommended that you use the Aspose Cloud SDKs to call Aspose REST APIs, as the SDKs take care of the low-level details of authenticating, making requests and handling responses, and let you focus on writing code specific to your project. SDKs are provided for different programming languages and mobile platforms. Moreover, the SDKs are free and open source. API Reference You can also call Aspose Cloud APIs directly from your browser by accessing the desired product API Reference UIs. Make an API request As an example, let's call the Aspose.Words REST API to convert a PDF document to Word. First, download the Cloud SDK for your required platform as explained below: Now use the following code to convert a PDF document to Word. Please set the Client Id and Client Secret.
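As a hedged illustration of that step (not the original SDK sample), the snippet below shows how the Client Id and Client Secret are typically exchanged for an access token before calling any endpoint; the https://api.aspose.cloud/connect/token URL and the client_credentials grant are assumptions to verify against the Aspose.Words Cloud docs, and the official SDKs perform this exchange for you automatically:

import requests

CLIENT_ID = "your-client-id"        # from the Aspose Cloud dashboard
CLIENT_SECRET = "your-client-secret"

# Assumed OAuth2 token endpoint; confirm against the Aspose Cloud documentation.
response = requests.post(
    "https://api.aspose.cloud/connect/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    },
)
response.raise_for_status()
access_token = response.json()["access_token"]

# The token is then sent as a Bearer header on subsequent REST calls,
# for example to the Aspose.Words conversion endpoints.
headers = {"Authorization": f"Bearer {access_token}"}
print("Token obtained, ready to call Aspose.Words Cloud endpoints.")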
https://docs.aspose.cloud/total/getting-started/quickstart/
2022-09-25T02:09:07
CC-MAIN-2022-40
1664030334332.96
[]
docs.aspose.cloud
MessageMedia is a business SMS and unified messaging service provider that helps businesses connect with customers. When you integrate MessageMedia with your CRM, it keeps your contacts, messages, and more in sync. MessageMedia documentation: API guide, Authentication Additional reference: Supported MessageMedia APIs Contents - A. Set up a MessageMedia connection - B. Describe the MessageMedia connection - C. Provide required MessageMedia account information - D. Edit advanced MessageMedia settings - E. Test the connection A. Set up a MessageMedia connection Start establishing a connection to MessageMedia. The Create connection panel opens with required and advanced settings. B. Describe the MessageMedia connection C. Provide required MessageMedia account information At this point, you're presented with a series of options for providing MessageMedia authentication. API key (required): Enter the API key of your MessageMedia account. API secret (required): Enter the API secret of your MessageMedia account. Multiple layers of protection, including AES 256 encryption, are in place to keep your API secret safe. When editing this connection, you must re-enter this value each time; it is stored only when the connection is saved and never displayed as text. To generate an API key and secret in MessageMedia: - Navigate to Configuration > API Settings. - Click Create new key to generate a new key and secret. The Create basic API key panel opens. - Enter the API key label name. - Click Create key. - Copy the new API key and secret. Important: If you do not copy your new API token now, you will need to create another API key and API secret the next time you need to retrieve them as plain text; they will not be shown again. D. Edit advanced MessageMedia settings Before continuing, you have the opportunity to provide additional configuration information, if needed, for the MessageMedia connection. E. Test the connection Once you have configured the MessageMedia connection, you can test it.
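If you want to sanity-check the API key and secret before saving the connection, a rough Python sketch like the one below can be used outside integrator.io; the https://api.messagemedia.com/v1/messages endpoint, the basic-auth scheme and the payload shape are assumptions based on MessageMedia's public REST API, so verify them against the API guide linked above, and note that sending a message may incur charges:

import requests
from requests.auth import HTTPBasicAuth

API_KEY = "your-api-key"
API_SECRET = "your-api-secret"

# Assumed endpoint and payload; confirm against the MessageMedia API guide.
payload = {
    "messages": [
        {"content": "Test message from my integration", "destination_number": "+15550123456"}
    ]
}
response = requests.post(
    "https://api.messagemedia.com/v1/messages",
    json=payload,
    auth=HTTPBasicAuth(API_KEY, API_SECRET),
)
print(response.status_code, response.text)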
https://docs.celigo.com/hc/en-us/articles/4409943575579-Set-up-a-connection-to-MessageMedia
2022-09-25T01:19:34
CC-MAIN-2022-40
1664030334332.96
[array(['/hc/article_attachments/4409958366363/CreateConnection.png', None], dtype=object) array(['/hc/article_attachments/4409950300571/General.png', None], dtype=object) array(['/hc/article_attachments/4409950327707/Application.png', None], dtype=object) array(['/hc/article_attachments/4413616607131/MM_Login.png', None], dtype=object) array(['/hc/article_attachments/4413616615323/API.png', None], dtype=object) array(['/hc/article_attachments/4413606362779/Label.png', None], dtype=object) array(['/hc/article_attachments/4414072399643/secret.png', None], dtype=object) array(['/hc/article_attachments/4409478834587/pricefx-adv.png', None], dtype=object) array(['/hc/article_attachments/4409488873627/amazon-redshift-confirm.png', None], dtype=object) ]
docs.celigo.com
Salesforce Marketing Cloud On This Page Salesforce Marketing Cloud is a marketing automation platform that enables you to create and manage marketing relationships.. SFMC uses the concept of tenants and subdomains to host the data for your projects. Tenant: Depending on the tenant type, a tenant can represent the top-level enterprise account, core account, top-level agency account, or your client account. Subdomain: A subdomain is created within your main domain based on the tenant type. This subdomain is then added to the Application Programming Interface (API) which creates the tenant-specific endpoints. Hevo uses these endpoints to query the data. To enable Hevo to access data from your Salesforce Marketing Cloud environment, you need to provide the Client ID and the Client Secret to Hevo. Prerequisites An active Salesforce Marketing Cloud instance is running. The Client ID and the Client Secret are created in Salesforce Marketing Cloud. Configuring Salesforce Marketing Cloud as a Source Perform the following steps to configure Salesforce Marketing Cloud as a Source in your Pipeline: Click PIPELINES in the Asset Palette. Click + CREATE in the Pipelines List View. In the Select Source Type page, select Salesforce Marketing Cloud. In the Configure your Salesforce Marketing Cloud Source page, specify the following: Pipeline Name: A unique name for your Pipeline. Client ID: The API token created in Salesforce Marketing Cloud to enable Hevo to read data from your account. Client Secret: The API Secret Key for your API token. Sub Domain: A domain name within your main domain based on the tenant type. It is a 28 character string that begins with mc. Read Locating the Subdomain Historical Sync Duration: The duration for which the past data must be ingested. Click TEST & CONTINUE. Proceed to configuring the data ingestion and setting up the Destination. Creating the Client ID and Client Secret Salesforce Marketing Cloud uses the concept of a package to create API integrations, install custom apps, or add custom components. You need to create an API Integration package to generate the Client ID and the Client Secret to allow Hevo to read your Salesforce Marketing Cloud data. To do this: Log in to your Marketing Cloud Instance. In the top right, click the drop-down next to your username, and then click Setup. In the left navigation pane, under Platform Tools, click the Apps drop-down and then click Installed Packages. In the Installed Packages page, click New. In the New Package Details window, specify the following: Name: A unique name for the package. Description: A brief description of the package. Click Save. In the Components section, click Add Component. In the Add Component window, select API integration as the component type, and then, click Next. Select Server-to-Server as the integration type and click Next. Note: A Server-to-Server integration allows server interaction without user involvement. Select the Read check box to grant read-onlypermission for the following objects: Email, OTT, Push, SMS, Social, Web, Documents and Images, Saved Content, Journeys, and List and Subscribers. Click Save. Locate your Client Id and Client Secret under Components, API Integration. Locating the Subdomain Once you create the Client ID and the Client Secret, locate the subdomain below the Client Id and the Client Secret under Authentication Base URI. Data Replication Note: The custom frequency must be set in hours, as an integer value. For example, 1, 2, 3 but not 1.5 or 1.75. 
Historical Data: Once you create the Pipeline, all data associated with your Salesforce Marketing Cloud account is ingested by Hevo and loaded into the Destination. Incremental Data: Once the historical load is completed, each subsequent run of the Pipeline: Fetches the entire data for the campaign, campaign_asset, journey, activity, outcomeobjects. Fetches incremental data for all the other objects. Source Consideration The earliest date from which the data is fetched is 1st Jan, 2013. By default, all available historical data is synchronized. Schema and Primary Keys Hevo uses the following schema to upload the records in the Destination: Data Model Hevo uses the following data model to ingest data from your Salesforce Marketing Cloud account: Limitations - Hevo currently supports limited Salesforce Marketing Cloud objects. To view the total list of objects available in Salesforce Marketing Cloud, see this. Revision History Refer to the following table for the list of key updates made to this page:
https://docs.hevodata.com/sources/mkt-analytics/sfmc/
2022-09-25T02:58:30
CC-MAIN-2022-40
1664030334332.96
[]
docs.hevodata.com
Usage Notes The Message Return Code is a 2-byte field that is the CLI return code for any error that occurred while constructing the message. If no error occurred, binary zeroes are returned. CLI returns the following error codes: - 0 – Message returned successfully - 1 – Truncated error message
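As a purely illustrative aside (this helper is not part of CLIv2, and the unsigned little-endian reading of the 2-byte field is an assumption — consult the CLIv2 manual for the actual byte layout), client-side code inspecting this field might look like:

import struct

MESSAGE_RETURN_CODES = {
    0: "Message returned successfully",
    1: "Truncated error message",
}

def describe_message_return_code(raw_field: bytes) -> str:
    # raw_field is the 2-byte Message Return Code; "<H" assumes an
    # unsigned 16-bit little-endian value (illustrative only).
    (code,) = struct.unpack("<H", raw_field)
    return MESSAGE_RETURN_CODES.get(code, f"Unexpected return code {code}")

print(describe_message_return_code(b"\x00\x00"))  # Message returned successfully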
https://docs.teradata.com/r/Teradata-Call-Level-Interface-Version-2-Reference-for-Workstation-Attached-Systems/October-2021/DBCAREA/Individual-Field-Descriptions-Alphabetical-Listing/Message-Return-Code
2022-09-25T01:35:29
CC-MAIN-2022-40
1664030334332.96
[]
docs.teradata.com
1. Linux installation from binaries¶ The instructions for installing eProsima Fast DDS in a Linux environment from binaries are provided in this page. 1.1. Install¶ The latest release of eProsima Fast DDS for Linux is available at the eProsima website Downloads tab. Once downloaded, extract the contents in your preferred directory. Then, to install eProsima Fast DDS and all its dependencies in the system, execute the install.sh script with administrative privileges: cd <extraction_directory> sudo ./install.sh Note By default, eProsima Fast DDS does not compile tests. To activate them, please refer to the Linux installation from sources page. 1.1.1. Contents¶ The src folder contains the following packages: foonathan_memory_vendor, an STL compatible C++ memory allocator library. fastcdr, a C++ library for data serialization according to the CDR standard (Section 10.2.1.2 OMG CDR). fastrtps, the core library of eProsima Fast DDS library. fastddsgen, a Java application that generates source code using the data types defined in an IDL file. In case any of these components is unwanted, it can be simply renamed or removed from the src directory. 1.1.2. Run an application¶ When running an instance of an application using eProsima Fast DDS, it must be linked with the library where the packages have been installed, /usr/local/lib/. There are two possibilities: Prepare the environment locally by typing in the console used for running the eProsima Fast DDS instance the command: export LD_LIBRARY_PATH=/usr/local/lib/ Add it permanently to the PATHby executing: echo 'export LD_LIBRARY_PATH=/usr/local/lib/' >> ~/.bashrc 1 1.2. Uninstall¶ To uninstall all installed components, execute the uninstall.sh script (with administrative privileges): cd <extraction_directory> sudo ./uninstall.sh Warning If any of the other components were already installed in some other way in the system, they will be removed as well. To avoid it, edit the script before executing it.
https://fast-dds.docs.eprosima.com/en/v2.7.1/installation/binaries/binaries_linux.html
2022-09-25T02:13:49
CC-MAIN-2022-40
1664030334332.96
[]
fast-dds.docs.eprosima.com
The Parameters Tab in Setup contains the items you will use for inventory, payments, custom letters, guest categories and other settings. Go to SETUP | PARAMETERS to access and manage this information. The default view of the Parameters Tab is Sources which will appear as a list with all of the Sources of your Guest Bookings. See Sources In the Parameters Tab you can:
https://docs.bookingcenter.com/display/MYPMS/Parameters+Tab
2022-09-25T01:26:33
CC-MAIN-2022-40
1664030334332.96
[]
docs.bookingcenter.com
Workflow History Feb 22, 2022 by healy Viewing: HIST-298 : Global Pandemics in History Last approved: Tue, 22 Feb 2022 19:03:33 GMT Last edit: Tue, 19 Oct 2021 17:51:45 GMT Justification for this inactivation request Term Changes Take Effect SP22 Academic Level Undergraduate College College of Arts and Sciences Department History Subject History Course Number 298 Is this course cross-listed with any other course? Title Global Pandemics in History Short Title Global Pandemics in History Description. Prerequisites: An adjunct faculty member or instructor Please explain: Course Type Course Type - Major/Minor Should the course satisfy a major or minor requirement or elective? Yes Satisfies an elective for the major or minor List the major(s) and/or minor(s) for which the course satisfies an elective or requirement History major and minor Health Studies Please list the sub-group, area or category it satisfies. (Example: Category II in English, or Global Governance in IA.) History - general elective Health Studies - elective in Global/Cultural Approaches to Wellness/Disease: Global Perspectives Historical Perspectives Please see attached syllabus. Course starts in ancient Greece, moves to Black Death in Europe, transatlantic exchanges and disease, revolutionary France, smallpox in Africa, pandemic of 1918, HIV/AIDS in global context Please see attached syllabus. Weeks 13-14 of the semester are on HIV/AIDS and COVID pandemics. Weeks 1-12 of the semester cover material pre-1945. Explanation Departmental Start-up cost 0 Capital Equipment 0 Ongoing Expenses 0 Library Resources (books, periodicals, interlibrary loan, etc.): If currently held library resources are inadequate, what increase is needed? Adjunct instructor may want to order copies of assigned books for Watzek so they can be placed on reserve. I believe current resources are adequate. Computing and/or Media Resources 0 Please explain in detail the reasons for adding or modifying this course. In your response, be sure to respond to the following: How does the addition of this course or the change to this course modify or advance the goals/objectives of the overall department/program/major/minor? This one-time course is an opportunity for majors and minors to take a course on the history of medicine, not a topic our permanent faculty are currently able to offer. How does the addition of this course or the change to this course relate to other courses outside of the department/program/major/minor? In consultation with Jerusha Detweiler-Bedell, I have proposed that this course could count for the Global/Cultural elective category for the Health Studies minor. How does the addition of this course or the change to this course relate to development in the discipline and to the course offerings and goals/objectives of comparable departments/programs/majors/minors in our peer and aspirational institutions? May appeal to students interested in health fields. Please explain what other curricular changes will occur in the department/program/major/minor in order to permit this course to be taught at the proposed frequency? none. The History department has an opportunity to hire an adjunct to teach one course for us in Spring 2022. (Due to a tenured faculty member getting an external grant that includes one course buy out). We will not be able to offer this course regularly.
Please explain what curricular changes will occur outside of the department/program/major/minor to permit this course to be taught at the proposed frequency? What impact may it have on other departments/programs/majors/minors or General Education? What departments/programs are involved? See above requests to include as Gen Ed Global Perspectives and Historical Perspectives courses. And as Global/Cultural elective for Health Studies if that program approves. Are there any other changes you wish to make to this course or information the Committee should know (please explain)? The syllabus attached is a sample of the course that our anticipated hire is currently teaching at another school. If she is hired, I will help her to tweak the syllabus to correspond to the LC spring 2022 schedule. Attach a syllabus. (A draft syllabus is acceptable if you are currently developing the course.) HS 3590 Global Pandemics.docx: 6
https://docs.lclark.edu/specialtopicsadmin/6/
2022-09-25T02:10:52
CC-MAIN-2022-40
1664030334332.96
[]
docs.lclark.edu
Pega Robotic Automation release and build notes Release notes contain important information that you should know before you install a release of Pega Robotic Automation. Build notes describe the changes that are included in each build that is created for the various Pega Robotic Automation releases. A release is a major update to Pega Robotic Automation, such as 19.1. A build can include bug fixes and enhancements, such as 19.1.95, or it could include a new component, like the addition of the Pega Browser Extension (PBE) in build 19.1.115. Before you install one of these builds, familiarize yourself with the changes, new features, and resolved issues that are listed in the following table. To download a build, see Download Pega RPA software. For information on system requirements, review the installation instructions. Build notes for the following products are available: Previous topic What's new in Pega Robotic Automation v19.1 Next topic Pega Robotic Automation release notes
https://docs.pega.com/pega-rpa/pega-robotic-automation-release-and-build-notes
2022-09-25T02:31:09
CC-MAIN-2022-40
1664030334332.96
[]
docs.pega.com
Using Python Client with Hazelcast¶ This chapter provides information on how you can use Hazelcast data structures in the Python client, after giving some basic information including an overview to the client API, operation modes of the client and how it handles the failures. Python Client API Overview¶ Hazelcast Python client is designed to be fully asynchronous. See the Basic Usage section to learn more about the asynchronous nature of the Python Client. If you are ready to go, let’s start to use Hazelcast Python client. The first step is configuration. See the Configuration Overview section for details. The following is an example on how to configure and initialize the HazelcastClient to connect to the cluster: client = hazelcast.HazelcastClient( cluster_name="dev", cluster_members=[ "198.51.100.2" ] ) This client object is your gateway to access all the Hazelcast distributed objects. Let’s create a map and populate it with some data, as shown below. # Get a Map called 'my-distributed-map' customer_map = client.get_map("customers").blocking() # Write and read some data() Python Client Operation Modes¶ The client has two operation modes because of the distributed nature of the data and cluster: smart and unisocket. Refer to the Setting Smart Routing section to see how to configure the client for different operation modes. Smart Client¶ In the smart mode, the clients connect to all the cluster members. Since each data partition uses the well known and consistent hashing algorithm, each client can send an operation to the relevant cluster member, which increases the overall throughput and efficiency. Smart mode is the default mode. Unisocket Client¶ member addresses. This single member will behave as a gateway to the other members. For any operation requested from the client, it will redirect the request to the relevant member and return the response back to the client returned from this member. Handling Failures¶ There are two main failure cases you should be aware of. Below sections explain these and the configurations you can perform to achieve proper behavior. Handling Client Connection Failure¶ While the client is trying to connect initially to one of the members in the cluster_members, all the members might not be available. Instead of giving up, throwing an error and stopping the client, the client retries to connect as configured. This behavior is described in the Configuring Client Connection Retry section. The client executes each operation through the already established connection to the cluster. If this connection(s) disconnects or drops, the client will try to reconnect as configured. Handling Retry-able Operation Failure¶ tuned by passing the invocation_timeout argument to the client. The client will retry an operation within this given period, of course, if it is a read-only operation or you enabled the redo_operation as stated in the above.. I f you enabled redo_operation, it means this operation may run again, which may cause two instances of the same object in the queue. When invocation is being retried, the client may wait some time before it retries again. This duration can be configured using the invocation_retry_pause argument. The default retry pause time is 1 second. Using Distributed Data Structures¶ Most of the distributed data structures are supported by the Python client. In this chapter, you will learn how to use these distributed data structures. Using Map¶ Hazelcast Map is a distributed dictionary. 
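Before continuing with Map, here is a minimal sketch of the invocation retry options described in the failure-handling section above (the values are arbitrary examples rather than recommendations; the argument names are the ones named in that section):

import hazelcast

# Sketch only: enable retrying of non-read-only operations and tune the
# invocation timeout and retry pause discussed above.
client = hazelcast.HazelcastClient(
    cluster_name="dev",
    redo_operation=True,         # also retry operations that are not read-only
    invocation_timeout=120,      # seconds to keep retrying a failed invocation
    invocation_retry_pause=0.5,  # seconds to wait between retries (default is 1)
)

Returning to Map: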
Through the Python client, you can perform operations like reading and writing from/to a Hazelcast Map with the well known get and put methods. For details, see the Map section in the Hazelcast Reference Manual. A Map usage example is shown below. # Get a Map called 'my-distributed-map' my_map = client.get_map("my-distributed-map").blocking() # Run Put and Get operations my_map.put("key", "value") my_map.get("key") # Run concurrent Map operations (optimistic updates) my_map.put_if_absent("somekey", "somevalue") my_map.replace_if_same("key", "value", "newvalue") Using MultiMap¶ Hazelcast MultiMap is a distributed and specialized map where you can store multiple values under a single key. For details, see the MultiMap section in the Hazelcast Reference Manual. A MultiMap usage example is shown below. # Get a MultiMap called 'my-distributed-multimap' multi_map = client.get_multi_map("my-distributed-multimap").blocking() # Put values in the map against the same key multi_map.put("my-key", "value1") multi_map.put("my-key", "value2") multi_map.put("my-key", "value3") # Read and print out all the values for associated with key called 'my-key' # Outputs '['value2', 'value1', 'value3']' values = multi_map.get("my-key") print(values) # Remove specific key/value pair multi_map.remove("my-key", "value2") Using Replicated Map¶ Hazelcast Replicated Map is a distributed key-value data structure where the data is replicated to all members in the cluster. It provides full replication of entries to all members for high speed access. For details, see the Replicated Map section in the Hazelcast Reference Manual. A Replicated Map usage example is shown below. # Get a ReplicatedMap called 'my-replicated-map' replicated_map = client.get_replicated_map("my-replicated-map").blocking() # Put and get a value from the Replicated Map # (key/value is' Using Queue¶ Hazelcast Queue is a distributed queue which enables all cluster members to interact with it. For details, see the Queue section in the Hazelcast Reference Manual. A Queue usage example is shown below. # Get a Queue called 'my-distributed-queue' queue = client.get_queue("my-distributed-queue").blocking() # Offer a string into the Queue queue.offer("item") # Poll the Queue and return the string item = queue.poll() # Timed-restricted operations queue.offer("another-item", 0.5) # waits up to 0.5 seconds another_item = queue.poll(5) # waits up to 5 seconds # Indefinitely blocking Operations queue.put("yet-another-item") print(queue.take()) # Outputs 'yet-another-item' Using Set¶ Hazelcast Set is a distributed set which does not allow duplicate elements. For details, see the Set section in the Hazelcast Reference Manual. A Set usage example is shown below. # Get a Set called 'my-distributed-set') Using List¶ Hazelcast List is a distributed list which allows duplicate elements and preserves the order of elements. For details, see the List section in the Hazelcast Reference Manual. A List usage example is shown below. # Get a List called 'my-distributed-list' my_list = client.get_list("my-distributed-list").blocking() # Add element to the list my_list.add("item1") my_list.add("item2") # Remove the first element print("Removed:", my_list.remove_at(0)) # Outputs 'Removed: item1' # There is only one element left print("Current size is", my_list.size()) # Outputs 'Current size is 1' # Clear the list my_list.clear() Using Ringbuffer¶' Using ReliableTopic¶ Hazelcast ReliableTopic is a distributed topic implementation backed up by the Ringbuffer data structure. 
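Because Reliable Topic builds on the Ringbuffer structure, a minimal Ringbuffer sketch is useful for context (the get_ringbuffer, add, head_sequence and read_one calls reflect my understanding of the Python client API — treat exact signatures as assumptions and check the API reference):

# Get a Ringbuffer called 'my-ringbuffer' (sketch; verify signatures against the API docs)
ringbuffer = client.get_ringbuffer("my-ringbuffer").blocking()

# Append a couple of items
ringbuffer.add("item-1")
ringbuffer.add("item-2")

# Read back the oldest item that is still available
oldest = ringbuffer.head_sequence()
print(ringbuffer.read_one(oldest))

Reliable Topic itself is shown next.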
For details, see the Reliable Topic section in the Hazelcast Reference Manual. A Reliable Topic usage example is shown below. # Get a Topic called "my-distributed-topic" topic = client.get_reliable_topic("my-distributed-topic").blocking() # Add a Listener to the Topic topic.add_listener(lambda message: print(message)) # Publish a message to the Topic topic.publish("Hello to distributed world") Configuring Reliable Topic¶ You may configure Reliable Topics using the reliable_topics argument: client = hazelcast.HazelcastClient( reliable_topics={ "my-topic": { "overload_policy": TopicOverloadPolicy.DISCARD_OLDEST, "read_batch_size": 20, } } ) The following are the descriptions of configuration elements and attributes: keys of the dictionary: Name of the Reliable Topic. overload_policy: Policy to handle an overloaded topic. By default, set to BLOCK. read_batch_size: Number of messages the reliable topic will try to read in batch. It will get at least one, but if there are more available, then it will try to get more to increase throughput. By default, set to 10. Using Topic¶ Hazelcast Topic is a distribution mechanism for publishing messages that are delivered to multiple subscribers. For details, see the Topic section in the Hazelcast Reference Manual. A Topic usage example is shown below. # Function to be called when a message is published def print_on_message(topic_message): print("Got message:", topic_message.message) # Get a Topic called "my-distributed-topic" topic = client.get_topic("my-distributed-topic").blocking() # Add a Listener to the Topic topic.add_listener(print_on_message) # Publish a message to the Topic topic.publish("Hello to distributed world") # Outputs 'Got message: Hello to distributed world' Using Transactions¶ Reference Manual. # Create a Transaction object and begin the transaction transaction = client.new_transaction(timeout=10) transaction.begin() # Get transactional distributed data structures txn_map = transaction.get_map("transactional-map") txn_queue = transaction.get_queue("transactional-queue") txn_set = transaction.get_set("transactional-set") try: obj = txn_queue.poll() # Process obj txn_map.put("1", "value1") txn. One can also use context managers to simplify the usage of the transactional data structures. The example above can be simplified as below. # Create a Transaction object and begin the transaction with client.new_transaction(timeout=10) as transaction: # Get transactional distributed data structures txn_map = transaction.get_map("transactional-map") txn_queue = transaction.get_queue("transactional-queue") txn_set = transaction.get_set("transactional-set") obj = txn_queue.poll() # Process obj txn_map.put("1", "value1") txn_set.add("value") # Do other things # If everything goes well, the transaction will be # committed, if not, it will be rolled back automatically. Using PN Counter¶ Hazelcast PNCounter (Positive-Negative Counter) is a CRDT positive-negative counter implementation. It is an eventually consistent counter given there is no member failure. For details, see the PN Counter section in the Hazelcast Reference Manual. A PN Counter usage example is shown below. 
# Get a PN Counter called 'pn-counter' pn_counter = client.get_pn_counter("pn-counter").blocking() # Counter is initialized with 0 print(pn_counter.get()) # 0 # xx_and_get() variants does the operation # and returns the final value print(pn_counter.add_and_get(5)) # 5 print(pn_counter.decrement_and_get()) # 4 # get_and_xx() variants returns the current # value and then does the operation print(pn_counter.get_and_increment()) # 4 print(pn_counter.get()) # 5 Using Flake ID Generator¶ Hazelcast FlakeIdGenerator is used to generate cluster-wide unique identifiers. Generated identifiers are int primitive values and are k-ordered (roughly ordered). IDs are in the range from 0 to 2^63-1 (maximum signed 64-bit int value). For details, see the FlakeIdGenerator section in the Hazelcast Reference Manual. # Get a Flake ID Generator called 'flake-id-generator' generator = client.get_flake_id_generator("flake-id-generator").blocking() # Generate a some unique identifier print("ID:", generator.new_id()) Configuring Flake ID Generator¶ You may configure Flake ID Generators using the flake_id_generators argument: client = hazelcast.HazelcastClient( flake_id_generators={ "flake-id-generator": { "prefetch_count": 123, "prefetch_validity": 150 } } ) The following are the descriptions of configuration elements and attributes: keys of the dictionary: Name of the Flake ID Generator. prefetch_count: Count of IDs which are pre-fetched on the background when one call to generator.newId()is made. Its value must be in the range 1- 100,000. Its default value is 100. prefetch_validity: Specifies for how long the pre-fetched IDs can be used. After this time elapses, a new batch of IDs are fetched. Time unit is seconds. Its default value is 600seconds ( 10minutes). The IDs contain a timestamp component, which ensures a rough global ordering of them. If an ID is assigned to an object that was created later, it will be out of order. If ordering is not important, set this value to 0. CP Subsystem¶ Hazelcast 4.0 introduces CP concurrency primitives with respect to the CAP principle, i.e., they always maintain linearizability and prefer consistency to availability during network partitions and client or server failures. All data structures within CP Subsystem are available through client.cp_subsystem component of the client. Before using Atomic Long, Lock, and Semaphore, CP Subsystem has to be enabled on cluster-side. Refer to CP Subsystem documentation for more information. Data structures in CP Subsystem run in CP groups. Each CP group elects its own Raft leader and runs the Raft consensus algorithm independently. The CP data structures differ from the other Hazelcast data structures in two aspects. First, an internal commit is performed on the METADATA CP group every time you fetch a proxy from this interface. Hence, callers should cache returned proxy objects. Second, if you call distributed_object.destroy() on a CP data structure proxy, that data structure is terminated on the underlying CP group and cannot be reinitialized until the CP group is force-destroyed. For this reason, please make sure that you are completely done with a CP data structure before destroying its proxy. Using AtomicLong¶ Hazelcast AtomicLong is the distributed implementation of atomic 64-bit integer counter. It offers various atomic operations such as get, set, get_and_set, compare_and_set and increment_and_get. This data structure is a part of CP Subsystem. An Atomic Long usage example is shown below. 
# Get an AtomicLong called "my-atomic-long" atomic_long = client.cp_subsystem.get_atomic_long("my-atomic-long").blocking() # Get current value value = atomic_long.get() print("Value:", value) # Prints: # Value: 0 # Increment by 42 atomic_long.add_and_get(42) # Set to 0 atomically if the current value is 42 result = atomic_long.compare_and_set(42, 0) print ('CAS operation result:', result) # Prints: # CAS operation result: True AtomicLong implementation. Using Lock¶ Hazelcast FencedLock is the distributed and reentrant implementation of a linearizable lock. It is CP with respect to the CAP principle. It works on top of the Raft consensus algorithm. It offers linearizability during crash-stop failures and network partitions. If a network partition occurs, it remains available on at most one side of the partition. A basic Lock usage example is shown below. # Get a FencedLock called "my-lock" lock = client.cp_subsystem.get_lock("my-lock").blocking() # Acquire the lock and get the fencing token fence = lock.lock() try: # Your guarded code goes here pass finally: # Make sure to release the lock lock.unlock() FencedLock works on top of CP sessions. It keeps a CP session open while the lock is acquired. Please refer to CP Session documentation for more information. By default, FencedLock is reentrant. Once a caller acquires the lock, it can acquire the lock reentrantly as many times as it wants in a linearizable manner. You can configure the reentrancy behavior on the member side.Error, requests sent by 2 clients can be re-ordered in the network and hit the external resource in reverse order. There is a simple solution for this problem. Lock holders are ordered by a monotonic fencing token, which increments each time the lock is assigned to a new owner. This fencing token can be passed to external services or resources to ensure sequential execution. Using Semaphore¶ Hazelcast Semaphore is the distributed implementation of a linearizable and distributed semaphore. It offers multiple operations for acquiring the permits. This data structure is a part of CP Subsystem. Sem. A basic Semaphore usage example is shown below. # Get a Semaphore called "my-semaphore" semaphore = client.cp_subsystem.get_semaphore("my-semaphore").blocking() # Try to initialize the semaphore # (does nothing if the semaphore is already initialized) semaphore.init(3) # Acquire 3 permits out of 3 semaphore.acquire(3) # Release 2 permits semaphore.release(2) # Check available permits available = semaphore.available_permits() print("Available:", available) # Prints: # Available: 2. As an alternative, potentially safer approach to the multiple-permit acquire, you can use the try_acquire() method of Semaphore. It tries to acquire the permits in optimistic manner and immediately returns with a bool operation result. It also accepts an optional timeout argument which specifies the timeout in seconds to acquire the permits before giving up. # Try to acquire 2 permits success = semaphore.try_acquire(2) # Check for the result of the acquire request if success: try: pass # Your guarded code goes here finally: # Make sure to release the permits semaphore.release(2) Semaphore data structure has two variations: The default implementation Hazelcast client cannot release permits before acquiring them first. In other words, a client can release only the permits it has acquired earlier. The second implementation is sessionless. This one does not perform auto-cleanup of acquired permits on failures. 
Acquired permits are not bound to callers jdk-compatibleserver-side setting. Refer to Semaphore configuration documentation for more details. Using CountDownLatch¶ Hazelcast CountDownLatch is the distributed implementation of a linearizable and distributed countdown latch. This data structure is a cluster-wide synchronization aid that allows one or more callers to wait until a set of operations being performed in other callers completes. This data structure is a part of CP Subsystem. A basic CountDownLatch usage example is shown below. # Get a CountDownLatch called "my-latch" latch = client.cp_subsystem.get_count_down_latch("my-latch").blocking() # Try to initialize the latch # (does nothing if the count is not zero) initialized = latch.try_set_count(1) print("Initialized:", initialized) # Check count count = latch.get_count() print("Count:", count) # Prints: # Count: 1 # Bring the count down to zero after 10ms def run(): time.sleep(0.01) latch.count_down() t = Thread(target=run) t.start() # Wait up to 1 second for the count to become zero up count_is_zero = latch.await(1) print("Count is zero:", count_is_zero) Note CountDownLatch count can be reset with try_set_count() after a countdown has finished, but not during an active count. Using AtomicReference¶ Hazelcast AtomicReference is the distributed implementation of a linearizable object reference. It provides a set of atomic operations allowing to modify the value behind the reference. This data structure is a part of CP Subsystem. A basic AtomicReference usage example is shown below. # Get a AtomicReference called "my-ref" my_ref = client.cp_subsystem.get_atomic_reference("my-ref").blocking() # Set the value atomically my_ref.set(42) # Read the value value = my_ref.get() print("Value:", value) # Prints: # Value: 42 # Try to replace the value with "value" # with a compare-and-set atomic operation result = my_ref.compare_and_set(42, "value") print("CAS result:", result) # Prints: # CAS result: True The following are some considerations you need to know when you use AtomicReference: AtomicReference works based on the byte-content and not on the object-reference. If you use the compare_and_set()method, do not change to the original value because its serialized content will then be different. All methods returning an object return a private copy. You can modify the private copy, but the rest of the world is shielded from your changes. If you want these changes to be visible to the rest of the world, you need to write the change back to the AtomicReference; but be careful about introducing a data-race. The in-memory format of an AtomicReference()method. With the apply()method, the whole object does not need to be sent over the line; only the information that is relevant is sent. Atomic. Distributed Events¶ This chapter explains when various events are fired and describes how you can add event listeners on a Hazelcast Python client. These events can be categorized as cluster and distributed data structure events. Cluster Events¶, connected, disconnected, shutting down and shutdown. Listening for Member Events¶. def added_listener(member): print("Member Added: The address is", member.address) def removed_listener(member): print("Member Removed. The address is", member.address) client.cluster_service.add_listener( member_added=added_listener, member_removed=removed_listener, fire_for_existing=True ) Also, you can set the fire_for_existing flag to True to receive the events for list of available members when the listener is registered. 
Membership listeners can also be added during the client startup using the membership_listeners argument. client = hazelcast.HazelcastClient( membership_listeners=[ (added_listener, removed_listener) ] ) Listening for Distributed Object Events¶ The events for distributed objects are invoked when they are created and destroyed in the cluster. When an event is received, listener function will be called. The parameter passed into the listener function will be of the type DistributedObjectEvent. A DistributedObjectEvent contains the following fields: name: Name of the distributed object. service_name: Service name of the distributed object. event_type: Type of the invoked event. It is either CREATEDor DESTROYED. The following is example of adding a distributed object listener to a client. def distributed_object_listener(event): print("Distributed object event >>>", event.name, event.service_name, event.event_type) client.add_distributed_object_listener( listener_func=distributed_object_listener ).result() map_name = "test_map" # This call causes a CREATED event test_map = client.get_map(map_name).blocking() # This causes no event because map was already created test_map2 = client.get_map(map_name).blocking() # This causes a DESTROYED event test_map.destroy() Output Distributed object event >>> test_map hz:impl:mapService CREATED Distributed object event >>> test_map hz:impl:mapService DESTROYED Listening for Lifecycle Events¶ The lifecycle listener is notified for the following events: STARTING: The client is starting. STARTED: The client has started. CONNECTED: The client connected to a member. SHUTTING_DOWN: The client is shutting down. DISCONNECTED: The client disconnected from a member. SHUTDOWN: The client has shutdown. The following is an example of the lifecycle listener that is added to client during startup and its output. def lifecycle_listener(state): print("Lifecycle Event >>>", state) client = hazelcast.HazelcastClient( lifecycle_listeners=[ lifecycle_listener ] ) Output: INFO:hazelcast.lifecycle:HazelcastClient 4.0.0 is STARTING Lifecycle Event >>> STARTING INFO:hazelcast.lifecycle:HazelcastClient 4.0.0 is STARTED Lifecycle Event >>> STARTED INFO:hazelcast.connection:Trying to connect to Address(host=127.0.0.1, port=5701) INFO:hazelcast.lifecycle:HazelcastClient 4.0.0 is CONNECTED Lifecycle Event >>> CONNECTED INFO:hazelcast.connection:Authenticated with server Address(host=172.17.0.2, port=5701):7682c357-3bec-4841-b330-6f9ae0c08253, server version: 4.0, local address: Address(host=127.0.0.1, port=56732) Lifecycle Event >>> Lifecycle Event >>> DISCONNECTED INFO:hazelcast.lifecycle:HazelcastClient 4.0.0 is SHUTDOWN Lifecycle Event >>> SHUTDOWN You can also add lifecycle listeners after client initialization using the LifecycleService. client.lifecycle_service.add_listener(lifecycle_listener) Distributed Data Structure Events¶ You can add event listeners to the distributed data structures. Listening for Map Events¶: %s-%s" % (event.key, event.value)) customer_map.add_entry_listener(include_value=True, added_func=added) customer_map.put("4", "Jane Doe") A map-wide event is fired as a result of a map-wide operation. For example, map.clear() or map.evict_all(). An EntryEvent object is passed to the listener function. See the following example. 
def cleared(event): print("Map Cleared:", event.number_of_affected_entries) customer_map.add_entry_listener(include_value=True, clear_all_func=cleared) customer_map.clear() Distributed Computing¶ This chapter explains how you can use Hazelcast entry processor implementation in the Python client. Using EntryProcessor¶¶_string() def write_data(self, object_data_output): object_data_output.write_string.Entry' SQL¶ You can use SQL to query data in maps, Kafka topics, or a variety of file systems. Results can be sent directly to the client or inserted into maps or Kafka topics. For streaming queries, you can submit them to a cluster as jobs to run in the background. Warning The SQL feature is stabilized in 5.0 versions of the client and the Hazelcast platform. In order for the client and the server to be fully compatible with each other, their major versions must be the same. Note In order to use SQL service from the Python client, Jet engine must be enabled on the members and the hazelcast-sql module or Cannot execute SQL query because "hazelcast-sql" module is not in the classpath. while executing queries, enable the Jet engine following one of the instructions pointed out in the error message, or add the hazelcast-sql module to your member’s classpath. Supported Queries¶ Ad-Hoc Queries Query large datasets either in one or multiple systems and/or run aggregations on them to get deeper insights. See the Get Started with SQL Over Maps tutorial for reference. Streaming Queries Also known as continuous queries, these keep an open connection to a streaming data source and run a continuous query to get near real-time updates. See the Get Started with SQL Over Kafka tutorial for reference. Federated Queries Query different datasets such as Kafka topics and Hazelcast maps, using a single query. Normally, querying in SQL is database or dataset-specific. However, with mappings, you can pull information from different sources to present a more complete picture. See the Get Started with SQL Over Files tutorial for reference. Mappings¶ To connect to data sources and query them as if they were tables, the SQL service uses a concept called mappings. Mappings store essential metadata about the source’s data model, data access patterns, and serialization formats so that the SQL service can connect to the data source and query it. You can create mappings for the following data sources by using the CREATE MAPPING statement: Querying Map¶ With SQL you can query the keys and values of maps in your cluster. Assume that we have a map called employees that contains values of type Employee: class Employee(Portable): def __init__(self, name=None, age=None): self.name = name self.age = age def write_portable(self, writer): writer.write_string("name", self.name) writer.write_int("age", self.age) def read_portable(self, reader): self.name = reader.read_string("name") self.age = reader.read_int("age") def get_factory_id(self): return 1 def get_class_id(self): return 2 employees = client.get_map("employees").blocking() employees.set(1, Employee("John Doe", 33)) employees.set(2, Employee("Jane Doe", 29)) Before starting to query data, we must create a mapping for the employees map. The details of CREATE MAPPING statement is discussed in the reference manual. For the Employee class above, the mapping statement is shown below. It is enough to create the mapping once per map. 
client.sql.execute( """ CREATE MAPPING employees ( __key INT, name VARCHAR, age INT ) TYPE IMap OPTIONS ( 'keyFormat' = 'int', 'valueFormat' = 'portable', 'valuePortableFactoryId' = '1', 'valuePortableClassId' = '2' ) """ ).result() The following code prints names of the employees whose age is less than 30: result = client.sql.execute("SELECT name FROM employees WHERE age < 30").result() for row in result: name = row["name"] print(name) The following subsections describe how you can access Hazelcast maps and perform queries on them in more details. Case Sensitivity Mapping names and field names are case-sensitive. For example, you can access an employees map as employees but not as Employees. Key and Value Objects A map entry consists of a key and a value. These are accessible through the __key and this aliases. The following query returns the keys and values of all entries in the map: SELECT __key, this FROM employees “SELECT *” Queries You may use the SELECT * FROM <table> syntax to get all the table fields. The __key and this fields are returned by the SELECT * queries if they do not have nested fields. For the employees map, the following query does not return the this field, because the value has nested fields name and age: -- Returns __key, name, age SELECT * FROM employee Key and Value Fields You may also access the nested fields of a key or a value. The list of exposed fields depends on the serialization format, as described Querying Maps with SQL section. Using Query Parameters You can use query parameters to build safer and faster SQL queries. A query parameter is a piece of information that you supply to a query before you run it. Parameters can be used by themselves or as part of a larger expression to form a criterion in the query. age_to_compare = 30 client.sql.execute("SELECT * FROM employees WHERE age > ?", age_to_compare).result() Instead of putting data straight into an SQL statement, you use the ? placeholder in your client code to indicate that you will replace that placeholder with a parameter. Query parameters have the following benefits: Faster execution of similar queries. If you submit more than one query where only a value changes, the SQL service uses the cached query plan from the first query rather than optimizing each query again. Protection against SQL injection. If you use query parameters, you don’t need to escape special characters in user-provided strings. Querying JSON Objects¶ In Hazelcast, the SQL service supports the following ways of working with JSON data: json: Maps JSON data to a single column of JSONtype where you can use JsonPath syntax to query and filter it, including nested levels. json-flat: Maps JSON top-level fields to columns with non-JSON types where you can query only top-level keys. json To query json objects, you should create an explicit mapping using the CREATE MAPPING statement, similar to the example above. 
For example, this code snippet creates a mapping to a new map called json_employees, which stores the JSON values as HazelcastJsonValue objects and queries it using nested fields, which is not possible with the json-flat type: client.sql.execute( """ CREATE OR REPLACE MAPPING json_employees TYPE IMap OPTIONS ( 'keyFormat' = 'int', 'valueFormat' = 'json' ) """ ).result() json_employees = client.get_map("json_employees").blocking() json_employees.set( 1, HazelcastJsonValue( { "personal": {"name": "John Doe"}, "job": {"salary": 60000}, } ), ) json_employees.set( 2, HazelcastJsonValue( { "personal": {"name": "Jane Doe"}, "job": {"salary": 80000}, } ), ) with client.sql.execute( """ SELECT JSON_VALUE(this, '$.personal.name') AS name FROM json_employees WHERE JSON_VALUE(this, '$.job.salary' RETURNING INT) > ? """, 75000, ).result() as result: for row in result: print(f"Name: {row['name']}") The json data type comes with full support for querying JSON in maps and Kafka topics. JSON Functions Hazelcast supports the following functions, which can retrieve JSON data. JSON_QUERY : Extracts a JSON value from a JSON document or a JSON-formatted string that matches a given JsonPath expression. JSON_VALUE : Extracts a primitive value, such as a string, number, or boolean that matches a given JsonPath expression. This function returns NULLif a non-primitive value is matched, unless the ON ERRORbehavior is changed. JSON_ARRAY : Returns a JSON array from a list of input data. JSON_OBJECT : Returns a JSON object from the given key/value pairs. json-flat To query json-flat objects, you should create an explicit mapping using the CREATE MAPPING statement, similar to the example above. For example, this code snippet creates a mapping to a new map called json_flat_employees, which stores the JSON values with columns name and salary as HazelcastJsonValue objects and queries it using top-level fields: client.sql.execute( """ CREATE OR REPLACE MAPPING json_flat_employees ( __key INT, name VARCHAR, salary INT ) TYPE IMap OPTIONS ( 'keyFormat' = 'int', 'valueFormat' = 'json-flat' ) """ ).result() json_flat_employees = client.get_map("json_flat_employees").blocking() json_flat_employees.set( 1, HazelcastJsonValue( { "name": "John Doe", "salary": 60000, } ), ) json_flat_employees.set( 2, HazelcastJsonValue( { "name": "Jane Doe", "salary": 80000, } ), ) with client.sql.execute( """ SELECT name FROM json_flat_employees WHERE salary > ? """, 75000, ).result() as result: for row in result: print(f"Name: {row['name']}") Note that, in json-flat type, top-level columns must be explicitly specified while creating the mapping. The json-flat format comes with partial support for querying JSON in maps, Kafka topics, and files. For more information about working with JSON using SQL see Working with JSON in Hazelcast reference manual. SQL Statements¶ Data Manipulation Language(DML) Statements SELECT: Read data from a table. SINK INTO/INSERT INTO: Ingest data into a map and/or forward data to other systems. UPDATE: Overwrite values in map entries. DELETE: Delete map entries. Data Definition Language(DDL) Statements CREATE MAPPING: Map a local or remote data object to a table that Hazelcast can access. SHOW MAPPINGS: Get the names of existing mappings. DROP MAPPING: Remove a mapping. Job Management Statements CREATE JOB: Create a job that is not tied to the client session. ALTER JOB: Restart, suspend, or resume a job. SHOW JOBS: Get the names of all running jobs. 
- CREATE OR REPLACE SNAPSHOT (Enterprise only): Create a snapshot of a running job, so you can stop and restart it at a later date. DROP SNAPSHOT (Enterprise only): Cancel a running job. Data Types¶ The SQL service supports a set of SQL data types. Every data type is mapped to a Python type that represents the type’s value. Functions and Operators¶ Hazelcast supports logical and IS predicates, comparison and mathematical operators, and aggregate, mathematical, trigonometric, string, table-valued, and special functions. See the Reference Manual for details. Improving the Performance of SQL Queries¶ You can improve the performance of queries over maps by indexing map entries. To find out more about indexing map entries, see add_index() method. If you find that your queries lead to out of memory exceptions (OOME), consider decreasing the value of the Jet engine’s max-processor-accumulated-records option. Limitations¶ SQL has the following limitations. We plan to remove these limitations in future releases. You cannot run SQL queries on lite members. The only supported Hazelcast data structure is map. You cannot query other data structures such as replicated maps. Limited support for joins. See Join Tables. Distributed Query¶. How Distributed Query Works¶. equal: Checks if the result of an expression is equal to a given value. not_equal: Checks if the result of an expression is not equal to a given value. instance_of: Checks if the result of an expression has a certain type. like: Checks if the result of an expression matches some string pattern. %(percentage sign) is the placeholder for many characters, _(underscore) is placeholder for only one character. ilike: Checks if the result of an expression matches some string pattern in a case-insensitive manner. greater: Checks if the result of an expression is greater than a certain value. greater_or_equal: Checks if the result of an expression is greater than or equal to a certain value. less: Checks if the result of an expression is less than a certain value. less_or_equal: Checks if the result of an expression is less than or equal to a certain value. between: Checks if the result of an expression is between two values (this is inclusive). in_: Checks if the result of an expression is an element of a certain list. not_: Checks if the result of an expression is false. Employee Map Query Example¶_string("name") self.age = reader.read_int("age") self.active = reader.read_boolean("active") self.salary = reader.read_double("salary") def write_portable(self, writer): writer.write_string( class is faster as compared to IdentifiedDataSerializable. Querying by Combining Predicates with AND, OR, NOT¶ You can combine predicates by using the and_, or_ and not_ operators, as shown in the below example. from hazelcast.predicate import and_, equal, less employee_map = client.get_map("employee") predicate = and_(equal('active', True), less( and entry_set of a map. Querying with SQL¶ SqlPredicate takes the regular SQL where clause. See the following example: from hazelcast.predicate import sql employee_map = client.get_map("employee") employees = employee_map.values(sql("active AND age < 30")).result() Supported SQL Syntax¶ ‘Joe’, ‘Josh’, ‘Joseph’ etc.) name LIKE 'Jo_'(true for ‘Joe’; false for ‘Josh’) name NOT LIKE 'Jo_'(true for ‘Josh’; false for ‘Joe’) name LIKE 'J_s%'(true for ‘Josh’, ‘Joseph’; false ‘John’, ‘Joe’) ILIKE: <attribute> [NOT] ILIKE 'expression' ILIKE is similar to the LIKE predicate but in a case-insensitive manner. 
name ILIKE 'Jo%'(true for ‘Joe’, ‘joe’, ‘jOe’,‘Josh’,‘joSH’, etc.) name ILIKE 'Jo_'(true for ‘Joe’ or ‘jOE’; false for ‘Josh’) REGEX: <attribute> [NOT] REGEX 'expression' name REGEX 'abc-.*'(true for ‘abc-123’; false for ‘abx-123’) Querying Examples with Predicates¶ You can use the __key attribute to perform a predicated search for the entry keys. See the following example: from hazelcast.predicate import greater_or_equal person_map = client.get_map("persons").blocking() person_map.put("John", 28) person_map.put("Mary", 23) person_map.put("Judy", 30) predicate = greater_or_equal("this", 27) persons = person_map.values(predicate) print(persons[0], persons[1]) # Outputs '28 30' In this example, the code creates a list with the values greater than or equal to “27”. Querying with JSON Strings¶(less(. Metadata Creation for JSON Querying¶ Hazelcast stores a metadata object per JSON serialized object stored. This metadata object is created every time a JSON serialized object is put into an Map. Metadata is later used to speed up the query operations. Metadata creation is on by default. Depending on your application’s needs, you may want to turn off the metadata creation to decrease the put latency and increase the throughput. You can configure this using metadata-policy element for the map configuration on the member side as follows: <hazelcast> ... <map name="map-a"> <!-- valid values for metadata-policy are: - OFF - CREATE_ON_UPDATE (default) --> <metadata-policy>OFF</metadata-policy> </map> ... </hazelcast> Filtering with Paging Predicates¶ Hazelcast Python client provides paging for defined predicates. With its PagingPredicate, you can get a collection of keys, values, or entries page by page by filtering them with predicates and giving the size of the pages. Also, you can sort the entries by specifying comparators. In this case, the comparator should be either Portable or IdentifiedDataSerializable and the serialization factory implementations should be registered on the member side. Please note that, paging is done on the cluster members. Hence, client only sends a marker comparator to indicate members which comparator to use. The comparision logic must be defined on the member side by implementing the java.util.Comparator<Map.Entry> interface. Paging predicates require the objects to be deserialized on the member side from which the collection is retrieved. Therefore, you need to register the serialization factories you use on all the members on which the paging predicates are used. See the Adding User Library to CLASSPATH section for more details. In the example code below: The greater_or_equalpredicate gets values from the studentsmap. This predicate has a filter to retrieve the objects with an agegreater_page()method of PagingPredicateand querying the map again with the updated PagingPredicate. from hazelcast.predicate import paging, greater_or_equal ... m = client.get_map("students").blocking() predicate = paging(greater_or_equal("age", 18), 5) # Retrieve the first page values = m.values(predicate) ... # Set up next page predicate.next_page() # Retrieve next page values = m.values(predicate) If a comparator is not specified for PagingPredicate, but you want to get a collection of keys or values page by page, keys or values must implement the java.lang.Comparable interface on the member side. Otherwise, paging fails with an exception from the server. 
Luckily, a lot of types implement the Comparable interface by default, including the primitive types, so, you may use values of types int, float, str etc. in paging without specifying a comparator on the Python client. You can also access a specific page more easily by setting the predicate.page attribute before making the remote call. This way, if you make a query for the hundredth page, for example, it gets all 100 pages at once instead of reaching the hundredth page one by one using the next_page() method. Note PagingPredicate, also known as Order & Limit, is not supported in Transactional Context. Aggregations¶ Aggregations allow computing a value of some function (e.g sum or max) over the stored map entries. The computation is performed in a fully distributed manner, so no data other than the computed function value is transferred to the client, making the computation fast. The aggregator module provides a wide variety of built-in aggregators. The full list is presented below: count distinct double_avg double_sum fixed_point_sum floating_point_sum int_avg int_sum long_avg long_sum max_ min_ number_avg max_by max_by These aggregators are used with the map.aggregate function, which takes an optional predicate argument. See the following example. import hazelcast from hazelcast.aggregator import count, number_avg from hazelcast.predicate import greater_or_equal client = hazelcast.HazelcastClient() employees = client.get_map("employees").blocking() employees.put("John Stiles", 23) employees.put("Judy Doe", 29) employees.put("Richard Miles", 38) employee_count = employees.aggregate(count()) # Prints: # There are 3 employees print("There are %d employees" % employee_count) # Run count with predicate employee_count = employees.aggregate(count(), greater_or_equal("this", 25)) # Prints: # There are 2 employees older than 24 print("There are %d employees older than 24" % employee_count) # Run average aggregate average_age = employees.aggregate(number_avg()) # Prints: # Average age is 30 print("Average age is %f" % average_age) Projections¶ There are cases where instead of sending all the data returned by a query from the server, you want to transform (strip down) each result object in order to avoid redundant network traffic. For example, you select all employees based on some criteria, but you just want to return their name instead of the whole object. It is easily doable with the Projections. The projection module provides three projection functions: single_attribute: Extracts a single attribute from an object and returns it. multi_attribute: Extracts multiple attributes from an object and returns them as a list. identity: Returns the object as it is. These projections are used with the map.project function, which takes an optional predicate argument. See the following example. 
import hazelcast from hazelcast.core import HazelcastJsonValue from hazelcast.predicate import greater from hazelcast.projection import single_attribute, multi_attribute client = hazelcast.HazelcastClient() employees = client.get_map("employees").blocking() employees.put(1, HazelcastJsonValue({"age": 25, "height": 180, "weight": 60})) employees.put(2, HazelcastJsonValue({"age": 21, "height": 170, "weight": 70})) employees.put(3, HazelcastJsonValue({"age": 40, "height": 175, "weight": 75})) ages = employees.project(single_attribute("age")) # Prints: "Ages of the employees are [21, 25, 40]" print("Ages of the employees are %s" % ages) filtered_ages = employees.project(single_attribute("age"), greater("age", 23)) # Prints: "Ages of the filtered employees are [25, 40]" print("Ages of the filtered employees are %s" % filtered_ages) attributes = employees.project(multi_attribute("age", "height")) # Prints: "Ages and heights of the employees are [[21, 170], [25, 180], [40, 175]]" print("Ages and heights of the employees are %s" % attributes) Performance¶ Near Cache¶ their memory consumption. If invalidation is enabled and entries are updated frequently, then invalidations will be costly. Near Cache breaks the strong consistency guarantees; you might be reading stale data. Near Cache is highly recommended for maps that are mostly read. Configuring Near Cache¶ The following snippet show how a Near Cache is configured in the Python client using the near_caches argument, presenting all available values for each element. When an element is missing from the configuration, its default value is used. from hazelcast.config import InMemoryFormat, EvictionPolicy client = hazelcast.HazelcastClient( near_caches={ "mostly-read-map": { "invalidate_on_change": True, "time_to_live": 60, "max_idle": 30, # You can also set these to "OBJECT" # and "LRU" without importing anything. "in_memory_format": InMemoryFormat.OBJECT, "eviction_policy": EvictionPolicy.LRU, "eviction_max_size": 100, "eviction_sampling_count": 8, "eviction_sampling_pool_size": 16 } } )and max_idleto. Near Cache Example for Map¶. client = hazelcast.HazelcastClient( near_caches={ "mostly-read-map": { "invalidate_on_change": True, "in_memory_format": InMemoryFormat.OBJECT, "eviction_policy": EvictionPolicy.LRU, "eviction_max_size": 5000, } } ) Near Cache Eviction¶. Near Cache Expiration¶ Expiration means the eviction of expired records. A record is expired: If it is not touched (accessed/read) for max_idleseconds time_to_liveseconds passed since it is put to Near Cache The actual expiration is performed when a record is accessed: it is checked if the record is expired or not. If it is expired, it is evicted and KeyError is raised to the caller. Near Cache Invalidation¶ Invalidation is the process of removing an entry from the Near Cache when its value is updated or it is removed from the original map (to prevent stale reads). See the Near Cache Invalidation section in the Hazelcast Reference Manual. Monitoring and Logging¶ Enabling Client Statistics¶ You can monitor your clients using Hazelcast Management Center. As a prerequisite, you need to enable the client statistics before starting your clients. There are two arguments of HazelcastClient related to client statistics: statistics_enabled: If set to True, it enables collecting the client statistics and sending them to the cluster. When it is Trueyou can monitor the clients that are connected to your Hazelcast cluster, using Hazelcast Management Center. Its default value is False. 
statistics_period: Period in seconds the client statistics are collected and sent to the cluster. Its default value is 3. You can enable client statistics and set a non-default period in seconds as follows: client = hazelcast.HazelcastClient( statistics_enabled=True, statistics_period. Logging Configuration¶ Hazelcast Python client uses Python’s builtin logging package to perform logging. All the loggers used throughout the client are identified by their module names. Hence, one may configure the hazelcast parent logger and use the same configuration for the child loggers such as hazelcast.lifecycle without an extra effort. Below is an example of the logging configuration with INFO log level and a StreamHandler with a custom format, and its output. import logging import hazelcast logger = logging.getLogger("hazelcast") logger.setLevel(logging.INFO) handler = logging.StreamHandler() formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s") handler.setFormatter(formatter) logger.addHandler(handler) client = hazelcast.HazelcastClient() client.shutdown() Output 2020-10-16 13:31:35,605 - hazelcast.lifecycle - INFO - HazelcastClient 4.0.0 is STARTING 2020-10-16 13:31:35,605 - hazelcast.lifecycle - INFO - HazelcastClient 4.0.0 is STARTED 2020-10-16 13:31:35,605 - hazelcast.connection - INFO - Trying to connect to Address(host=127.0.0.1, port=5701) 2020-10-16 13:31:35,622 - hazelcast.lifecycle - INFO - HazelcastClient 4.0.0 is CONNECTED 2020-10-16 13:31:35,622 - hazelcast.connection - INFO - Authenticated with server Address(host=172.17.0.2, port=5701):7682c357-3bec-4841-b330-6f9ae0c08253, server version: 4.0, local address: Address(host=127.0.0.1, port=56752) 2020-10-16 13:31:35,623 - hazelcast.cluster - INFO - Members [1] { Member [172.17.0.2]:5701 - 7682c357-3bec-4841-b330-6f9ae0c08253 } 2020-10-16 13:31:35,624 - hazelcast.client - INFO - Client started 2020-10-16 13:31:35,624 - hazelcast.lifecycle - INFO - HazelcastClient 4.0.0 is SHUTTING_DOWN 2020-10-16 13:31:35,624 - hazelcast.connection - INFO - Removed connection to Address(host=127.0.0.1, port=5701):7682c357-3bec-4841-b330-6f9ae0c08253, connection: Connection(id=0, live=False, remote_address=Address(host=172.17.0.2, port=5701)) 2020-10-16 13:31:35,624 - hazelcast.lifecycle - INFO - HazelcastClient 4.0.0 is DISCONNECTED 2020-10-16 13:31:35,634 - hazelcast.lifecycle - INFO - HazelcastClient 4.0.0 is SHUTDOWN A handy alternative to above example would be configuring the root logger using the logging.basicConfig() utility method. Beware that, every logger is the child of the root logger in Python. Hence, configuring the root logger may have application level impact. Nonetheless, it is useful for the testing or development purposes. import logging import hazelcast logging.basicConfig(level=logging.INFO) client = hazelcast.HazelcastClient() client.shutdown() Output INFO:hazelcast.lifecycle:HazelcastClient 4.0.0 is STARTING INFO:hazelcast.lifecycle:HazelcastClient 4.0.0 is STARTED INFO:hazelcast.connection:Trying to connect to Address(host=127.0.0.1, port=5701) INFO:hazelcast.lifecycle:HazelcastClient 4.0.0 is CONNECTED INFO:hazelcast.connection:Authenticated with server Address(host=172.17.0.2, port=5701):7682c357-3bec-4841-b330-6f9ae0c08253, server version: 4.0, local address: Address(host=127.0.0.1, port=56758) INFO:hazelcast.lifecycle:HazelcastClient 4.0.0 is SHUTDOWN To learn more about the logging package and its capabilities, please see the logging cookbook and documentation of the logging package. 
Defining Client Labels¶ Through the client labels, you can assign special roles for your clients and use these roles to perform some actions specific to those client connections. You can also group your clients using the client labels. These client groups can be blacklisted in Hazelcast Management Center so that they can be prevented from connecting to a cluster. See the related section in the Hazelcast Management Center Reference Manual for more information on this topic. You can define the client labels using the labels config option. See the below example. client = hazelcast.HazelcastClient( labels=[ "role admin", "region foo" ] ) Defining Client Name¶ Each client has a name associated with it. By default, it is set to hz.client_${CLIENT_ID}. Here CLIENT_ID starts from 0 and it is incremented by 1 for each new client. This id is incremented and set by the client, so it may not be unique between different clients used by different applications. You can set the client name using the client_name configuration element. client = hazelcast.HazelcastClient( client_name="blue_client_0" ) Configuring Load Balancer¶ Load Balancer configuration allows you to specify which cluster member to send next operation when queried. If it is a Smart Client, only the operations that are not key-based are routed to the member that is returned by the LoadBalancer. If it is not a smart client, LoadBalancer is ignored. By default, client uses round robin load balancer which picks each cluster member in turn. Also, the client provides random load balancer which picks the next member randomly as the name suggests. You can use one of them by setting the load_balancer config option. The following are example configurations. from hazelcast.util import RandomLB client = hazelcast.HazelcastClient( load_balancer=RandomLB() ) You can also provide a custom load balancer implementation to use different load balancing policies. To do so, you should provide a class that implements the LoadBalancers interface or extend the AbstractLoadBalancer class for that purpose and provide the load balancer object into the load_balancer config option.
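The custom load balancer mentioned above is left to the reader; here is a minimal, illustrative Python sketch. The LiteMemberAwareLB name is invented for this example, and the _members attribute and lite_member field are assumptions based on how the built-in balancers and MemberInfo look in recent client versions; verify both against the client version you actually use.

import random

import hazelcast
from hazelcast.util import RandomLB


class LiteMemberAwareLB(RandomLB):
    # Prefers data members over lite members when picking the next member.
    # Falls back to the parent's random choice if no data member is known.
    def next(self):
        members = getattr(self, "_members", None) or []  # assumption: the parent tracks members here
        data_members = [m for m in members if not m.lite_member]
        if data_members:
            return random.choice(data_members)
        return super().next()


client = hazelcast.HazelcastClient(
    load_balancer=LiteMemberAwareLB()
)

As with the built-in balancers, the instance is passed through the load_balancer config option and is only consulted for non-key-based operations on a smart client.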
https://hazelcast.readthedocs.io/en/stable/using_python_client_with_hazelcast.html
2022-09-25T01:01:28
CC-MAIN-2022-40
1664030334332.96
[array(['https://docs.hazelcast.com/hazelcast/latest/_images/FencedLock.png', 'CP Fenced Lock diagram'], dtype=object) ]
hazelcast.readthedocs.io
You're viewing Apigee Edge documentation. View Apigee X documentation. a concrete URL in the configuration, you can configure one or more named TargetServers as described in the section TargetEndpoint. A TargetServer definition consists of a name, a host and a port, with an additional element to indicate whether the TargetServer is enabled or disabled. Videos Watch the following videos to learn more about API routing and load balancing using target servers Sample TargetServer Configuration The following code defines a target server: <TargetServer name="target1"> <Host>1.mybackendservice.com</Host> <Port>80</Port> <IsEnabled>true</IsEnabled> </TargetServer > TargetServer Configuration Elements The following table describes the elements used to create and configure a TargetServer: Managing target servers using the UI Manage target servers, as described below.. Edge To manage target servers using the Edge UI: - Select Admin > Environments > Target Servers in the left navigation bar. - Select the desired environment, such as test or prod. - To create a target server: - Click + Target server. - Enter a name, host, and port for the target server. For example: - Name: target1 - Host: 1.mybackendservice.com - Port: 80 - Select SSL, if required. - Select Enabled to enable the target server. - Click Add. - To edit the target server: - Position your cursor over the target server that you want to edit to display the actions menu. - Click . - Edit the targer server values. - Click Update. - To delete the target server: - Position your cursor over the target server that you want to delete to display the actions menu. - Click . - Click Delete to confirm the operatin. Classic Edge (Private Cloud) To access the Create Proxy wizard using the Classic Edge UI: - Sign in to, where ms-ip is the IP address or DNS name of the Management Server node. - Select APIs > Environment Configuration > Target Servers in the left navigation bar. - Select the desired environment, such as test or prod. - To create a target server: - Click Edit. - Click + Target server. - Enter a name, host, and port for the target server. For example: - Name: target1 - Host: 1.mybackendservice.com - Port: 80 - Select Enabled to enable the target server. - Click Save. - To edit the target server: - Click Edit. - Edit the targer server values. - Click Save. - To delete the target server: - Click Edit. - Click Delete. Managing target servers using the API You can use Edge API to create, delete, update, get, and list target servers. For more information, see TargetServers. Use the following API call to create a target server: $ use the following API call to create a second TargetServer. By defining two TargetServers, you provide two URLs that a TargetEndpoint can use for load balancing: $ } Use the following API call to a limit of 500 TargetServers per environment, as documented in the Limits topic.. Setting load balancer options You can tune availability by using options for load balancing and failover at the load balancer and TargetServer level. This section describes these options. Algorithm Sets the algorithm used by <LoadBalancer>. The available algorithms are RoundRobin, Weighted, and LeastConnections, each of which is documented below. Round robin The default algorithm, round rob target server. When this happens, the failure counter increments by one. However, when Apigee does receive a response from a target, even if the response is an HTTP error (such as 500), that counts as a response from the target server,. 
Edge will also count responses with those codes as failures. In the following example, target1 will be removed from rotation after five failed requests, including some 5XX responses from the target server. Edge always tries to connect to the target for each request and never removes the target server target server for TLS/SSL If you are using a TargetServer to define the backend service, and the backend service requires the connection to use the HTTPS protocol, then you must enable TLS/SSL in the TargetServer definition. This is necessary because the <Host> tag does not let you specify the connection protocol. Shown below is the TargetServer definition for one-way TLS/SSL where Edge makes HTTPS requests to the backend service: <TargetServer name="target1"> <Host>mocktarget.apigee.net</Host> <Port>443</Port> <IsEnabled>true</IsEnabled> <SSLInfo> <Enabled>true</Enabled> </SSLInfo> </TargetServer> If the backend service requires two-way, or mutual, TLS/SSL, then you configure the TargetServer by using the same TLS> and <ClientAuthEnabled>, see the information on setting those properties for a Virtual Host at Configuring TLS access to an API for the Private Cloud. For complete instructions on configuring outbound TLS/SSL, see Configuring TLS from Edge to the backend (Cloud and Private Cloud). TargetServer schema See the schema for TargetServer and other entities on GitHub., Edge begins sending health checks to your target server. A health check is a request sent to the target server that determines whether the target server is healthy or not. A health check can have one of two possible results: - Success: The target server is considered healthy when a successful health check occurs. This is typically the result of one or more of the following: - Target server accepts a new connection to the specified port, responds to a request on that port, and then closes the port within the specified timeframe. The response from the target server contains “Connection: close” - Target server responds to a health check request with a 200 (OK) or other HTTP status code that you determine is acceptable. - Target server responds to a health check request with a message body that matches the expected message body. When Edge determines that a server is healthy, Edge continues or resumes sending requests to it. - Failure: The target server can fail a health check in different ways, depending on the type of check. A failure can be logged when the target server: - Refuses a connection from Edge to the health check port. - Does not respond to a health check request within a specified period of time. - Returns an unexpected HTTP status code. - Responds with a message body that does not match the expected message body. When a target server fails a health check, Edge increments that server’s failure count. If the number of failures for that server meets or exceeds a predefined threshold ( <MaxFailures>), Edge Edge. Edge> <LoadBalancer> <Algorithm>RoundRobin</Algorithm> <Server name="target1" /> <Server name="target2" /> <MaxFailures>5</MaxFailures> </LoadBalancer> <Path>/test</Path> <HealthMonitor> <IsEnabled>true</IsEnabled> <IntervalInSec>5</IntervalInSec> <TCPMonitor> <ConnectTimeoutInSec>10</ConnectTimeoutInSec> <Port>80</Port> </TCPMonitor> </HealthMonitor> </HTTPTargetConnection> . . . HealthMonitor with TCPMonitor configuration elements The following table describesrOK. If the response does not match, then the request will treated as a failure by the load balancer configuration. 
The HTTPMonitor supports backend services configured to use HTTP and one-way HTTPS protocols. However, it does not support the following: - Two-way HTTPS (also called two-way TLS/SSL) - Self-signed certificates.> <Header name="Authorization">Basic 12e98yfw87etf</Header> </Request> <SuccessResponse> <ResponseCode>200</ResponseCode> <Header name=”ImOK”>YourOK</Header> </SuccessResponse> </HTTPMonitor> </HealthMonitor> HealthMonitor with HTTPMonitor configuration elements The following table describes the HTTPMonitor configuration elements:
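The management API calls referenced earlier can also be scripted. The following Python sketch creates the target server from the first sample configuration through the Edge management API; the organization name, environment, and credentials are placeholders, and the endpoint path and XML payload are assumptions based on the TargetServers API mentioned above, so check them against your Edge installation before use.

import requests

MGMT = "https://api.enterprise.apigee.com/v1"
ORG = "myorg"    # placeholder organization name
ENV = "test"     # placeholder environment name

# Same TargetServer definition as the sample configuration above.
target_server_xml = """
<TargetServer name="target1">
  <Host>1.mybackendservice.com</Host>
  <Port>80</Port>
  <IsEnabled>true</IsEnabled>
</TargetServer>
"""

response = requests.post(
    f"{MGMT}/organizations/{ORG}/environments/{ENV}/targetservers",
    data=target_server_xml,
    headers={"Content-Type": "text/xml"},
    auth=("admin@example.com", "password"),  # placeholder credentials
)
response.raise_for_status()
print(response.text)

Repeating the call with a second definition (for example target2) gives the load balancer two servers to rotate across, as in the TargetEndpoint examples above.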
https://docs.apigee.com/api-platform/deploy/load-balancing-across-backend-servers?hl=zh_cn
2021-11-27T15:27:55
CC-MAIN-2021-49
1637964358189.36
[]
docs.apigee.com
Used to reset the session timeout counter, keeping this session alive Ping requests Optionally, ping requests can be sent periodically to the Woopra servers to refresh the visitor timeout counter for the session/visit. This is useful when it's important to keep a visitor's status 'online' while they are inactive for a long time (for cases such as watching a long video). The JavaScript SDK automatically starts sending pings after the first event is sent. ?host=mywebsite.com &cookie=AH47DHS5SF182DIQZJD &timeout=300000 - There is no need to include custom parameters in a ping request. - It is recommended to send ping requests at intervals slightly shorter than the timeout value. If the timeout is 60 seconds, consider pinging the servers every 55 seconds; otherwise you will be sending unnecessary requests.
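For clients that cannot rely on the JavaScript SDK, a sketch of a manual ping loop is shown below. The endpoint URL is an assumption (the page is titled track/ping, so the sketch uses www.woopra.com/track/ping); confirm it against your Woopra project before relying on it.

import time
import requests

PING_URL = "https://www.woopra.com/track/ping"  # assumed endpoint, verify for your project
PARAMS = {
    "host": "mywebsite.com",
    "cookie": "AH47DHS5SF182DIQZJD",
    "timeout": 300000,  # milliseconds, as in the example above
}
PING_INTERVAL = 295  # seconds: slightly less than the 300-second timeout

while True:
    requests.get(PING_URL, params=PARAMS, timeout=10)
    time.sleep(PING_INTERVAL)

Keeping the interval just under the timeout follows the recommendation above: the session never expires, but no redundant requests are sent.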
https://docs.woopra.com/reference/track-ping
2021-11-27T13:51:23
CC-MAIN-2021-49
1637964358189.36
[]
docs.woopra.com
FileCap is deployed as a stand-alone system, which means it has its own built-in operating system and update system. The FileCap Server is currently available in two deployment forms. The first is a CD image in the form of an ISO, which can be used to install on your own hardware, VMware, or Hyper-V. The second option is a hardware appliance, offered by FileCap, with FileCap pre-installed. Choosing a deployment option While choosing a deployment option you should take the following into consideration: ISO Image (Recommended option) - More flexibility; the hard disk size is customisable - Runs on custom hardware and FileCap Appliances - Runs on VMware and Hyper-V - The installer automatically installs the latest security patches - Easy backup and snapshots on virtualisation platforms For the ISO installation steps see the chapter: Installing FileCap Server from ISO FileCap hardware - Data separation is optimal: confidential data is not stored on the same hardware as other data - The hard disk is built in and cannot be changed without re-installation VMware Virtual Appliance (This is an obsolete option, please use the ISO image) - Fast and easy deployment in VMware - VMware Tools pre-installed - Fixed hard disk size of 100GB - Easy backup and snapshot function We recommend using the ISO image deployment option in combination with VMware or Hyper-V. Using custom hardware or FileCap hardware is also possible, but less flexible. Downloading FileCap Server and plugins FileCap and its plugins can be downloaded from the download section of the FileCap website; the direct URL is:
https://docs.filecap.com/filecap-server/deployment-options-filecap-server/
2021-11-27T14:10:25
CC-MAIN-2021-49
1637964358189.36
[]
docs.filecap.com
Pelican is a static site generator, written in Python. Pelican 3. For a more immediate response, you can also join the team via IRC at #pelican on Freenode — if you don’t have an IRC client handy, use the webchat for quick feedback. If you ask a question via IRC and don’t get an immediate response, don’t leave the channel! It may take a few hours because of time zone differences, but if you are patient and remain in the channel, someone will almost always respond to your inquiry. A French version of the documentation is available at Pelican.
https://docs.getpelican.com/en/3.3.0/
2021-11-27T14:54:46
CC-MAIN-2021-49
1637964358189.36
[]
docs.getpelican.com
Menus Menu Item Privacy Extend Consent From Joomla! Documentation (Redirected from Help310:Menus Menu Item Privacy Remind Request) Contents Description Used to display an 'Extend Consent' menu link in the front end of the site to display a form allowing your users to renew their privacy consent. How to Access Add a new menu item 'Extend Consent' - Select Menus → [name of the menu] → Add New Menu Item from the dropdown menu of the Administrator Panel - Select Privacy → Extend Consent in the modal popup window. Edit an existing menu item 'Extend Consent' - Click on the menu item's Title in the Menu Manager. Screenshot Details Details Tab The Extend Consent has the following settings: -"). - Menu Item Type. The Menu Item Type selected when this menu item was created. This can be one of the core menu item types or a menu item type provided by an installed extension. - Link. Link for this. - Menu: (- Select Menu -/Main: (Public/Guest/Registered/Special/Super Users). The access level group that is allowed to view this item. - Language: (All/English (en-GB)). Assign a language to this menu item - Note: An optional note to display in the Menu Manager. Common Options See Menu Item Manager: Edit/New Menu Item for help on fields common to most Menu Item types: Toolbar At the top you will see the toolbar: The functions are: - Save. Saves the item and stays in the current screen. - Save & Close. Saves the item and closes the current screen. - Save & New. Saves the item and keeps the editing screen open and ready to create another item. - Save as Copy. Saves your changes to a copy of the current item. Does not affect the current item. This toolbar icon is not shown if you are creating a new item. - Close. Closes the current screen and returns to the previous screen without saving any modifications you may have made. - Help. Opens this help screen.
https://docs.joomla.org/Help310:Menus_Menu_Item_Privacy_Remind_Request
2021-11-27T15:16:24
CC-MAIN-2021-49
1637964358189.36
[]
docs.joomla.org
By taking a snapshot of a vApp, you take snapshots of all virtual machines in the vApp. After you take the snapshot, you can revert all virtual machines in the vApp to the snapshot, or remove the snapshot if you do not need it. vApp snapshots have some limitations. - vApp snapshots do not capture NIC configurations. - If any virtual machine in the vApp is connected to an independent disk, you cannot take a vApp snapshot. Procedure - To take a snapshot, select Create Snapshot. Taking a snapshot of a vApp replaces the existing snapshot, if there is any. - Click OK. Results A snapshot of the vApp is created. What to do next You can revert all the virtual machines in the vApp to the most recent snapshot.
https://docs.vmware.com/en/VMware-Cloud-Director/9.5/com.vmware.vcloud.tenantportal.doc/GUID-7A1E8E22-513D-43AD-AD73-BA9DE1C4ED3B.html
2021-11-27T15:30:30
CC-MAIN-2021-49
1637964358189.36
[]
docs.vmware.com
fASL Tutorial #3: The N-Back Task¶ Overview of Working Memory¶ Working memory refers to the ability to simultaneously hold multiple items in one’s conscious attention. Think of a time when someone told you to write down an event on your calendar: You are given the date, time, and location, and then you need to write it down. Most likely you found yourself repeating those items, either mentally or aloud, as you tried to remember them. In light of this example, let’s refine our definition: Working memory is the ability to keep certain pieces of information in your awareness while suppressing irrelevant thoughts. These pieces of information - names, dates, numbers - can either be something you just read or heard, or they can be retrieved from longer-term memory; that is, they can be things you learned days or years ago. Working memory is the maintaining of these items in the front of your thoughts and then juggling, manipulating, and rearranging them as needed. The N-Back Task¶ As we saw from the above example, working memory is used in all kinds of everyday situations. But how do we tap into it and measure it in a laboratory? The most popular versions of working memory tasks are the spatial memory working task and the letter working memory task. We will focus here on the letter working memory task, as that is the one that was used during the ASL scan you will analyze. Working memory tasks are usually continuous; that is, the subject will continuously monitor a stream of stimuli - such as letters shown at regular intervals - and keep in mind a rule for whether to respond or not. These rules often take the form of an N-back rule (also referred to as N-back tasks): respond to the letter on the screen if you saw that same letter N letters ago. To illustrate this, take the simplest rule: The 0-back task. In this case, the subject is told to respond by pushing a button whenever he sees a particular letter, such as X. This requires remembering a rule, but doesn’t require working memory; no items are held in mind, manipulated, or subordinated to other items; all the task requires is recognizing a stimulus, which, with enough training, becomes a reflex. An example of the 0-back task. In this example, the subject is told to respond whenever he sees the letter X. Perfect accuracy requires responding to the X with a button press, and not responding to any of the other letters. Using the 0-back task as a template, we can increase the number of items to be held in working memory. In a 1-back task, for example, the subject is required to respond to a letter if it matches the previous letter (i.e., the letter that was shown 1 stimulus ago). In the figure below, the subject would respond to the second and fourth letters that were presented, since they match the letter that was presented on the previous slide. The last letter matches one of the previous letters shown, but since that letter occurred two slides ago, the subject should withhold his response. As the number of items increases, say in a 4-back task, the task becomes more difficult; more items need to be held in working memory, and more mental comparisons need to be made between what is seen now and what was seen previously:
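To make the rule concrete, the short Python sketch below (not part of the original tutorial) marks which letters in a stream require a response under an N-back rule; the letter sequence is made up for illustration.

def nback_targets(letters, n, zero_back_target="X"):
    # Respond if the current letter matches the letter shown n stimuli ago.
    # A 0-back is treated as "respond to a fixed target letter", as above.
    targets = []
    for i, letter in enumerate(letters):
        if n == 0:
            targets.append(letter == zero_back_target)
        else:
            targets.append(i >= n and letter == letters[i - n])
    return targets

stream = ["T", "L", "L", "Q", "Q", "B", "Q"]  # made-up example stream
print(nback_targets(stream, 1))  # [False, False, True, False, True, False, False]
print(nback_targets(stream, 2))  # [False, False, False, False, False, False, True]

Note how the same stream produces different correct responses as N increases, which is exactly what makes higher N-back levels more demanding on working memory.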
https://andysbrainbook.readthedocs.io/en/latest/ASL/fASL_03_Task.html
2021-11-27T13:56:04
CC-MAIN-2021-49
1637964358189.36
[array(['../_images/03_WorkingMemoryTask_0back.png', '../_images/03_WorkingMemoryTask_0back.png'], dtype=object) array(['../_images/03_WorkingMemoryTask_1back.png', '../_images/03_WorkingMemoryTask_1back.png'], dtype=object) array(['../_images/03_WorkingMemoryTask_4back.png', '../_images/03_WorkingMemoryTask_4back.png'], dtype=object)]
andysbrainbook.readthedocs.io
Go to hazelcast.org/documentation.
https://docs.hazelcast.org/docs/3.5/manual/html/globaleventconfiguration.html
2021-11-27T13:48:49
CC-MAIN-2021-49
1637964358189.36
[]
docs.hazelcast.org
Go to hazelcast.org/documentation.
https://docs.hazelcast.org/docs/3.5/manual/html/whyhazelcast.html
2021-11-27T14:24:12
CC-MAIN-2021-49
1637964358189.36
[]
docs.hazelcast.org
lightkurve.correctors.PLDCorrector¶ - class lightkurve.correctors.PLDCorrector(tpf, aperture_mask=None)[source]¶ Implements the Pixel Level Decorrelation (PLD) systematics removal method. Special case of RegressionCorrectorwhere the DesignMatrixis composed of background-corrected pixel time series. The design matrix also contains columns representing a spline in time design to capture the intrinsic, long-term variability of the target. Pixel Level Decorrelation (PLD) was developed by [1] to remove systematic noise caused by spacecraft jitter for the Spitzer Space Telescope. It was adapted to K2 data by [2] and [3] for the EVEREST pipeline [4]. For a detailed description and implementation of PLD, please refer to these references. Lightkurve provides a reference implementation of PLD that is less sophisticated than EVEREST, but is suitable for quick-look analyses and detrending experiments. Our simple implementation of PLD is performed by first calculating the noise model for each cadence in time. This function goes up to arbitrary order, and is represented by\[m_i = \sum_l a_l \frac{f_{il}}{\sum_k f_{ik}} + \sum_l \sum_m b_{lm} \frac{f_{il}f_{im}}{\left( \sum_k f_{ik} \right)^2} + ...\] - where \(m_i\) is the noise model at time \(t_i\) \(f_{il}\) is the flux in the \(l^\text{th}\) pixel at time \(t_i\) \(a_l\) is the first-order PLD coefficient on the linear term \(b_{lm}\) is the second-order PLD coefficient on the \(l^\text{th}\), \(m^\text{th}\) pixel pair We perform Principal Component Analysis (PCA) to reduce the number of vectors in our final model to limit the set to best capture instrumental noise. With a PCA-reduced set of vectors, we can construct a design matrix containing fractional pixel fluxes. To solve for the PLD model, we need to minimize the difference squared\[\chi^2 = \sum_i \frac{(y_i - m_i)^2}{\sigma_i^2},\] where \(y_i\) is the observed flux value at time \(t_i\), by solving\[\frac{\partial \chi^2}{\partial a_l} = 0.\] The design matrix also contains columns representing a spline in time design to capture the intrinsic, long-term variability of the target. References - 1 Deming et al. (2015), ads:2015ApJ…805..132D. (arXiv:1411.7404) - 2 Luger et al. (2016), ads:2016AJ….152..100L (arXiv:1607.00524) - 3 Luger et al. (2018), ads:2018AJ….156…99L (arXiv:1702.05488) - 4 EVEREST pipeline webpage, Examples Download the pixel data for GJ 9827 and obtain a PLD-corrected light curve: >>> import lightkurve as lk >>> tpf = lk.search_targetpixelfile("GJ9827").download() >>> corrector = tpf.to_corrector('pld') >>> lc = corrector.correct() >>> lc.plot() However, the above example will over-fit the small transits! It is necessary to mask the transits using corrector.correct(cadence_mask=...). - __init__(tpf, aperture_mask=None)[source]¶ Constructor method. The constructor shall: * accept all data required to run the correction (e.g. light curves, target pixel files, engineering data). * instantiate the original_lcproperty. Methods Attributes
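The Examples section warns that correcting without a cadence mask will over-fit shallow transits. Below is a minimal sketch of building such a mask; the period, epoch, and duration are placeholders rather than the real GJ 9827 ephemeris, and the epoch must be on the same time scale as tpf.time. Cadences flagged False are excluded from the noise-model fit.

import numpy as np
import lightkurve as lk

tpf = lk.search_targetpixelfile("GJ9827").download()
corrector = tpf.to_corrector("pld")

# Placeholder ephemeris; substitute the real values for your target.
period, t0, duration = 6.2, 1355.0, 0.1  # days, with t0 on the tpf.time scale

time = tpf.time.value
phase = (time - t0 + 0.5 * period) % period - 0.5 * period
cadence_mask = np.abs(phase) > 0.5 * duration  # True = cadence used in the fit

lc = corrector.correct(cadence_mask=cadence_mask)
lc.plot()

With multiple planets, the per-planet masks can be combined with a logical AND so that every in-transit cadence is excluded from the fit.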
https://docs.lightkurve.org/reference/api/lightkurve.correctors.PLDCorrector.html
2021-11-27T14:51:00
CC-MAIN-2021-49
1637964358189.36
[]
docs.lightkurve.org
Audit event shows authentication package as NTLMv1 instead of NTLMv2 This article discusses an issue where the authentication was actually using NTLMv2 but reporting NTLMv1 in the event log. Applies to: Windows Server 2012 R2 Original KB number: 2701704 Summary You're using an LmCompatibilityLevel setting of 3 or higher on all machines in the domain to force clients to use only NTLMv2. In testing connections to network shares by IP address to force NTLM, you discover that the "Authentication Package" is still listed as NTLMv1 in the security audit event (Event ID 4624) logged on the server. For example, you test with a Windows 7 client connecting to a file share on Windows Server 2008 R2. The network trace shows that the authentication actually used NTLMv2, while the event log reports NTLMv1: Log Name: Security Source: Microsoft-Windows-Security-Auditing Event ID: 4624 Task Category: Logon Level: Information Keywords: Audit Success Description: An account was successfully logged on. Account Name: user Account Domain: contoso Detailed Authentication Information: Logon Process: NtLmSsp Authentication Package: NTLM Transited Services: - Package Name (NTLM only): NTLM V1 Key Length: 128 More information There are two known scenarios that can lead to this result. Scenario A: Windows Server 2003 Domain Controllers We discovered that we can reproduce this behavior when the domain controller validating the user's credentials is a Windows Server 2003-based server. Windows Server 2003 did not have the "Authentication Package" field in its event logging; this field was added in Windows Vista. If the domain controller is Windows Server 2008 or newer, the server shows the authentication package correctly as NTLMv2. If reporting of the authentication protocol version is important, we suggest using Windows Server 2008 or newer Domain Controllers. Windows Server 2003 is in the extended support phase; support retires in July 2015. See Search product lifecycle. Scenario B: Negotiation of security levels uses "legacy" methods that mean best effort This scenario involves third-party clients: The customer has a third-party SMB client that is configured for NTLMv1. The file server is configured for LmCompatibilityLevel=5 and minimum session security NTLMv2; the DC is configured for LmCompatibilityLevel=4. A user on the third-party client connects. In the network trace, you see the security blob in the SMB session negotiation with the expected name fields and NegotiateFlags, and the server rejects the negotiation: NegotiateFlags: 0x60000215 (NTLM v1, 128-bit encryption, Sign) NegotiateNTLM: (......................1.........) Requests usage of the NTLM v1 session security protocol. NegotiateNTLM2: (............0...................) Does NOT request usage of NTLM v1 with extended session security. The third-party client then retries without using the security blob (which indicates extended session security). In this format, you don't see the same known list of name fields, and possibly no NegotiateFlags either. Some fields like "ClientChallenge" might indicate that the client tries to perform NTLMv2 hash processing. But because of the missing NegotiateFlags, this does not arrive as an NTLMv2 authentication request at the DC. The server forwards the package to the DC for authentication, and since the DC is OK with using NTLMv1, it authenticates the request. The server receives the successful logon and audits it as NTLMv1, as specified by the DC.
For logons without extended session security, the server has no option to block the logon request based on the client flags. It has to forward the request with the best flags it got to the DC. On return, it also has to accept any decision the DC makes on the logon. In this case, it accepts the logon and logs it as NTLMv1 logon, even though the resource server is configured to only allow NTLMv2.
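To confirm which scenario applies, it helps to check the effective NTLM settings on the file server. The sketch below reads the usual LSA and MSV1_0 registry values with Python's built-in winreg module; treat missing values as "not configured, default applies", and note that Group Policy may be the authoritative source in a managed domain.

import winreg

def read_value(path, name):
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            value, _ = winreg.QueryValueEx(key, name)
            return value
    except FileNotFoundError:
        return None  # value not set, so the OS default applies

LSA = r"SYSTEM\CurrentControlSet\Control\Lsa"
MSV = r"SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0"

print("LmCompatibilityLevel:", read_value(LSA, "LmCompatibilityLevel"))
print("NtlmMinClientSec:", hex(read_value(MSV, "NtlmMinClientSec") or 0))
print("NtlmMinServerSec:", hex(read_value(MSV, "NtlmMinServerSec") or 0))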
https://docs.microsoft.com/en-US/troubleshoot/windows-server/windows-security/authentication-package-listed-as-ntlmv1-security-audit-event
2021-11-27T16:16:01
CC-MAIN-2021-49
1637964358189.36
[]
docs.microsoft.com
Overview of offline backup This article gives an overview of offline backup. Initial full backups to Azure typically transfer large amounts of data online and require more network bandwidth when compared to subsequent backups that transfer only incremental changes. Remote offices or datacenters in certain geographies don't always have sufficient network bandwidth. For this reason, these initial backups take several days. During this time, the backups continuously use the same network that was provisioned for applications running in the on-premises datacenter. Azure Backup supports offline backup, which transfers initial backup data offline, without the use of network bandwidth. It provides a mechanism to copy backup data onto physical storage devices. The devices are then shipped to a nearby Azure datacenter and uploaded onto a Recovery Services vault. This process ensures robust transfer of backup data without using any network bandwidth. Offline backup options Offline backup is offered in two modes based on the ownership of the storage devices: - Offline backup based on Azure Data Box (preview) - Offline backup based on the Azure Import/Export service Offline backup based on Azure Data Box (preview) This mode is currently supported with the Microsoft Azure Recovery Services (MARS) Agent, in preview. This option takes advantage of Azure Data Box to ship Microsoft-proprietary, secure, and tamper-resistant transfer appliances with USB connectors to your datacenter or remote office. Backup data is directly written onto these devices. This option saves the effort required to procure your own Azure-compatible disks and connectors or to provision temporary storage as a staging location. Microsoft also handles the end-to-end transfer logistics, which you can track through the Azure portal. An architecture that describes the movement of backup data with this option is shown here. Here's a summary of the architecture: - Azure Backup directly copies backup data to these preconfigured devices. - You can then ship these devices back to an Azure datacenter. - Azure Data Box copies the data onto a customer-owned storage account. - Azure Backup automatically copies backup data from the storage account to the designated Recovery Services vault. Incremental online backups are scheduled. To use offline backup based on Azure Data Box, see Offline backup using Azure Data Box. Offline backup based on the Azure Import/Export service This option is supported by Microsoft Azure Backup Server (MABS), System Center Data Protection Manager (DPM) DPM-A, and the MARS Agent. It uses the Azure Import/Export service. You can transfer initial backup data to Azure by using your own Azure-compatible disks and connectors. This approach requires that you provision temporary storage known as the staging location and use prebuilt utilities to format and copy the backup data onto customer-owned disks. An architecture that describes the movement of backup data with this option is shown here. Here's a summary of the architecture: - Instead of sending the backup data over the network, Azure Backup writes the backup data to a staging location. - The data in the staging location is written to one or more SATA disks by using a custom utility. - As part of the preparatory work, the utility creates an Azure import job. The SATA drives are shipped to the nearest Azure datacenter and reference the import job to connect the activities. - At the Azure datacenter, the data on the disks is copied to an Azure storage account. 
- Azure Backup copies the backup data from the storage account to the Recovery Services vault. Incremental backups are scheduled. To use offline backup based on the Azure Import/Export service with the MARS Agent, see Offline backup workflow in Azure Backup. To use the same along with MABS or DPM-A, see Offline backup workflow for DPM and Azure Backup Server. Offline backup support summary The following table compares the two available options so that you can make the appropriate choices based on your scenario. *If your country/region doesn't have an Azure datacenter, you need to ship your disks to an Azure datacenter in another country/region.
https://docs.microsoft.com/en-us/azure/backup/offline-backup-overview
2021-11-27T16:26:58
CC-MAIN-2021-49
1637964358189.36
[array(['media/offline-backup-overview/azure-backup-databox-architecture.png', 'Azure Backup Data Box architecture'], dtype=object) array(['media/offline-backup-overview/azure-backup-import-export.png', 'Azure Backup Import/Export service architecture'], dtype=object)]
docs.microsoft.com
Setting Up An OPNSense Network Firewall¶ Before You Begin¶ First, consider how the firewall will be connected to the Internet. You will need to provision several unique subnets, which should not conflict with the network configuration on the WAN interface. If you are unsure, consult your local system administrator. Many firewalls, including the recommended OPNSense device, automatically set up the LAN interface on 192.168.1.1/24. This particular private network is also a very common choice for home and office routers. If you are connecting the firewall to a router with the same subnet (common in a small office, home, or testing environment), you will probably be unable to connect to the network at first. However, you will be able to connect from the LAN to the firewal’s Web GUI, and from there you will be able to configure the network so it is working correctly. The recommended TekLager APU4D4 has 4 NICs: WAN, LAN, OPT1, and OPT2. This allows for a dedicated port on the network firewall for each component of SecureDrop (Application Server, Monitor Server, and Admin Workstation). Depending on your network configuration, you should define the following values before continuing. - Admin Subnet: 10.20.1.0/24 - Admin Gateway: 10.20.1.1 - Admin Workstation: 10.20.1.2 - Application Subnet: 10.20.2.0/24 - Application Gateway: 10.20.2.1 - Application Server (OPT1): 10.20.2.2 - Monitor Subnet: 10.20.3.0/24 - Monitor Gateway: 10.20.3.1 - Monitor Server (OPT2) : 10.20.3.2 Initial Configuration¶ Unpack the firewall, connect the power, and power on the device. We will use the OPNSense Web GUI to do the initial configuration of the network firewall. Connect to the OPNSense Web GUI¶ If you have not already done so, boot the Admin Workstation into Tails using its designated USB drive. Make sure to enable the unsafe browser on the “Welcome to Tails” screen under “Additional settings”. Connect the Admin Workstation to the LAN interface. You should see a popup notification in Tails that says “Connection Established”. If you click on the network icon in the upper right of the Tails Desktop, you should see “Wired Connected”: Warning Make sure your only active connection is the one you just established with the network firewall. If you are connected to another network at the same time (e.g. a wireless network), you may encounter problems trying to connect the firewall’s Web GUI. Launch the Unsafe Browser from the menu bar: Applications ▸ Internet ▸ Unsafe Browser. Note The Unsafe Browser is, as the name suggests, unsafe (its traffic is not routed through Tor). However, it is the only option because Tails intentionally disables LAN access in the Tor Browser. A dialog will ask “Do you really want to launch the Unsafe Browser?”. Click Launch. You will see a pop-up notification that says “Starting the Unsafe Browser…” After a few seconds, the Unsafe Browser should launch. The window has a bright red border to remind you to be careful when using it. You should close it once you’re done configuring the firewall and use Tor Browser for any other web browsing you might do on the Admin Workstation. Navigate to the OPNSense Web GUI in the Unsafe Browser: Note If you have trouble connecting, go to your network settings and make sure that you have an IPv4 address in the 192.168.1.1/24range. You may need to turn on DHCP, else you can manually configure a static IPv4 address of 192.168.1.xwith a subnet mask of 255.255.255.0. 
However, make sure not to configure your Tails device to have the same IP as the firewall ( 192.168.1.1). The firewall uses a self-signed certificate, so you will see a “This Connection Is Untrusted” warning when you connect. This is expected. You can safely continue by clicking Advanced and Accept the Risk and Continue. You should see the login page for the OPNSense GUI. Log in with the default username and passphrase ( root/ opnsense). If this is your first time logging in to the firewall, the setup wizard will be displayed. You should not step through it at this point, however, as there are other tasks to complete. To exit, click the OPNSense logo in the top left corner of the screen. Enable Two-Factor Authentication¶ OPNSense supports two-factor authentication (2FA) via mobile apps such as Google Authenticator or FreeOTP. To set it up, first make sure you have a mobile device available with your choice of 2FA app. Next, in the OPNSense Web GUI, navigate to System > Access > Servers and click + to add a new server. On the next page, enter TOTP Local in the Descriptive name field and choose Local + Timebased One Time Password from the Type dropdown. Leave the other fields at their default values and click Save Next, navigate to System > Access > Users and click the edit button for the root user. On the subsequent page, first set a strong admin password. We recomend generating a strong passphrase with KeePassXC and saving it in the Tails Persistent folder using the provided KeePassXC database template. Then scroll down the page to the OTP seed section and check the Generate new secret (160bit) checkbox. Finally, click Save. Once the page has reloaded, scroll down to the OTP QR code section and click Click to unhide, then scan the generated QR code with your mobile auth application of choice. If you wish, you may also save the OTP seed value displayed above the QR code in your Tails KeePassXC database - this isn’t required, but will allow you to set up TOTP on another mobile device if you need to in the future. Test your new login credentials¶ To verify that your new password and OTP secret are working, navigate to System > Access > Tester. Select TOTP Local from the Authentication Server dropdown, enter the root username in the Username field, and enter your OTP token and password concatenated like 123456PASSWORD in the Password field. Then click Test. If the test fails, make sure you have used the correct OTP code and password, and edit the root user record as necessary. Warning Do not skip this test, or proceed further until it passes, as you will be locked out of the firewall Web GUI and console if the account is not set up correctly! Finally, navigate to System > Settings > Administration and scroll down to the Authentication section at the bottom of the page. In the Server dropdown, TOTP Local and deselect Local Database.. Click Save. Set Alternate Hostnames¶ Before you can set up the hardware firewall, you will need to set the Alternate Hostnames setting. First, navigate to System > Settings > Administration. In the Web GUI section, update the Alternate Hostnames field with the values 192.168.1.1 and the IP address of the Admin Gateway ( 10.20.1.1 if you are using the recommended default values), separated by a space. Finally, scroll to the bottom of the page and click Save. Configure Interfaces Via The Setup Wizard¶ To start the OPNSense Setup Wizard, navigate to System > Wizard and click Next. General Information: Leave your hostname as the default, OPNsense. 
There is no relevant domain for SecureDrop, so we recommend setting this to securedrop.localor something similar. Use your preferred DNS servers. If you don’t know what DNS servers to use, we recommend using Google’s DNS servers: 8.8.8.8and 8.8.4.4. Uncheck the Override DNS checkbox. In the Unbound DNS section, uncheck Enable Resolver. Click Next. Time Server Information: Leave the default settings unchanged and click Next. Configure WAN Interface: Enter the appropriate configuration for your network. Consult your local sysadmin if you are unsure what to enter here. For many environments, the default of DHCP will work and the rest of the fields can be left at their default values. Click Next to proceed. Configure LAN Interface: Use the IP address of the Admin Gateway ( 10.20.1.1) and the subnet mask ( /24) of the Admin Subnet. Click Next. Set Root Password: If the password was already reset during the 2FA setup, you don’t need to set it again. If it was not, then set a strong password now and store it in the Admin Workstation’s KeePassXC database. Click Next to continue. Reload Configuration: Click Reload to apply the changes you made in the Setup Wizard. At this point, since the LAN subnet settings were changed from their defaults, you will no longer be able to connect after reloading the firewall and the reload will time out. This is not an error - the firewall has reloaded and is working correctly. To connect to the new LAN interface, unplug and reconnect your network cable to get a new network address assigned via DHCP. Note that if you used a subnet with fewer addresses than /24, the default DHCP configuration in OPNSense may not work. In this case, you should assign the Admin Workstation a static IP address that is known to be in the subnet to continue. The Web GUI will now be available on the Admin Gateway IP address. Navigate to https://<Admin Gateway IP> in the Unsafe Browser and log in to the root account using an OTP token and the passphrase you just set. Note If 2FA is enabled, you must enter the OTP token and passphrase concatenated as a single string like 123456PASSWORD in the Password field. Once you’ve logged in to the Web GUI, you are ready to continue configuring the firewall. Connect Interfaces and Test¶ Now that the initial configuration is completed, you can connect the WAN port without potentially conflicting with the default LAN settings (as explained earlier). Connect the WAN port to the external network. You can watch the WAN entry in the Interfaces table on the OPNSense Dashboard homepage to see as it changes from down (red arrow pointing down) to up (green arrow pointing up). This usually takes several seconds. The WAN’s IP address will be shown once it comes up. Finally, test connectivity to make sure you are able to connect to the Internet through the WAN. The easiest way to do this is to open another tab in the Unsafe Browser and visit a host that you expect to be up (e.g. google.com). Update OPNSense to the latest version¶ You should update OPNSense to the latest version available before proceeding with the rest of the configuration. Navigate to Lobby > Dashboard and click Click to check for updates to start the process, and follow any on-screen instructions to complete the update. Note that a reboot may be required, and you may also need to apply several updates in a row to get to the latest version. Disable DHCP on the Firewall¶ OPNSense runs a DHCP server on the LAN interface by default. 
At this stage in the documentation, the Admin Workstation likely has an IP address assigned via that DHCP server. In order to tighten the firewall rules as much as possible, we recommend disabling the DHCP server and assigning a static IP address to the Admin Workstation instead. Disable DHCP Server on the LAN Interface¶ To disable DHCP, navigate to Services > DHCPv4 > [LAN] in the Web GUI. Uncheck the Enable DHCP server on the LAN interface checkbox, scroll down, and click Save. Assign a Static IP Address to the Admin Workstation¶ Now you will need to assign a static IP to the Admin Workstation. You can easily check your current IP address by clicking the top right of the menu bar, clicking on the Wired Connection and then clicking Wired Settings. From here you can click on the cog beside the wired network connection: This will take you to the network settings. Change to the IPv4 tab. Ensure that IPv4 Method is set to Manual, and that the Automatic switch for DNS is in the "off" position, as highlighted in the screenshot below: Note The Unsafe Browser will not launch when using a manual network configuration if it does not have DNS servers configured. This is technically unnecessary for our use case because we are only using it to access IP addresses on the LAN, and do not need to resolve anything with DNS. Nonetheless, you should configure some DNS servers here so you can continue to use the Unsafe Browser to access the WebGUI in future sessions. We recommend keeping it simple and using the same DNS servers that you used for the network firewall in the setup wizard. Fill in the static networking information for the Admin Workstation: - Address: 10.20.1.2 - Netmask: 255.255.255.0 - Gateway: 10.20.1.1 Click Apply. If the network does not come up within 15 seconds or so, try disconnecting and reconnecting your network cable to trigger the change. You will know you have succeeded in connecting with your new static IP when you are able to connect using the Tor Connection assistant, and you see the message "Connected to Tor successfully". Troubleshooting: DNS Servers and the Unsafe Browser¶ After saving the new network configuration, you may still encounter the "No DNS servers configured" error when trying to launch the Unsafe Browser. If you encounter this issue, you can resolve it by disconnecting from the network and then reconnecting, which causes the network configuration to be reloaded. To do this, click the network icon in the system toolbar, and click Disconnect under the name of the currently active network connection, which is displayed in bold. After it disconnects, click the network icon again and click the name of the connection to reconnect. You should see a popup notification that says "Connection Established", and the Tor Connection assistant should show the message "Connected to Tor successfully". For the next step, SecureDrop Configuration, you will manually configure the firewall for SecureDrop, using screenshots or XML templates as a reference. SecureDrop Configuration¶ SecureDrop uses the firewall to achieve two primary goals: - Isolating SecureDrop from the existing network, which may be compromised (especially if it is a venerable network in a large organization like a newsroom). - Isolating the Application Server and the Monitor Server from each other as much as possible, to reduce attack surface. 
In order to use the firewall to isolate the Application Server and the Monitor Server from each other, we need to connect them to separate interfaces, and then set up firewall rules that allow them to communicate. Enable The OPT1 And OPT2 Interfaces¶ The OPT1 and OPT2 interfaces will be used for the Application Server and Monitor Server respectively. To enable them, first connect the Application Server to the physical OPT1 port and the Monitor Server to the OPT2 port. Next, navigate to Interfaces > Assignments. LAN and WAN will already be enabled. Click the + button in the New Interface section to enable the OPT1 interface on the next available NIC ( igb2 in the screenshot below). Once OPT1 has been added, click + again to add OPT2 (on igb3 in the screenshot below). Finally, click Save. Configure the LAN, WAN, OPT1, and OPT2 interfaces¶ OPT1 and OPT2 need to be configured to use the subnets defined for the Application and Monitor Servers, and some additional configuration is required for the LAN and WAN interfaces that is not covered by the Setup Wizard. Configure the WAN interface¶ First, navigate to Interfaces > [WAN]. In the Basic configuration section, check the checkbox labeled Prevent interface removal. In the Generic configuration section, make sure that the Block private networks and Block bogon networks checkboxes are checked. Scroll down and click Save, then click Apply changes when prompted. Configure the LAN interface¶ Next, navigate to Interfaces > [LAN]. In the Basic configuration section, check the checkbox labeled Prevent interface removal. In the Generic configuration section, select Static IPv4 in the IPv4 Configuration Type dropdown, and None in the IPV6 Configuration Type dropdown. Scroll down and click Save, then click Apply changes when prompted. Configure the OPT1 interface¶ Next, navigate to Interfaces > [OPT1] and configure it with the Application Gateway IP address and routing prefix ( 10.20.2.1 and 24 if you are using the recommended values). Click Save, then click Apply changes when prompted. Configure the OPT2 interface¶ Finally, navigate to Interfaces > [OPT2] and configure it with the Monitor Gateway IP address and routing prefix ( 10.20.3.1 and 24 if you are using the recommended values). Click Save, then click Apply changes when prompted. Configure Firewall Aliases¶ In order to simplify firewall rule setup, the next step is to configure aliases for hosts and ports referred to in the rules. To start, first navigate to Firewall > Aliases. You should see some system-defined aliases as shown below: Click the + button to add new aliases. You should add the aliases defined in the table below (assuming recommended values for IP addresses): When complete, the Aliases page should look like this: Scroll down and click Apply to save and apply your new aliases. Configure Firewall Rules¶ Next, configure firewall rules for each interface. Configure Firewall Rules on LAN¶ First, navigate to Firewall > Rules > LAN. The LAN interface should have one automatically-generated anti-lockout rule in place, in addition to two default-allow rules. The default-allow rules should be removed once the SecureDrop-specific rules below have been added. The anti-lockout feature should be disabled as a last step. The rules needed are described in this table: Add or remove rules until they match the following screenshot, including ordering. Click the + button to add a rule. Once the rules match, click Apply Changes. Finally, remove the default anti-lockout rule. First, navigate to Firewall > Settings > Advanced. 
Scroll down to the Miscellaneous section and check the Disable anti-lockout checkbox. Then, click Save. Configure Firewall Rules On OPT1¶ Next, navigate to Firewall > Rules > OPT1. There should be no rules defined on this interface. Add the rules below: Once they match the screenshot below, click Apply Changes. Configure Firewall Rules On OPT2¶ Next, navigate to Firewall > Rules > OPT2. Similarly to OPT1, there should be no rules defined on this interface. Add the rules below until the rules in the Web GUI match those in the screenshot: Finally, click Apply Changes. The Network Firewall configuration is now complete, allowing you to move to the next step: setting up the servers. Troubleshooting Tips¶ Here are some general tips for setting up OPNSense firewall rules: - Create aliases for the repeated values (IPs and ports). - OPNSense is a stateful firewall, which means that you don't need corresponding rules to allow incoming traffic in response to outgoing traffic (like you would in, e.g. iptables with --state ESTABLISHED,RELATED). - You should create the rules on the interface where the traffic originates. - Make sure you delete the default "allow all" rule on the LAN interface. - If you are troubleshooting connectivity, the firewall logs can be very helpful. You can find them in the Web GUI in Firewall > Log Files. Keeping OPNSense up to Date¶ Periodically, the OPNSense project maintainers release an update to the OPNSense software running on your firewall. You can check for updates using the link on the OPNSense dashboard. If you see that an update is available, we recommend installing it. Most of these updates are for minor bugfixes, but occasionally they can contain important security fixes. You should keep yourself apprised of updates by checking the OPNSense Blog or subscribing to the OPNSense Blog RSS feed.
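To spot-check the result without clicking through the GUI, a small script run from the Admin Workstation can confirm that the Web GUI on the Admin Gateway still answers once the rules are applied. The TypeScript sketch below is not part of SecureDrop or OPNSense; it only uses Node's built-in net module and assumes the recommended Admin Gateway address ( 10.20.1.1) — substitute your own value if you changed the subnets.

```typescript
import * as net from "net";

// Recommended Admin Gateway address from this guide; change if you used a different subnet.
// (The Application and Monitor gateways, 10.20.2.1 and 10.20.3.1, are only reachable from
// their own interfaces, so they are not probed here.)
const ADMIN_GATEWAY = "10.20.1.1";
const WEB_GUI_PORT = 443;

// Attempt a TCP connection and report whether the port answered within the timeout.
function checkTcp(host: string, port: number, timeoutMs = 3000): Promise<boolean> {
  return new Promise((resolve) => {
    const socket = net.connect({ host, port });
    const finish = (ok: boolean) => {
      socket.destroy();
      resolve(ok);
    };
    socket.setTimeout(timeoutMs);
    socket.once("connect", () => finish(true));
    socket.once("timeout", () => finish(false));
    socket.once("error", () => finish(false));
  });
}

checkTcp(ADMIN_GATEWAY, WEB_GUI_PORT).then((up) => {
  console.log(
    `Admin Gateway Web GUI (${ADMIN_GATEWAY}:${WEB_GUI_PORT}): ${up ? "reachable" : "NOT reachable"}`
  );
});
```

If the Web GUI stops answering after a rule change, review the LAN rules and the firewall logs as described in the troubleshooting tips above.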
https://docs.securedrop.org/en/latest/firewall_opnsense.html
2021-11-27T15:22:57
CC-MAIN-2021-49
1637964358189.36
[array(['_images/opnsense-authservers.png', 'OPNSense - auth server'], dtype=object) array(['_images/opnsense-otpcheck.png', 'OPNSense - otpcheck'], dtype=object) array(['_images/opnsense-qrcode.png', 'OPNSense - qrscan'], dtype=object) array(['_images/opnsense-testuserhappy.png', 'OPNSense - testuserhappy'], dtype=object) array(['_images/opnsense-no-updates.png', 'OPNSense - No Updates'], dtype=object) array(['_images/opnsense-disable-dhcp.png', 'OPNSense - Disable DHCP'], dtype=object) array(['_images/wired_settings.png', 'Wired Settings'], dtype=object) array(['_images/tails_network_settings.png', 'Tails Network Settings'], dtype=object) array(['_images/tails-manual-network-with-highlights.png', 'Tails Manual Network Settings'], dtype=object) array(['_images/four_nic_admin_workstation_static_ip_configuration.png', '4 NIC Admin Workstation Static IP Configuration'], dtype=object) array(['_images/opnsense-assign-interfaces.png', 'OPNSense - assign interfaces'], dtype=object) array(['_images/opnsense-alias-start.png', 'OPNSense - Alias Start'], dtype=object) array(['_images/opnsense-alias-end.png', 'OPNSense - aliases end'], dtype=object) array(['_images/opnsense-lan-rules.png', 'OPNSense - Firewall LAN Rules'], dtype=object) array(['_images/opnsense-antilockout.png', 'OPNSense - Disable Antilockout'], dtype=object) array(['_images/opnsense-firewall-opt1.png', 'OPNSense Firewall OPT1 Rules'], dtype=object) array(['_images/opnsense-firewall-opt2.png', 'OPNSense Firewall OPT2 Rules'], dtype=object)]
docs.securedrop.org
Radial Tree What is a radial tree? A radial tree is a taxonomic tree visualized as a circle. Why use a radial tree? Since it is sometimes difficult to fit a tree in a small vertical space, it can be useful to view it more compactly as a circle to see the taxonomic relationships of calls. Exporting To export a radial tree, click "Export" in the top right corner of your browser and select "PNG" or "SVG". Updated 4 months ago
https://docs.cosmosid.com/docs/radial-tree
2021-11-27T14:51:15
CC-MAIN-2021-49
1637964358189.36
[array(['https://p-AeFvB6.t2.n0.cdn.getcloudapp.com/items/12uvp4jp/916b8a26-8cb2-428f-8f82-e066ad561af8.png?v=dc67d538c405f563078bcc7dab03d278', None], dtype=object) ]
docs.cosmosid.com
HTML5: What Is It, Why Should I Care, and What’s Microsoft Doing About It? HTML5. Unless you’ve been living under a rock, you’ve probably at least heard of it. But what is HTML 5, really? Background HTML5 is a bit of an overloaded term at the moment, which unfortunately leads to no small amount of confusion. Wikipedia has a bit of the history of the HTML5 specification process, but while that history does tell us that there are currently two separate groups (WHATWG and W3C) working on the HTML5 specification, it doesn’t really tell us much about what HTML5 brings to the table, or its scope. HTML5 is also an umbrella term, indeed a buzzword, that is frequently used to encompass other new web technology specifications, such as CSS3, Geolocation APIs, Web SQL Database, etc. Given two groups working on the standard, one might rationally wonder how that all works. Well, without getting too deeply into the history, one group (the W3C) was working on a separate specification (XHTML 2.0) which was long overdue, and which folks were worried would actually break much of how the web works today, by enforcing stricter standards in terms of how markup is constructed. Another group (WHATWG), didn’t want to wait around, and started working on HTML5 (originally referred to as Web Applications 1.0) in 2004. In 2007, the W3C adopted part of the work of WHATWG as the starting point for their HTML5. Confused yet? Wait, it gets better. WHATWG now refers to their specification as simply “HTML” and calls it a “Living Standard” meaning that it will be developed and updated on an ongoing basis. What’s in it for Me? So why should you, as a web developer, care? HTML 5 has some real promise for making some of your tasks easier. New semantic elements like <header>, <footer>, <article>, etc. make markup more readable and intuitive in terms of the purpose of various elements on the page. CSS3, while not directly a part of the HTML5 specification, adds more control and improvements to rendering of page elements, allowing you to develop better looking applications with less hackery needed to make it happen. And while they’re currently struggling with arguments over codec implementations in the various browsers (no one seems to be able to agree on a single codec), the <audio> and <video> elements hold promise of making it easier to embed audio and video in your pages and applications. And related technologies, such as Geolocation APIs, provide easier access to functionality that previously required plug-ins, script hacks, or was just plain limited to native applications. Is it ready, how do I use it, and is Microsoft supporting it? This is where the rubber meets the road…implementation. And it’s challenging for developers because the truth is that HTML5 is not finished, a subject of no little discussion on the web. In fact, there’s even a tongue-in-cheek site devoted to the fact: The answer from IsHtml5Ready.com That being said, most modern browsers provide support for at least a subset of the overall HTML5 and CSS3 feature set. One challenge facing both browser vendors and developers is that parts of the HTML5 specifications and some modules of the CSS3 specification are still changing on a weekly basis. This means that both browser vendors and web developers should be aware of the maturity of particular specifications prior to implementation. The solution for some vendors has been to implement draft specifications using vendor-specific prefixes. 
For example, Firefox and Safari added custom prefixes for adding rounded corners to elements, like so:

```css
.rounded
{
    -moz-border-radius: 5px;
    -webkit-border-radius: 5px;
    border-radius: 5px;
}
```

That’s 3x the CSS, before you consider any other browsers that may have custom implementations, and before you consider elements where you may want each corner rounded differently. Ugh! (and before folks complain, yes, I know that the current implementations of these browsers support the non-prefixed border-radius property…the point is whether or not such vendor-specific implementations make sense or not). So how is a developer to figure out what’s supported? Well, I’ll start by telling you what NOT to do…DON’T sniff the browser version as a means of testing if HTML 5 is supported. Browser (user agent) sniffing is a big reason for pain in transitioning from version to version of browsers. Too many sites, for example, use(d) script to sniff for IE, and then make assumptions about necessary fixes that stopped being necessary as of IE 8 (or even IE 7 in some cases). Unless you really want to maintain some complex script that manages versions of browsers you’re targeting, write your code to the current standards, and if you want to take advantage of future-looking or draft functionality, follow a two-step process: - Detect the specific feature you wish to use. - Use polyfills (mild language warning) or other workarounds to add support where the feature is not available. By using detection (either manually, or via a library such as Modernizr), you can focus on features, and not on browsers. Your code can degrade gracefully, shim with a polyfill, or whatever you choose, and when a newer browser is available that supports your desired feature, you won’t have any code to fix, the feature will simply work! Woohoo! What’s Microsoft doing to help? Microsoft has been very clear that for scenarios requiring the broadest possible reach, HTML5 represents the future. Internet Explorer 9 provides robust, standards-based support for the mature, stable features of HTML5 and CSS, and the IE team is committed to continuing to support a standards-based approach to feature implementation. You can read more about Internet Explorer 9 and HTML5/CSS3 at: Additionally, Microsoft is committed to improving tooling for HTML5 development, and a major first step in that direction was the addition of HTML5 and CSS3 support in Expression Web 4, via Service Pack 1. My colleague Chris Bowen has posted a very nice write-up of the HTML5 and CSS3 functionality in Expression Web 4 SP1, which I highly recommend (both the post and the service pack). Additionally, Visual Studio 2010 Service Pack 1 adds Intellisense statement completion for HTML5, and initial support for parts of CSS3. Last, but not least, Microsoft has teamed with Template Monster to release a series of Razor syntax-based templates for use with the new Microsoft WebMatrix development tool (though they can be used with Visual Studio as well), some of which take advantage of the new semantic markup elements in HTML5. You can see an example of one of these templates in use at the Mid Atlantic Developer Expo site (while you’re there, consider submitting a talk, or registering for the conference). 
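To make the feature-detection advice above concrete, here is a hedged sketch of what manual detection can look like. It is written in TypeScript (which compiles to plain JavaScript), tests three arbitrary features purely as examples, and is roughly the kind of check Modernizr performs for you under the hood.

```typescript
// Minimal manual feature detection: test for the feature, not the browser version.
function supportsCanvas(): boolean {
  const el = document.createElement("canvas");
  // If the 2D drawing API is absent, getContext is missing or returns null.
  return !!(el.getContext && el.getContext("2d"));
}

function supportsLocalStorage(): boolean {
  try {
    // Some browsers expose the object but throw on use (e.g. private browsing).
    window.localStorage.setItem("__test__", "1");
    window.localStorage.removeItem("__test__");
    return true;
  } catch {
    return false;
  }
}

function supportsGeolocation(): boolean {
  return "geolocation" in navigator;
}

// Degrade gracefully or load a polyfill instead of branching on the user agent.
if (!supportsCanvas()) {
  console.warn("Canvas not supported - falling back to static images.");
}
if (!supportsLocalStorage()) {
  console.warn("localStorage not available - using cookies or in-memory storage instead.");
}
if (!supportsGeolocation()) {
  console.warn("Geolocation API missing - asking the user for a location manually.");
}
```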
You can safely assume that more improvements in tooling are coming, and that support for HTML5 and CSS3 (and related technologies) will continue to be added to Internet Explorer and Visual Studio and Expression Web as the specifications reach the appropriate level of maturity. Conclusion So what should you, as a web developer, take from all of this? Well, for starters, this discussion really just scratches the surface, so start reading up on HTML 5 (including this interesting history). When you’re ready to move forward, keep the following in mind: - Go ahead and jump in with HTML5 and CSS3, but with the understanding that browser support varies, due to the fact that some of these specifications are a moving target, while others are sufficiently mature to support robust, cross-browser implementations. - Use detection techniques (or Modernizr) to determine whether a given feature is supported by your user’s browser, and use polyfills as needed to supplement where features are missing. - Take advantage of tooling in Expression Web 4 SP1 and Visual Studio 2010 SP1 to make building HTML5-enabled sites easier. HTML5 is here to stay, and while it may not be completely finished, there’s value in testing the waters. Let me know what you think about HTML5, and Microsoft’s support for it, either by email, or in the comments.
https://docs.microsoft.com/en-us/archive/blogs/gduthie/html5-what-is-it-why-should-i-care-and-whats-microsoft-doing-about-it
2021-11-27T16:11:20
CC-MAIN-2021-49
1637964358189.36
[]
docs.microsoft.com
Responsive NativeBase V3 supports responsive styles out of the box. Instead of manually adding responsiveness to your apps, NativeBase V3 allows you to provide object and array values to add responsive styles. Responsive syntax relies on the breakpoints defined in the theme object. To make styles responsive, you can use either the array or object syntax. #The Array syntax All style props accept arrays as values for responsive styles. For example, to make a Box width or w responsive using the array syntax, you can pass an array of values (see the combined sketch after this section). #The Object syntax You can also define responsive values with breakpoint aliases in an object. Any undefined alias key will define the base, non-responsive value. For example, to make a Text fontSize responsive using the object syntax, you can pass an object keyed by breakpoint aliases (see the combined sketch after this section). #Demo Here's a simple example of a component that uses a stacked layout on small screens, and a side-by-side layout on larger screens (also shown in the sketch below).
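As a concrete illustration of the array syntax, the object syntax, and the stacked/side-by-side demo described above, here is a hedged TSX sketch. The component imports ( Box, Text, NativeBaseProvider) and the breakpoint aliases ( base, sm, md, lg) follow NativeBase v3's documented defaults, but verify the exact names and theme tokens against your installed version.

```tsx
import React from "react";
import { NativeBaseProvider, Box, Text } from "native-base";

export default function ResponsiveDemo() {
  return (
    <NativeBaseProvider>
      {/* Array syntax: values map to the theme breakpoints in order (base, sm, md, ...). */}
      <Box w={["100%", "50%", "25%"]} bg="primary.500" p={4}>
        {/* Object syntax: name the breakpoints explicitly; "base" is the non-responsive default. */}
        <Text fontSize={{ base: "sm", md: "md", lg: "xl" }}>Responsive text</Text>
      </Box>

      {/* Demo: stacked on small screens, side-by-side from the md breakpoint up. */}
      <Box flexDirection={{ base: "column", md: "row" }}>
        <Box flex={1} bg="emerald.400" p={4} />
        <Box flex={1} bg="amber.400" p={4} />
      </Box>
    </NativeBaseProvider>
  );
}
```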
https://docs.nativebase.io/3.0.0-next.38/responsive-style/
2021-11-27T15:38:52
CC-MAIN-2021-49
1637964358189.36
[]
docs.nativebase.io
NetServer agents and carriers NetServer is a layered library that enables developers to work with the SuperOffice database without having to know the details of how data is stored. Think of it as a set of high-level APIs where all the hard work of language decoding, security checks, database selects, and joining tables is handled for you. By "work with" we mean CRUD operations: create, read, update, and delete. A service is primarily a method exposed by the NetServer to handle the data in the SuperOffice database or enhance the presentation of said data. A single service call will represent many database queries and contain business logic, user-preference checking, and default handling. Tip In CRMScript, all NetServer classes are prefixed NS and their methods typically follow Pascal-case naming conventions. Authentication When using NetServer, the CRMScript runs as the currently signed-in user. For background tasks such as email import, scheduled tasks, and data exchange (DBI), the script runs as a system user. Note Sentry will apply to your CRMScripts. Agent and carrier software pattern The agent and carrier pattern separates data from actions: - Agents govern actions - Carriers are containers of content (data) All NetServer services are called through an agent. To get your hands on data, you must go through the appropriate agent. A few services are not reachable through CRMScript. These exceptions are: - AudienceAgent - FileManagerAgent - TrayAppAgent Agents Agents are designed to handle a specific business area, for example, sale. - An agent represents a set of related service calls - Each method on the agent corresponds to 1 service call Here's how it works: - Create a new agent. - Use the agent to access 1 (or more) of the CRUD methods exposed by it. - Set or get data. - Save (if applicable). Typical methods: CreateDefaultEntity() GetEntity(id) SaveEntity() DeleteEntity() Tip The corresponding CRMScript class is labeled NS[Businessarea]Agent. For example, NSPersonAgent, NSContactAgent, and NSAppointmentAgent. Declaration CRMScript treats an agent as any other object, except agents don't have a state. Thus, you need to declare each agent only once. Carriers Carriers represent the data passed back and forth to the server by the agent. There are 2 types of carriers: - Simple read-only carriers - Complex entity carriers Tip To tell them apart, look for the word Entity at the end of the CRMScript class name. Read-only carriers (item carriers) The simple carriers expose the properties of their content primarily as simple string values. In many situations, this simplicity is all you need and gives you the advantage of avoiding overhead. However, they can't be saved back to the database!

```crmscript
NSPersonAgent personAgent;
NSPerson p = personAgent.GetPerson(5);
printLine(p.GetFullName());
```

Note Whenever you want to add to or change the database, you must use an entity carrier. Entity carriers The complex carriers expose the properties of their content as objects, which are populated with more detailed data. They can be updated and stored back to the database.

```crmscript
NSPersonAgent personAgent;
NSAssociateAgent associateAgent;
NSPersonEntity p = personAgent.GetPersonEntity(5);
printLine("Original name: " + p.GetFullName());
p.SetMiddleName("de");
personAgent.SavePersonEntity(p);
printLine("Updated name: " + personAgent.GetPerson(5).GetFullName());
```

Tip Remember to call save() to push the changes back to the database! Declaration Declared carrier objects are NOT initialized! 
This is especially important for objects that use enumerations, because those objects will contain illegal values. If you try to access them, NetServer will throw an exception. Tip To create a new "blank" entity, use the CreateDefaultEntity() method. Then set at least enums. Use Get and Set to access attributes of entities. - Get methods return another carrier or a basic type - Set methods take a carrier or basic type as argument
https://docs.superoffice.com/automation/crmscript/netserver/ns-agents-and-carriers.html
2021-11-27T14:58:04
CC-MAIN-2021-49
1637964358189.36
[]
docs.superoffice.com
Date Range Supported Date Ranges: - Earliest: January 1, 1400 - Latest: December 31, 2599 You can use dates in the Gregorian calendar system only. Dates in the Julian calendar are not supported. Formatting Tokens You can use the following tokens to change the format of a column of dates:
https://docs.trifacta.com/pages/diffpages.action?originalId=151995224&pageId=155386378
2021-11-27T14:10:28
CC-MAIN-2021-49
1637964358189.36
[]
docs.trifacta.com
Budget model hierarchy Use the budget model hierarchy link to modify your budget model by adding or deleting segments in the budget model hierarchy. As with the cost model hierarchy, you can also use the Build segment hierarchy link in the Budget Model form to build the hierarchy easily using an interactive user interface. Add new segments to the hierarchy or remove segments from the budget model per your requirements. Note: In budget models, if you have already generated budget keys, then you cannot modify the budget model hierarchy.
https://docs.servicenow.com/bundle/jakarta-it-business-management/page/product/it-finance/concept/c_BudModHier.html
2018-02-18T01:26:30
CC-MAIN-2018-09
1518891811243.29
[]
docs.servicenow.com
Event ID 203 — Software Installation Processing Applies To: Windows Server 2008 R2 The Software Installation client-side extension is responsible for installing software, applied through Group Policy, to both computers and users. Event Details Resolve This is a normal condition. No further action is required. Related Management Information Software Installation Processing Group Policy Infrastructure
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd392615(v=ws.10)
2018-02-18T02:10:31
CC-MAIN-2018-09
1518891811243.29
[array(['images/dd300121.green%28ws.10%29.jpg', None], dtype=object)]
docs.microsoft.com
[ aws . waf-regional ] Returns an array of RegexMatchSetSummary objects. See also: AWS API Documentation See 'aws help' for descriptions of global parameters. list-regex-match-sets [--next-marker <value>] [--limit <value>] [--cli-input-json <value>] [--generate-cli-skeleton <value>] --next-marker (string) If you specify a value for Limit and you have more RegexMatchSet objects than the value of Limit , AWS WAF returns a next-marker value in the response that allows you to list another group of RegexMatchSets . For the second and subsequent list-regex-match-sets requests, specify the value of next-marker from the previous response to get information about another batch of RegexMatchSet objects. --limit (integer) Specifies the number of RegexMatchSet objects that you want AWS WAF to return for this request. If you have more RegexMatchSet objects than the number you specify for Limit , the response includes a next-marker value that you can use to get another batch of RegexMatchSet objects. Output: NextMarker -> (string) If you have more RegexMatchSet objects than the number that you specified for Limit in the request, the response includes a next-marker value. To list more RegexMatchSet objects, submit another list-regex-match-sets request, and specify the next-marker value from the response in the next-marker value in the next request. RegexMatchSets -> (list) An array of RegexMatchSetSummary objects. (structure) Returned by list-regex-match-sets . Each RegexMatchSetSummary object includes the Name and RegexMatchSetId for one RegexMatchSet . RegexMatchSetId -> (string) The RegexMatchSetId for a RegexMatchSet . You use RegexMatchSetId to get information about a RegexMatchSet , update a RegexMatchSet , remove a RegexMatchSet from a Rule , and delete a RegexMatchSet from AWS WAF. RegexMatchSetId is returned by create-regex-match-set and by list-regex-match-sets . Name -> (string) A friendly name or description of the RegexMatchSet . You can't change Name after you create a RegexMatchSet .
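For programmatic use, the same Limit / next-marker pagination can be driven from the AWS SDK. The TypeScript sketch below assumes the AWS SDK for JavaScript v3 client for this service ( @aws-sdk/client-waf-regional with WAFRegionalClient and ListRegexMatchSetsCommand, following the SDK's usual naming); confirm the package and command names against the SDK reference before relying on it.

```typescript
import {
  WAFRegionalClient,
  ListRegexMatchSetsCommand,
} from "@aws-sdk/client-waf-regional";

const client = new WAFRegionalClient({ region: "us-east-1" });

// Page through all RegexMatchSetSummary objects, mirroring the CLI's
// --limit / --next-marker behaviour.
async function listAllRegexMatchSets() {
  const summaries: Array<{ Name?: string; RegexMatchSetId?: string }> = [];
  let nextMarker: string | undefined;

  do {
    const response = await client.send(
      new ListRegexMatchSetsCommand({ Limit: 100, NextMarker: nextMarker })
    );
    summaries.push(...(response.RegexMatchSets ?? []));
    nextMarker = response.NextMarker; // undefined when there are no more pages
  } while (nextMarker);

  return summaries;
}

listAllRegexMatchSets().then((sets) =>
  sets.forEach((s) => console.log(`${s.Name}: ${s.RegexMatchSetId}`))
);
```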
https://docs.aws.amazon.com/cli/latest/reference/waf-regional/list-regex-match-sets.html
2018-02-18T01:46:57
CC-MAIN-2018-09
1518891811243.29
[]
docs.aws.amazon.com
Locations in ZoomShift represent the physical locations, sites, or departments within your organization. This guide covers how to manage locations in ZoomShift. We have a tutorial video that covers some of the information in this article. If you like watching instead of reading you should check it out! Adding/Editing Locations To add or edit locations go to the Settings -> Locations page. If you do not see Settings in your menu then you probably do not have permission. To learn more about roles/permissions, read this guide. There is no limit on the number of locations you can add to ZoomShift. Adding a new location is as easy as clicking the New Location button. After clicking this button you will see a form like the one below. Once a location has been added you can edit that location by clicking on its respective row. You will see the same form pop up like the one below. Assign Employees to Locations Assigning your employees to locations will help to organize your schedule. You can learn more about this by checking out this help guide. Using Locations for Mobile Time Clock If you are using the mobile time clock you have the option to track your employee’s location when they punch in/out. You can then set a maximum distance from one of your locations. If an employee punches in/out outside of this range we will mark the timesheet for review. In order for this feature to work you need to enter an address for each of your locations. The address field is just a basic mailing address like: 123 Main St. Milwaukee WI 53203. After saving a location with an address you will notice that the Location Detected column will get marked with a green check mark. This means that ZoomShift was able to detect the GPS coordinates of this address. This location can now be used when punching in from the mobile time clock. By default, locations are displayed in the order that they are added to ZoomShift. If you want to change this order you can do so by dragging your locations to the order you want. You can do this by clicking on the reorder icon next to a location’s name and then dragging it to the order you want.
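ZoomShift performs the distance check for you, but as an illustration of the kind of calculation involved, the TypeScript sketch below compares the GPS coordinates reported at punch-in against a location's coordinates using the haversine formula and flags the punch for review when it falls outside a maximum distance. The coordinates and the 500-meter threshold are made-up example values, not ZoomShift defaults.

```typescript
interface Coordinates {
  lat: number; // degrees
  lon: number; // degrees
}

// Great-circle distance between two points, in meters (haversine formula).
function distanceMeters(a: Coordinates, b: Coordinates): number {
  const R = 6371000; // mean Earth radius in meters
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLon = toRad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Hypothetical location and punch, for illustration only.
const location: Coordinates = { lat: 43.0389, lon: -87.9065 }; // Milwaukee, WI
const punch: Coordinates = { lat: 43.0452, lon: -87.912 };
const maxDistanceMeters = 500;

const d = distanceMeters(location, punch);
console.log(
  d <= maxDistanceMeters
    ? `Punch OK (${Math.round(d)} m from the location).`
    : `Punch is ${Math.round(d)} m away - mark the timesheet for review.`
);
```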
http://docs.zoomshift.com/settings-and-configuration/locations
2018-02-18T00:58:39
CC-MAIN-2018-09
1518891811243.29
[array(['https://images.contentful.com/7m65w4g847me/2CuB5haKyM8uKYSMSUEsm8/786ffdff25470b3942c611471209a962/locations-1.png', None], dtype=object) array(['https://images.contentful.com/7m65w4g847me/19Cd2gY9sEC6oY6MMi4ASY/93a2b3cf387fabb4dcf5c1ded8a804a5/locations-2.png', None], dtype=object) array(['https://images.contentful.com/7m65w4g847me/11TntSJWTsqmoWCAswGw4i/b198e7c46329316cbfe01568712fcb8d/locations-3.gif', None], dtype=object) ]
docs.zoomshift.com
Set Time-Off Limits Wizard Overview To set time-off limits by using WFM's best estimates of availability, possible days off, and other relevant information, start the Set Time-Off Limits Wizard (STOL Wizard). - Select Set Limits from the Action menu, or click the Set Limits icon in the toolbar. - The Set Time-Off Limits Wizard starts and displays its first page, Choose Dates. - There are potentially two more screens in the wizard. Factors Impacting Agent Availability When setting time-off limits, be aware that WFM calculates the number of available agents for every timestep according to: - The start and end of contract availability—The agent is not available outside of the availability interval. - The earliest start and latest end time of shifts assigned to the contract—This might further limit the agent's availability interval. - The activity open hours—Even if the target is not an activity, the agent's availability is limited by the earliest open and the latest close time of the activities for which the agent is qualified. - The Rotating Pattern—Strict rotating day-offs, explicitly specified start and end times, and earliest and latest times of the rotating shift. - Optionally, granted day-offs and full-day exceptions.
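WFM's own availability calculation is internal to the product, but the factors listed above can be pictured as intersecting time windows. The TypeScript sketch below is an illustrative simplification only, not WFM's algorithm: it intersects a contract availability window, the earliest/latest shift bounds, and activity open hours for a single day, using hypothetical values expressed as minutes from midnight.

```typescript
interface Window {
  start: number; // minutes from midnight
  end: number;   // minutes from midnight
}

// Intersect a list of availability windows; null means "not available at all".
function intersect(windows: Window[]): Window | null {
  const start = Math.max(...windows.map((w) => w.start));
  const end = Math.min(...windows.map((w) => w.end));
  return start < end ? { start, end } : null;
}

// Hypothetical inputs for one agent on one day (illustrative only).
const contractAvailability: Window = { start: 8 * 60, end: 18 * 60 }; // 08:00-18:00
const shiftBounds: Window = { start: 9 * 60, end: 17 * 60 };          // earliest start / latest end
const activityOpenHours: Window = { start: 7 * 60, end: 16 * 60 };    // earliest open / latest close

const available = intersect([contractAvailability, shiftBounds, activityOpenHours]);
console.log(
  available
    ? `Agent counts as available from ${available.start / 60}:00 to ${available.end / 60}:00`
    : "Agent is not available for this target"
);
```

Granted day-offs, full-day exceptions, and rotating patterns would simply remove or further restrict the windows before the intersection step.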
https://docs.genesys.com/Documentation/WM/latest/SHelp/StTOLmtsWz
2019-01-16T09:41:52
CC-MAIN-2019-04
1547583657151.48
[]
docs.genesys.com
Add Open Channels to your project, or edit an existing configuration. Open Channels configuration requires your webhook URL. See: Set Up Your Webhook Server. What You'll Do - Configure Open Channels in your Urban Airship project's settings. Steps - Open your project from the dashboard, then navigate to Settings » Channels » Open Channels. - Click + Configure New Open Channel, or click Edit to make changes to a previously configured channel. - Enter the configuration for your channel, including your webhook URL, then save, or cancel to discard. After saving, a Validation Code field appears, containing a 36-character UUID. This code must be returned by the webhook server at <webhook_root>/validate. See: Set Up Your Webhook Server. - Check the Enabled box to enable this for API use, then click Update, or cancel to discard. We do not validate the endpoint until you enable and save the configuration. We will revalidate it each time you update the configuration, as long as it remains enabled.
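The validation step depends on your webhook server answering requests at <webhook_root>/validate with the validation code shown in the dashboard. As a hedged sketch of that idea only, the TypeScript/Express example below returns a placeholder code as a plain-text body; the exact request and response format expected is defined in the Set Up Your Webhook Server guide, so adjust accordingly.

```typescript
import express from "express";

const app = express();

// Placeholder: paste the 36-character validation code from the dashboard here,
// or supply it via an environment variable.
const VALIDATION_CODE =
  process.env.OPEN_CHANNEL_VALIDATION_CODE ?? "00000000-0000-0000-0000-000000000000";

// The validation request is sent to <webhook_root>/validate once the
// configuration is enabled and saved. This sketch simply echoes the code;
// check the webhook server guide for the exact response format.
app.get("/validate", (_req, res) => {
  res.type("text/plain").send(VALIDATION_CODE);
});

app.listen(8080, () => {
  console.log("Webhook server listening on port 8080");
});
```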
https://docs.urbanairship.com/tutorials/getting-started/channels/open-channels/
2019-01-16T10:07:35
CC-MAIN-2019-04
1547583657151.48
[]
docs.urbanairship.com
Adding Custom CSS classes to fields or widgets Note: Custom CSS is always enabled for fields. To enable custom CSS classes for GravityView widgets, first follow this how-to guide. Click the gear icon to access field or widget settings. The gear icon will appear when there are configurations available for the field or the widget. Add your custom CSS classes. You can add CSS classes that will be added to the field container when rendered in the View. GravityView provides some classes for modifying widths. Learn more on the CSS Guide's "Grid Layout" section. Close the settings screen. Save the view.
https://docs.gravityview.co/article/272-adding-custom-css-classes-to-fields-or-widgets
2019-01-16T09:42:13
CC-MAIN-2019-04
1547583657151.48
[array(['https://gravityview.co/wp-content/uploads/2018/01/click-the-gear-icon-to-access-field-or-widget-settings.png?1431729050', None], dtype=object) array(['https://gravityview.co/wp-content/uploads/2018/01/add-custom-css-class.png?1431729051', None], dtype=object) array(['https://gravityview.co/wp-content/uploads/2018/01/close-the-settings-screen.png?1431729052', None], dtype=object) array(['https://gravityview.co/wp-content/uploads/2018/01/save-the-view.png?1431729052', None], dtype=object) ]
docs.gravityview.co
This topic provides a quick introduction to setting up your SNMP Network Devices in Uptime Infrastructure Monitor as well as how to resolve common setup issues. Adding your network devices to Uptime Infrastructure Monitor The first step in setting up an SNMP device for monitoring within Uptime Infrastructure Monitor is to add the device to Uptime Infrastructure Monitor as an element. This is done from the Add System/Network Device option in My Infrastructure, where the Type of System/Device field should be set to one of the following: - Net-SNMP V2 or Net-SNMP V3: these are typically used for servers that support the Net-SNMP protocol for gathering metrics related to CPU, Memory, Disk load, etc. as well as any additional metrics provided by OID. - Network Device: this option is used for network devices such as switches, routers, SANs, firewalls, etc. that only provide metrics for OIDs appropriate to their functionality (e.g. port availability, network utilization, etc.). Also use this selection to add an SNMP v1 device. SNMP connection settings can be defined globally on the Global Credentials Settings page of the Uptime Infrastructure Monitor UI (Config -> Global Credentials Settings -> SNMP Global Configuration), so that the Use Global SNMP Connection Configuration check box can be selected; otherwise, you will need to provide the connection details for each device / system. The main options here are: - SNMP Version: - V1 (only available if you select Network Device in the Type of System/Device field) - an older, un-authenticated, un-encrypted version of SNMP. - V2 - an un-authenticated, un-encrypted version of SNMP. - V3 - a newer version of SNMP that provides for authentication and encryption of the transmitted data. - SNMP Port: Default option is port 161 but may be set differently as required. - Read Community: Default option is public but will very likely need to be changed. The read community acts as a very basic password for controlling access to the SNMP data. - SNMP V3 only: These options are only used if the device supports the newer encrypted version 3 of SNMP. - Username. - Authentication Password. - Authentication Method. - Privacy Password. - Privacy Type. If you are uncertain of which SNMP values to select for the device / system, the following areas may be helpful to check: - Check in the network device's admin panel / interface. Most devices should allow you to change these values somewhere under their network options. - Check the user guide / manual for the device as it should provide the default options. - If the device is a server running the net-snmp daemon, check the snmpd.conf file under either /etc or /usr/local/etc/snmp/ for the current values. - Check with your organization's network administrators. After setting all the SNMP configuration values and hostname, click on the save button to have Uptime Infrastructure Monitor attempt to communicate with the device / system over SNMP and create the element. Some common issues that may arise at this stage are: - Request seems to time-out when setting up the device. - Check that the port and read community are correct. - Error: Can't load required RFC1213 MIB data. - This error means that the device doesn't support the RFC1213 MIB, which is required for net-snmp support. Try setting it up as a network device instead. Setting up a Service Monitor for the device's OIDs Upt.
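Independently of Uptime Infrastructure Monitor, it can save time to confirm the SNMP values by querying the device directly with the Net-SNMP command-line tools before adding the element. The TypeScript sketch below simply wraps snmpget (which must be installed separately) to fetch sysDescr.0 over SNMP v2c; the host and community string are placeholders, and SNMP v1 or v3 would need different flags.

```typescript
import { execFile } from "child_process";

// Placeholders - replace with the device's hostname/IP and read community.
const host = "192.0.2.10";
const community = "public";

// sysDescr.0 (1.3.6.1.2.1.1.1.0) is part of the RFC1213 MIB, so a successful
// reply is a good sign the device will also respond when added as an element.
execFile(
  "snmpget",
  ["-v2c", "-c", community, host, "1.3.6.1.2.1.1.1.0"],
  { timeout: 10000 },
  (error, stdout, stderr) => {
    if (error) {
      console.error("SNMP query failed - check the port, community string, and version.");
      console.error(stderr || error.message);
      return;
    }
    console.log("Device responded:", stdout.trim());
  }
);
```

If this query times out, the same causes listed above (wrong port or read community) are the first things to check.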
http://docs.uptimesoftware.com/display/UT/SNMP+Monitoring+Quick+Start+Guide
2019-01-16T11:00:31
CC-MAIN-2019-04
1547583657151.48
[]
docs.uptimesoftware.com
Keep track of invoices and payments in one place. Influx allows you to manually reconcile payments, and we also integrate with major direct debit companies – for a hands-off experience as an owner/manager. Turn accounts on by going: Settings > Preferences > Account preferences Select ‘edit’, and then ‘Yes’ next to ‘manage accounts’:
https://docs.influxhq.com/accounts/
2019-01-16T10:34:51
CC-MAIN-2019-04
1547583657151.48
[array(['https://influxhqcms.s3.amazonaws.com/sites/5372211a4673aeac8b000002/assets/54af39664673aef82e067fb9/Manage_accounts.png', None], dtype=object) ]
docs.influxhq.com