content: string (length 0 to 557k)
url: string (length 16 to 1.78k)
timestamp: timestamp[ms]
dump: string (length 9 to 15)
segment: string (length 13 to 17)
image_urls: string (length 2 to 55.5k)
netloc: string (length 7 to 77)
5.7.1 Release Notes Release Information The Appspace Core 5.7.1 build is a planned update that focuses on the latest bug fixes. Released on October 17th, 2015, this build covers Appspace Cloud. Bug Fixes Resolved Bugs - AP-8926: Configuration provider used instead of hardcoded path in the IOConfiguration. - AP-8928: Appspace configuration is loaded multiple times, during first-time load and when updating changes.
https://docs.appspace.com/appspace/6.0/appspace/release-notes/5.7/5.7.1/
2021-09-17T01:19:59
CC-MAIN-2021-39
1631780053918.46
[]
docs.appspace.com
A Cross Site Request Forgery (CSRF) attack attempts to force a user to execute functionality without their knowledge. Typically the attack is initiated by presenting the user with a link or image that, when clicked, invokes a request to another site with which the user already has an established active session. CSRF is typically a browser-based attack. The only way to create an HTTP request from a browser with a custom HTTP header is to use JavaScript (XMLHttpRequest), Flash, or a similar technology, and browsers' built-in security prevents other sites from adding such headers to cross-site requests.
To add a CSRF protection filter: Open the cluster topology descriptor file, $cluster-name.xml, in a text editor. Add a WebAppSec webappsec provider to topology/gateway with a parameter for each service as follows:
<provider>
  <role>webappsec</role>
  <name>WebAppSec</name>
  <enabled>true</enabled>
  <param>
    <name>csrf.enabled</name>
    <value>$csrf_enabled</value>
  </param>
  <param><!-- Optional -->
    <name>csrf.customHeader</name>
    <value>$header_name</value>
  </param>
  <param><!-- Optional -->
    <name>csrf.methodsToIgnore</name>
    <value>$HTTP_methods</value>
  </param>
</provider>
where:
$csrf_enabled is either true or false.
$header_name: when the optional parameter csrf.customHeader is present, the value contains the name of the header that determines if the request is from a trusted source. The default, X-XSRF-Header, is described by the NSA in its guidelines for dealing with CSRF in REST.
$http_methods: when the optional parameter csrf.methodsToIgnore is present, the value enumerates the HTTP methods to allow without the custom HTTP header. The possible values are GET, HEAD, POST, PUT, DELETE, TRACE, OPTIONS, CONNECT, or PATCH. For example, specifying GET allows GET requests from the address bar of a browser.
Save the file. The gateway creates a new WAR file with a modified timestamp in $gateway/data/deployments.
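For illustration only, here is what a filled-in provider block might look like with the filter enabled, the default header name spelled out, and a few methods exempted. The specific values below are example choices, not defaults taken from the documentation above, and the method list is shown comma-separated:
<provider>
  <role>webappsec</role>
  <name>WebAppSec</name>
  <enabled>true</enabled>
  <param>
    <name>csrf.enabled</name>
    <value>true</value>
  </param>
  <param>
    <name>csrf.customHeader</name>
    <value>X-XSRF-Header</value>
  </param>
  <param>
    <name>csrf.methodsToIgnore</name>
    <value>GET,OPTIONS,HEAD</value>
  </param>
</provider>
With this configuration, any request other than a GET, OPTIONS, or HEAD must carry the X-XSRF-Header header to pass the filter.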
https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.5.0/bk_security/content/configuring_protection_filter_csrf_attacks.html
2021-09-17T01:31:40
CC-MAIN-2021-39
1631780053918.46
[]
docs.cloudera.com
Date: Sat, 08 Jun 2013 10:11:52 +0200 From: "Herbert J. Skuhra" <[email protected]> To: "O. Hartmann" <[email protected]> Cc: [email protected] Subject: Re: mail/claws-mail: exporting mail filters? Message-ID: <871u8d6myf.wl%[email protected]> In-Reply-To: <[email protected]> References: <[email protected]>
On Sat, 8 Jun 2013 09:04:12 +0200 "O. Hartmann" <[email protected]> wrote:
> Since I use the same private and department email accounts on several boxes, I'd like to export the mail filters I created and import them on other boxes. I didn't figure out yet how to perform this task in claws-mail. I realized that this is a point "still under construction" on close to every platform I have used for mailing.
>
> Does anyone have an idea?
1. Ask on the claws mailing list? 2. Use a search engine? 3. Search the claws mailing list archive on gmane? 4. Read the claws-mail man page? 5. Copy ~/.claws-mail/matcherrc? -- Herbert
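Herbert's last suggestion is usually all that is needed. As a sketch, assuming the second machine is reachable over SSH (the hostname below is a placeholder), the filter rules can be copied with:
scp ~/.claws-mail/matcherrc otherbox:~/.claws-mail/matcherrc
Claws Mail should be closed on the target machine first, so the copied file is not overwritten when the program saves its own configuration on exit.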
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=543117+0+archive/2013/freebsd-questions/20130609.freebsd-questions
2021-09-17T01:36:38
CC-MAIN-2021-39
1631780053918.46
[]
docs.freebsd.org
Using snapshots on a VPS Find out how to enable and use the Snapshot option in the OVHcloud Control Panel Last updated 3rd September 2021 Creating a snapshot is a fast and simple way to secure a functioning system before making changes that might have undesired or unforeseen consequences, for example testing a new configuration or software. It does not, however, constitute a complete system backup strategy. This guide explains the usage of snapshots for your OVHcloud VPS. Before applying backup options, we recommend consulting the VPS options for pricing comparisons and further details. Log in to your OVHcloud Control Panel, navigate to the "Bare Metal Cloud" section, and select your server from the left-hand sidebar under Virtual Private Servers. From the Home tab, scroll down to the box labelled Summary of options. Click on ... next to the option "Snapshot" and in the context menu click on Order. In the next step, please take note of the pricing information, then click on Order. You will be guided through the order process and receive a confirmation email. Once the option is enabled, click on ... next to the option "Snapshot" and in the context menu click Take a snapshot. Creating the snapshot might take a few minutes. Afterwards, the timestamp of the creation will appear in the Summary of options box. Since you can only have one snapshot activated at a time, the existing snapshot has to be deleted before creating a new one. Simply choose Delete the snapshot from the context menu. If you are sure that you would like to reset your VPS to the status of the snapshot, click Restore the snapshot and confirm the restoration task in the popup window. Please note that when you restore a VPS from a snapshot, the snapshot will be deleted. If you wish to keep the same snapshot, you should take a new one before making changes to the restored system. Check the qemu-guest-agent service to ensure it is running: $ sudo service qemu-guest-agent status Consult our cPanel auto backup guide to find out how to fix issues with cPanel servers getting stuck during an OVHcloud automated backup. Using automated backups
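If that status check shows the agent is missing or stopped, it can typically be installed and enabled with the distribution's own tooling. As a sketch for a Debian or Ubuntu based VPS (package and service names assumed to be the distribution defaults, not taken from the guide above):
sudo apt install qemu-guest-agent
sudo systemctl enable --now qemu-guest-agent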
https://docs.ovh.com/ca/en/vps/using-snapshots-on-a-vps/
2021-09-17T00:51:43
CC-MAIN-2021-39
1631780053918.46
[]
docs.ovh.com
Firewall The menu Network Protection > Firewall allows you to define and manage the firewall rules of the gateway. Generally speaking, the firewall is the central part of the gateway; in a networked environment it blocks communications that are forbidden by the security policy. The default security policy of Sophos UTM is that all network traffic is blocked and logged, except for the automatically generated rule sets that are necessary for other software components of the gateway to work. However, those auto-generated rule sets are not shown on the Firewall > Rules tab. This policy requires you to define explicitly which data traffic is allowed to pass the gateway.
https://docs.sophos.com/nsg/sophos-utm/utm/9.707/help/en-us/Content/utm/utmAdminGuide/NetProtFirewall.htm
2021-09-17T00:01:39
CC-MAIN-2021-39
1631780053918.46
[]
docs.sophos.com
Funnelback 15.6.0 Release notes for Funnelback 15.6.0 Released: 27 June 2016 15.6.0 - Selected improvements and bug fixes - Introduced Translucent Document Level Security, which allows some information to be exposed about documents the current user is not permitted to see. - Renamed Funnelback's "Query Completion" feature to "Auto-complete". - Introduced a new Custom Servlet Filter Hook mechanism to allow for advanced pre/post filtering of search requests. - Fixed a bug preventing users from copying a synonym/best-bet/curator item that exists only in the live view. - Fixed an issue with a missing "tuning" link in the breadcrumb on the Tuning History page. - Fixed a bug in the accessibility checker where StackOverflowErrors would be generated and stored in the log. - Fixed a bug in the accessibility reporter where checking documents with large numbers of errors would result in an OutOfMemoryError. - Funnelback's installer is now 64-bit, and no longer requires 32-bit compatibility libraries in order to install. - Crawls will now exit with a success status if they store documents (regardless of whether they are downloaded or copied forward by incremental crawling). - Several design/interaction improvements and enhancements to the Admin, Documentation, SEO Auditor and Content Auditor interfaces. - Push collections will now accept header metadata with the prefix X-Funnelback-Push-Meta-Data-. The old prefix with underscores is still supported, but is discouraged as some proxy servers strip such headers by default. - Implemented the ability to compare live and preview versions of curator rules. - Corrected handling of spaces in filenames within web resources. 15.6.0 - Upgrade Issues - Please note that the renaming of "Query Completion" to "Auto-complete" affects a large number of collection.cfg settings as well as CSV file names and a number of other areas. Where possible, the installer will automatically rename settings and files and update the relevant setting references in ftl files. Custom workflow scripts interacting with these settings or files may need to be manually updated while upgrading Funnelback. - If classic-ui is installed, database collections using the classic-ui's serve-db-document links may no longer work. Cache views, ideally the modern-ui's cache, should be used to provide links for database records instead. - The systems for starting most gathering components have been changed. While no functionality should be affected, please be aware that the format of the update logs has changed. - The modern-ui cache controller applies a stricter security model for cache copies, and any collections which have a security field enabled will not be able to serve cache copies. Previously, FileCopy collections had a security field set by default, even if DLS was not enabled, and unless removed this will prevent cache copies from being accessible after upgrading. - When upgrading, the installer will now move leftover files from lib/java/all to lib/java/prev-timestamp. This is to prevent custom or patched .jar files (deployed in earlier versions) from interfering with the upgraded system. Files in lib/java/prev-timestamp should be inspected and moved back into lib/java/all manually after the upgrade if they are required.
https://docs.squiz.net/funnelback/archive/more/release-notes/historical/15.6.0.html
2021-09-16T23:47:12
CC-MAIN-2021-39
1631780053918.46
[]
docs.squiz.net
Contents: You can create connections to one or more Microsoft SQL Server databases from the Trifacta platform. Limitations - This is a read-only connection. - You can create this connection from the connection box, or through the Connections page. See Connections Page. For additional details on creating a SQL Server connection, see Enable Relational Connections. This connection can also be created using the CLI or API. - For details on values to use when creating via CLI or API, see Connection Types. - See CLI for Connections. - See API Connections Create v3. Modify the following properties as needed. For more information, see Database Browser. Data Conversion For more information on how values are converted during input and output with this database, see SQL Server Data Type Conversions.
https://docs.trifacta.com/display/r051/Create+SQL+Server+Connections
2021-09-17T00:23:09
CC-MAIN-2021-39
1631780053918.46
[]
docs.trifacta.com
Effective on January 6, 2020 Obligations Rights and Limits Disclaimer and Limit of Liability Termination Governing Law and Dispute Resolution General Terms Vave “Dos and Don’ts” Complaints Regarding Content How To Contact Us
When you use our Services you agree to all of these terms. Your use of our Services is also subject to our Cookie Policy and our Privacy Policy, which covers how we collect, use, share, and store your personal information. You agree that by clicking “Try for Free”, “Essayer Gratuitement” (“Try for Free”), “Become a Manager” or similar, registering, accessing or using our services (described below), you are agreeing to enter into a legally binding contract with Vave. Vave.io, Vave apps, and other Vave-related sites, apps, communications and other services that state that they are offered under this Contract (“Services”), including the offsite collection of data for those Services, such as our ads and the “Share with Vave” plugin, are covered by this Contract. Registered users of our Services are “Members”, unregistered users are “Visitors”, and independent real estate advisors are “Real Estate Managers”.
Vave You conclude this Agreement with the company Parallax Partners LTD (also referred to as “us”), domiciled at C/O Pod2, managed by Mehdi Radi and Pierre Notton. We use the term “Designated Countries” to refer to countries in the European Union (EU), European Economic Area (EEA), and Switzerland. If you reside in the “Designated Countries”, you are entering into this Contract with Parallax Partners LTD (“V.
When you register and join the Vave Service or become a registered user on Vave, you become a Member or a Manager. If you have chosen not to register for our Services, you may access certain features as a “Visitor.” Vave account, which must be in your real name; and (3) you are not already restricted by Vave from using the Services. Creating an account with false information is a violation of our terms, including accounts registered on behalf of others or persons under the age of 16. “Minimum Age” means 16 years old. However, if the law requires that you must be older in order for Vave to lawfully provide the Services to you without parental consent (including the use of your personal data), then the Minimum Age is such older age.
Vave’s refund policy. We may calculate taxes payable by you based on the billing information that you provide us at the time of purchase. You can get a copy of your invoice through your Vave account settings under “Purchase History”.
Vave connections, restricting your profile visibility from search engines, or opting not to notify others of your Vave Vave, you own the content and information that you submit or post to the Services, and you are only granting V. You and Vave agree that if content includes personal data, it is subject to our Privacy Policy. You and Vave agree that we may access, store, process and use any information and personal data that you provide in accordance with the terms of the Privacy Policy and your choices (including settings). By submitting suggestions or other feedback regarding our Services to Vave, you agree that Vave can use and share (but does not have to) such feedback for any purpose without compensation to you.
You promise to only provide information and content that you have the right to share, and that your Vave profile will be truthful. You agree to only provide content or information that does not violate the law nor anyone’s rights (including intellectual property rights). You also agree that your profile information will be truthful. Vave may be required by law to remove certain information or content in certain countries.
Vave is not a storage service. You agree that we have no obligation to store, maintain or provide you a copy of any content or information that you or others provide, except to the extent required by applicable law and as noted in our Privacy Policy.
Vave may help connect Members offering their services (career coaching, accounting, etc.) with Members seeking services. Vave does not perform these services, nor does it employ individuals to perform them. You must be at least 18 years of age to offer, perform or procure these services. You acknowledge that Vave does not supervise, direct, control or monitor Members in the performance of these services and agree that (1) Vave is not responsible for the offering, performance or procurement of these services, (2) Vave does not endorse any particular Member’s offered services, and (3) nothing shall create an employment, agency, or joint venture relationship between Vave and any Member offering services. If you are a Member offering services, you represent and warrant that you have all the required licenses and will provide services consistent with our Professional Community Policies.
Similarly, Vave may help you register for and/or attend events organized by Members and connect with other Members who are attendees at such events. You agree that (1) Vave is not responsible for the conduct of any of the Members or other attendees at such events, (2) Vave does not endorse any particular event listed on our Services, (3) Vave does not review and/or vet any of these events, and (4) you will adhere to the terms and conditions that apply to such events.
We have the right to limit how you connect and interact on our Services. Vave reserves the right to limit your use of the Services, including the number of your connections and your ability to contact other Members. Vave reserves the right to restrict, suspend, or terminate your account if you breach this Contract or the law or are misusing the Services (e.g., violating any of the Dos and Don’ts or Professional Community Policies).
We’re providing you notice about our intellectual property rights. Vave reserves all of its intellectual property rights in the Services. Trademarks and logos used in connection with the Services are the trademarks of their respective owners. The Vave name, the “rooster” logos, and the Vave trademarks, service marks, graphics and logos used for our Services are trademarks or registered trademarks of Vave.
This is our disclaimer of legal liability for the quality, safety, or reliability of our Services. VAVE AND ITS AFFILIATES MAKE NO REPRESENTATION OR WARRANTY ABOUT THE SERVICES, INCLUDING ANY REPRESENTATION THAT THE SERVICES WILL BE UNINTERRUPTED OR ERROR-FREE, AND PROVIDE THE SERVICES (INCLUDING CONTENT AND INFORMATION) ON AN “AS IS” AND “AS AVAILABLE” BASIS. TO THE FULLEST EXTENT PERMITTED UNDER APPLICABLE LAW, VAVE AND ITS AFFILIATES DISCLAIM ANY IMPLIED OR STATUTORY WARRANTY, INCLUDING ANY IMPLIED WARRANTY OF TITLE, ACCURACY OF DATA, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
These are the limits of legal liability we may have to you. TO THE FULLEST EXTENT PERMITTED BY LAW (AND UNLESS VAVE HAS ENTERED INTO A SEPARATE WRITTEN AGREEMENT THAT OVERRIDES THIS CONTRACT), VAVE AND ITS AFFILIATES WILL NOT BE LIABLE TO YOU IN CONNECTION WITH THIS CONTRACT FOR ANY AMOUNT THAT EXCEEDS (A) THE TOTAL FEES PAID OR PAYABLE BY YOU TO VAVE FOR THE SERVICES DURING THE TERM OF THIS CONTRACT, IF ANY, OR (B) US $1000. The limitations of liability in this Section 4 are part of the basis of the bargain between you and Vave and shall apply to all claims of liability (e.g., warranty, tort, negligence, contract and law) even if V.
We can each end this Contract, but some rights and obligations survive. Both you and V.
In the unlikely event we end up in a legal dispute, you and Vave Ireland agree that the laws of Ireland, excluding conflict of laws rules, shall exclusively govern any dispute relating to this Contract and/or the Services. You and Vave agree that the laws of the State of California, U.S.A., excluding its conflict of laws rules, shall exclusively govern any dispute relating to this Contract and/or the Services. You and Vave both agree that all claims and disputes can be litigated only in the federal or state courts in Santa Clara County, California, USA, and you and Vave each agree to personal jurisdiction in those courts.
Vave has waived its right to enforce this Contract. You may not assign or transfer this Contract (or your membership or use of Services) to anyone without our consent. However, you agree that Vave may assign this Contract to its affiliates or a party that buys it without your consent. There are no third-party beneficiaries to this Contract. You agree that the only way to provide us legal notice is at the addresses provided in Section 10.
You agree that you will not: Create a false identity on Vave; Violate the rights of Vave, including, without limitation, (i) copying or distributing our learning videos or other materials or (ii) copying or distributing our technology, unless it is released under open source licenses, or (iii) using the word “Vave” without our express consent (e.g., representing yourself as an accredited Vave trainer); Rent, lease, loan, trade, sell/re-sell or otherwise monetize the Services or related data or access to the same, without Vave’s consent; Deep-link to our Services for any purpose other than to promote your profile or a Group on our Services, without Vave’s consent.
Our Contact information. Our Help Center also provides information about our Services.
https://docs.vave.io/user-agreement
2021-09-17T01:21:50
CC-MAIN-2021-39
1631780053918.46
[]
docs.vave.io
5.8 Release Notes Release Information The Appspace Core 5.8 build is a planned update that focused on platform optimization, enhancements, and bug fixes. Released on December 5th, 2015, this build is for Appspace Cloud and On-Premise. What’s new in Appspace 5.8 Introduction of Basic apps for Signs Signs gets its first major feature update with the introduction of Basic apps. The new Basic app workflow makes it simpler for you to create fullscreen playlist-driven apps with common functions within easy reach. New drag-and-drop content upload feature in Library The Upload Local Media interface in the Appspace Library has been updated to enable users to drag-and-drop content items for upload. You can simply drag an item into the drop zone to start uploading a file, and continue to drop files while other content items are being uploaded. This significantly reduces the number of click-throughs needed to upload single or multiple content items. Improvements to the Sign Library user interface The Application Library is now known as the Sign Library, and below are further changes made to the user interface: - Devices column is now known as Players. - Last Updated column is now known as Updated. - Public Link icon is hidden by default, and displayed only when the application row is selected. - Layouts and Widgets columns have been removed. Software Deployments Extension for Portal Administrators The all-new Software Deployments extension allows Portal Administrators on on-premise deployments of Appspace to create software deployment packages and deploy them selectively to a network, application, or device. Security Updates The HttpOnly and Secure flags on all cookies are set for HTTP in addition to HTTPS. The HttpOnly flag instructs the browser not to make the cookie value accessible to JavaScript, whereas the Secure flag restricts the browser to sending the cookie only over a secure channel. Bug Fixes Resolved Bugs - AP-6710: Appspace Core does not handle null properties. - AP-8609: Unable to invite a user, after a network group with the user was deleted. - AP-8889: Unable to update extension with required Appspace Core version, when maintenance number is more than zero. - AP-8929: API: RetrieveDeviceProperties is not filtered by devicename. - AP-8992: Apostrophe not supported in email username, when inviting users. - AP-9005: Mixed content (HTTPS vs HTTP) when viewing System > Account > Company page. - AP-9024: In the Users extension, all users in the users list have Content Manager role. - AP-9030: Improve locking performance on NHibernate factory. - AP-9035: Improve locking performance on ASP.NET Session. - AP-9066: Misspelling of ‘Appspace’ in Appspace Usage Job. - AP-9096: Unable to upload a new server profile when generating new offline instances. - AP-9097: UpdateOfflineLicense keeps generating new sets of product keys. - AP-9099: Transcoded file disappears from the encoding folder. - AP-9136: User permission changes when password is reset. - AP-9140: Download Server Profile XML downloads incorrect server profile. - SIGN-68: Data Only apps created before 5.7 show as Signage. - SIGN-102: Library UI, no ‘X’ icon on each row to cancel or stop the upload process, or remove the file from the upload page. - SIGN-103: Signs UI, no ‘X’ icon on each row to cancel or stop the upload process, or remove the file from the upload page. - SIGN-110: In the Signs extension, the Tick button is shown in the wrong position. - SIGN-114: Copied Signs application shows incorrect content size in Appspace Cloud.
- SIGN-133: The Cancel button size is inconsistent on Safari. - SIGN-149: When clicking Save in the Settings tab, Basic and Data Only applications return the “We’ll be right back” error message. - SIGN-166: The preview display is blank after resolution setting is changed from Landscape to Portrait for an Advanced app. - SIGN-170: The preview display is blank after resolution setting is changed from Landscape to Portrait for a Basic app. - SIGN-172: Contents added do not show on the Visual Editor. - SIGN-173: UI changes in Settings tab - some labels still show “application” instead of “sign”. - SIGN-175: Creating an application name with an apostrophe results in the “overview” page displaying incorrectly. - SIGN-186: Changes made to the content’s playout properties are not saved. Resolved Escalations - AE-1109: Appspace 5.1.2 installer crashes when launched. - AE-1455: Flash cross-domain policy file issue with web service. - AE-1459: Improved cookie configuration. - AE-1804: Apostrophe not supported in email username, when inviting users. - AE-1821: Uploading BIN file to on-premise server results in ‘Failed to apply license key’ error. Upgrade Paths The general rule for Appspace Core on-premise upgrades is that older versions require an interim upgrade before you can upgrade to the latest version. See the following table for the upgrade path of your Appspace on-premise version. End of Support Dates Below are the End of Support (EOS) dates for deprecated Appspace Core releases.
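To illustrate the Security Updates item above: with both flags applied, a session cookie is issued with a response header along these lines (the cookie name and value here are placeholders, not taken from the release notes):
Set-Cookie: SessionId=abc123; Secure; HttpOnly
The HttpOnly attribute hides the cookie from document.cookie in JavaScript, and Secure prevents the browser from sending it over plain HTTP.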
https://docs.appspace.com/appspace/6.0/appspace/release-notes/5.8/5.8/
2021-09-17T01:43:55
CC-MAIN-2021-39
1631780053918.46
[array(['../../../../_images/01126.png', None], dtype=object) array(['../../../../_images/02125.png', None], dtype=object)]
docs.appspace.com
A trace key is the value assigned to a field, such as Internal Id, that uniquely identifies a record. Knowing the trace key allows you to reference one specific record within a flow. The trace key is useful when examining and troubleshooting an integration: - Error Management 2.0 (EM2.0) automatically resolves similar errors based on the trace key - The record’s trace key is the key field that identifies a record when auditing flow event data Searching by this value makes records easier to identify in integrator.io reports and error information. The trace key is determined automatically by integrator.io from the field most likely to be unique for an endpoint and record. In most cases, you can accept the default trace key. Contents - View a record’s trace key - Override the default trace key View a record’s trace key You will encounter trace key values in either of the following locations: - A downloaded flow event CSV file (generated at Tools > Reports) – in the first column, traceKey - Error field details, as shown below Caution: A trace key may not exceed 128 characters. If an ID is very long or if you’re building an exceptionally complex custom trace key, any value longer than 128 characters will be truncated. The incomplete value could potentially obscure the difference between records, in the unlikely event that two or more trace keys are identical up to the truncated length. Override the default trace key Assigning a custom trace key offers you flexibility for meeting your business requirements and any later troubleshooting. In certain cases, integrator.io will not be able to distinguish a unique value for a record, thus leaving the trace key blank. When that happens, you’re encouraged to override the default, as described in the following sections. Note that the override settings are available only for custom, or “DIY,” flows and do not appear in Integration app flow steps. Set a trace key in an export or listener In a custom flow, you can specify an alternate field, fields, or concatenated value that you want to use as a trace key. Since you’re specifying a custom trace key for the data’s source application in an export or webhook listener, the new trace key value is always referenced throughout the flow. For example, a record might contain the following structure, and you’ve noticed that product_code is assigned as the trace key field: { "record": { "product_name": "Pillow Case", "product_code": 144, "updated_at": "2021-04-15 10:08:29", "created_in": "Store View B", "category": 8, "inventory": 40, "color": "goldenrod", "addresses": [] } } For troubleshooting purposes, you might want to make the trace key more specific or more human-readable than “144,” as follows: - In Flow Builder, click the export to edit it. - Expand the Advanced section. - Enter a field, such as {{record.product_name}}, or a handlebars expression, such as {{join "-" record.product_name record.product_code}}, in the Override trace key template setting. - Optionally, you can click the Open handlebars editor button to test how the template will resolve with a sample record. - Save and close the export. The same procedure applies to a listener.
Continuing with this example, the trace key would be displayed – and searchable – as “144, Pillow Case: goldenrod” for the sample record, above, with the template shown below: Set a trace key for imported one-to-many records If an import contains a one-to-many lookup, meaning that you are syncing source child records that should each be used as the main record, you have an additional opportunity to provide a custom trace key template for the additional records. Once again, integrator.io will attempt to assign a default trace key for each child record, but the override gives you more flexibility in tailoring the value for your requirements. The child record trace key template value always includes the parent record trace key in the format <parent_record_trace_key> - <child_record_trace_key>, where <parent_record_trace_key> is the default record trace key or the override you set, as explained earlier. Then, when you create or edit a one-to-many import in Flow Builder or via Resources > Imports, your custom Override child record trace key template value will be shown for the <child_record_trace_key>.
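As a concrete illustration of the override described in the export steps above, using the sample record and the documented join helper (the resolved value shown here is inferred from that template, not quoted from the article):
Template: {{join "-" record.product_name record.product_code}}
Resolved trace key: Pillow Case-144
A template like this makes the record immediately recognizable in flow event CSV files and error details, instead of the bare value 144.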
https://docs.celigo.com/hc/en-us/articles/360060740672-Set-a-custom-trace-key
2021-09-17T01:08:12
CC-MAIN-2021-39
1631780053918.46
[array(['/hc/article_attachments/4402566005645/view-tk.png', None], dtype=object) array(['/hc/article_attachments/4402569051149/override-tk.png', None], dtype=object) array(['/hc/article_attachments/4402566902029/override-import-tk.png', None], dtype=object) ]
docs.celigo.com
Account Details: Balance Items Use this tab to define the balance items under which you would like to classify a new account in the balance sheet. Balance items are the major components of the balance sheet; they determine how accounts are organized into categories, subcategories and groups. The exact set of balance sheet items depends on the selected type of accounting framework, as it prescribes a certain organizational format for the balance sheet. In Codejig ERP, such accounting frameworks are called Balance types. You select balance types when generating Profit & Loss and Balance sheet reports. Under the Balance items tab, several balance types available for your country are displayed by default. For each balance type you intend to use, select a balance item under which you want to display the given account. Choose a balance item from the list of available items for the particular balance type. More information: Balance Type
https://docs.codejig.com/en/entity2305843015656313960/view/4611686018427396588
2021-09-17T01:57:44
CC-MAIN-2021-39
1631780053918.46
[]
docs.codejig.com
Date: Fri, 29 May 2020 00:13:26 +0000 From: Brandon helsley <[email protected]> To: Chris Hill <[email protected]> Cc: "[email protected]" <[email protected]> Subject: Re: Mounting DVD image Message-ID: <CY4PR19MB1655F0FCD13E6B9D751B8747F98F0@CY4PR19MB1655.namprd19.prod.outlook.com> In-Reply-To: <[email protected]> References: <CY4PR19MB16555772298D778FABA13FA7F98E0@CY4PR19MB1655.namprd19.prod.outlook.com>, <[email protected]>
I'm trying to boot bhyve from DVD instead of FTP
Sent from Outlook Mobile
________________________________
From: Chris Hill <[email protected]> Sent: Thursday, May 28, 2020 6:11:23 PM To: Brandon helsley <[email protected]> Cc: [email protected] <[email protected]> Subject: Re: Mounting DVD image
On Thu, 28 May 2020, Brandon helsley wrote:
That mdconfig command allows you to mount the .iso disc image file as if it were a disk. But it sounds like you have burned an actual physical DVD, which is a different situation. If you're trying to install FreeBSD onto this machine, the procedure would be to boot from that DVD and follow the prompts. If there's anything on this machine you want to keep, back it up first :^) -- Chris Hill [email protected]
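For reference, the ISO-mounting approach Chris refers to typically looks like the following on FreeBSD (the image filename is a placeholder, and md0 assumes this is the first memory disk attached):
mdconfig -a -t vnode -f FreeBSD-12.1-RELEASE-amd64-dvd1.iso
mount -t cd9660 /dev/md0 /mnt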
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=788447+0+archive/2020/freebsd-questions/20200531.freebsd-questions
2021-09-17T00:23:24
CC-MAIN-2021-39
1631780053918.46
[]
docs.freebsd.org
Repartitioning a VPS after an upgrade Find out how to increase your storage space following an upgrade Last updated 18th May 2021
When you upgrade your VPS, you might need to repartition your storage space. Here are the steps to follow. Repartitioning could permanently damage your data. OVHcloud cannot be held responsible for any loss of or damage to your data. Before doing anything, make sure you back up all of your data. This guide explains the steps you need to follow to increase your storage space. Unlike the RAM and processor (CPU) of your VPS, the storage space cannot automatically be adjusted after an upgrade. Attempting to extend a partition can lead to a loss of data. It is therefore strongly recommended that you back up the data on your VPS.
On older VPS ranges, your partitions will be automatically mounted in rescue mode. You can use the following command to identify where your partition is mounted: lsblk
The partition corresponding to rescue mode will be the one mounted in the directory /, which is actually the system root. In contrast, the partition of your VPS will probably be placed in the directory associated with /mnt. If your VPS is from one of the current ranges, however, the partition will not be automatically mounted. If the MOUNTPOINT column of your output confirms this, you can skip the unmounting step.
In order to resize the partition, you will need to unmount it. To unmount your partition, use the following command: umount /dev/sdb1
After unmounting the partition, you should run a filesystem check to see if there are errors in the partition. The command is as follows: e2fsck /dev/sdb1
If you see any errors, take note of them and resolve them as required. Below is a (non-exhaustive) list of the most common errors you might see: bad magic number in superblock: Do not continue. Please read and follow our instructions on How to fix a bad magic number in superblock error. /dev/vdb1 has unsupported feature(s): metadata_csum followed by e2fsck: Get a newer version of e2fsck!: Update e2fsck. If the latest version is not available via apt (or another package manager), you will need to compile it from the sources.
If the filesystem check completes successfully, launch the fdisk application. In the settings, you need to enter the name of the disk and not the name of the partition. For instance, if your partition is sdb1 (rather than vdb1), the disk name will be /dev/sdb. fdisk -u /dev/sdb
This application has several sub-commands, which you can view with the command m. Before deleting the old partition, it is recommended that you write down the number corresponding to the first sector of the partition. You can find this information with the command p. The information is listed under the Start field. Save this data. If you have not backed up your data, this is the point of no return.
Then delete the partition with the command d. Command (m for help): d Selected partition 1 The single partition will automatically be deleted. You now need to create a new partition with the command n. It is recommended that you check, on the First sector line, that the default value is the same as the one you previously wrote down. If it is different, use the value you have written down. You now need to ensure that the partition is bootable. You can do this using the command a. Command (m for help): a Partition number (1-4): 1 Save your changes and exit the application with the command w: Command (m for help): w The partition table has been altered!
Calling ioctl() to re-read partition table. Syncing disks.
The partition has been extended, but the filesystem still occupies the same space as before, so the filesystem now needs to be extended as well. In order to check whether the extension has been successful, you can mount the newly created partition and verify its size; you will find the new partition size listed below the label size.
If the command e2fsck returns the error message bad magic number in superblock, you should check and repair the filesystem by using a backup of the superblock. To see which backups of the superblock are available, enter the following command: dumpe2fs /dev/sdb1 | grep superblock
Then use the first superblock backup to check and repair the filesystem: fsck -b 32768 /dev/sdb1
On a Windows VPS, you can do this in the Server Manager: right-click on the C: volume and select "Extend Volume...". You will then be prompted to choose your new volume size: enter your desired size and hit "OK". Your volume will now be extended.
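Taken together, the Linux portion of this procedure condenses to the sequence below. This is a sketch under two assumptions not stated explicitly above: the data partition is an ext4 filesystem on /dev/sdb1, and the filesystem-extension step that was cut off in the text uses resize2fs.
umount /dev/sdb1        # unmount the data partition (skip if it is not mounted)
e2fsck -f /dev/sdb1     # check the filesystem before touching the partition table
fdisk -u /dev/sdb       # delete (d) and recreate (n) the partition with the same
                        # first sector, mark it bootable (a), then write the table (w)
resize2fs /dev/sdb1     # grow the filesystem to fill the enlarged partition (assumed step)
mount /dev/sdb1 /mnt    # remount and verify the new size, for example with df -h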
https://docs.ovh.com/ca/en/vps/repartitioning-vps-after-upgrade/
2021-09-17T00:20:29
CC-MAIN-2021-39
1631780053918.46
[]
docs.ovh.com
The Audit Log Adapter is designed to re-read the raw audit log file if the adapter is shut down and restarted. The adapter maintains the last timestamp processed from its previous execution. When re-reading the raw audit log file, the adapter processes all entries with a timestamp equal to or greater than the last timestamp from the previous execution. In some cases, this may mean that the same entry is processed twice in the final audit log file. If the raw audit log file rolls over while the Audit Log Adapter is shut down, the adapter will only process entries in the current raw audit log file. Any entries in the previous audit log file (named <server_name>_audit.log.bak) are not processed. In this case, you must ensure that the previous raw audit log file is saved to retain information on the changes made while the adapter was shut down.
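A minimal sketch of the resume behaviour described above, in illustrative Python (not the adapter's actual implementation; all names are invented for the example):
from dataclasses import dataclass

@dataclass
class LogEntry:
    timestamp: int  # epoch milliseconds of the raw audit log entry
    text: str

def entries_to_reprocess(entries, last_timestamp):
    """Select raw audit log entries to process after a restart.

    The comparison is inclusive (>=), which is why the entry processed
    just before shutdown can be handled again and appear twice in the
    final audit log file.
    """
    return [e for e in entries if e.timestamp >= last_timestamp]

# Example: the entry at t=1000 was already handled before shutdown,
# but it is selected again on restart because of the inclusive comparison.
log = [LogEntry(999, "a"), LogEntry(1000, "b"), LogEntry(1001, "c")]
print(entries_to_reprocess(log, last_timestamp=1000))  # -> entries "b" and "c"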
https://docs.vmware.com/en/VMware-Smart-Assurance/10.1.0/ip-manager-concepts-guide-101/GUID-D96349EC-55E0-410C-97FC-E1BF8717B823.html
2021-09-17T02:08:29
CC-MAIN-2021-39
1631780053918.46
[]
docs.vmware.com
Custom Jinja Filters Custom filters are used to simplify validation skillets by using a small set of filter options to check the most common configuration components contained in tags, attributes, and element text values. Review the XML Basics documentation for the XML terminology used in the custom filters. Additional examples can be found in the skilletlib examples directory.
Capturing XML Objects In order to properly validate a config it is often necessary to convert the XML structure to an object, which can then be used in a Jinja expression to perform basic logic and validation. The captured object is associated with an XPath plus its corresponding XML element and assigned a variable name used in the custom filter. Each custom filter example below shows its respective captured object for use in the filter. When building skillets, the Builder needs to: - know the XPath for each object to capture - determine what part of the XML element will be referenced: attribute, tag, element text value - which custom filter to select based on the XML element reference - what conditions have to be met: a specific or range of values, item present or absent, etc.
Checking Attributes Attribute filters are most commonly used to check object names, although other attributes can exist within the XML configuration. attribute_present(tag name, attribute name, attribute value) attribute_absent(tag name, attribute name, attribute value) The attribute being checked in this example is the external-list entry name. Therefore the input values for the attribute filters are: - tag name: <entry> - attribute name: ‘name’ - the attribute value of interest
A sample XML element found at XPath /config/devices/entry[@name='localhost.localdomain']/vsys/entry[@name='vsys1']/external-list is used as reference for the attribute custom filter examples. The element will be a captured object variable called external-list. <external-list> <entry name="tutorial_edl"> <type> <ip> <recurring> <five-minute/> </recurring> <description/> <url></url> </ip> </type> </entry> <entry name="my_edl"> <type> <ip> <recurring> <five-minute/> </recurring> <description/> <url></url> </ip> </type> </entry> </external-list>
attribute_present Checks if an attribute value exists and returns True if the attribute value is found. This filter is used to ensure a named object or policy exists in the configuration. This item should be present as part of a best practice validation, or other config skillets may have dependencies on this item.
external-list | attribute_present('entry', 'name', 'my_edl')
external-list | attribute_present('entry', 'name', 'new_edl')
The first filter will return True since my_edl is in the external-list object. The second filter will return False since new_edl is not in the external-list object.
attribute_absent Checks if an attribute value exists and returns True if the attribute value is not found. This filter is used to ensure a named object or policy does not already exist in the configuration. If the item exists it may cause config merge conflicts or override an existing configuration.
external-list | attribute_absent('entry', 'name', 'my_edl')
external-list | attribute_absent('entry', 'name', 'new_edl')
The first filter will return False since my_edl is in the external-list object. The second filter will return True since new_edl is not in the external-list object.
Checking an Element Value Element value filters are most commonly used to check specific text values in the XML configuration.
element_value('tag name') [expression] value
Any valid Jinja expression can be used to evaluate the text value. A sample XML element found at XPath /config/devices/entry[@name='localhost.localdomain']/deviceconfig/system will be used as reference for the element value custom filter example. The element will be a captured object variable called device_system. <update-schedule> <anti-virus> <recurring> <hourly> <at>4</at> <action>download-and-install</action> </hourly> </recurring> </anti-virus> <wildfire> <recurring> <every-min> <action>download-and-install</action> </every-min> </recurring> </wildfire> </update-schedule> <snmp-setting> <access-setting> <version> <v3/> </version> </access-setting> </snmp-setting> <ntp-servers> <primary-ntp-server> <ntp-server-address>0.pool.ntp.org</ntp-server-address> </primary-ntp-server> <secondary-ntp-server> <ntp-server-address>1.pool.ntp.org</ntp-server-address> </secondary-ntp-server> </ntp-servers> <login-banner>You have accessed a protected system. Log off immediately if you are not an authorized user. </login-banner> <timezone>EST</timezone>
element_value Checks an element_value expression and returns True if the expression is true. This filter is used to check a specific value or range based on best practices or expected configuration settings. Various checks such as ‘==’, ‘!=’, ‘>=’, and ‘<=’ can be used in the filter.
device_system | element_value('update-schedule.wildfire.recurring.every-min.action') == 'download-and-install'
device_system | element_value('timezone') == 'UTC'
The first filter uses the dot notation to step down the tree to the wildfire dynamic update action. This allows a single captured object to be used for multiple tests instead of an explicit capture object for each test using a granular XPath. The filter will return True since the action for Wildfire updates is set to ‘download-and-install’. The second filter will return False since the XML configuration for timezone is ‘EST’ and not ‘UTC’.
Checking a Set of Element Values In some cases multiple values are contained within a portion of the configuration. These are often referenced in the configuration file with <member> tags. Examples of multiple entries include: - zones, addresses, users, or tags assigned to a security policy - URL categories assigned to block or alert actions - interfaces assigned to a zone or virtual-router
To check multiple element values, the element_value_contains custom filter can search across all members to find a specific value.
element_value_contains The inputs to the filter are the tag name and the search value. element_value_contains('tag name', 'search value') This example checks a security rule to see if a specific destination address using an external-list is found. The XPath for the Outbound Block Rule is /config/devices/entry[@name='localhost.localdomain']/vsys/entry[@name='vsys1']/rulebase/security/rules/entry[@name='Outbound Block Rule']
Below is an abbreviated XML element showing the <destination> content of interest. <entry name="Outbound Block Rule"> <to> <member>any</member> </to> <from> <member>any</member> </from> <destination> <member>panw-highrisk-ip-list</member> <member>panw-known-ip-list</member> <member>panw-bulletproof-ip-list</member> </destination> <action>deny</action> <log-setting>default</log-setting> <tag> <member>Outbound</member> </tag> </entry>
The custom filter looks for the inclusion of the panw-bulletproof-ip-list EDL as a destination address.
security_rule_outbound_edl | element_value_contains('destination.member', 'panw-bulletproof-ip-list')
Since the member value is found, a True result is returned. Referencing the same example, other element_value_contains checks could be used for the <to> or <from> zones and <tag> members.
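As a further illustration of the comparison operators listed for element_value, the following two expressions are constructed against the sample device_system capture above; they are not taken from the original page:
device_system | element_value('timezone') != 'UTC'
device_system | element_value('update-schedule.anti-virus.recurring.hourly.at') == '4'
The first returns True because the configured timezone in the sample is EST, and the second returns True because the anti-virus update schedule in the sample runs at hour 4.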
https://skilletbuilder.readthedocs.io/en/latest/reference_examples/jinja_custom_filters.html
2021-09-17T01:34:05
CC-MAIN-2021-39
1631780053918.46
[]
skilletbuilder.readthedocs.io
PaCRD Overview PaCRD (a combination of “Pipelines as Code” and “Custom Resource Definition”) is a Kubernetes controller that manages the lifecycle of Spinnaker™ applications and pipelines as objects within your cluster. PaCRD extends Kubernetes functionality to support Spinnaker Application and Pipeline objects that can be observed for changes through a mature lifecycle management API. With PaCRD you can: - Maintain your Spinnaker pipelines as code with the rest of your Kubernetes manifests. - Persist Pipeline and Application changes with confidence to your Spinnaker cluster. - Leverage existing tools like Helm and Kustomize to template your pipelines across teams and projects. To get started right away, check out the Quick Start section for installation instructions.
Prerequisites To use PaCRD, make sure you meet the following requirements: - Have a working Kubernetes 1.11+ cluster - Have a working Spinnaker installation - Although there is no minimum version required for this experiment, Armory recommends using the latest release - Have permissions to install CRDs, create RBAC roles, and create service accounts
Quick Start Download the current pacrd manifest to your local machine: curl -fsSL > pacrd-1.0.1.yaml Then, inspect the manifest to make sure it is compatible with your cluster. Create the following files in the directory where you downloaded the pacrd manifest to customize the installation: kustomization.yaml and patch.yaml. Start by creating a kustomization.yaml file, which contains the installation settings: # file: kustomization.yaml resources: - pacrd-1.0.1.yaml patchesStrategicMerge: - patch.yaml namespace: spinnaker # Note: you should change this value if you are _not_ deploying into the `spinnaker` namespace. Next, create a patch.yaml file that contains your pacrd config. If you are not deploying into the spinnaker namespace, update the front50 and orca keys: # file: patch.yaml apiVersion: v1 kind: ConfigMap metadata: name: pacrd-config namespace: spinnaker data: pacrd.yaml: | spinnakerServices: # NOTE: change `spinnaker` to your namespace name here front50: orca: # OPTIONAL: uncomment the next line to configure a Fiat service account; it should be at the same level as spinnakerServices. # fiatServiceAccount: my-service-account When you are ready, apply the pacrd manifest to your cluster: # If using `kubectl` >= 1.14 kubectl apply -k . # Otherwise, use `kustomize` and `kubectl` together kustomize build | kubectl apply -f -
Usage Once you have PaCRD installed and running in your cluster, you can define your applications and pipelines. Then apply them to the cluster. While this product is in an Experimental state, kind objects for PaCRD live under the pacrd.armory.spinnaker.io/v1alpha1 version moniker.
Applications In Spinnaker, an Application is a logical construct that allows you to group resources under a single name. You can read more about applications in the Spinnaker docs. For available Application configuration options, check out the PaCRD Custom Resource Definition Documentation.
Creating an application In Kubernetes, define your application in an application.yaml file. The configuration fields are the same as what you see when you create an application using the Spinnaker UI. The following example defines an application named “myapplicationname”. Note: Application names must adhere to both Kubernetes and Spinnaker name standards.
#file: application.yaml apiVersion: pacrd.armory.spinnaker.io/v1alpha1 kind: Application metadata: name: pacrd-pipeline-stages-samples spec: email: [email protected] description: Description Create the application in your cluster by running: kubectl apply -f application.yaml Check on the status of your application by using either the get or describe commands. kubectl recognizes either app or application for the resource kind: kubectl get app myapplicationname # or kubectl get application myapplicationname The command returns information similar to this: NAME URL LASTCONFIGURED STATUS myapplicationname 7m26s Created
Updating an application You can update an application in one of two ways: - Reapply the application manifest in your repository kubectl apply -f application.yaml - Edit the application manifest in-cluster kubectl edit app myapplicationname When you update your application in Kubernetes, the changes propagate into Spinnaker. If an error occurs during the update, your application may show an ErrorFailedUpdate state. You can see the details of that failure by describing the resource and looking in the “Events” section: kubectl describe app myapplicationname
Deleting an application You can delete an application in one of two ways: - Delete the application manifest from your repository kubectl delete -f application.yaml - Delete the application directly kubectl delete app myapplicationname When you delete your application in Kubernetes, the deletion propagates into Spinnaker. If an error occurs during deletion, your application may show an ErrorFailedDelete state. You can see the details of that failure by describing the resource and looking in the “Events” section: kubectl describe app myapplicationname
Pipelines Pipelines allow you to encode the process that your team follows to take a service from commit to a desired environment, such as production. You can read more in the Spinnaker Pipelines guide. View Pipeline configuration options in the PaCRD Custom Resource Definition Documentation.
Creating pipelines In Kubernetes, define your pipeline in a pipeline.yaml file. The configuration fields are the same as what you see when you create a pipeline using the Spinnaker UI. The following example defines a simple pipeline named “myapplicationpipeline”, which bakes a manifest and prompts for a manual judgment. Pipeline names should follow the Kubernetes Object Names and IDs naming conventions. This example assumes that you’ve created the myapplicationname application from the previous section. Create one before proceeding if you have not done so already. # file: deploy-nginx.yaml apiVersion: pacrd.armory.spinnaker.io/v1alpha1 kind: Pipeline metadata: name: pacrd-deploymanifest-integration-samples spec: description: A sample showing how to define artifacts. application: &app-name pacrd-pipeline-stages-samples stages: - type: deployManifest properties: name: Deploy text manifest refId: "1" requisiteStageRefIds: [ ] account: spinnaker cloudProvider: kubernetes moniker: app: *app-name skipExpressionEvaluation: true source: text manifests: - | apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 2 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 Create your pipeline in your cluster: kubectl apply -f pipeline.yaml Check on the status of your pipeline by using either the get or describe commands.
kubectl will recognize either pipe or pipeline for the resource kind: kubectl get pipe myapplicationpipeline # or ... kubectl get pipeline myapplicationpipeline The command returns information similar to this: NAME STATUS LASTCONFIGURED URL myapplicationpipeline Updated 5s A describe call can give you additional contextual information about the status of your pipeline: kubectl describe pipeline myapplicationpipeline The command returns information similar to this: Name: myapplicationpipeline API Version: pacrd.armory.spinnaker.io/v1alpha1 Kind: Pipeline Metadata: # omitted for brevity Spec: # omitted for brevity Status: Id: f1eb82ce-5a8f-4b7a-9976-38e4aa022702 Last Configured: 2020-03-09T15:55:27Z Phase: Updated URL: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Updated 94s pipelines Pipeline successfully created in Spinnaker. Warning ErrorUpdatingPipeline 93s pipelines Bad Request: The provided id f1eb82ce-5a8f-4b7a-9976-38e4aa022702 doesn't match the pipeline id null Normal Updated 91s (x2 over 91s) pipelines Pipeline successfully updated in Spinnaker.
Updating pipelines You can update a pipeline in one of two ways: - Reapply the pipeline manifest in your repository kubectl apply -f pipeline.yaml - Edit the pipeline manifest in-cluster kubectl edit pipeline myapplicationpipeline When you update your pipeline in Kubernetes, the changes propagate into Spinnaker. If an error occurs during the update, your pipeline may show an ErrorFailedUpdate state. You can see the details of that failure by describing the resource and looking in the “Events” section: kubectl describe pipeline myapplicationpipeline
Deleting pipelines You can delete a pipeline in one of two ways: - Delete the pipeline manifest from your repository definition kubectl delete -f pipeline.yaml - Delete the pipeline directly kubectl delete pipeline myapplicationpipeline When you delete your pipeline in Kubernetes, the deletion propagates into Spinnaker. If an error occurs during deletion, your pipeline may show an ErrorFailedDelete state. You can see the details of that failure by describing the resource and looking in the “Events” section: kubectl describe pipeline myapplicationpipeline
Artifacts An artifact is an object that references an external resource. Examples include a Docker container, a file in source control, an AMI, or a binary blob in S3. Artifacts in PaCRD come in two types: - Definitions contain all necessary information to locate an artifact. - References contain enough information to find a Definition.
Defining Artifacts Define your pipeline artifacts in a section called expected artifacts. The following example defines a single container image that the pipeline expects as an input to the BakeManifest stage: apiVersion: pacrd.armory.spinnaker.io/v1alpha1 kind: Pipeline metadata: name: my-pipeline spec: description: A sample showing how to define artifacts.
application: my-application expectedArtifacts: - id: &image-id my-application-docker-image displayName: *image-id matchArtifact: type: docker/image properties: name: my-organization/my-container artifactAccount: docker-registry stages: - type: bakeManifest properties: name: Bake Application refId: "1" outputName: myManifest templateRenderer: helm2 inputArtifacts: - id: *image-id Each matchArtifact block contains: type: required; the artifact classification; see the Types of Artifacts section in the Spinnaker documentation for supported types properties: dictionary of key-value pairs appropriate for that artifact PaCRD only validates officially supported artifacts. PaCRD does not validate custom artifacts or artifacts defined via Plugins. Referencing Artifacts Reference artifacts in the inputArtifacts section of a pipeline stage. You can use either the artifact id or displayName. If you are new to using artifacts, you can use the displayName value, which is most often what appears when the Spinnaker UI displays your pipeline. The following example defines two artifacts in the expectedArtifacts section. Each artifact is then referenced in the inputArtifacts section of the bakeManifest stage. The first is declared with id and the second with displayName. apiVersion: pacrd.armory.spinnaker.io/v1alpha1 kind: Pipeline metadata: name: my-pipeline spec: description: A sample showing how to reference artifacts. application: my-application expectedArtifacts: - id: first-inline-artifact-id displayName: My First Inline Artifact Id matchArtifact: type: embedded/base64 properties: name: my-inline-artifact - id: second-inline-artifact-id displayName: My Second Inline Artifact matchArtifact: type: embedded/base64 properties: name: my-second-inline-artifact stages: - type: bakeManifest properties: name: Bake Application refId: "1" outputName: myManifest templateRenderer: helm2 inputArtifacts: - id: first-inline-artifact-id - displayName: My Second Inline Artifact PaCRD validates that the inputArtifacts referenced in the bakeManifest stage correspond to exactly one artifact declared in the expectedArtifacts section of the CRD. PaCRD throws a PipelineValidationFailed error when it can’t find an input artifact in the list of expected artifacts. You can see which input artifact failed validation by executing a describe call against the pipeline under creation. If you use the above example but replace the id reference with a-nonsense-value, pipeline validation fails. Execute kubectl describe: kubectl describe pipeline my-pipeline Expected output displays which input artifact failed validation: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Updated 2m53s (x2 over 2m54s) pipelines Pipeline successfully updated in Spinnaker. Warning PipelineValidationFailed 0s (x4 over 3s) pipelines artifact with id "a-nonsense-value" and name "" could not be found for this pipeline Enabling monitoring with New Relic If you want to monitor how PaCRD is used and the errors encountered, you can enable New Relic integration. You can either send this data to your New Relic account or to Armory’s New Relic account. If you choose to share the data with Armory, it helps us improve the product and provide better support. For information about what data is sent and how it is used, contact Armory, or simply enable it with your New Relic account first. To enable this integration, add the New Relic license to the patch.yaml file as shown below. 
If you send it to Armory’s New Relic account, we will give you a license to use, otherwise use your New Relic account’s license. #> Since reconciliation happens multiple times per minute, Armory sends metrics only during the first three minutes of each hour. Error messages contain obfuscated URLs, application names, and pipeline names. By default the application name will be pacrd, if you want to change this you can add NewRelicAppName property at the same level of newRelicLicense and add your own custom application name. Here’s an example of error stack traces: Setting up mTLS If you want to set up mTLS for this particular service, you need to configure Spinnaker for mTLS first. See Configuring mTLS for Spinnaker Services for details. Prerequisites - ca.pem file - ca.key file - ca certificate password - pacrd.crt - pacrd.key - pacrd certificate password (pacrd.pass.txt) If you don’t have the PaCRD certificate, key, and password files, you can generate them using this script: Once you have that information you can continue. Steps Add the PaCRD certificate files as a Kubernetes secret: kubectl create secret generic pacrd-cert \ --from-file=./pacrd.crt \ --from-file=./pacrd.key \ --from-file=./pacrd.pass.txt \ --from-file=./ca.pem Go to the pacrdinstallation folder that has your kustomization.yaml, patch.yaml, and pacrd.yaml. Create a new file called mtls.yamlwith the following content: # file: mtls.yaml apiVersion: apps/v1 kind: Deployment metadata: labels: control-plane: controller-manager name: pacrd-controller-manager namespace: spinnaker spec: template: spec: containers: - name: manager containers: volumeMounts: - mountPath: /opt/secrets name: pacrd-certificates volumes: - secret: secretName: pacrd-cert name: pacrd-certificates This file will mount the certificates to /opt/secrets/in the pacrd manager container. Update kustomization.yamlto include the mtls.yamlfile: # file: kustomization.yaml resources: - pac_new.yaml patchesStrategicMerge: - patch.yaml - mtls.yaml namespace: spinnaker # Note: you should change this value if you are _not_ deploying into the `spinnaker` namespace. Update your patch.yamlfile to add the certificate information: #> server: ssl: enabled: true certFile: /opt/secrets/pacrd.crt keyFile: /opt/secrets/pacrd.key keyPassword: /opt/secrets/pacrd.pass.txt cacertFile: /opt/secrets/ca.pem clientAuth: want http: cacertFile: /opt/secrets/ca.pem clientCertFile: /opt/secrets/pacrd.crt clientKeyFile: /opt/secrets/pacrd.key clientKeyPassword: /opt/secrets/pacrd.pass.txt Redeploy your service with kustomize build | kubectl apply -f - Known Limitations v0.1.x - v0.9.x Applications Deleting an application in Kubernetes triggers the following behavior: - Delete the application in Kubernetes. - Delete the application in Spinnaker. - Delete pipelines associated with the application in Spinnaker only. Pipelines - Pipeline stages must be defined with a typekey for the stage name and a key of the same name where all stage options live. For example, for the “Bake Manifest” stage you would structure your definition like this: # ... stages: - type: BakeManifest bakeManifest: name: Bake the Bread # ... # ... v0.1.x - v0.4.0 Applications - Documentation for available Application spec fields must be found in the installation manifest for this controller. You can do so by grepping for applications.pacrd.armory.spinnaker.ioin the installation manifest. Fields are documented under spec.validation.openAPIV3Schema. 
Pipelines - Documentation for available Pipeline spec fields must be found in the installation manifest for this controller. You can do so by grepping for pipelines.pacrd.armory.spinnaker.io in the installation manifest. Fields are documented under spec.validation.openAPIV3Schema. PaCRD Examples Custom Resource Definition examples for testing PaCRD pipelines
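As a quick starting point, here is a minimal, self-contained pair of manifests modelled on the samples earlier in this page. The names are placeholders, the email address uses the same redacted placeholder as the examples above, and the stage properties simply mirror the bakeManifest example shown earlier rather than documenting every available field.

# file: example-pacrd.yaml (hypothetical names)
apiVersion: pacrd.armory.spinnaker.io/v1alpha1
kind: Application
metadata:
  name: sample-app
spec:
  email: [email protected]
  description: Sample application managed by PaCRD.
---
apiVersion: pacrd.armory.spinnaker.io/v1alpha1
kind: Pipeline
metadata:
  name: sample-pipeline
spec:
  description: Minimal pipeline attached to sample-app.
  application: sample-app
  stages:
    - type: bakeManifest
      properties:
        name: Bake Application
        refId: "1"
        outputName: myManifest
        templateRenderer: helm2

Apply both with kubectl apply -f example-pacrd.yaml, then check their status with kubectl describe app sample-app and kubectl describe pipeline sample-pipeline as described above.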
https://v2-25.docs.armory.io/docs/spinnaker-user-guides/pacrd/
2021-09-17T00:38:58
CC-MAIN-2021-39
1631780053918.46
v2-25.docs.armory.io
Whenever inventory is replenished or depleted in NetSuite, the inventory information (quantity available) is automatically exported to Walmart, keeping both systems in sync. When you want to sync the inventory, run the inventory data flow from integrator.io. The Walmart listed item is then updated with the quantity that's available in NetSuite. View of the units in Walmart Seller Central: Settings for Walmart Inventory to NetSuite Inventory Add Flow Advanced Settings: Save the settings after selecting the relevant values. - Select the NetSuite saved search for syncing inventory levels: Use the 'Refresh' option to fetch the latest records from NetSuite - Always sync inventory levels for entire catalog: Syncs inventory for all the NetSuite items linked to Walmart, every time the inventory export data flow runs - NetSuite locations to pick inventory from: Select the NetSuite locations whose stock should be counted toward the exported quantity. Note: Some of the inventory for the items goes to reserve. The reserved quantity is not visible. Whenever a buyer puts an item in the cart, before checkout, this adds to the reserve. The reserve changes from time to time depending on buyer activity on the website. The final quantity updated comprises Reserve + orders generated before fulfillment + current stock; the final displayed quantity is the combination of these three parameters. For example, if 5 units are in reserve, 3 units are on orders awaiting fulfillment, and 92 units are in stock, the quantity shown on Walmart would be 100 (illustrative numbers).
https://docs.celigo.com/hc/en-us/articles/229833647-Sync-inventory-from-netSuite-to-Walmart
2021-09-17T01:12:12
CC-MAIN-2021-39
1631780053918.46
docs.celigo.com
perl Resource This page is generated from the Chef Infra Client source code. To suggest a change, edit the perl.rb file and submit a pull request to the Chef Infra Client repository. Use the perl resource to execute scripts using the Perl interpreter. This resource may also use any of the actions and properties that are available to the execute resource. Syntax A perl resource block executes scripts using Perl: perl 'hello world' do code <<-EOH print "Hello world! From Chef and Perl."; EOH end where: code specifies the command to run The full syntax for all of the properties that are available to the perl resource is: perl 'name' do code String creates String cwd String environment Hash flags String group String, Integer notifies # see description path Array returns Integer, Array subscribes # see description timeout Integer, Float user String, Integer umask String, Integer action Symbol # defaults to :run if not specified end where: perl is the resource. name is the name given to the resource block. action identifies which steps Chef Infra Client will take to bring the node into the desired state. code, creates, cwd, environment, flags, group, path, returns, timeout, user, and umask are properties of this resource, with the Ruby type shown. See the "Properties" section below for more information about all of the properties that may be used with this resource. Actions The perl resource has the following actions: :nothing - Prevent a command from running. This action is used to specify that a command is run only when another resource notifies it. :run - Default. Run a script. Properties The perl resource has the following properties: group - Ruby Type: String, Integer The group name or group ID that must be changed before running a command. ignore_failure - Ruby Type: true, false | Default Value: false Continue running a recipe if a resource fails for any reason. notifies - Ruby Type: Symbol, 'Chef::Resource[String]' The syntax for notifies is: notifies :action, 'resource[name]', :timer retries - Ruby Type: Integer | Default Value: 0 The number of attempts to catch exceptions and retry the resource. retry_delay - Ruby Type: Integer | Default Value: 2 The retry delay (in seconds). returns - Ruby Type: Integer, Array | Default Value: 0 The return value for a command. This may be an array of accepted values. An exception is raised when the return value(s) do not match. timeout - Ruby Type: Integer, Float | Default Value: 3600 The amount of time (in seconds) a command is to wait before timing out. user - Ruby Type: String, Integer The user name or user ID that should be changed before running a command. umask - Ruby Type: String, Integer The file mode creation mask, or umask. Common Resource Functionality Chef resources include common properties, notifications, and resource guards.
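To make the property list above concrete, here is a small illustrative example; the script body, paths, user name and values are assumptions chosen for the sketch, not requirements of the resource.

perl 'report disk usage' do
  code <<-EOH
    my $free = `df -h /var`;
    print "Disk usage for /var:\n$free";
  EOH
  cwd '/tmp'                                  # working directory for the script
  user 'nobody'                               # run the script as this user
  environment 'PERL5LIB' => '/opt/perl/lib'   # illustrative extra library path
  returns [0]                                 # acceptable exit codes
  timeout 600                                 # give up after 10 minutes
  action :run
end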
https://docs.chef.io/resources/perl/
2021-09-17T01:36:34
CC-MAIN-2021-39
1631780053918.46
[]
docs.chef.io
Our pricing structure is volume-based and counted in hits: you get discounts for large-quantity purchases. Each API transaction is counted whether the API call completes successfully or is incomplete. Quota purchasing uses a prepaid billing system. Please contact our sales team at [email protected] for quota purchasing. You will get pricing information and a discount if applicable. After you agree on pricing, we will send you a subscription agreement. The payment terms are detailed in the subscription agreement. After you sign the subscription agreement, we will send you the following documents: Original invoice; and/or Tax invoice; and/or Copies of the subscription agreement; and/or Copy of the agreement addendum (if any and needed). Please confirm with our business or finance representative through [email protected] and [email protected] once you have transferred the payment to us. We will verify your payment, and once we have received it, we will add the quota to your account. We accept payment by bank transfer or virtual account.
https://docs.identifai.id/getting-started/quota-purchasing
2021-09-17T00:41:40
CC-MAIN-2021-39
1631780053918.46
[]
docs.identifai.id
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Returns a list of all buckets owned by the authenticated sender of the request. For .NET Core this operation is only available in asynchronous form. Please refer to ListBucketsAsync. Namespace: Amazon.S3 Assembly: AWSSDK.S3.dll Version: 3.x.y Framework: Supported in: 4.5, 4.0, 3.5
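A minimal synchronous usage sketch for .NET Framework follows; credentials and region are assumed to come from the SDK's standard configuration chain rather than being hard-coded here.

using System;
using Amazon.S3;
using Amazon.S3.Model;

class ListBucketsExample
{
    static void Main()
    {
        // Credentials and region are resolved from the SDK's usual configuration sources.
        using (var client = new AmazonS3Client())
        {
            ListBucketsResponse response = client.ListBuckets();
            foreach (S3Bucket bucket in response.Buckets)
            {
                Console.WriteLine("{0} (created {1})", bucket.BucketName, bucket.CreationDate);
            }
        }
    }
}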
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/S3/MIS3ListBuckets.html
2021-09-17T02:31:01
CC-MAIN-2021-39
1631780053918.46
[]
docs.aws.amazon.com
Step 5 - Configure Scripts In this section we will make sure all fields map correctly to FMSP. Please read the Getting Connected docs to familiarize yourself. Depending on your version of FMSP and how many modifications you have made to things like field and table names, you will have to make sure all the configuration fields map correctly. fmQBO for FM Starting Point does NOT have all fields mapped, as some features are not supported. You can identify these mappings easily: unsupported mappings will have the corresponding script step commented out. Be sure to read Using Your Own Solution for a detailed explanation of how to map fields.
https://docs.fmqbo.com/article/135-step-5-configure-scripts
2021-09-16T23:47:28
CC-MAIN-2021-39
1631780053918.46
[]
docs.fmqbo.com
[−][src]Struct directories_next:: ProjectDirs ProjectDirs computes the location of cache, config or data directories for a specific application, which are derived from the standard directories and the name of the project/organization. Examples All examples on this page are computed with a user named Alice, and a ProjectDirs struct created with ProjectDirs::from("com", "Foo Corp", "Bar App"). use directories_next::ProjectDirs; if let Some(proj_dirs) = ProjectDirs::from("com", "Foo Corp", "Bar App") { proj_dirs.config_dir(); // Linux: /home/alice/.config/barapp // Windows: C:\Users\Alice\AppData\Roaming\Foo Corp\Bar App // macOS: /Users/Alice/Library/Application Support/com.Foo-Corp.Bar-App } Implementations impl ProjectDirs[src] pub fn from_path(project_path: PathBuf) -> Option<ProjectDirs>[src] Creates a ProjectDirs struct directly from a PathBuf value. The argument is used verbatim and is not adapted to operating system standards. The use of ProjectDirs::from_path is strongly discouraged, as its results will not follow operating system standards on at least two of three platforms. Use ProjectDirs::from instead. pub fn from([src] qualifier: &str, organization: &str, application: &str ) -> Option<ProjectDirs> qualifier: &str, organization: &str, application: &str ) -> Option<ProjectDirs> Creates a ProjectDirs struct from values describing the project. The returned value depends on the operating system and is either Some, containing project directory paths based on the state of the system's paths at the time new()was invoked, or None, if no valid home directory path could be retrieved from the operating system. To determine whether a system provides a valid $HOME path, please refer to BaseDirs::new The use of ProjectDirs::from (instead of ProjectDirs::from_path) is strongly encouraged, as its results will follow operating system standards on Linux, macOS and Windows. Parameters qualifier– The reverse domain name notation of the application, excluding the organization or application name itself. An empty string can be passed if no qualifier should be used (only affects macOS). Example values: "com.example", "org", "uk.co", "io", "" organization– The name of the organization that develops this application, or for which the application is developed. An empty string can be passed if no organization should be used (only affects macOS and Windows). Example values: "Foo Corp", "Alice and Bob Inc", "" application– The name of the application itself. Example values: "Bar App", "ExampleProgram", "Unicorn-Programme" pub fn project_path(&self) -> &Path[src] Returns the project path fragment used to compute the project's cache/config/data directories. The value is derived from the ProjectDirs::from call and is platform-dependent. pub fn cache_dir(&self) -> &Path[src] Returns the path to the project's cache directory. pub fn config_dir(&self) -> &Path[src] Returns the path to the project's config directory. pub fn data_dir(&self) -> &Path[src] Returns the path to the project's data directory. pub fn data_local_dir(&self) -> &Path[src] Returns the path to the project's local data directory. pub fn runtime_dir(&self) -> Option<&Path>[src] Returns the path to the project's runtime directory. 
Trait Implementations impl Clone for ProjectDirs[src] fn clone(&self) -> ProjectDirs[src] fn clone_from(&mut self, source: &Self) 1.0.0[src] impl Debug for ProjectDirs[src] Auto Trait Implementations impl RefUnwindSafe for ProjectDirs impl Send for ProjectDirs impl Sync for ProjectDirs impl Unpin for ProjectDirs impl UnwindSafe for ProjectDirs
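A short usage sketch that combines the accessors documented above (the printed paths vary by platform):

use directories_next::ProjectDirs;

fn main() {
    // Same qualifier/organization/application triple as the example above.
    if let Some(proj_dirs) = ProjectDirs::from("com", "Foo Corp", "Bar App") {
        println!("cache dir:  {}", proj_dirs.cache_dir().display());
        println!("config dir: {}", proj_dirs.config_dir().display());
        println!("data dir:   {}", proj_dirs.data_dir().display());

        // runtime_dir() is only available on some platforms, hence the Option.
        match proj_dirs.runtime_dir() {
            Some(dir) => println!("runtime dir: {}", dir.display()),
            None => println!("runtime dir: not available on this platform"),
        }
    } else {
        eprintln!("no valid home directory could be determined");
    }
}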
https://docs.rs/directories-next/2.0.0/directories_next/struct.ProjectDirs.html
2021-09-17T00:38:55
CC-MAIN-2021-39
1631780053918.46
[]
docs.rs
2.2. Step 1: Data Models¶ Warning WIP Now, after we got an overview, it’s time to put our hands on the tools. Though, if you want to skip this part of the tutorial to directly work with EHRbase, you can get the example files here. As a first step, we need to obtain the information models. As mentioned in the introduction, the Clinical Knowledge Managers are our first address. To download all Archetypes, go to the International Clinical Knowledge Manager On the left side, you can find different categories of Archetypes, for example observations that contain data models like blood pressure, body temperature or Glasgow coma scale. For our tutorial, we want to get a copy of the archetypes from the Clinical Knowledge Manager. Under Archetypes (marked in the image), you will find a function called Bulk Export. You can choose if the export should only contain Archetypes from a selected project or all and depending on its lifecycle status (published, draft etc.). Choose to get the latest published revision and use ADL (Archetype Definition Language) as export format. Clicking Bulk Export will then download a zip folder containing all Archetypes meeting the criteria. Next, install the Template Designer. The process should be straight forward (At least on Windows). Alternatively, the ADL Designer can also be used to create Templates by following this guide. Open the Template Designer. The first step is to configure a Knowledge Repository. Click on Edit Repository List Set Archetype Files to the path where you unzipped the Archetypes you obtained through the Bulk Export. When you then select your new repository, the Archetypes should appear on the right window: Now you can start to create your own Template. Typically, a Template needs an Archetype of type Composition as the root element. The Composition Archetypes provide the basic structure for the Template through Slots (which can be filled with Archetypes) and predefined metadata elements. In our example, we us the Self Monitoring Archetype. Just drag and drop the Archetype from the right panel to the left panel. Additionally, we add a blood pressure Archetype. Next, you can define further constraints on the particular elements, for example defining their cardinality, remove single elements, add terminology bindings etc. We could also fill the slot within the blood pressure Archetype with a device Archetype to collect information about the device used for the measurement. Finally, give your Template a name. Then you can Export the Template in the Operational Template (OPT) Format (File –> Export –> As Operational Template). This is all you need to upload your Template to EHRbase or any other openEHR server.
https://ehrbase.readthedocs.io/en/latest/02_getting_started/02_data_models/index.html
2021-09-17T02:11:44
CC-MAIN-2021-39
1631780053918.46
ehrbase.readthedocs.io
MFK Command: SetStageVehicleCharacterJumpOut Synopsis Set the character to jump out of the vehicle when it's destroyed. This must be called in the same stage as the AddStageVehicleCharacter call that added the character. Syntax SelectMission("m1"); ... AddStage(); AddStageVehicle("cVan","m1_cVan","NULL","cVan.con"); AddStageVehicleCharacter("cVan", "lisa"); SetStageVehicleCharacterJumpOut("cVan", "lisa", 270); AddObjective("dummy"); CloseObjective(); CloseStage(); CloseMission(); Notes This can only be used on characters added with AddStageVehicleCharacter. History 1.19 - Added this command.
https://docs.donutteam.com/books/lucas-simpsons-hit-run-mod-launcher/page/mfk-command-setstagevehiclecharacterjumpout
2019-02-16T05:03:09
CC-MAIN-2019-09
1550247479885.8
[]
docs.donutteam.com
Module Fundamentals This version of Puppet is not included in Puppet Enterprise. The latest version of PE includes Puppet 4.4. A newer version is available; see the version menu above for details. Modules are self-contained bundles of code and data. You can download pre-built modules from the Puppet Forge or you can write your own modules. Any of these classes or defines can be declared by name within a manifest or from an external node classifier (ENC). # /etc/puppetlabs/code/environments/production Deprecation Note: The tests directory is deprecated in favor of the examples directory. If you use puppet module generate to create your module skeleton, rename the tests directory to examples. Example This example module contains a file whose source => URL would be puppet:///modules/my_module/service.conf. Its contents can also be accessed with the file function, like content => file('my_module/service.conf'). component.epp— A manifest can render this template with epp('my_module/component.epp'). examples/— Contains examples showing how to declare the module's classes and defined types. init.pp, other_example.pp— Major use cases should have an example. You cannot have a class named init. Certain module names are disallowed; see the list of reserved words and names. Files. For more best practices, see the Puppet Labs Modules Style Guide.
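To make the file-reference conventions concrete, here is a small hypothetical class for my_module; the target path and service name are illustrative, while the source URL and the file() call match the forms shown above.

# my_module/manifests/init.pp
class my_module {
  file { '/etc/my_service/service.conf':
    ensure => file,
    # Serve the file shipped in my_module/files/service.conf.
    source => 'puppet:///modules/my_module/service.conf',
    # Alternatively, inline the contents with the file() function:
    # content => file('my_module/service.conf'),
    notify => Service['my_service'],
  }

  service { 'my_service':
    ensure => running,
    enable => true,
  }
}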
https://docs.puppet.com/puppet/4.1/modules_fundamentals.html
2019-02-16T04:52:30
CC-MAIN-2019-09
1550247479885.8
[]
docs.puppet.com
Performance Analytics domain separation for managed service providers Managed service providers can configure Performance Analytics with domain separation to provide domain-specific analytics and to control how scores are collected through the domain hierarchy. You can create domain configurations to define which domains to collect data from and which domains to display on dashboards. Associate these domain configurations with specific data collection jobs and dashboards to provide relevant scores to users while maintaining your Performance Analytics configuration in a single domain. To use this functionality you must have Performance Analytics Premium, the Domain Support - Domain Extension Installer plugin, and responsive dashboards. Activate the Performance Analytics - Domain Support plugin You can activate the Performance Analytics - Domain Support plugin (com.snc.pa.domain_support) if you have the admin role. Create a domain configuration Create a domain configuration to define which domains to collect scores from and how to store scores within the domain hierarchy. Domain separation on dashboards and scorecards You can view domain-specific scores on dashboards and scorecards.
https://docs.servicenow.com/bundle/istanbul-performance-analytics-and-reporting/page/use/performance-analytics/concept/pa-domain-separation-msp.html
2019-02-16T06:00:51
CC-MAIN-2019-09
1550247479885.8
[]
docs.servicenow.com
Thunderboard BLE System Setup Tutorial Steps - Setup the Thunderboard device - Import the thunderboard-ble-system from ipm.clearblade.com - Configure Adapter - Setup & start the edge on Raspberry Pi - Test and Verify the setup - Start the Adapter - See Visualization Setup the Thunderboard device - The default advertisement time is 30 seconds on Thunderboard devices. We will be modifying this to advertise indefinitely. - Follow the steps in the instructions-to-setup-thunderboard to flash the Thunderboard's default firmware with new changes. Import BLE system on platform - Create a developer account on platform.clearblade.com - Once logged in, the developer can access a console. - To import thunderboard-ble-template into: - A new system: click the New button & search for thunderboard. - An existing system: click the install button & search for thunderboard. - Select the system and hit the import / create button. Configure Adapter Edit adapterconfig.txt: - In the developer console, select your system and switch to the adapters tab. - Select the ThunderBoardAdapter adapter & edit the configuration. - Download the adapterconfig.txt file and edit the absolute paths based on where you install and run the edge on your gateway (here a Raspberry Pi). Edit pythonScanner.py: - This python file has code which communicates with the edge running locally on the Raspberry Pi. Set the credentials based on the TODO comments in the python file. ## Change the following fields in the pythonScanner.py "platformURL" - the platform where the system resides; the default communicates with the edge, keep it unchanged. "systemKey" & "systemSecret" - The key & secret of the system; these can be found in the About section of the system in the ClearBlade developer console. "username" - the user which is there in the users table "password" - the respective password for the above username Replace these changed files in the adapter configuration. Setup and start the edge on Raspberry Pi - Switch to the edge section in the developer console. - An edge ThunderBoardOnPi would already be created; if not, create one by clicking the New button. - Note that it will not be connected as of now. - Click on the setting icon in the name column of the edge page and click on setup instructions. Select the target as Linux 32bit - ARM Assuming a Raspberry Pi is up and running, ssh into the Pi's terminal & create a directory cbedge inside the home folder and cd into it. Note: By default, the edge folder will be set as /home/pi/cbedge Perform the Download, Unzip, Install & Permission operations on the Pi. ls the folder to find the edge binary. [Image] The edge ip and platform ip are as shown in the image. [Image] Start the edge by running the command in the Pi's terminal. Test and Verify the setup - Verify the edge is connected in the edge tab of the developer console - Verify the adapter is deployed: go to the adapter's section and find the edge in the connected section. [Image]
Change the topic field on the right to be thunderboard/environment/THUNDERBOARD_ID/_platform (the ID will be a 5 digit number in pretty much every MQTT topic in the Messages tab) - Update the Sensor Key field to one of sound, co2, temperature, voc, battery, light, uv, humidity, pressure.
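For example, if the Messages tab shows a device ID of 23914 (a made-up value), the portal fields would be filled in along these lines:

Topic:      thunderboard/environment/23914/_platform
Sensor Key: temperature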
https://docs.clearblade.com/v/3/4-developer_reference/iot_gateways/Thunderboard-BLE-System-Setup-Tutorial/
2019-02-16T05:24:55
CC-MAIN-2019-09
1550247479885.8
[]
docs.clearblade.com
The Show Statusbar command enables and disables displaying the status bar. It may be useful to disable it when you are working in full-screen mode. You can set the default for the status bar in the Image Window Appearance dialog. You can access this command from the image menubar through View → Show Statusbar.
https://docs.gimp.org/2.8/en/gimp-view-show-statusbar.html
2019-02-16T05:00:25
CC-MAIN-2019-09
1550247479885.8
[]
docs.gimp.org
Form test step: UI Action Visibility Verifies whether one or more UI Actions are visible or invisible on the current form. The default visible UI Actions vary depending on the currently-impersonated user that will run this step. Test (Read only.) The test to which this step belongs. Step config (Read only.) The test step for this form. Table The table with the UI Actions to test. Visible The list of UI elements on the current form that should be visible. Not Visible The list of UI elements on the current form that should not be visible.
https://docs.servicenow.com/bundle/istanbul-application-development/page/administer/auto-test-framework/reference/atf-ui-visible.html
2019-02-16T05:48:42
CC-MAIN-2019-09
1550247479885.8
[]
docs.servicenow.com
A trigger is a procedure associated with a table that is executed (i.e., fired) whenever that table is modified by the execution of an insert, update, or delete statement or by a call to an RDM Core Cursor API function that inserts, updates, or deletes a row from a database. Triggers are specified using SQL, and the trigger features implemented in RDM, as described below, conform to the SQL standard.
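As a generic illustration of the SQL-standard trigger syntax referred to here (a sketch only, not a statement of the exact subset RDM supports; the table and column names are hypothetical):

-- Audit every insert into the orders table.
CREATE TRIGGER log_new_order
    AFTER INSERT ON orders
    REFERENCING NEW ROW AS new_order
    FOR EACH ROW
    INSERT INTO order_audit (order_id, action)
        VALUES (new_order.id, 'inserted');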
https://docs.raima.com/rdm/14_1/_s_q_l__t_r_i_g_g_e_r.html
2019-02-16T05:36:36
CC-MAIN-2019-09
1550247479885.8
[]
docs.raima.com
Layer and Column Properties Each element has its own set of properties that can be modified, including effect and peg layers. If you want to modify some of the element's properties, display the Layer Properties editor. The Layer Properties editor allows you to: The Column Properties editor allows you to: The Xsheet column will not open the Layer Properties editor, but it will show the column properties, allowing you to modify settings related to the Xsheet column.
https://docs.toonboom.com/help/harmony-11/workflow-standalone/Content/_CORE/_Workflow/011_Timing/044_H3_Layer_Properties.html
2019-02-16T06:19:44
CC-MAIN-2019-09
1550247479885.8
docs.toonboom.com
All Files Navigation Toolbar The Navigation toolbar lets you quickly display the first and last frame of a panel. These buttons grey out when the playhead is at the start or end of a panel. How to access the Navigation toolbar Select Windows > Toolbars > Navigation. Icon Tool Name Description First Frame Displays the first frame of the layer animation. Last Frame Displays the last frame of the layer animation. Optional Buttons The following are buttons you can add to this toolbar if you want to have them in your workspace—see Customizing Toolbars. Icon Tool Name Description Project First Frame Displays the storyboard's first panel at its first frame. Previous Scene Displays the previous scene. Previous Panel Displays the previous panel. Next Panel Displays the next panel. Keyboard shortcut: A. Next Scene Displays the next scene. Keyboard shortcut: F. Project Last Frame Displays the storyboard's last panel at its last frame.
https://docs.toonboom.com/help/storyboard-pro-6/storyboard/reference/toolbars/navigation-toolbar.html
2019-02-16T05:52:51
CC-MAIN-2019-09
1550247479885.8
[]
docs.toonboom.com
Server API Use these endpoints to manage your server options. Check Health By using this endpoint you will be able to check the service statuses and memory usage. curl -XGET "{{ your_app_id }}&token={{ your_token }}" This endpoint will return an array like the one below: { "status": { "elasticsearch": "green", "redis": true }, "process": { "memory_used": 6867528 } } Ping Perform a simple ping to the server. This endpoint will not perform any validation beyond a simple server check, regardless of the internal connections to third-party servers and regardless of the number of plugins installed. curl -XHEAD "" The response will be a 200 if the server is alive, or another code otherwise.
http://docs.apisearch.io/http-reference/server.html
2019-02-16T05:58:06
CC-MAIN-2019-09
1550247479885.8
[]
docs.apisearch.io
Configuring Media Lifecycle Management Contents - 1 Configuring Media Lifecycle Management When it is time to purge old recording files, Interaction Recording Web Services (or Web Services if you're using version 8.5.210.02 or earlier) requires additional configuration to allow the records to purge and/or backup successfully. For instance, when it is time to purge old recording files, Interaction Recording Web Services (Web Services) sends a purge request to the SpeechMiner database indicating which records to delete. Interaction Recording Web Services (Web Services) To enable Genesys Interaction Recording to purge and back up recording files, configure the Interaction Recording Web Services (Web Services) node as follows: In the backgroundScheduledMediaOperationsSettings section of the serverSettings section within the application.yaml file: - Set enableBackgroundScheduledMediaOperations to true - Set defaultBackupExportURI to a backup folder—for example, is the default backup folder. For more information about these options, see the Advanced Settings for the MLM API. In the recordingSettings section of the serverSettings section within the application.yaml file, set auditLogDeletedFiles to true if you want to log all deleted recordings in the audit log when they are purged. For more information about these options, see Configuring the Call Recording Audit Log. SpeechMiner To enable Interaction Recording Web Services (Web Services) to contact Interaction Receiver and purge the requested recordings, use a text editor to add the following to the location based setting group in the json.txt file: { "name":"interaction-receiver", "location": "/US/CA", "value": { "uri-prefix": "", "userName": "interaction receiver user name", "password": "interaction receiver password" } } - is the Load Balancer URL that points to the Interaction Receiver. - The Interaction Receiver user name and password must be the same as the user and password property values found within the [recording.archive] section of the tenant Annex in the configuration, and are set when configuring Recording Crypto Server. If these values are not found there, they should be added. Execute the following command: curl -u <user:password> -X POST -d @json.txt --header "Content-Type: application/json" http://<Web Services-cluster-address>/api/v2/settings/speechminer For more information on the properties of this settings group, see Interaction Recording Web Services Settings Groups. For more information about the location based setting group for encryption, see Encrypting and Provisioning Certificates. Creating Rules and Schedules Use Genesys Administrator Extension to create rules and schedules. For step-by-step instructions, see Recording Lifecycle Scheduler. Consider the following when creating backup and purge tasks: - Do not schedule backup tasks to run concurrently on the same Interaction Recording Web Services (Web Services) node if these tasks back up overlapping records. - Do not schedule backup and purge tasks to run concurrently if they act on overlapping records. - Ensure that all the Interaction Recording Web Services (Web Services) nodes have accurate clocks. - Genesys Administrator Extension's time is based on UTC. - When using the MLM Purge feature, if you specify a rule for voice recordings, the corresponding muxed screen recordings will not be purged unless you also select the Include Screen Recordings checkbox. 
- Recordings that are protected from deletion (using the Non-Deletion API or SpeechMiner) will not be deleted by Media Lifecycle Management purge tasks. - Do not schedule a purge task to run independently in its own rule unless you are willing to lose the associated data. Even if a backup has been scheduled, it is not guaranteed to complete successfully before the purge task is executed. When you are scheduling rules containing purge tasks, adhere to the following guidelines to avoid an unexpected failure of Purge or Backup tasks: - Run only one Purge task in a rule. - Run the Purge task last in a rule. - Do not run two rules with overlapping minAge/maxAge time intervals too close together (less than 5 seconds) if the first rule contains a Purge task. Note that the interval is the time between the rules that are running (that is, the completion of one rule and the start of the next) and not between the scheduled start time of rules. You can look at the recording.log file to determine when a rule has finished. Look for the following message: ... [] [] [] Scheduled rule [<rule name>] at location [<node path>] completedThe <rule name> and <node path> depend on the customer configuration. Note that the amount of time to run a rule depends on many factors, including call volume. The interval should be much greater than that suggested above to make allowances for day to day variations. Configuring For Multiple Regions The following sections describe how to configure MLM for multiple regions. Need For An MLM Node In Each Region Requiring Backup and/or Purge By design, an MLM node will only backup and/or purge call and screen recordings for which the metadata region property exactly matches the crRegion (call recording region) property found in the node's Interaction Recording Web Services (Web Services) application.yaml configuration file (if you are using Web Services and Application version 8.5.201.09 or earlier it is found in the server-settings.yaml file). This design prevents these nodes from "pulling" media between data centers. For example, if there are two data centers defining regions "east" and "west", and the client Interaction Recording Web Services (Web Services) nodes with nodePaths (in the application.yaml file or in the server-settings.yaml file if you are using Web Services and Application version 8.5.201.09 or earlier) /US/EAST/10.2.0.1 through /US/EAST/10.2.0.10 are in region "east", and client Interaction Recording Web Services (Web Services) nodes with nodePaths /US/WEST/10.2.1.1 through /US/WEST/10.2.1.10 are in region "west", and there is a requirement for deleting all call recordings after 90 days, then there will need to be at least one MLM node in each region (possibly with nodePaths /US/EAST/10.2.0.20 and /US/WEST/10.2.1.20) each with a 90-day purge rule. Configuring SpeechMiner Purge API If a deployment supports call recording and SpeechMiner, a deployment will need to have the SpeechMiner Purge API configured (see SpeechMiner for more information). For a multi-region deployment that has only one SpeechMiner, the SpeechMiner Purge API should be configured with a location property value that is the nearest common ancestor of the nodePaths of all the MLM nodes. For instance, using the example above, the nearest common ancestor of nodePaths /US/EAST/10.2.0.20 and /US/WEST/10.2.1.20 is /US. 
For a multi-region deployment that has one SpeechMiner per region, the SpeechMiner Purge API should be configured for the SpeechMiner of each region, using a location property value for each that is the nearest common ancestor of all the nodePaths of the region's Interaction Recording Web Services (Web Services) nodes. For instance, using the example above, the nearest common ancestor of nodePaths /US/EAST/10.2.0.1 through /US/EAST/10.2.0.10 and /US/EAST/10.2.0.20 is /US/EAST, and the nearest common ancestor of nodePaths /US/WEST/10.2.1.1 through /US/WEST/10.2.1.10 and /US/WEST/10.2.1.20 is /US/WEST. Configuring Pre-Recording You can configure MLM to keep the entire audio and the screen of calls that might need review of a Contact Center supervisor or manager. Use the following steps to set up Pre-recording: - Using Genesys Administrator Extension (GAX), select Business Attributes and create a new Custom business attribute object. Name the object Recording. - In GAX, select Business Attribute Values and select the Recording object you created in step #1 above. - Select Attribute Values, and create a new attribute value named Keep Recording. - In the Interaction-Workspace section, create the following parameters: - display-type=enum - enum.business-attribute=enumkeep_recording - enum.default-value=no - ready-only=false - In GAX, select Business Attributes and create a Customer new business attribute object. Name the new object enumkeep_recording. - In GAX, select Business Attribute Values and select the enumkeep_recording object you created in the step above. - Select Attribute Values and create the following attribute values: - no (set to default) - yes - In the Workspace Web Edition Cluster object (WWEWS_Cluster) or from the Workspace Desktop Edition application object, select the Application Options tab. - In the [interaction-workspace] section, set the interaction.case-data.format-business-attribute option to Recording. - From Routing Strategy, attache the following user data to the interaction: keep_recording="no".. For additional information, refer to the Universal Routing 8.1 Reference Manual. Selective Recording If your business retention policy is to keep a random percentage, say 20% of calls, then the routing strategy would call a function to determine whether to keep the call. If the call should not be kept, set the value to keep_recording="no". If the call should be kept based on the rule, set the value to keep_recording="yes". The agent does not need to mark the call to be kept for review. Use the following steps to setup Pre-recording for selective recording: - In the routing strategy, attach the following user data to the call: - keep_recording="no" - Add the same user data on the Agent's desktop, so that the agent can change the value from no to yes if the agent wants to keep the recording based on what the caller said. For more information about how to create rules and schedules, see Recording Lifecycle Scheduler. Upgrading the GIR Components When upgrading from version 8.5.205.01 or earlier to version 8.5.206.01 or later, the GIR components can be upgraded in any order (Web Services, Recording Plug-in, Recording Processor Script or Voice Processor, etc), but callType should not be specified in MLM tasks within the upgraded Recording plug-in until all Web Services nodes have been upgraded. 
Rolling Back the GIR Components When rolling back the components from version 8.5.206.01 or later to version 8.5.205.01 or earlier, the GIR components can be rolled back in any order (Web Services, Recording Plug-in, Recording Processor Script or Voice Processor, etc), as long as no MLM tasks specify callType in the filter. If a callType is specified as a filter of a task, the task must be removed before rolling back Web Services to a previous version. Disabling the task is not sufficient. Advanced Settings for the MLM API The following table describes the parameters that are in the backgroundScheduledMediaOperationsSettings section of the serverSettings section within the application.yaml file.
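As a rough sketch of what that section can look like (the export folder path is illustrative, and only the options discussed earlier on this page are shown):

serverSettings:
  backgroundScheduledMediaOperationsSettings:
    # Enable scheduled backup/purge processing on this node.
    enableBackgroundScheduledMediaOperations: true
    # Folder that backup tasks export recordings to (illustrative path).
    defaultBackupExportURI: file:///opt/genesys/backup
  recordingSettings:
    # Log deleted recordings in the audit log when they are purged.
    auditLogDeletedFiles: true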
https://docs.genesys.com/Documentation/CR/8.5.2/Solution/MLM
2019-02-16T05:30:35
CC-MAIN-2019-09
1550247479885.8
[]
docs.genesys.com
Header for the RDM Cursor APIs. #include "rdmtypes.h" #include "rdmrowidtypes.h" Definition in file rdmcursorapi.h. Read data from a blob column. This function reads the contents of a blob column from the current row of the cursor. The read will start from the offset position and will attempt to read up to bytesIn bytes. If there are not bytesIn bytes left to read and the bytesOut parameter is NULL then the function will return the eBLOBBADSIZE error code. If bytesOut is not NULL then the number of bytes actually read into value will be put into that parameter. If value is NULL and bytesOut is not NULL then bytesOut will have the remaining number of bytes to be read from the offset position.
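The two-step pattern implied by these semantics — query the remaining size with a NULL buffer, then read — could look roughly like the following C sketch. The function name readBlobColumn and the cursor/column handles are placeholders for whatever the actual header declares; only the offset/bytesIn/bytesOut behavior described above is being illustrated.

#include <stdint.h>
#include <stdlib.h>

/* Sketch only: readBlobColumn stands in for the real cursor blob-read
 * function declared in rdmcursorapi.h; its name and signature may differ. */
extern int32_t readBlobColumn(void *cursor, uint16_t column, uint32_t offset,
                              void *value, size_t bytesIn, size_t *bytesOut);

int readWholeBlob(void *cursor, uint16_t column)
{
    size_t remaining = 0;

    /* A NULL value with a non-NULL bytesOut reports how many bytes
     * remain to be read from the given offset. */
    int32_t rc = readBlobColumn(cursor, column, 0, NULL, 0, &remaining);
    if (rc != 0)
        return rc;

    char *buffer = malloc(remaining);
    if (buffer == NULL)
        return -1;

    /* With bytesOut non-NULL, a short read is reported through bytesOut
     * instead of the call failing with eBLOBBADSIZE. */
    size_t bytesRead = 0;
    rc = readBlobColumn(cursor, column, 0, buffer, remaining, &bytesRead);

    /* ... use buffer[0..bytesRead) ... */
    free(buffer);
    return rc;
}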
https://docs.raima.com/rdm/14_1/rdmcursorapi_8h.html
2019-02-16T06:23:44
CC-MAIN-2019-09
1550247479885.8
[]
docs.raima.com
Using Triggers Overview Working with triggers is very simple. You can create, update and delete triggers just like any other object in the ClearBlade system. Each trigger consists of a name, an event to handle (with or without filters), and a code service to be executed when the trigger "fires". This section shows you how triggers are created and manipulated from the ClearBlade console. Create Trigger The first step in creating a trigger is to create your code service that will be executed. In this example, we'll create a trigger for the Data:ItemCreated event, for items created in any collection. To do this, go to your system/application and click the "Code" tab. Next, enter the javascript for your service as explained in the "Code" section of the documentation. Call the service "myCodeService". When you have finished entering the javascript, click on this icon in the upper-right portion of the screen. A dialog will pop up with a set of tabs on top. Select "Triggers". Your screen should look something like this: Click on "Add Trigger +". In the dialog that appears, enter "anyItemCreated" in the "Name:" field. In the "Source:" pulldown menu, select "Data". When you do this, an "Action:" dropdown menu will appear. Click on that, and select "ItemCreated". One more dropdown will appear, titled "Collection:". Leave that blank. At this point, the dialog should look like this: Now, go ahead and click the "Apply" button. Your trigger has been created and is immediately active. Any future items that are created will cause "myCodeService" to be executed. Update Trigger Updating an existing trigger is very easy. The only thing you can update for an existing trigger is its filters – the things specific to the actual trigger/event itself. For example, in the trigger we created above, we can only update the collection associated with the trigger. To do that, do the following: - Navigate to the "myCodeService" service. - Click on the icon and select the "Triggers" tab. - Find the "anyItemCreated" trigger and select any collection (assuming you have at least one collection in your application). The "anyItemCreated" trigger will now only "fire" when an item is created in that collection. Delete Trigger Deleting a trigger is also very simple. Follow steps 1-3 above, and then click on the icon. Tap "Confirm" in the dialog that appears. Note that deleting a trigger does not delete the code service associated with that trigger.
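A minimal body for "myCodeService" would be enough to verify the trigger fires; this is a hedged sketch that assumes the conventional ClearBlade code-service shape of a function taking request and response objects, which may differ in your setup.

// Sketch of a minimal "myCodeService" body.
function myCodeService(req, resp) {
    // req.params is assumed to carry the trigger payload (e.g. the created item).
    log("Item created: " + JSON.stringify(req.params));
    resp.success("handled ItemCreated trigger");
}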
https://docs.clearblade.com/v/3/2-console_administration/Triggers/
2019-02-16T05:24:14
CC-MAIN-2019-09
1550247479885.8
docs.clearblade.com
Bridge component overview¶ Contents - Bridge component overview - Introduction - Terminology - Message path between peer nodes - Operating modes of the Bridge and Float Introduction¶ The Corda bridge/float component is designed for enterprise deployments and acts as an application level firewall and protocol break on all internet facing endpoints. The corda-bridgeserver.jar encapsulates the peer network functionality of the basic Corda Enterprise node, so that this can be operated separately from the security sensitive JVM runtime of the node. This gives separation of functionality and ensures that the legal identity keys are not used in the same process as the internet TLS connections. Also, it adds support for enterprise deployment requirements, such as High Availability (HA) and SOCKS proxy support. This document is intended to provide an overview of the architecture and options available. Terminology¶ The component referred to here as the bridge is the library of code responsible for managing outgoing links to peer nodes and implements the AMQP 1.0 protocol over TLS 1.2 between peers to provide reliable flow message delivery. This component can be run as a simple integrated feature of the node. However, for enhanced security and features in Corda Enterprise, the in-node version should be turned off and a standalone and HA version can be run from the corda-bridgeserver.jar, possibly integrating with a SOCKS proxy. The float component refers to the inbound socket listener, packet filtering and DMZ compatible component. In the simple all-in-one node all inbound peer connections terminate directly onto an embedded Artemis broker component hosted within the node. The connection authentication and packet the filtering is managed directly via Artemis permission controls managed directly inside the node JVM. For Corda Enterprise deployments we provide a more secure and configurable isolation component that is available using code inside corda-bridgeserver.jar. This component is designed to provide a clear protocol break and thus prevents the node and Artemis server ever being directly exposed to peers. For simpler deployments with no DMZ the float and bridge logic can also be run as a single application behind the firewall, but still protecting the node and hosted Artemis. In future we may also host the Artemis server out of process and shared across nodes, but this will be transparent to peers as the interchange protocol will continue to be AMQP 1.0 over TLS 1.2. Note All deployment modes of the bridge, float, or all-in-one node are transparently interoperable, if correctly configured. Message path between peer nodes¶ When a flow within a node needs to send a message to a peer there is a carefully orchestrated sequence of steps to ensure correct secure routing based upon the network map information and to ensure safe, restartable delivery to the remote flow. Adding the bridge and float to this process adds some extra steps and security checking of the messages. The complete sequence is therefore: The flow calls send, or sendAndReceiveto propagate a message to a peer. This leads to checkpointing of the flow fiber within the StateMachineand posting the message to the internal MessagingService. This ensures that the send activity will be retried if there are any errors before further durable transmission of the message. The MessagingServicechecks if this is a new destination node and if an existing out queue and bridge exists in Artemis. 
If the durable out queue does not exist then this will need to be created in Artemis: - First the durable queue needs to be created in the peer-to-peer Artemis. Each queue is uniquely named based upon the hash of the legal identity PublicKeyof the target node. - Once the queue creation is complete a bridge creation request is also published onto the Artemis bus via the bridge control protocol. This message uses information from the network map to link the out queue to the target host and port and TLS credentials. The flow does not need to wait for any response at this point and can carry on to send messages to the Artemis out queue. - The message when received by the bridge process opens a TLS connection to the remote peer (optionally, this connection can be made via a SOCKS4/5 proxy). On connect the two ends of the TLS link exchange certificate details and confirm that the certificate path is anchored at the network root certificate and that the X500 subject matches the expected target as specified in the create bridge message using details contained in the network map. The links are long lived so as to reduce the setup cost of the P2P messaging. In future, there may also be denial-of-service protection measures applied. - If the outgoing TLS 1.2 link is created successfully then the bridge opens a consumer on the Artemis out queue. The pending messages will then be transferred to the remote destination using AMQP 1.0, with final removal from the out queue only occurring when the remote end fully acknowledges safe message receipt. This ensures at least once delivery semantics. - Note that at startup of either the node or the bridge, the bridge control protocol resynchronises the bridging state, so that all out queues have an active bridge. Assuming an out queue exists the message can be posted to Artemis and the bridge should eventually deliver this message to the remote system. On receipt of a message acknowledge from Artemis the StateMachinecan continue flow if it is not awaiting a response i.e. a sendoperation. Otherwise it remains suspended waiting for the reply. The receiving end of the bridge TLS 1.2/AMQP 1.0 link might be the Artemis broker of a remote node, but for now we assume it is an enterprise deployment that is using a float process running behind a firewall. The receiver will already have confirmed the validity of the TLS originator when it accepted the TLS handshake. However, the float does some further basic checking of received messages and their associated headers. For instance the message must be targeted at an inbox address and must be below the network parameters defined maxMessageSize. Having passed initial checks on the message the float bundles up the message and originator as a payload to be sent across the DMZ internal firewall. This inbound message path uses a separate AMQP 1.0/TLS 1.2 control tunnel. (N.B. This link is initiated from the local master bridge in the trusted zone to the float in the DMZ. This allows a simple firewall rule to be configured which blocks any attempts to probe the internal network from the DMZ.) Once the message is forwarded the float keeps track of the delivery acknowledgements, so that the original sender will consume the message in the source queue, only on final delivery to the peer inbox. Any disconnections, or problems will send a reject status leading to redelivery from source. The bridge process having now received custody of the message does further checks that the message is good. 
At the minute the checks are essentially of well formedness of the message and that the source and destination are valid. However, future enhancements may include deep inspection of the message payload for CorDapp blacklisting, and other purposes. Any problems and the message is acknowledged to prevent further redelivery, logged to audit and dropped. Assuming this is a normal message it is passed onto the Artemis inbox and on acknowledgment of delivery is cascaded back. Thus, Artemis acknowledgement, leads to acknowledgement of the tunnel AMQP packet, which acknowledges the AMQP back to the sending bridge and that finally marks the Artemis out queue item as consumed. To prevent this leading to very slow one after the other message delivery the AMQP channels using sliding window flow control. (Currently, a practical default is set internally and the window size is not user configurable.) The MessagingServiceon the peer node will pick up the message from inbox on Artemis, carry out any necessary deduplication. This deduplication is needed as the distributed restartable logic of the Corda wire protocol only offers ‘at least once’ delivery guarantees. The resulting unique messages are then passed to the StateMachineso that the remote flow can be woken up. The reply messages use the authenticated originator flag attached by the float to route the replies back to the correct originator. Note That the message reply path is not via the inbound path, but instead is via a separately validated route from the local bridge to the original node’s float and then on to the original node via Artemis. Operating modes of the Bridge and Float¶ Embedded Developer Node (node + artemis + internal bridge, no float, no DMZ)¶ The simplest development deployment of the bridge is to just use the embedded Peer-to-Peer Artemis with the node as TLS endpoint and to have the outgoing packets use the internal bridge functionality. Typically this should only be used for easy development, or for organisations evaluating on Open Source Corda, where this is the only available option: Node + Bridge (no float, no DMZ)¶ The next simplest deployment is to turn off the built in bridge using the externalBridge enterprise config property and to run a single combined bridge/float process. This might be suitable for a test environment, to conserve VM’s. Note Note that to run the bridge and the node on the same machine there could be a port conflict with a naive setup, but by using the messagingServerAddressproperty to specify the bind address and port plus setting messagingServerExternal = falsethe embedded Artemis P2P broker can be set to listen on a different port rather than the advertised p2paddressport. Then configure an all-in-one bridge to point at this node: DMZ ready (node + bridge + float)¶ To familiarize oneself with the a more complete deployment including a DMZ and separated inbound and outbound paths the bridgeMode property in the bridge.conf should be set to BridgeInner for the bridge and FloatOuter for the DMZ float. The diagram below shows such a non-HA deployment. This would not be recommended for production, unless used as part of a cold DR type standby. Note Note that whilst the bridge needs access to the official TLS private key, the tunnel link should use a private set of link specific keys and certificates. 
The float will be provisioned dynamically with the official TLS key when activated via the tunnel and this key will never be stored in the DMZ: DMZ ready with outbound SOCKS¶ Some organisations require dynamic outgoing connections to operate via a SOCKS proxy. The code supports this option by adding extra information to the outboundConfig section of the bridge process. A simplified example deployment is shown here to highlight the option: Full production HA DMZ ready mode (hot/cold node, hot/warm bridge)¶ Finally, we show a full HA solution as recommended for production. This does require adding an external Zookeeper cluster to provide bridge master selection and extra instances of the bridge and float. This allows hot-warm operation of all the bridge and float instances. The Corda Enterprise node should be run as hot-cold HA too. Highlighted in the diagram is the addition of the haConfig section to point at Zookeeper, and also the use of secondary addresses in the alternateArtemisAddresses to allow node failover and in the floatAddresses to point at a pool of DMZ float processes.
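The deployment modes above are driven by a handful of configuration properties. The sketch below pulls the properties named in this overview (externalBridge, messagingServerAddress, messagingServerExternal, bridgeMode, outboundConfig, haConfig, alternateArtemisAddresses, floatAddresses) into illustrative HOCON fragments. The host names, ports, and the exact nesting of any keys not named above are assumptions; treat the reference configuration shipped with corda-bridgeserver.jar as authoritative.

// node.conf fragment for the "Node + Bridge" mode (all values are placeholders)
p2pAddress = "banka.example.com:10002"          // advertised peer address
messagingServerAddress = "127.0.0.1:11005"      // bind address/port of the embedded Artemis broker
messagingServerExternal = false
enterpriseConfiguration {
    externalBridge = true                       // turn off the in-node bridge
}

// bridge.conf fragment for the DMZ-ready, HA deployment (nesting is illustrative)
bridgeMode = BridgeInner                        // use FloatOuter for the DMZ float instance
outboundConfig {
    alternateArtemisAddresses = [ "node-b.example.com:11005" ]  // secondary node addresses for failover
    // SOCKS proxy details would also be added in this section
}
floatAddresses = [ "float-1.dmz.example.com:12005", "float-2.dmz.example.com:12005" ]
haConfig {
    // points at the external Zookeeper cluster used for bridge master selection
    haConnectionString = "zk://zk-1.example.com:2181,zk://zk-2.example.com:2181"
}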
https://docs.corda.r3.com/corda-bridge-component.html
2019-02-16T04:59:07
CC-MAIN-2019-09
1550247479885.8
[array(['_images/node_embedded_bridge.png', '_images/node_embedded_bridge.png'], dtype=object) array(['_images/simple_bridge.png', '_images/simple_bridge.png'], dtype=object) array(['_images/bridge_and_float.png', '_images/bridge_and_float.png'], dtype=object) array(['_images/bridge_with_socks.png', '_images/bridge_with_socks.png'], dtype=object) array(['_images/ha_bridge_float.png', '_images/ha_bridge_float.png'], dtype=object) ]
docs.corda.r3.com
File:GKCHistory02.png (original file: 1,377 × 779 pixels, file size: 109 KB, MIME type: image/png)
https://docs.genesys.com…GKCHistory02.png
2019-02-16T06:27:36
CC-MAIN-2019-09
1550247479885.8
[array(['/images/1/1f/GKCHistory02.png', 'File:GKCHistory02.png'], dtype=object) ]
docs.genesys.com
Scanning Manually You can scan a drawing by placing it on the scanner registration pegs yourself. If the drawings you want to scan are too large to fit in the scanner’s automatic document feeder or do not feed properly due to rips, wrinkles, or folds in the drawing, you can manually peg the drawings. However, this mode does not scan each drawing automatically when you click the Preview button. - Select one of the following options from the Feed menu: - Pegged: Scans the drawing from the flatbed. Click Scan for each drawing. - Pegged (Many): Scans the drawing from the flatbed every five to ten seconds (depending on the time required to process the drawings). - Check the Drawing List to see which drawing you are scanning next. It will appear highlighted. - You can select the drawing you want to scan by pointing to the desired drawing in the Drawing list, and clicking Scan. - You can reverse the order of the Drawing list by selecting Edit > Reverse List Order. The Drawing list sorts the cell names from your exposure sheet alphanumerically. You should keep this in mind when assigning names to these cells, otherwise, you may need to resort your drawings so the Harmony can assign them in the correct order. - Lift the scanner cover and place the drawing face down on the scanner glass with the peg holes of the drawing positioned on the registration pegs bar on the scanner. This should be the paper drawing that corresponds to the drawing highlighted in the Drawing List. You can place both 12 field and 16 field paper on the same registration pegs on the scanner. The Field menu in the scanner’s setup options defines the paper size being scanned. - Close the scanner cover, making sure the paper lays flat against the glass, free of wrinkles and folds. You are now ready to scan the drawing. - Click Scan. - If you set the scan mode to Pegged, the scanner scans the drawing, then stops and waits for you to change the drawing and click the Scan button again. - If you set the scan mode to Pegged (Many), the scanner continues to capture a drawing every five to ten seconds (depending on how long it takes to process each drawing). While the scanner captures the drawing, the Scan button becomes inactive. You can interrupt the scan by clicking Stop—see Stopping the Scanning Process. When the scan is complete, the captured image appears in the Harmony. In the Drawing list, the drawing is indicated as 'scanned' and the next drawing to scan appears highlighted. - If you set the scan mode to Pegged (Many), wait for the scanner to stop after each drawing and place the next one before it starts again.
https://docs.toonboom.com/help/harmony-15/advanced/scan-module/how-to-scan/scan-manually.html
2019-02-16T05:37:11
CC-MAIN-2019-09
1550247479885.8
[array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.toonboom.com
Replication origins are intended to make it easier to implement logical replication solutions on top of logical decoding. They provide a solution to two common problems: How to safely keep track of replication progress How to change replication behavior based on the origin of a row; for example, to prevent loops in bi-directional replication setups Replication origins have just two properties, a name and an OID. The name, which is what should be used to refer to the origin across systems, is free-form text. It should be used in a way that makes conflicts between replication origins created by different replication solutions unlikely; e.g. by prefixing the replication solution's name to it. The OID is used only to avoid having to store the long version in situations where space efficiency is important. It should never be shared across systems. Replication origins can be created using the function pg_replication_origin_create(); dropped using pg_replication_origin_drop(); and seen in the pg_replication_origin system catalog. One nontrivial part of building a replication solution is to keep track of replay progress in a safe manner. When the applying process, or the whole cluster, dies, it needs to be possible to find out up to where data has successfully been replicated. Naive solutions to this, such as updating a row in a table for every replayed transaction, have problems like run-time overhead and database bloat. Using the replication origin infrastructure a session can be marked as replaying from a remote node (using the pg_replication_origin_session_setup() function). Additionally the LSN and commit time stamp of every source transaction can be configured on a per transaction basis using pg_replication_origin_xact_setup(). If that's done replication progress will persist in a crash safe manner. Replay progress for all replication origins can be seen in the pg_replication_origin_status view. An individual origin's progress, e.g. when resuming replication, can be acquired using pg_replication_origin_progress() for any origin or pg_replication_origin_session_progress() for the origin configured in the current session. In replication topologies more complex than replication from exactly one system to one other system, another problem can be that it is hard to avoid replicating replayed rows again. That can lead both to cycles in the replication and inefficiencies. Replication origins provide an optional mechanism to recognize and prevent that. When configured using the functions referenced in the previous paragraph, every change and transaction passed to output plugin callbacks (see Section 49.6) generated by the session is tagged with the replication origin of the generating session. This allows treating them differently in the output plugin, e.g. ignoring all but locally-originating rows. Additionally the filter_by_origin_cb callback can be used to filter the logical decoding change stream based on the source. While less flexible, filtering via that callback is considerably more efficient than doing it in the output plugin.
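A minimal SQL sketch of the workflow described above, using only the functions and catalog/view names mentioned in this section; the origin name, LSN, and timestamp values are placeholders.

-- Create a named origin once (prefix it with the replication solution's name).
SELECT pg_replication_origin_create('my_solution_node_1');

-- In the applying session, mark the session as replaying from that origin ...
SELECT pg_replication_origin_session_setup('my_solution_node_1');

-- ... and, for each replayed transaction, record the source LSN and commit
-- timestamp (placeholder values shown) so progress is persisted crash-safely.
BEGIN;
SELECT pg_replication_origin_xact_setup('0/16B3748', '2019-08-17 12:00:00+00');
-- apply the remote transaction's changes here
COMMIT;

-- When resuming replication, look up how far replay got:
SELECT pg_replication_origin_progress('my_solution_node_1', true);
SELECT * FROM pg_replication_origin_status;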
http://docs.postgresql.tw/server-programming/replication-progress-tracking
2019-02-16T06:11:55
CC-MAIN-2019-09
1550247479885.8
[]
docs.postgresql.tw
9. The DHCPv6 Server¶ 9.1. Starting and Stopping the DHCPv6 Server¶ It is recommended that the Kea server opens privileged ports, it requires root access. This daemon must be run as root. During startup, the server will attempt to create a PID file of the form: [runstatedir]/kea/[conf name].kea-dhcp. 9.2. DHCPv6 Server Configuration¶ 9.2.1. Introduction¶ object instead of this deprecated object. can’t use this address for creating new connections. renew-timer and rebind-timer are values (also in seconds) that define T1 and T2 timers that govern when the client will begin in may be preferable to use more compact notation. After all the parameters are specified, we have two contexts open: global and Dhcp6; thus, we need two closing curly brackets to close them. 9.2.2. Lease Storage¶ All leases issued by the server are stored in the lease database. Currently there are four database backends available: memfile (which is the default backend), MySQL, PostgreSQL, and Cassandra.6": { "lease-database": { "type": "memfile", "persist": true, "name": "/tmp/kea-leases6.csv", "lfc-interval": 1800 } } This configuration selects the /tmp/kea-leases. 9.2.2.2. Lease Database Configuration¶ Note Lease database access information must be configured for the DHCPv6 server, even if it has already been configured for the DHCPv4 server. The servers store their information independently, so each server can use a separate database or both servers can use the same database. Lease database configuration is controlled through the Dhcp6/lease-database parameters. The database type must be set to “memfile”, “mysql”, lease database after connectivity has been lost may also be specified: also the default.) 9.2.2.3. Cassandra-Specific Parameters¶ The parameters are the same for both DHCPv4 and DHCPv6. See Cassandra-Specific Parameters for details.. For example, the following configuration can be used to configure a connection to MySQL: "Dhcp6": { "hosts-database": { "type": "mysql", "name": "kea", "user": "kea", "password": "secret123", "host": "localhost", "port": 3306 } } Note that depending on the database configuration, many of the parameters may be optional.. 9.2.3.1. DHCPv6 Hosts Database Configuration¶": { "host-databases": [ { "type": "mysql", ... }, ... ], ... } For additional Cassandra-specific parameters, see Cassandra-Specific Parameters.¶ UDP/IPv6 sockets only. The following example shows how to disable the¶¶ will cause¶ range 2001:db8:1::1 to 2001:db8:1::ffff are going to be managed by the Dhcp ‘f’s is cumbersome. It can be expressed more simply as 2001:db8:1:0:5::/80. Both formats are supported by Dhcp6 and can be mixed in the pool list. For example, one could define the following pools: "Dhcp6": { "subnet6": [ { "subnet": "2001:db8:1::/64", "pools": [ { "pool": "2001:db8:1::1-2001:db8:1::ffff" }, { "pool": "2001:db8:1:05::/80" } ], ... } ] }. To add a second subnet, use a command similar to the following: "Dhcp6": { will also be¶¶ For each delegated prefix, the delegating router may choose to exclude a single prefix out of the delegated prefix as specified in RFC 6603. The requesting router must not assign the excluded prefix to any of its downstream interfaces, and it¶ will assign options from all pools from which the leases have been obtained. However, if the particular option is specified in multiple pools from which the client obtains the leases, only one instance of this option will be handed out to the client. 
The server’s administrator must not try to prioritize assignment of pool-specific options by trying to order pools declarations in the server configuration. The following configuration snippet demonstrates how to specify the DNS servers option, which will be assigned to a client only if the client obtains an address from the given pool: "Dhcp6": { "subnet6": [ { "pools": [ { "pool": "2001:db8:1::100-2001:db8:1::300", "option-data": [ { "name": "dns-servers", "data": "2001:db8:1::10" } ] } ] }, ... ], ... } Options can also be specified in class or host reservation scope. The current Kea options precedence order is (from most important): host reservation, pool, subnet, shared network, class, global. The currently supported standard DHCPv6 options are listed in List of Standard DHCPv in such an option. For example, the option dns-servers allows the specification of more than one IPv6 address, enabling clients to obtain the addresses of multiple DNS servers.. 9.2.12. Common Softwire46 Options¶ Softwire46 options are involved in IPv4 over IPv6 provisioning by means of tunneling or translation as specified in RFC 7598. The following sections provide configuration examples of these options. 9.2.12.1. Softwire46 Container Options¶ options described below are included in one of the container options. Thus, they must be included in appropriate option spaces by selecting a “space” name, which specifies in which option they are supposed to be included. 9.2.12.2. S46 Rule Option¶ The S46 Rule option is used for conveying- a specific number of bits (defined in prefix4-len) are reserved, and MUST be initialized to zero by the sender and ignored by the receiver. IPv6 prefix- in prefix/length notation that specifies the IPv6 domain prefix for the S46 rule. The field is padded on the right with zero bits up to the nearest octet boundary, when prefix6-len is not evenly divisible by 8. 9.2.12.3. S46 BR Option¶¶¶¶¶ and does not set its value(s). The name, code, and type parameters are required; all others are optional. The array default value is false. The record-types and encapsulate default values are blank (i.e. “”). The default space is simple use of primitives (uint8, string, ipv6-address, etc.); it is possible to define an option comprising a number of existing primitives. For example, assume accept also "true", "false", 0, 1, "0", and "1". 9.2.14. DHCPv6 Vendor-Specific Options¶ Currently there are two option spaces defined for the DHCPv6 daemon: “dhcp6” (for the top-level DHCPv6 options) and “vendor-opts-space”, which is empty by default but in which options can be defined. Those options-space", "data": "2001:db8:1::10, 123, Hello World" }, ... ], ... } We should also define a value (enterprise-number) for the Vendor-Specific Information option, that conveys our option “foo”. "Dhcp6": { "option-data": [ ..., { "name": "vendor-opts", "data": "12345" } ], ... } Alternatively, the option can be specified using its code. "Dhcp6": { "option-data": [ ..., { "code": 17, "data": "12345" } ], ... } 9.2.15. Nested DHCPv6 Options (Custom Option Spaces)¶). Assume that we want to have a DHCPv6 option called “container” with code 102 that conveys two sub-options with codes 1 and 2. First we", "type": "empty", "array": false, "record-types": "", "encapsulate": "isc" } ], ... } The name of the option space in which the sub-options are defined is set in the encapsulate field. The type field is set to empty, which limits this option to only carrying data in sub-options. 
Finally, we can set values for the new options: ¶). When the client does not specify lifetimes the default is used. When it specifies a lifetime using IAADDR or IAPREFIX sub option with not zero values these values are used when they are between configured minimum (lower values are round up) and maximal (larger values are round.. 9.2.18. IPv6 Subnet Selection¶¶. DHCPv6 Relays¶coded¶" ], ... } As of¶ The DHCPv three mechanisms that take advantage of client classification in DHCPv6: subnet selection, address pool selection, and DHCP options assignment. address/prefix/pd. 9.2.22.1. Defining and Using Custom Classes¶" } ], "require-client-classes": [ "Client_foo" ], ... }, ... ], ... } Required evaluation can be used to express complex dependencies like subnet membership. It can also be used to reverse the precedence; if an option-data is set in a subnet it takes precedence over an option-data in a class. When¶ As mentioned earlier, kea-dhcpAAA6": { "dhcp-ddns": { "enable-updates": false }, ... } and for example: "Dhcp": "" 9.2.23.1. DHCP-DDNS Server Connectivity¶ For NCRs to reach the D2 server, kea-dhcp6 must be able to communicate with it. kea-dhcp6 uses the following configuration parameters to control this communication: enable-updates- this determines whether kea-dhcp6 will generate NCRs. If missing, this value is assumed to be false, so DDNS updates are disabled. To enable DDNS updates set this value to true.?¶ will create the DDNS update request for only the first of these addresses. In general, kea-dhcp6 will generate a DDNS update request only if the DHCPREQUEST contains the FQDN option (code 39). By default, kea-dhcp6 will honor the client’s wishes and generate a DDNS request to D2 to update only reverse DNS data. The parameter override-client-update can be used to instruct the server to override client delegation requests. When this parameter is “true”, kea-dhcp6.) To override client delegation, set the following values in the configuration file: "Dhcp6": { "dhcp-ddns": { "override-no-update": true, ... }, ... } 9.2.23.3. kea-dhcp6 Name Generation for DDNS Update Requests¶ Each Name Change Request must of course include the fully qualified domain name whose DNS entries are to be affected.": { "qualifying-suffix": "foo.example.org", ... }, ... } When qualifying a partial name, kea-dhcp6 will construct the name in the format: [candidate-name].[qualifying-suffix]. where candidate-name is the partial name supplied in the DHCPREQUEST. For example, if the FQDN domain name value is “some-computer” and the qualifying-suffix “example.com”, the generated FQDN is: some-computer.example.com. When generating the entire name, kea-dhcp6 will construct the name in the format: [generated-prefix]-[address-text].[qualifying-suffix]. where address-text is simply the lease IP address converted to a hyphenated string. For example, if the lease address is 3001:1::70E, the qualifying suffix “example.com”, and the default value is used for generated-prefix, the generated FQDN is: myhost-3001-1–70E.example.com. 9.2.23.4. Sanitizing Client FQDN Names¶ Some DHCP clients may provide values in the name component of the FQDN option (option code 39) that contain undesirable characters. It is possible to configure kea-dhcp client value..¶ DHCP 4o6 server address option (name “dhcp4o6-server-addr”, code 88). The following configuration was used during some tests: { # for situations. 
Since delegated prefixes do not have to belong to a subnet in which they are offered, there is no way to implement such a mechanism for IPv6 prefixes. As such, the mechanism works for IPv6 addresses only. There are five levels which are supported: none- do no special checks; accept the lease as is. warn- if problems are detected display a warning, but accept the lease data anyway. This is the default value.6": { "sanity-checks": { "lease-checks": "fix-del" }, ... } 9.3. Host Reservation in DHCPv6¶. Note that¶ In a typical. Please see Conflicts in DHCPv6 Reservations for more details. 9.3.2. Conflicts in DHCPv6 Reservations¶ take. 9.3.3. Reserving a Hostname¶ When the reservation for a client includes the hostname, the server will assign this hostname to the client and send it back in the Client FQDN, if the client sent the FQDN option¶ host level have the highest priority. In other words, if there are options defined with the same type on global, subnet, class, and host levels, the host-specific values will be used. 9.3.5. Reserving Client Classes in DHCPv the client belongs to classes reserved-class1 and reserved-class2. Those classes are associated with specific options that are sent to the clients which belong to them. { "client-classes": [ { "name": "reserved-class1", "option-data": [ { "name": ", . 9.3 reservations operations. Note In Kea, the maximum length of an option specified per-host is arbitrarily set to 4096 bytes. 9.3.7. Fine-Tuning DHCPv or a prefix,6": { "subnet6": [ { "subnet": "2001:db8:1::/64", "reservation-mode": "disabled", ... } ] } DHCP logic implemented in Kea. It is very easy to misuse this feature and get a configuration that is inconsistent. To give a specific example, imagine a global reservation for an IP address 2001:db8:ffff::1, which will. 9.5. Server Identifier in DHCPv6¶ will be used after server restart, because the entire server identifier is explicitly specified in the configuration. 9.6. DHCPv6 data directory¶)¶ only options and no addresses or prefixes. If the options have the same value in each subnet, the configuration can define)¶¶ The relay must have an interface connected to the link on which the clients are being configured. Typically the relay has a global IPv6 address configured on that interface, which belongs to the subnet from which the server will assign addresses. Normally, the server is able to use the IPv6 address inserted by the relay (in the link-addr field in RELAY-FORW message) to select the appropriate subnet. However, that is not always the case. The relay address may not match the subnet in certain deployments.6. Note The current version of Kea uses the “ip-addresses” parameter, which supports specifying a list of addresses. 9.10. Segregating IPv 3000::/64 subnet, while everything connected behind the modems should get addresses from another subnet (2001:db8:1::/64). The CMTS that acts as a relay uses address 3000::1. The following configuration can serve that configuration: ¶ of which of the supported methods should be used and in what order. This configuration may be considered a fine tuning of the DHCP deployment.. This parameter can also be specified as rfc6939, which is an alias for client-link-addr-option. remote-id- RFC 4649 defines a remote-id option that is inserted by a relay agent. Depending on the relay agent configuration, the inserted option may convey the client’s MAC address information. This parameter can also be specified as rfc4649, which is an alias for remote-id. 
subscriber-id- Another option that is somewhat similar to the previous one is subscriber-id, defined in RFC 4580. It, too, is inserted by a relay agent that is configured to insert it. This parameter can also be specified as rfc4580, which is not allowed. Administrators who do not want to specify it should either simply omit the mac-sources definition or specify it with the “any” value, which is the default. 9.12. Duplicate Addresses (DECLINE Support)¶ The DHCPv6 server is configured with a certain pool of addresses that it is expected to hand out to DHCPv Duplicate Address Detection) and reported to the DHCPv6 server using a DHCPDECLINE message. The server will do a sanity check (to see whether the client declining an address really was supposed to use it), and then will conduct a clean-up operation and confirm it by sending back a REPLY message.6": { "decline-probation-period": 3600, "subnet6": [ ... ], ... } The parameter is expressed in seconds, so the example above will instruct the server to recycle declined leases after message. 9.13. Statistics in the DHCPv6 Server¶ Note This section describes DHCPv6-specific statistics. For a general overview and usage of statistics, see Statistics. The DHCPv6 server supports the following statistics: 9.14. Management API for the DHCPv6": { "control-socket": { "socket-type": "unix", "socket-name": "/path/to/the/unix/socket" }, "subnet. 9.15. User Contexts in IPv¶. - The Dynamic Host Configuration Protocol for IPv6 (DHCPv6) Client Fully Qualified Domain Name (FQDN) Option, RFC 4704: Supported option is CLIENT_FQDN. - Dynamic Host Configuration Protocol for IPv6 (DHCPv6) Option for Dual-Stack Lite, RFC 6334: the AFTR-Name DHCPv6 Option is supported. - Relay-Supplied DHCP Options, RFC 6422: Full functionality is supported: OPTION_RSOO, ability of the server to echo back the options, checks whether an option is RSOO-enabled, ability to mark additional options as RSOO-enabled. - Prefix Exclude Option for DHCPv6-based Prefix Delegation, RFC 6603: Prefix Exclude option is supported. - Client Link-Layer Address Option in DHCPv6, RFC 6939: Supported option¶¶ A collection of simple-to-use examples for the DHCPv6 component of Kea is available with the source files, located in the doc/examples/kea6 directory. 9.19. Configuration Backend in DHCPv6¶.196-specific parameters supported by the Configuration Backend, with an indication on which level of the hierarchy it is currently supported. “n/a” is used in cases when a. 9.19.2. Enabling Configuration Backend¶ Configuration Backend for the detailed description).
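To tie the configuration fragments above together, here is a minimal, illustrative kea-dhcp6 configuration combining a memfile lease database, one subnet with an address pool and a delegated-prefix pool, and a subnet-level dns-servers option. The interface name, prefixes, and timer values are placeholders, not recommendations.

{
  "Dhcp6": {
    // illustrative values only
    "interfaces-config": { "interfaces": [ "eth0" ] },
    "lease-database": {
      "type": "memfile",
      "persist": true,
      "lfc-interval": 1800
    },
    "renew-timer": 1000,
    "rebind-timer": 2000,
    "preferred-lifetime": 3000,
    "valid-lifetime": 4000,
    "subnet6": [
      {
        "subnet": "2001:db8:1::/64",
        "pools": [ { "pool": "2001:db8:1::1-2001:db8:1::ffff" } ],
        "pd-pools": [
          { "prefix": "2001:db8:8::", "prefix-len": 48, "delegated-len": 64 }
        ],
        "option-data": [
          { "name": "dns-servers", "data": "2001:db8:1::10" }
        ]
      }
    ]
  }
}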
https://kea.readthedocs.io/en/latest/arm/dhcp6-srv.html
2019-08-17T17:23:05
CC-MAIN-2019-35
1566027313436.2
[]
kea.readthedocs.io
scheduled search noun A standard search which has been scheduled to run on a specific interval, such as daily, every two hours, two hours after midnight on the first of the month, and so on, usually for alerting purposes, as well as summary indexing. You can define these schedule intervals by picking from a predefined list (with values such as "every minute," "every five minutes," and "each day at midnight") or by using standard cron notation. You can also define "earliest time" and "latest time" ranges, which enable you to set up searches that collect data for intervals that are some set time in the past. For example, you could have a search that runs on the half hour for a search interval of each hour, so when it runs at 2:30pm it is collecting events that Splunk indexed between 1:00pm and 1:59pm. When scheduled searches are used for alerting, their interval usually corresponds with the search time range. For example, if the search collects data from 20 minutes prior to launch to 10 minutes prior to launch, then you might want it to run on a 10 minute interval for alerting purposes. This way there won't be any gaps or overlaps in the data being searched in each scheduled run. You can also define real-time searches, which gather data in real time (as events are received by Splunk) and run continuously for an indefinite period. Because they run continuously, there is no need to schedule them. Related terms For more information In the User Manual: In the Knowledge Manager Manual: - Configure the priority of scheduled searches - Use summary indexing for increased reporting efficiency
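As a rough sketch of how the scheduling described above is expressed in configuration, the savedsearches.conf stanza below schedules a hypothetical search on the half hour over the previous full hour. The stanza name and search string are made up; cron_schedule, dispatch.earliest_time, and dispatch.latest_time are the standard scheduled-search settings.

# savedsearches.conf (illustrative)
[Hourly Failed Login Count]
search = index=main sourcetype=access_combined action=failure | stats count by user
enableSched = 1
cron_schedule = 30 * * * *
# At a 2:30pm run these resolve to 1:00pm and 2:00pm respectively,
# matching the "no gaps or overlaps" example above.
dispatch.earliest_time = -1h@h
dispatch.latest_time = @h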
http://docs.splunk.com/Splexicon:Scheduledsearch
2012-05-27T09:10:57
crawl-003
crawl-003-021
[]
docs.splunk.com
Controlling Access to AWS Import/Export Jobs AWS Import/Export integrates with AWS Identity and Access Management (IAM), which allows you to control which actions a user can perform. By default, IAM users have no access to AWS Import/Export actions. If you want IAM users to be able to work with AWS Import/Export, you must grant them permissions. You do this by creating an IAM policy that defines which Import/Export actions the IAM user is allowed to perform. You then attach the policy to the IAM user or to an IAM group that the user is in. You can give IAM users of your AWS account access to all AWS Import/Export actions or to a subset of them. For more information on the different AWS Import/Export actions, see Actions. Related IAM Documentation AWS Identity and Access Management (IAM) detail page What Is IAM? in the AWS Identity and Access Management documentation Managing IAM Policies in the AWS Identity and Access Management documentation Example IAM User Policies for AWS Import/Export This section shows three simple policies for controlling access to AWS Import/Export. AWS Import/Export does not support resource-level permissions, so in policies for Import/Export, the "Resource" element is always "*", which means all resources. Allow read-only access to the jobs created under the AWS account The following policy only allows access to the ListJobs and GetStatus actions, which are read-only actions. Copy { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "importexport:ListJobs", "importexport:GetStatus" ], "Resource": "*" } ] } Allow full access to all AWS Import/Export jobs created under the AWS account The following policy allows access to all AWS Import/Export actions. Copy { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "importexport:*", "Resource": "*" } ] } Deny a set of actions from an IAM user By default, all permissions are denied; if you do not explicitly grant access to Import/Export actions, users are not allowed to perform those actions. It's also possible to explicitly deny access to specific actions. This might be useful if one policy (or statement in a policy) grants access to a set of actions, but you want to exclude one or more individual actions. The following policy contains two statements. The first statement allows access to all the AWS Import/Export actions. The second statement explicitly denies access to UpdateJob. If new actions are added to AWS Import/Export, this policy automatically grants permission for those new actions because of the first statement. However, the user will always be denied access to the UpdateJob action, even if that action is explicitly allowed in another policy. Copy { "Version": "2012-10-17", "Statement": [ { "Effect":"Allow", "Action":"importexport:*" }, { "Effect":"Deny", "Action":"importexport:UpdateJob", "Resource": "*" } ] }
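As a usage sketch, once one of the policy documents above has been saved to a file, it can be attached as an inline policy with the AWS CLI; the user name, group name, policy name, and file name below are placeholders.

# Attach the read-only policy to an IAM user
aws iam put-user-policy \
    --user-name importexport-operator \
    --policy-name ImportExportReadOnly \
    --policy-document file://importexport-readonly.json

# Or attach it to an IAM group that the user belongs to
aws iam put-group-policy \
    --group-name data-transfer-team \
    --policy-name ImportExportReadOnly \
    --policy-document file://importexport-readonly.json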
http://docs.aws.amazon.com/AWSImportExport/latest/DG/iam-access-importexport-actions.html
2017-12-11T04:16:36
CC-MAIN-2017-51
1512948512121.15
[]
docs.aws.amazon.com
In This Section This guide will take you through the process of configuring a Direct Connection and route streaming Connect data into your terminal window. At the end of this walkthrough you will have successfully: - Generated an access token and started the data flow. - Constructed a cURL command that: - Connects to the Urban Airship Connect server. - Passes in no parameters. - Receives the raw firehose of events. Connect is an add-on service. Please talk to your Urban Airship Account Manager to enable Connect for your account. Even if you don’t require direct access to the Connect data stream, we still recommend setting up a Direct Connection. Configuring a Direct Connection will give you a good idea of how the Connect stream works, and you can simply delete the connection once you have completed the tutorial. While anyone should be able to follow these instructions, we do assume conceptual familiarity with the Connect service throughout the document. Configure a Direct Connection A Direct Connection is similar to other integrations, but rather than funneling mobile engagement data through an external service, you are routing it directly through your backend systems. The Direct Connection integration is designed for customers who are interested in building their own custom applications on top of the Connect data stream. Connect to the Test Server (optional) While completing this section is not required, connecting to the test server is an easy way to see what the Connect stream looks like. If you would like to proceed directly to integrating your application with Connect, skip to Generate an Access Token. If you don’t have an app in the Urban Airship system or just want to try a test connection, you can use our test server. This data is randomly generated and, while it attempts to provide an approximation of a mobile user’s potential lifecycle, it should not be assumed that anything in here is useful or truly similar to what will be happening in your app. This section assumes that you’re in a terminal window on Mac OSX or a Linux environment. - Paste the following code: curl -vv \ --compressed \ -u "sample_connection:sample_connection" \ -H "Accept: application/vnd.urbanairship+x-ndjson; version=3;" \ -d "{}" Wait a little bit, and you should see Connect events appear. Using this method, there might be a lag in between when events are delivered via Connect and when they show up in your terminal. This is due to how cURL processes compressed events and does not reflect how the events are actually being delivered. Generate an Access Token If you completed the previous section, you have a general idea of what the Connect stream looks like, albeit when filled with dummy data. Now we will generate a real stream of data by accessing your app’s Connect stream. - Choose your project from the Urban Airship dashboard, then click Connect in the navigational header. - Under Connections, click the pane for Direct Connection. Previously configured integrations are listed under Active Connections. - Name and configure the new integration: - Enter a user-friendly name and description. - Check the box if you’d like to send location events through this connection. - Click the Save & Create Access Token button. - Copy the App Key and Access Token and save in a secure location. You will use both in the next section, Connect to Your App’s Connect Stream. You will not be able to view the App Key and Access Token after leaving this screen, so copy and save them now. 
You may, however, add new tokens and delete existing tokens. See: Manage Connections. - Click the Save & Exit button. Connect to Your App's Connect Stream To complete this section, you need the App Key and Access Token created in the previous section, Generate an Access Token. This section assumes that you're in a terminal window on Mac OSX or a Linux environment. - Make request: Create a cURL request to the Connect API. Example Request: curl -vv \ --compressed \ -H "Authorization: Bearer <access-token>" \ -H "X-UA-Appkey: <app-key>" \ -H "Accept: application/vnd.urbanairship+x-ndjson; version=3;" \ -d "{}" Be sure to replace <app-key> and <access-token> with your actual App Key and Access Token. If you are already connected and seeing events on your screen, you are done! Feel free to skip to Next Steps. - Get Cookie: After making the request in step 1, you'll likely get an HTTP 307 status code and be disconnected. Part of this 307 response will include a Set Cookie header, as shown below. Copy the sXXXX portion of the header: < Set-cookie: SRV=sXXXX; path=/ - Make request with cookie: Now recreate the request you made in step 1, but with the addition of a Cookie header: -H "Cookie: SRV=sXXXX; path=/" \ where sXXXX is the string you copied in step 2. Here is the request made in step 1, but with the additional header: Example Request: curl -vv \ --compressed \ -H "Authorization: Bearer your-access-token" \ -H "X-UA-Appkey: your-app-key" \ -H "Accept: application/vnd.urbanairship+x-ndjson; version=3;" \ -H "Cookie: SRV=sXXXX; path=/" \ -d "{}" Again, be sure to replace your-app-key and your-access-token with your actual App Key and Access Token, and sXXXX with the text you copied in step 2. After executing the command in step 3, you should now be connected and receiving events. Congratulations! Next Steps Read About Connect Was this tutorial just a series of weird technical instructions that resulted in a stream of seemingly meaningless text flying across your terminal window? If that's the case, please check out About Connect for a conceptual overview of Connect. Explore the Connect API Experiment with additional filters and offsets to get a feel for what the API can do. See the Connect API documentation for details. Explore Other Integrations In addition to Direct Connections, we provide seamless integrations with a number of third-party providers. Click Integrations in the left-hand menu on this page to see the currently available Connect Integrations.
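Following up on the suggestion to experiment with filters and offsets: the request below reuses the connection from step 3 but sends a non-empty body instead of "{}". The body keys shown (filters, device_types, start) are assumptions for illustration only; consult the Connect API documentation for the actual request-body schema.

# Hypothetical filtered request -- body keys are illustrative, not verified
curl -vv \
  --compressed \
  -H "Authorization: Bearer <access-token>" \
  -H "X-UA-Appkey: <app-key>" \
  -H "Accept: application/vnd.urbanairship+x-ndjson; version=3;" \
  -H "Cookie: SRV=sXXXX; path=/" \
  -d '{"filters": [{"device_types": ["ios", "android"]}], "start": "LATEST"}'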
https://docs.urbanairship.com/connect/getting-started/
2017-12-11T04:08:01
CC-MAIN-2017-51
1512948512121.15
[]
docs.urbanairship.com
Recently Viewed Topics Plugins The Advanced Scan templates include Plugin options. Plugins options enables you to select security checks by Plugin Family or individual plugins checks. Click Plugin Family to enable (green) or disable (gray) the entire family. Select a family to display the list of its plugins. Individual plugins can be enabled or disabled to create very specific scans. A family with some plugins disabled turns blue and display Mixed to indicate only some plugins are enabled. Click on the plugin family to load the complete list of plugins, and allow for granular selection based on your scanning preferences. Select a specific Plugin Name to display the plugin output that displays as seen in a report. The plugin details include a Synopsis, Description, Solution, Plugin Information, and Risk Information. When a scan or policy is created and saved, it records all of the plugins that are initially selected. When new plugins are received via a plugin update, they are automatically enabled if the family they are associated with is enabled. If the family has been disabled or partially enabled, new plugins in that family are also automatically disabled. Caution: The Denial of Service family contains some plugins that could cause outages on a network if the Safe Checks option is not enabled, in addition to some useful checks that.
https://docs.tenable.com/cloud/Content/Scans/Plugins.htm
2017-12-11T04:06:49
CC-MAIN-2017-51
1512948512121.15
[]
docs.tenable.com
In This Section This tutorial expands on templated funnels to show how you can create your own funnel and path reports. These reports are powerful tools for analyzing the behaviors of your users. Funnel and path are available for Custom Events. Since most users are unable to complete a goal in a single session, our example funnel and path report shows goals achieved over a time range. What You’ll Do In this tutorial, you will: - Explore the Revenue funnel report. - Add filters and set their values. - Run the report. - Set additional filters. - Run the report again to see the path of users. Steps Show Goals Achieved Over a Time Range - Navigate to the Revenue dashboard. - In the Funnel report, click the gear icon and select Explore From Here. - Click the arrow next to Filters to display the full menu. - Edit the filters and values as desired, then click the Run button. To remove the mapping of the event names from the Visualization, click the Visualization row’s gear icon, click Style in the top row, and delete the text in the Series Labels boxes: See the Path of Users To see the path of users, e.g., abandoned cart, follow these steps after completing the steps in Show Goals Achieved Over a Time Range:
https://docs.urbanairship.com/insight/funnel-path-tutorial/
2017-12-11T04:08:56
CC-MAIN-2017-51
1512948512121.15
[]
docs.urbanairship.com
. There are a couple of differences, though. One lies in how you initialize the form. See this example: class UserProfileMultiForm(MultiForm): form_classes = { 'user': UserForm, 'profile': ProfileForm, } UserProfileMultiForm(initial={ 'user': { # User's initial data }, 'profile': { # Profile's initial data }, }) The initial data argument has to be a nested dictionary so that we can associate the right initial data with the right form class. The other major difference is that there is no direct field access because this could lead to namespace clashes. You have to access the fields from their forms. All forms are available using the key provided in form_classes: form = UserProfileMultiForm() # get the Field object form['user'].fields['name'] # get the BoundField object form['user']['name'] MultiForm, however, does all you to iterate over all the fields of all the forms. {% for field in form %} {{ field }} {% endfor %} If you are relying on the fields to come out in a consistent order, you should use an OrderedDict to define the form_classes. from collections import OrderedDict class UserProfileMultiForm(MultiForm): form_classes = OrderedDict(( ('user', UserForm), ('profile', ProfileForm), )) Working with ModelForms¶ MultiModelForm adds ModelForm support on top of MultiForm. That simply means that it includes support for the instance parameter in initialization and adds a save method. class UserProfileMultiForm(MultiModelForm): form_classes = { 'user': UserForm, 'profile': ProfileForm, } user = User.objects.get(pk=123) UserProfileMultiForm(instance={ 'user': user, 'profile': user.profile, }) Working with CreateView¶ It is pretty easy to use MultiModelForms with Django’s CreateView, usually you will have to override the form_valid() method to do some specific saving functionality. For example, you could have a signup form that created a user and a user profile object all in one: # forms.py from django import forms from authtools.forms import UserCreationForm from betterforms.multiform import MultiModelForm from .models import UserProfile class UserProfileForm(forms.ModelForm): class Meta: fields = ('favorite_color',) class UserCreationMultiForm(MultiModelForm): form_classes = { 'user': UserCreationForm, 'profile': UserProfileForm, } # views.py from django.views.generic import CreateView from django.core.urlresolvers import reverse_lazy from django.shortcuts import redirect from .forms import UserCreationMultiForm class UserSignupView(CreateView): form_class = UserCreationMultiForm success_url = reverse_lazy('home') def form_valid(self, form): # Save the user first, because the profile needs a user before it # can be saved. user = form['user'].save() profile = form['profile'].save(commit=False) profile.user = user profile.save() return redirect(self.get_success_url()) Note In this example, we used the UserCreationForm from the django-authtools package just for the purposes of brevity. You could of course use any ModelForm that you wanted to. Of course, we could put the save logic in the UserCreationMultiForm itself by overriding the MultiModelForm.save() method. 
class UserCreationMultiForm(MultiModelForm): form_classes = { 'user': UserCreationForm, 'profile': UserProfileForm, } def save(self, commit=True): objects = super(UserCreationMultiForm, self).save(commit=False) if commit: user = objects['user'] user.save() profile = objects['profile'] profile.user = user profile.save() return objects If we do that, we can simplify our view to this: class UserSignupView(CreateView): form_class = UserCreationMultiForm success_url = reverse_lazy('home') Working with UpdateView¶ Working with UpdateView likewise is quite easy, but you most likely will have to override the django.views.generic.edit.FormMixin.get_form_kwargs method in order to pass in the instances that you want to work on. If we keep with the user/profile example, it would look something like this: # forms.py from django import forms from django.contrib.auth import get_user_model from betterforms.multiform import MultiModelForm from .models import UserProfile User = get_user_model() class UserEditForm(forms.ModelForm): class Meta: fields = ('email',) class UserProfileForm(forms.ModelForm): class Meta: fields = ('favorite_color',) class UserEditMultiForm(MultiModelForm): form_classes = { 'user': UserEditForm, 'profile': UserProfileForm, } # views.py from django.views.generic import UpdateView from django.core.urlresolvers import reverse_lazy from django.shortcuts import redirect from django.contrib.auth import get_user_model from .forms import UserEditMultiForm User = get_user_model() class UserSignupView(UpdateView): model = User form_class = UserEditMultiForm success_url = reverse_lazy('home') def get_form_kwargs(self): kwargs = super(UserSignupView, self).get_form_kwargs() kwargs.update(instance={ 'user': self.object, 'profile': self.object.profile, }) return kwargs Working with WizardView¶ MultiForms also support the WizardView classes provided by django-formtools (or Django before 1.8), however you must set a base_fields attribute on your form class. # forms.py from django import forms from betterforms.multiform import MultiForm class Step1Form(MultiModelForm): # We have to set base_fields to a dictionary because the WizardView # tries to introspect it. base_fields = {} form_classes = { 'user': UserEditForm, 'profile': UserProfileForm, } Then you can use it like normal. # views.py try: from django.contrib.formtools.wizard.views import SessionWizardView except ImportError: # Django >= 1.8 from formtools.wizard.views import SessionWizardView from .forms import Step1Form, Step2Form class MyWizardView(SessionWizardView): def done(self, form_list, form_dict, **kwargs): step1form = form_dict['1'] # You can get the data for the user form like this: user = step1form['user'].save() # ... wizard_view = MyWizardView.as_view([Step1Form, Step2Form]) The reason we have to set base_fields to a dictionary is that the WizardView does some introspection to determine if any of the forms accept files and then it makes sure that the WizardView has a file_storage on it. By setting base_fields to an empty dictionary, we can bypass this check. Warning If you have have any forms that accept Files, you must configure the file_storage attribute for your WizardView. API Reference¶ - class betterforms.multiform. MultiForm[source]¶ The main interface for customizing MultiFormsis through overriding the form_classesclass attribute. 
Once a MultiForm is instantiated, you can access the child form instances with their names like this: >>> class MyMultiForm(MultiForm): form_classes = { 'foo': FooForm, 'bar': BarForm, } >>> forms = MyMultiForm() >>> foo_form = forms['foo'] You may also iterate over a multiform to get all of the fields for each child instance. MultiForm API The following attributes and methods are made available for customizing the instantiation of multiforms. __init__(*args, **kwargs)[source]¶ The __init__()is basically just a pass-through to the children form classes’ initialization methods, the only thing that it does is provide special handling for the initialparameter. Instead of being a dictionary of initial values, initialis now a dictionary of form name, initial data pairs. UserProfileMultiForm(initial={ 'user': { # User's initial data }, 'profile': { # Profile's initial data }, }) form_classes¶ This is a dictionary of form name, form class pairs. If the order of the forms is important (for example for output), you can use an OrderedDict instead of a plain dictionary. get_form_args_kwargs(key, args, kwargs)[source]¶ This method is available for customizing the instantiation of each form instance. It should return a two-tuple of args and kwargs that will get passed to the child form class that corresponds with the key that is passed in. The default implementation just adds a prefix to each class to prevent field value clashes. Form API The following attributes and methods are made available for mimicking the FormAPI. - class betterforms.multiform. MultiModelForm[source]¶ MultiModelFormdiffers from MultiFormonly in that adds special handling for the instanceparameter for initialization and has a save()method. __init__(*args, **kwargs)[source]¶ MultiModelForm'sinitialization method provides special handling for the instanceparameter. Instead of being one object, the instanceparameter is expected to be a dictionary of form name, instance object pairs. UserProfileMultiForm(instance={ 'user': user, 'profile': user.profile, }) save(commit=True)[source]¶ The save()method will iterate through the child classes and call save on each of them. It returns an OrderedDict of form name, object pairs, where the object is what is returned by the save method of the child form class. Like the ModelForm.savemethod, if commitis False, MultiModelForm.save()will add a save_m2mmethod to the MultiModelForminstance to aid in saving the many-to-many relations later. Addendum About django-multiform¶ There is another Django app that provides a similar wrapper called django-multiform that provides essentially the same features as betterform’s MultiForm. I searched for an app that did this feature when I started work on betterform’s version, but couldn’t find one. I have looked at django-multiform now and I think that while they are pretty similar, but there are some differences which I think should be noted: - django-multiform’s MultiFormclass actually inherits from Django’s Form class. I don’t think it is very clear if this is a benefit or a disadvantage, but to me it seems that it means that there is Form API that exposed by django-multiform’s MultiFormthat doesn’t actually delegate to the child classes. - I think that django-multiform’s method of dispatching the different values for instance and initial to the child classes is more complicated that it needs to be. Instead of just accepting a dictionary like betterform’s MultiFormdoes, with django-multiform, you have to write a dispatch_init_initial method.
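As a small usage sketch of the get_form_args_kwargs hook described in the API reference above, the multiform below forwards an extra keyword argument to just one of its child forms. The assumption here is that UserProfileForm (from the earlier examples) has been written to accept a user keyword argument, which is not part of the original example.

from betterforms.multiform import MultiModelForm

class UserProfileMultiForm(MultiModelForm):
    form_classes = {
        'user': UserEditForm,
        'profile': UserProfileForm,
    }

    def __init__(self, *args, **kwargs):
        # Pop our extra argument before the base class builds the child forms.
        self._request_user = kwargs.pop('user', None)
        super(UserProfileMultiForm, self).__init__(*args, **kwargs)

    def get_form_args_kwargs(self, key, args, kwargs):
        # Keep the default behaviour (per-form prefixes), then add a kwarg
        # only for the 'profile' child form.
        fargs, fkwargs = super(UserProfileMultiForm, self).get_form_args_kwargs(
            key, args, kwargs)
        if key == 'profile':
            fkwargs['user'] = self._request_user
        return fargs, fkwargs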
https://django-betterforms.readthedocs.io/en/latest/multiform.html
2019-08-17T13:00:49
CC-MAIN-2019-35
1566027313259.30
[]
django-betterforms.readthedocs.io
The Payouts tab shows the money that you have paid, or have yet to pay, to affiliates for their referrals. You can also view the history of payouts you have made to affiliates in the past. The Pending Payments tab shows the payments you have yet to make to affiliates who have made successful referrals. You can filter the pending payments by affiliate; a total of pending payments is displayed above the table, and the table itself shows the name of the affiliate, the amount to be paid, the affiliate's payment details, and the available actions. You can also download the pending payments data and make payments in bulk. The Payment History tab shows the history of payments you have made to affiliates in the past. You can search for payments made to particular affiliates or during specific periods. A running total of all payments made so far appears above the table. The table shows data such as the date of the payment, the name of the affiliate who was paid, the amount that was paid, the payment details used for paying the affiliate, and the available actions. It is advisable to keep your payouts regular and in order, both to maintain a healthy relationship with affiliates and to run a successful affiliate campaign.
https://docs.goaffpro.com/goaffpro-admin/initial-startup/payouts
2019-08-17T13:51:57
CC-MAIN-2019-35
1566027313259.30
[]
docs.goaffpro.com
Configure Your Scratch Orgs with New Features We’re providing more add-on features for scratch orgs. Where: This change applies to Lightning Experience, Salesforce Classic, and all versions of the Salesforce app in Developer, Enterprise, Group, and Professional editions. How: Add the features to your scratch org definition file. - ChatterAnswers - CustomerNotificationType - DevelopmentWave - EinsteinAssistant - Pardot - TerritoryManagement - TimeSheetTemplateSettings - UiPlugin See Scratch Org Configuration Values in the Salesforce DX Developer Guide for the complete list of supported features. Deprecated Scratch Org Features - CustomApps (replaced by AddCustomApps:<value>) - CustomTabs (replaced by AddCustomTabs:<value>)
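A short sketch of what that looks like in a scratch org definition file (typically config/project-scratch-def.json in a Salesforce DX project); the org name and the particular features chosen are arbitrary examples, and the quantity given to AddCustomApps/AddCustomTabs is a placeholder.

{
  "orgName": "Sample Feature Test Org",
  "edition": "Developer",
  "features": [
    "EinsteinAssistant",
    "Pardot",
    "AddCustomApps:10",
    "AddCustomTabs:10"
  ]
}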
https://docs.releasenotes.salesforce.com/en-us/spring19/release-notes/rn_scratch_org_features.htm
2019-08-17T13:39:44
CC-MAIN-2019-35
1566027313259.30
[]
docs.releasenotes.salesforce.com
Copying Business Calendar You can copy a business calendar and paste that copy in either the same, or a different, rule package. Follow these steps to copy a business calendar: - Navigate to the rule package to which the business calendar belongs in the Explorer Tree (verify that you have selected the correct Tenant from the Tenant drop-down list). Select Business Calendars under the rule package in the Explorer Tree. - Locate the business calendar in the list and click Copy Calendar. - If you want the copy to be in the same rule package, click Paste Calendar. Enter a name for the new business calendar. - If you want the copy to be in a different rule package, locate that rule package in the explorer tree, and select Business Calendars under that rule package. Click Paste Calendar. Enter a name for the new business calendar. - Update the information as needed. Click Save. Refer to Creating Business Calendars for information about the various fields and configuring business calendar rules.
https://docs.genesys.com/Documentation/GRS/8.5.0/GRATHelp/CopyingBC
2019-08-17T12:33:36
CC-MAIN-2019-35
1566027313259.30
[]
docs.genesys.com
Running Constellation Installation¶ Prerequisites¶ - Install supporting libraries: - Ubuntu: apt-get install libdb-dev libleveldb-dev libsodium-dev zlib1g-dev libtinfo-dev - Red Hat: dnf install libdb-devel leveldb-devel libsodium-devel zlib-devel ncurses-devel - MacOS: brew install berkeley-db leveldb libsodium Downloading precompiled binaries¶ Constellation binaries for most major platforms can be downloaded here. Installation from source¶ First time only: Install Stack: - Linux: curl -sSL | sh - MacOS: brew install haskell-stack First time only: run stack setupto install GHC, the Glasgow Haskell Compiler Run stack install Generating keys¶ - To generate a key pair “node”, run constellation-node --generatekeys=node If you choose to lock the keys with a password, they will be encrypted using a master key derived from the password using Argon2id. This is designed to be a very expensive operation to deter password cracking efforts. When constellation encounters a locked key, it will prompt for a password after which the decrypted key will live in memory until the process ends. Running¶ - Run constellation-node <path to config file>or specify configuration variables as command-line options (see constellation-node --help) Please refer to the Constellation client Go library for an example of how to use Constellation.
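A sketch of a constellation configuration file for the "Running" step above, wired to the "node" key pair generated in the previous step. The key names below follow the sample configurations that ship with Quorum's examples; the URLs, ports, and storage path are placeholders, so check constellation-node --help for the authoritative option list.

# constellation.conf (illustrative)
url = "http://127.0.0.1:9001/"                 # this node's externally reachable URL
port = 9001                                    # port to listen on
socket = "constellation.ipc"                   # IPC socket used by the local Quorum node
othernodes = ["http://127.0.0.1:9000/"]        # at least one other node to bootstrap from
publickeys = ["node.pub"]                      # produced by --generatekeys=node
privatekeys = ["node.key"]
storage = "dir:storage"                        # where payloads are persisted

It would then be started with: constellation-node constellation.conf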
https://docs.goquorum.com/en/latest/Privacy/Constellation/Installation%20&%20Running/
2019-08-17T13:19:46
CC-MAIN-2019-35
1566027313259.30
[]
docs.goquorum.com
Rocky Series Release Notes
1.11.0
New Features
- Watcher services can be launched in HA mode. From now on, the Watcher Decision Engine and Watcher Applier services may be deployed on different nodes to run in active-active or active-passive mode. Any ONGOING audits or action plans will be CANCELLED if the service they are executed on is restarted.
1.10.0
New Features
- A feature to exclude instances from the audit scope based on project_id has been added. Instances belonging to a particular OpenStack project can now be excluded by defining the scope in audit templates.
- Added a strategy for maintaining one compute node without interrupting the user's applications. If a backup node is given, the strategy first migrates all instances from the maintenance node to the backup node. If no backup node is provided, it migrates all instances, relying on nova-scheduler.
1.9.0
New Features
- Audits now have a 'name' field, which is friendlier to end users. An audit's name can't exceed 63 characters.
- Watcher now builds the compute CDM from the whole cluster scope, including all instances, and filters out excluded instances when migrating during the audit.
- Watcher can now calculate multiple global efficacy indicators during an audit's execution. Global efficacy can now be calculated for many resource types (such as volumes, instances, and networks) if the strategy supports efficacy indicators.
- Added notifications about cancelling an action plan. Event-based plugins now know when an action plan cancel has started and completed.
- Instance cold migration logic now uses the Nova migrate server (migrate action) API, which supports the host option since version 2.56.
Upgrade Notes
- The Nova API version is now set to 2.56 by default. This is required for the migrate action with migration type cold and the destination_node parameter to work.
https://docs.openstack.org/releasenotes/watcher/rocky.html
2019-08-17T12:54:39
CC-MAIN-2019-35
1566027313259.30
[]
docs.openstack.org
Default Account Access
This report provides a six-month rolling view of attempts to access cardholder systems using default user accounts. This report looks at all activity by accounts categorized in the Identity table with category=default. A default list of accounts is provided in the Identity table, which can be edited using the List and Lookups configuration dashboard.
Relevant data sources
Relevant data sources for this report include Windows Security, Unix SSH, and any other application, system, or device that produces authentication data. The report looks at data in the access_summary summary index.
How to configure this report
1. Index authentication data from a device, application, or system in Splunk software.
2. Map the data to the following Common Information Model fields: host, action, app, src, src_user, dest, user
3. Set the category column of the Identity table to default for all accounts that are considered default accounts.
Report description
The data in the Default Account Access report is populated by an ad hoc search that runs against the access_summary summary index. This index is created by the Access -- All Authentication -- Summary Gen saved search, which is a post-process task of the Access -- All Authentication -- Base saved search. This search runs on a 15 minute cycle and looks at 15 minutes of data.
Note: The report window stops at 5 minutes ago because some data sources may not have provided complete data in a more recent time frame.
Useful searches
- You can modify the saved search to exclude particular users.
- The Access -- All Authentication -- Summary Gen search is a post-process task. You can find the details of this search in $SPLUNK_HOME/etc/apps/SA-Access
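As a small, hedged illustration of step 3 above, the Python sketch below prepares identity entries whose category column is set to default. The file name and column names are assumptions for illustration only; in practice, edit the Identity table through the List and Lookups configuration dashboard or follow your deployment's actual identity lookup schema.

import csv

# Hypothetical default-account list; adjust to your environment.
default_accounts = ["admin", "root", "guest", "administrator"]

with open("identities_default.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["identity", "category"])   # assumed column names
    for account in default_accounts:
        writer.writerow([account, "default"])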
https://docs.splunk.com/Documentation/PCI/2.1.1/Install/DefaultAccountAccess
2019-08-17T13:37:20
CC-MAIN-2019-35
1566027313259.30
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
TrueSight App Visibility Manager 10.7.00 offers the following key features and enhancements:

The following components include updated support to keep in line with the latest supported technologies.
- Operating systems for server components: Support for App Visibility server components now includes the following operating systems, which coordinates with the latest supported versions.
- Application servers and Java versions: Support for the App Visibility agent for Java now includes the following application servers and Java versions.
- TEA Agents.
For more information about system requirements, see System requirements for App Visibility Manager and TEA Agent system requirements.

Newer operating system versions were added, and older operating system versions were removed, for the Real User Analyzer and Collector and the Cloud Probe. The Windows OS support only applies to the Cloud Probe component. (* These OS versions are no longer supported by the vendor.) Also, the Network Security Services (NSS) libraries were added to the required software list.

You can now migrate the BMC Real End User Experience Monitoring v2.7 Real User Analyzer and Collector component database and configuration information to the new Software Edition. Migrating is especially helpful because it enables you to reuse configured information from the earlier version, including the Real User Analyzer's v2.7 summary data. This saves time in getting started with the latest Real User Analyzer or Collector components. For more information, see Migrating from BMC Real End User Experience Monitoring v2.7 to Software Edition. There is also new information for upgrading the Real User Analyzer, Collector, and Cloud Probe components to the new v10.7. For more information, see Upgrading the Real User Experience Monitoring Software Edition.

The following security vulnerabilities were corrected in Real End User Experience Monitoring components.

The following enhancements to the App Visibility agent for .NET provide expanded data visualization for your .NET Framework applications. The features were added to version 10.5.00.001 (Fix Pack 1), and incorporated in this version.

- TrueSight Synthetic Monitor now includes metric rules for monitoring the metrics of synthetic executions. Events are generated by these rules. For more details, see Managing synthetic metric rules.
- The Synthetic Health view shows the current health and a 24-hour overview of your application based on its synthetic executions and the metric rules defined for it. You can also view historical synthetic health data by selecting a different day. The Synthetic Health view is updated every minute. Data is displayed for all transactions as they run, and is not tied to any fixed intervals (TOM-14392, TOM-20390, TOM-24470). For more details, see Investigating application issues reported by synthetic health events.
- The Synthetic Health Analysis view shows detailed synthetic health data for a single hour from the Synthetic Health view. The view can be filtered to show the health based on selected metrics, locations, Execution Plans, and transactions. For more details, see Analyzing synthetic health details.
- Reports based on the results of synthetic executions can be viewed in the user interface, or exported as PDFs or as raw data in a CSV file. Reports can be generated for a single day, a week, or a month (TOM-13611), and also include comparisons to a parallel time period. For more details, see Viewing synthetic health reports.
- In TrueSight Synthetic Monitor you can now see the results of the most recent executions of an Execution Plan directly from the Execution Plan settings. For more details, see To view the recent executions for an Execution Plan.
- You can integrate App Visibility Manager with Real End User Experience Monitoring Software Edition to access code-level diagnostics data about a transaction from a monitored browser session. For more information, see Integrating end-user experience monitoring with deep-dive application and custom reports.
- As part of the change to HTML-based on-screen reports, there are some changes in the report capabilities. Some report types included two Y axes; for example, the Latency report has "Latency (ms)" on the left side and "Requests" on the right side. In the old implementation, the user could zoom and scroll both axes independently, and each zoom/scroll only changed the parts of the graph that corresponded to that axis. In the new implementation, the user can zoom and scroll only using the axis on the right side of the report, and each zoom/scroll changes all parts of the graph, not just the graph elements that correspond to that axis.
- BMC Real End User Experience Monitoring Software Edition can report information through SNMP to a managing system in your network. This passive end-user experience monitoring provides an SNMP MIB that enables your SNMP manager to get (read-only) management data from the end-user experience component. To set up Real End User Experience Monitoring as a managed system, you enable the service, configure a local SNMP agent, and then configure the SNMP MIB or SNMP traps.
- For BMC Real End User Experience Monitoring Software Edition, when adding a new extraction rule to a Trace object custom field, you must use only the documented Data Source field options after selecting the rule format type.
- The title verification in the Title Checker of the URLChecker script supplied by BMC includes improved handling for tab and space characters (TOM-22934).
- Execution Plan names now support up to 256 characters (TOM-23015).
- You can now reclassify Execution errors and Accuracy errors as Availability errors in Synthetic Monitor (TOM-23305, TOM-25956). For more details, see Reclassifying Synthetic Monitor Execution errors and Accuracy errors as Availability errors.
- The process for downloading execution logs in TrueSight Synthetic Monitor has been simplified to remove unnecessary steps (TOM-13551).
- Silk Performer version 17.5 now includes a Project Version field that is used in TrueSight Synthetic Monitor to track the version numbers of your scripts.
- Labels used in the user interface were modified to more closely adhere to industry standards (TOM-24464).
- Notes in the TEA Agent configuration file were expanded to clearly identify parameters that should not be modified (TOM-24469).
- Connection handling between App Visibility and the Presentation Server was enhanced to avoid stuck requests (TOM-28685).
- The event payload for events in TrueSight Synthetic Monitor has been expanded to populate more event slots and include more data.
- Execution logs for scripts created with Silk Performer version 17.5 generate .tlz files instead of .xlg files. Scripts that were created with previous versions of Silk Performer will continue to generate .xlg files. File contents and function are identical, and both are fully supported.
- The web beacon JavaScript library code was revised to better handle non-standard ports. The JavaScript is also in the webBeacon_toolkit.zip that can be downloaded from the Toolkits table in the PDFs topic.

In earlier versions of the TrueSight Operations Management solution, the online documentation was combined into a single space, which sometimes caused confusion about the use cases that applied to specific component products. The documentation is now divided into documentation spaces that are aligned to the individual products: TrueSight Operations Management Deployment, TrueSight Presentation Server, and TrueSight App Visibility Manager. A separate space now documents how to deploy the TrueSight Operations Management solution. You can use the TrueSight product documentation page to locate any topic in the component-based documentation.
https://docs.bmc.com/docs/exportword?pageId=655615267
2019-08-17T14:01:15
CC-MAIN-2019-35
1566027313259.30
[]
docs.bmc.com
Bind alerts to a process CI
The association between an application process and a CI type is helpful for alert remediation. The association allows an event rule transform to generate an alert and bind the correct CI type. A CI type can map to more than one business process. For example, the cmdb_ci_apache_web_server CI type can bind to http.* and apache.* business processes.
Before you begin
Role required: evt_mgmt_admin or evt_mgmt_operator
About this task
You can define a pattern of processes that matches the sa_process_name variable. The sa_process_name variable must be filled with the process name using an event rule. The Process to CI Type Mapping table is used for process binding. The table includes default patterns. The pattern variables are extracted from this table according to the host-related CI type. The table contains these columns for process binding:
- The Process field contains the process pattern. Use any combination of free text, regex, and pattern.
- The ci_type field contains the CI type in CMDB.
When binding an alert to a process, Event Management first retrieves all the processes that are running on the node (the alert resolving node). Then, Event Management matches every related process pattern with the name that is defined in the sa_process_name variable. Regular expressions are supported.
For example, if sa_process_name contains /ora92/pmon_U2P3_db for an Oracle instance:
- The sa_process_name variable must be filled by the event rule to store the process name.
- The Process to CI Type Mapping table has the following record: Process: ${oracle_home}/pmon_${instance}, CI type: cmdb_ci_db_ora_instance.
When binding the process, Event Management finds that a process of type cmdb_ci_db_ora_instance is running on the node. Event Management matches /ora92/pmon_U2P3_db from the sa_process_name variable with the process pattern from the Process to CI Type Mapping table for type cmdb_ci_db_ora_instance. If the Oracle instance has ${oracle_home} = ora92 and ${instance} = U2P3_db extracted from the database, then there is a match and this Oracle process is bound in the alert cmdb_ci: /ora92/pmon_U2P3_db = ${oracle_home}/pmon_${instance}.
You can assign custom-developed business process applications to CI types. The following default CI types automatically bind to application process names during alert generation. The CI types have global domain availability. The Process to CI Type Mapping [em_binding_process_map] table stores the mappings for the binding process.
Table 1. Default mappings
Default CI Type | Application Process Name
cmdb_ci_apache_web_server | httpd.*, apache.*
cmdb_ci_appl_ibm_wmq | runmqlsr.*
cmdb_ci_app_server_jboss | org.jboss.*
cmdb_ci_app_server_tomcat | tomcat.*
cmdb_ci_app_server_weblogic | weblogic.Server, beasvc.exe
cmdb_ci_app_server_websphere | com.ibm.ws.runtime.WsServer
cmdb_ci_db_db2_instance | db2sysc.*
cmdb_ci_db_mssql_server | sqlServer.exe
cmdb_ci_db_mysql_instance | mysqld.*
cmdb_ci_db_ora_instance | pmon_${instance}, *tnslsnr.*
cmdb_ci_endpoint_iis | w3wp.exe
cmdb_ci_nginx_web_server | nginx.*
Procedure
1. Navigate to Event Management > Settings > Process to CI Type Mapping.
2. Click New or double-click a CI type.
3. Type the process Name, and then select a CI type.
4. Click Submit.
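To make the matching behavior concrete, here is a rough Python sketch of how a pattern such as ${oracle_home}/pmon_${instance} could be matched against a process name like /ora92/pmon_U2P3_db. This is only an illustration of the idea, not the actual Event Management implementation; translating ${...} placeholders into named regex groups is an assumption made for the example.

import re

def pattern_to_regex(pattern: str) -> str:
    """Translate a ${var} process pattern into a named-group regex (illustrative only)."""
    escaped = re.escape(pattern)
    # re.escape turns ${oracle_home} into \$\{oracle_home\}; swap each
    # escaped placeholder for a named capture group.
    return "^" + re.sub(r"\\\$\\\{(\w+)\\\}", r"(?P<\1>.+)", escaped) + "$"

pattern = "${oracle_home}/pmon_${instance}"
process_name = "/ora92/pmon_U2P3_db"

match = re.match(pattern_to_regex(pattern), process_name)
if match:
    # Prints {'oracle_home': '/ora92', 'instance': 'U2P3_db'}
    print(match.groupdict())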
Related reference: Alert binding procedures
https://docs.servicenow.com/bundle/jakarta-it-operations-management/page/product/event-management/task/t_EMAssignProcessCIType.html
2019-08-17T13:17:21
CC-MAIN-2019-35
1566027313259.30
[]
docs.servicenow.com
gp_version_at_initdb The gp_version_at_initdb table is populated on the master and each segment in the Greenplum Database system. It identifies the version of Greenplum Database used when the system was first initialized. This table is defined in the pg_global tablespace, meaning it is globally shared across all databases in the system.
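A quick way to inspect this catalog table is with any PostgreSQL client; the Python sketch below uses psycopg2. The connection parameters are placeholders, and the column layout depends on your Greenplum version.

import psycopg2

# Placeholder connection parameters; adjust for your cluster.
conn = psycopg2.connect(host="mdw", port=5432, dbname="postgres", user="gpadmin")
try:
    with conn.cursor() as cur:
        # Table name comes from the reference text above.
        cur.execute("SELECT * FROM gp_version_at_initdb;")
        for row in cur.fetchall():
            print(row)
finally:
    conn.close()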
https://gpdb.docs.pivotal.io/5110/ref_guide/system_catalogs/gp_version_at_initdb.html
2019-08-17T13:25:32
CC-MAIN-2019-35
1566027313259.30
[]
gpdb.docs.pivotal.io
AR/VR primer
This tutorial gives you a springboard into the world of AR and VR in the Godot game engine.
A new architecture was introduced in Godot 3 called the AR/VR Server. On top of this architecture, specific implementations are available as interfaces, most of which are plugins based on GDNative. This tutorial focuses purely on the core elements abstracted by the core architecture. This architecture has enough features for you to create an entire VR experience that can then be deployed for various interfaces. However, each platform often has some unique features that are impossible to abstract. Such features will be documented on the relevant interfaces and fall outside of the scope of this primer.
AR/VR server
When Godot starts, each available interface will make itself known to the AR/VR server. GDNative interfaces are set up as singletons; as long as they are added to the list of GDNative singletons in your project, they will make themselves known to the server.
You can use the function get_interfaces() to return a list of available interfaces, but for this tutorial, we're going to use the native mobile VR interface in our examples. This interface is a straightforward implementation that uses the 3DOF sensors on your phone for orientation and outputs a stereoscopic image to the screen. It is also available in the Godot core and outputs to screen on desktop, which makes it ideal for prototyping or a tutorial such as this one.
To enable an interface, you execute the following code:
GDScript:
var arvr_interface = ARVRServer.find_interface("Native mobile")
if arvr_interface and arvr_interface.initialize():
    get_viewport().arvr = true
C#:
var arvrInterface = ARVRServer.FindInterface("Native mobile");
if (arvrInterface != null && arvrInterface.Initialize())
{
    GetViewport().Arvr = true;
}
This code finds the interface we wish to use, initializes it and, if that is successful, binds the main viewport to the interface. This last step gives some control over the viewport to the interface, which automatically enables things like stereoscopic rendering on the viewport.
For our mobile VR interface, and any interface where the main input is directly displayed on screen, the main viewport needs to be the viewport where arvr is set to true. But for interfaces that render on an externally attached device, you can use a secondary viewport. In the latter case, a viewport that shows its output on screen will show an undistorted version of the left eye, while showing the fully processed stereoscopic output on the device.
Finally, you should only initialize an interface once; switching scenes and reinitializing interfaces will just introduce a lot of overhead. If you want to turn the headset off temporarily, just disable the viewport or set arvr to false on the viewport. In most scenarios though, you wouldn't disable the headset once you're in VR, as this can be disconcerting to the gamer.
New AR/VR nodes
The following new node types support AR and VR in Godot:
- ARVROrigin, our origin point in the world
- ARVRCamera, a special subclass of the camera, which is positionally tracked
- ARVRController, which tracks the location of a controller
- ARVRAnchor, an anchor point for an AR implementation mapping a real world location into your virtual world
The first two must exist in your scene for AR/VR to work and this tutorial focuses purely on them.
ARVROrigin is an important node: you must have one and only one of these somewhere in your scene. This node maps the center of your real world tracking space to a location in your virtual world. Everything else is positionally tracked in relation to this point.
Where this point lies exactly differs from one implementation to another, but the best example to understand how this node works is to take a look at a room scale location. While we have functions to adjust the point to center it on the player, by default the origin point will be the center location of the room you are in. As you physically walk around the room, the location of the HMD is tracked in relation to this center position and the tracking is mirrored in the virtual world.
To keep things simple, when you physically move around your room, the ARVR Origin point stays where it is; the position of the camera and controllers will be adjusted according to your movements. When you move through the virtual world, either through controller input or when you implement a teleport system, it is the position of the origin point which you will have to adjust.
ARVRCamera is the second node that must always be a part of your scene and it must always be a child node of your origin node. It is a subclass of Godot's normal Camera node. Note that, for our native mobile VR implementation, there is no positional tracking; only the orientation of the phone, and by extension the HMD, is tracked. This implementation artificially places the camera at a height (Y) of 1.85.
Conclusion: your minimum setup in your scene to make AR or VR work should look like this:
And that's all you need to get started.
In AR and VR, objects in your virtual world are mapped to things in the real world, so by default you should assume that 1 unit = 1 meter. The AR/VR server, however, has a property that, for convenience, is also exposed on the ARVROrigin node called world scale. For instance, setting this to a value of 10 changes our coordinate system so 10 units = 1 meter.
Performance is another thing that needs to be carefully considered. VR especially taxes your game a lot more than most people realise. For mobile VR, you have to be extra careful here, but even for desktop games, there are three factors that make life extra difficult:
- You are rendering stereoscopically: two images for the price of one. While this does not exactly double the workload, and with things in the pipeline such as supporting the new MultiView OpenGL extension in mind, there still is an extra workload in rendering images for both eyes.
- A normal game will run acceptably at 30fps and ideally manages 60fps. That gives you a big range to play with between lower end and higher end hardware. For any HMD application of AR or VR, however, 60fps is the absolute minimum and you should target your games to run at a stable 90fps to ensure your users don't get motion sickness right off the bat.
- The high FOV and related lens distortion effect require many VR experiences to render at double the resolution. A VIVE may only have a resolution of 1080x1200 per eye, but we're rendering each eye at 2160x2400 as a result. This is less of an issue for most AR applications.
All in all, the workload your GPU has in comparison with a normal 3D game is a fair amount higher. While things are in the pipeline to improve this, such as MultiView and foveated rendering, these aren't supported on all devices. This is why you see many VR games using a more stylized art style, and if you pay close attention to those VR games that go for realism, you'll probably notice they're a bit more conservative on the effects or use some good old optical trickery.
https://docs.godotengine.org/en/latest/tutorials/vr/vr_primer.html
2019-08-17T13:07:44
CC-MAIN-2019-35
1566027313259.30
[array(['../../_images/minimum_setup.png', '../../_images/minimum_setup.png'], dtype=object)]
docs.godotengine.org
Tutorial: Deploy a container application to Azure Container Instances This is the final tutorial in a three-part series. Earlier in the series, a container image was created and pushed to Azure Container Registry. This article completes the series by deploying the container to Azure Container Instances. In this tutorial, you: - Deploy the container from Azure Container Registry to Azure Container Instances - View the running application in the browser - Display the container's logs Before you begin You must satisfy the following requirements to complete this tutorial: Azure CLI: You must have Azure CLI version 2.0.29 or later installed on your local computer. Run az --version to find the version. If you need to install or upgrade, see Install the Azure CLI. Docker: This tutorial assumes a basic understanding of core Docker concepts like containers, container images, and basic docker commands. For a primer on Docker and container basics, see the Docker overview. Docker Engine: To complete this tutorial, you need Docker Engine installed locally. Docker provides packages that configure the Docker environment on macOS, Windows, and Linux. Important Because the Azure Cloud shell does not include the Docker daemon, you must install both the Azure CLI and Docker Engine on your local computer to complete this tutorial. You cannot use the Azure Cloud Shell for this tutorial. Deploy the container using the Azure CLI In this section, you use the Azure CLI to deploy the image built in the first tutorial and pushed to Azure Container Registry in the second tutorial. Be sure you've completed those tutorials before proceeding. Get registry credentials When you deploy an image that's hosted in a private container registry like the one created in the second tutorial, you must supply credentials to access the registry. As shown in Authenticate with Azure Container Registry from Azure Container Instances, a best practice for many scenarios is to create and configure an Azure Active Directory service principal with pull permissions to your registry. See that article for sample scripts to create a service principal with the necessary permissions. Take note of the service principal ID and service principal password. You use these credentials when you deploy the container. You also need the full name of the container registry login server (replace <acrName> with the name of your registry): az acr show --name <acrName> --query loginServer Deploy container Now, use the az container create command to deploy the container. Replace <acrLoginServer> with the value you obtained from the previous command. Replace <service-principal-ID> and <service-principal-password> with the service principal ID and password that you created to access the registry. Replace <aciDnsLabel> with a desired DNS name. az container create --resource-group myResourceGroup --name aci-tutorial-app --image <acrLoginServer>/aci-tutorial-app:v1 --cpu 1 --memory 1 --registry-login-server <acrLoginServer> --registry-username <service-principal-ID> --registry-password <service-principal-password> --dns-name-label <aciDnsLabel> --ports 80 Within a few seconds, you should receive an initial response from Azure. The --dns-name-label value must be unique within the Azure region you create the container instance. Modify the value in the preceding command if you receive a DNS name label error message when you execute the command. 
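If you prefer to script these steps, the hedged Python sketch below wraps the same az commands with the subprocess module and then polls the instance state until it reports Running, which is the verification step described in the next section. The resource names, registry values, and credentials are the same placeholders used in the command above, not values you can copy verbatim.

import json
import subprocess
import time

def az(*args):
    """Run an az CLI command and return its parsed JSON output."""
    out = subprocess.run(["az", *args, "--output", "json"],
                         check=True, capture_output=True, text=True).stdout
    return json.loads(out) if out.strip() else None

# Placeholders: substitute your own registry, credentials, and DNS label.
az("container", "create",
   "--resource-group", "myResourceGroup",
   "--name", "aci-tutorial-app",
   "--image", "<acrLoginServer>/aci-tutorial-app:v1",
   "--cpu", "1", "--memory", "1",
   "--registry-login-server", "<acrLoginServer>",
   "--registry-username", "<service-principal-ID>",
   "--registry-password", "<service-principal-password>",
   "--dns-name-label", "<aciDnsLabel>",
   "--ports", "80")

# Poll until the instance view reports "Running".
while True:
    state = az("container", "show",
               "--resource-group", "myResourceGroup",
               "--name", "aci-tutorial-app",
               "--query", "instanceView.state")
    print("state:", state)
    if state == "Running":
        break
    time.sleep(10)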
Verify deployment progress To view the state of the deployment, use az container show: az container show --resource-group myResourceGroup --name aci-tutorial-app --query instanceView.state Repeat the az container show command until the state changes from Pending to Running, which should take under a minute. When the container is Running, proceed to the next step. View the application and container logs Once the deployment succeeds, display the container's fully qualified domain name (FQDN) with the az container show command: az container show --resource-group myResourceGroup --name aci-tutorial-app --query ipAddress.fqdn For example: $ az container show --resource-group myResourceGroup --name aci-tutorial-app --query ipAddress.fqdn "aci-demo.eastus.azurecontainer.io" To see the running application, navigate to the displayed DNS name in your favorite browser: You can also view the log output of the container: az container logs --resource-group myResourceGroup --name aci-tutorial-app Example output: $ az container logs --resource-group myResourceGroup --name aci-tutorial-app listening on port 80 ::ffff:10.240.0.4 - - [21/Jul/2017:06:00:02 +0000] "GET / HTTP/1.1" 200 1663 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36" ::ffff:10.240.0.4 - - [21/Jul/2017:06:00:02 +0000] "GET /favicon.ico HTTP/1.1" 404 150 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36" Clean up resources If you no longer need any of the resources you created in this tutorial series, you can execute the az group delete command to remove the resource group and all resources it contains. This command deletes the container registry you created, as well as the running container, and all related resources. az group delete --name myResourceGroup Next steps In this tutorial, you completed the process of deploying your container to Azure Container Instances. The following steps were completed: - Deployed the container from Azure Container Registry using the Azure CLI - Viewed the application in the browser - Viewed the container logs Now that you have the basics down, move on to learning more about Azure Container Instances, such as how container groups work: Feedback
https://docs.microsoft.com/en-gb/azure/container-instances/container-instances-tutorial-deploy-app
2019-08-17T14:09:44
CC-MAIN-2019-35
1566027313259.30
[array(['media/container-instances-quickstart/aci-app-browser.png', 'Hello world app in the browser'], dtype=object) ]
docs.microsoft.com
AutoFilter.Range property (Excel) Returns a Range object that represents the range to which the specified AutoFilter applies. Syntax expression.Range expression A variable that represents an AutoFilter object. Example The following example stores in a variable the address for the AutoFilter applied to the Crew worksheet. rAddress = Worksheets("Crew").AutoFilter.Range.Address This example scrolls through the workbook window until the hyperlink range is in the upper-left corner of the active window. Workbooks(1).Activate Set hr = ActiveSheet.Hyperlinks(1).Range ActiveWindow.ScrollRow = hr.Row ActiveWindow.ScrollColumn = hr.Column Support and feedback Have questions or feedback about Office VBA or this documentation? Please see Office VBA support and feedback for guidance about the ways you can receive support and provide feedback.
https://docs.microsoft.com/en-us/office/vba/api/excel.autofilter.range
2019-08-17T14:35:19
CC-MAIN-2019-35
1566027313259.30
[]
docs.microsoft.com
How AWS Lambda Works with IAM Before you use IAM to manage access to Lambda, you should understand what IAM features are available to use with Lambda. To get a high-level view of how Lambda and other AWS services work with IAM, see AWS Services That Work with IAM in the IAM User Guide. For an overview of permissions, policies, and roles as they are used by Lambda, see AWS Lambda Permissions.
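As a small, hedged illustration of the kind of IAM wiring Lambda relies on, the boto3 sketch below creates an execution role and attaches the AWS managed AWSLambdaBasicExecutionRole policy to it. The role name is a placeholder, and this is only one of several ways to grant Lambda permissions; see the linked guides for the authoritative patterns.

import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets the Lambda service assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

role = iam.create_role(
    RoleName="example-lambda-role",  # placeholder name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Attach the AWS managed policy that allows writing logs to CloudWatch.
iam.attach_role_policy(
    RoleName="example-lambda-role",
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
)

print(role["Role"]["Arn"])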
https://docs.aws.amazon.com/lambda/latest/dg/security_iam_service-with-iam.html
2019-08-17T13:05:08
CC-MAIN-2019-35
1566027313259.30
[]
docs.aws.amazon.com
Hiding File Types You can hide Composer file types by using base Eclipse functionality in Composer. This may be desired when certain functionality is not applicable to your environment. For example, when using Voice capabilities, and VXML is used but not CCXML, you may wish for the CallControlXML file type to be hidden from the File > New menu. The following steps may be used: - Right-click one of the buttons for the Composer-provided perspectives (e.g. Composer Design, Composer) and click Customize.... The Customize Perspective dialog appears. - Click the Shortcuts tab. - To remove CallControlXML File from the File > New menu, set Submenus to New. - Expand Composer and check Others. - In the list of shortcuts, uncheck <file-type>, where <file-type> is the type to be hidden. - Click OK. - Repeat for other perspectives if desired. This customization is specific to the workspace. If you use other workspaces, you must customize them as well. This is base Eclipse behavior where customization is saved within the workspace.
https://docs.genesys.com/Documentation/Composer/8.1.3/Help/HidingFileTypes
2019-08-17T12:48:41
CC-MAIN-2019-35
1566027313259.30
[]
docs.genesys.com
Use IAM Policies to Allow and Deny User Permissions Amazon EMR supports AWS Identity and Access Management (IAM) policies. IAM is a web service that enables AWS customers to manage users and their permissions. You can use IAM to create policies and attach them to principals, such as users and groups. The policies grant or deny permissions and determine what actions a user can perform with Amazon EMR and other AWS resources. For example, you can allow a user to view EMR clusters in an AWS account but not create or delete them. In addition, you can tag EMR clusters and then use the tags to apply fine-grained permissions to users on individual clusters or a group of clusters that share the same tag. IAM is available at no charge to all AWS account holders. You don't need to sign up for IAM. You can use IAM through the Amazon EMR console, the AWS CLI, and programmatically through the Amazon EMR API and the AWS SDKs. IAM policies adhere to the principle of least privilege, which means that a user can't perform an action until permission is granted to do so. For more information, see the IAM User Guide. Topics Amazon EMR Actions in User-Based IAM Policies In IAM user-based policies for Amazon EMR, all Amazon EMR actions are prefixed with the lowercase elasticmapreduce element. You can specify the "elasticmapreduce:*" key, using the wildcard character (*), to specify all actions related to Amazon EMR, or you can allow a subset of actions, for example, "elasticmapreduce:Describe*". You can also explicitly specify individual Amazon EMR actions, for example "elasticmapreduce:DescribeCluster". For a complete list of Amazon EMR actions, see the API action names in the Amazon EMR API Reference. Because Amazon EMR relies on other services such as Amazon EC2 and Amazon S3, users need to be allowed a subset of permissions for these services as well. For more information, see IAM Managed Policy for Full Access. Note At a minimum, to access the Amazon EMR console, an IAM user needs to have an attached IAM policy that allows the following action: Copy elasticmapreduce:ListClusters For more information about permissions and policies, see Access Management in the IAM User Guide. Amazon EMR does not support resource-based and resource-level policies, but you can use the Condition element (also called the Condition block) to specify fine-grained access control based on cluster tags. For more information, see Use Cluster Tagging with IAM Policies for Cluster-Specific Control. Because Amazon EMR does not support resource-based or resource-level policies, the Resource element always has a wildcard value. The easiest way to grant permissions to users is to use the managed policies for Amazon EMR. Managed policies also offer the benefit of being automatically updated if permission requirements change. If you need to customize policies, we recommend starting with a managed policy and then customizing privileges and conditions according to your requirements. Use Managed Policies for User Access The easiest way to grant full access or read-only access to required Amazon EMR actions is to use the IAM managed policies for Amazon EMR. Managed policies offer the benefit of updating automatically if permission requirements change. These policies not only include actions for Amazon EMR; they also include actions for Amazon EC2, Amazon S3, and Amazon CloudWatch, which Amazon EMR uses to perform actions like launching instances, writing log files, and managing Hadoop jobs and tasks. 
If you need to create custom policies, it is recommended that you begin with the managed policies and edit them according to your requirements. For information about how to attach policies to IAM users (principals), see Working with Managed Policies Using the AWS Management Consolein the IAM User Guide. IAM Managed Policy for Full Access To grant all the required actions for Amazon EMR, attach the AmazonElasticMapReduceFullAccess managed policy. The content of this policy statement is shown below. It reveals all the actions that Amazon EMR requires for other services. Note Because the AmazonElasticMapReduceFullAccess policy is automatically updated, the policy shown here may be out-of-date. Use the AWS Management Console to view the current policy. Copy { ": "*" } ] } Note The ec2:TerminateInstances action enables the IAM user to terminate any of the Amazon EC2 instances associated with the IAM account, even those that are not part of an EMR cluster. IAM Managed Policy for Read-Only Access To grant read-only privileges to Amazon EMR, attach the AmazonElasticMapReduceReadOnlyAccess managed policy. The content of this policy statement is shown below. Wildcard characters for the elasticmapreduce element specify that only actions that begin with the specified strings are allowed. Keep in mind that because this policy does not explicitly deny actions, a different policy statement may still be used to grant access to specified actions. Note Because the AmazonElasticMapReduceReadOnlyAccess policy is automatically updated, the policy shown here may be out-of-date. Use the AWS Management Console to view the current policy. Copy { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "elasticmapreduce:Describe*", "elasticmapreduce:List*", "elasticmapreduce:ViewEventsFromAllClustersInConsole" "s3:GetObject", "s3:ListAllMyBuckets", "s3:ListBucket", "sdb:Select", "cloudwatch:GetMetricStatistics" ], "Resource": "*" } ] } Use Cluster Tagging with IAM Policies for Cluster-Specific Control You can use the Condition element (also called a Condition block) along with the following Amazon EMR condition context keys in an IAM user-based policy to control access based on cluster tags: Use the elasticmapreduce:ResourceTag/TagKeyStringcondition context key to allow or deny user actions on clusters with specific tags. Use the elasticmapreduce:RequestTag/TagKeyStringcondition context key to require a specific tag with actions/API calls. Important The condition context keys apply only to those Amazon EMR API actions that accept ClusterID as a request parameter. For a complete list of Amazon EMR actions, see the API action names in the Amazon EMR API Reference. For more information about the Condition element and condition operators, see IAM Policy Elements Reference in the IAM User Guide, particularly String Condition Operators. For more information about adding tags to EMR clusters, see Tagging Amazon EMR Clusters. Example Amazon EMR Policy Statements The following examples demonstrate different scenarios and ways to use condition operators with Amazon EMR condition context keys. These IAM policy statements are intended for demonstration purposes only and should not be used in production environments. There are multiple ways to combine policy statements to grant and deny permissions according to your requirements. For more information about planning and testing IAM policies, see the IAM User Guide. 
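The managed policies described above can also be attached programmatically rather than through the console. Below is a hedged boto3 sketch; the user name is a placeholder, and the managed-policy ARN follows the standard aws-managed naming shown above, so verify it in your account before relying on it.

import boto3

iam = boto3.client("iam")

# Hypothetical user; the policy ARN corresponds to the
# AmazonElasticMapReduceReadOnlyAccess managed policy discussed above.
USER = "emr-analyst"

iam.attach_user_policy(
    UserName=USER,
    PolicyArn="arn:aws:iam::aws:policy/AmazonElasticMapReduceReadOnlyAccess",
)

# List what is now attached, to confirm.
for policy in iam.list_attached_user_policies(UserName=USER)["AttachedPolicies"]:
    print(policy["PolicyName"], policy["PolicyArn"])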
Allow Actions Only on Clusters with Specific Tag Values The examples below demonstrate a policy that allows a user to perform actions based on the cluster tag with the value dev department and also allows a user to tag clusters with that same tag. The final policy example demonstrates how to deny privileges to tag EMR clusters with anything but that same tag. dev Important Explicitly denying permission for tagging actions is an important consideration. This prevents users from granting permissions to themselves through cluster tags that you did not intend to grant. If the actions shown in the last example had not been denied, a user could add and remove tags of their choosing to any cluster, and circumvent the intention of the preceding policies. In the following policy example, the StringEquals condition operator tries to match with the value for the tag dev . If the tag department hasn't been added to the cluster, or doesn't contain the value department , the policy doesn't apply, and the actions aren't allowed by this policy. If no other policy statements allow the actions, the user can only work with clusters that have this tag with this value. dev Copy { "Version": "2012-10-17", "Statement": [ { "Sid": "Stmt14793345241244", "Effect": "Allow", "Action": [ "elasticmapreduce:DescribeCluster", "elasticmapreduce:ListSteps", "elasticmapreduce:TerminateJobFlows ", "elasticmapreduce:SetTerminationProtection ", "elasticmapreduce:ListInstances", "elasticmapreduce:ListInstanceGroups", "elasticmapreduce:ListBootstrapActions", "elasticmapreduce:DescribeStep" ], "Resource": [ "*" ], "Condition": { "StringEquals": { "elasticmapreduce:ResourceTag/department": "dev" } } } ] } You can also specify multiple tag values using a condition operator. For example, to allow all actions on clusters where the tag contains the value department or dev , you could replace the condition block in the earlier example with the following. test Copy "Condition": { "StringEquals": { "elasticmapreduce:ResourceTag/department":["dev", "test"] } } As in the preceding example, the following example policy looks for the same matching tag: the value for the dev tag. In this case, however, the department RequestTag condition context key specifies that the policy applies during tag creation, so the user must create a tag that matches the specified value. Copy { "Version": "2012-10-17", "Statement": [ { "Sid": "Stmt1479334524000", "Effect": "Allow", "Action": [ "elasticmapreduce:RunJobFlow", "iam:PassRole" ], "Resource": [ "*" ], "Condition": { "StringEquals": { "elasticmapreduce:RequestTag/department": "dev" } } } ] } In the following example, the EMR actions that allow the addition and removal of tags is combined with a StringNotEquals operator specifying the dev tag we've seen in earlier examples. The effect of this policy is to deny a user the permission to add or remove any tags on EMR clusters that are tagged with a tag that contains the department value. dev Copy { "Version": "2012-10-17", "Statement": [ { "Effect": "Deny", "Action": [ "elasticmapreduce:AddTags", "elasticmapreduce:RemoveTags" ], "Condition": { "StringNotEquals": { "elasticmapreduce:ResourceTag/department": "dev" } }, "Resource": [ "*" ] } ] } Allow Actions on Clusters with a Specific Tag, Regardless of Tag Value You can also allow actions only on clusters that have a particular tag, regardless of the tag value. To do this, you can use the Null operator. For more information, see Condition Operator to Check Existence of Condition Keys in the IAM User Guide. 
For example, to allow actions only on EMR clusters that have the tag, regardless of the value it contains, you could replace the Condition blocks in the earlier example with the following one. The department Null operator looks for the presence of the tag on an EMR cluster. If the tag exists, the department Null statement evaluates to false, matching the condition specified in this policy statement, and the appropriate actions are allowed. Copy "Condition": { "Null": { "elasticmapreduce:ResourceTag/department":"false" } } The following policy statement allows a user to create an EMR cluster only if the cluster will have a tag, which can contain any value. department Copy { "Version": "2012-10-17", "Statement": [ { "Action": [ "elasticmapreduce:RunJobFlow", "iam:PassRole" ], "Condition": { "Null": { "elasticmapreduce:RequestTag/ department": "false" } }, "Effect": "Allow", "Resource": [ "*" ] } ] } Require Users to Add Tags When Creating a Cluster The following policy statement allows a user to create an EMR cluster only if the cluster will have a tag that contains the value department when it is created. dev Copy { "Version": "2012-10-17", "Statement": [ { "Action": [ "elasticmapreduce:RunJobFlow", "iam:PassRole" ], "Condition": { "StringEquals": { "elasticmapreduce:RequestTag/ department": " dev" } }, "Effect": "Allow", "Resource": [ "*" ] } ] }
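To connect these policies to practice, the hedged boto3 sketch below shows the two places the department tag can come from: tagging an existing cluster so that elasticmapreduce:ResourceTag conditions match, and supplying the tag at cluster creation so that an elasticmapreduce:RequestTag condition allows the RunJobFlow call. The cluster ID and the run_job_flow arguments are placeholders, not a complete cluster definition.

import boto3

emr = boto3.client("emr")

# Tag an existing cluster so policies keyed on
# elasticmapreduce:ResourceTag/department apply to it.
emr.add_tags(
    ResourceId="j-XXXXXXXXXXXXX",            # placeholder cluster ID
    Tags=[{"Key": "department", "Value": "dev"}],
)

# Supply the tag at creation time so a policy using
# elasticmapreduce:RequestTag/department allows the RunJobFlow call.
emr.run_job_flow(
    Name="example-cluster",
    ReleaseLabel="emr-5.20.0",               # placeholder release
    Instances={
        "MasterInstanceType": "m4.large",
        "SlaveInstanceType": "m4.large",
        "InstanceCount": 3,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
    Tags=[{"Key": "department", "Value": "dev"}],
)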
https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-access-iam.html
2017-12-11T05:45:19
CC-MAIN-2017-51
1512948512208.1
[]
docs.aws.amazon.com
Logging
Introduction
You can turn on additional logging to diagnose and troubleshoot issues with the GoCD server and agent.
GoCD Server
To turn on additional logging on the GoCD server, you must:
- create/edit the file CONFIG_DIR/logback-include.xml. The config directory is typically /etc/go on Linux and C:\Program Files\Go Server\config on Windows.
See the section Log configuration syntax for the log configuration syntax. The table below describes the various loggers that can be configured with the server:
GoCD Agent
To turn on additional logging on the GoCD agent, you must:
- create/edit the file CONFIG_DIR/agent-logback-include.xml. The config directory is typically /var/lib/go-agent/config on Linux and C:\Program Files\Go Agent\config on Windows.
See the section Log configuration syntax for the log configuration syntax. The table below describes the various loggers that can be configured with the agent:
Log configuration syntax
To configure logging, you can specify the configuration below. You must tweak the <logger /> and optionally <appender /> elements to write to specific log files. You can read more about logback configuration in the logback configuration documentation. This file will be reloaded every 5 seconds, so a restart of the GoCD server or agent is not necessary.
Note: It is recommended that you do not set the log level to DEBUG or TRACE for long periods of time since this can significantly affect performance.
<?xml version="1.0" encoding="UTF-8"?>
<!-- since this file is included in another file, use `<included />` and not `<configuration />` -->
<included>
  <!-- send logs from `com.example.component-b` to the default log file (`go-agent.log` or `go-server.log`) -->
  <logger name="com.example.component-b" level="DEBUG" />

  <!--
    Uncomment the lines below to redirect specific logs to a file different from the default log file (`go-agent.log` or `go-server.log`).
    The configuration below will:
    - rollover daily;
    - each log file will be at most 10MB, keep 10 days worth of history, but at most 512 MB
  -->
  <!--
  <appender name="my-appender" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>logs/example.log</file>
    <encoder>
      <pattern>%date{ISO8601} %-5level [%thread] %logger{0}:%line - %msg%n</pattern>
    </encoder>
    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
      <fileNamePattern>logs/example.log.%d{yyyy-MM-dd}.%i.gz</fileNamePattern>
      <maxFileSize>10 MB</maxFileSize>
      <maxHistory>10</maxHistory>
      <totalSizeCap>512 MB</totalSizeCap>
    </rollingPolicy>
  </appender>
  <logger name="com.example.component-a" level="DEBUG">
    <appender-ref
  </logger>
  -->
</included>
Advanced logging features
If you'd like to send log events to a log aggregator service (like logstash, graylog, splunk) of your choice, you may require additional configuration to be performed:
- ensure that the relevant java libraries along with their dependencies are present in the libs directory
- configure appenders and encoders in the relevant logback-include.xml file for your agent or server
For example,
to send logs to logstash (using logstash-logback-encoder), you would need to perform the following:
- download all logstash-logback-encoder jars and dependencies into the libs dir:
  logstash-logback-encoder-4.11.jar
  jackson-databind-2.9.1.jar
  jackson-annotations-2.9.1.jar
  jackson-core-2.9.1.jar
Then follow the instructions on the README to configure your logback-include.xml to set up the relevant appenders and encoders:
<?xml version="1.0" encoding="UTF-8"?>
<included>
  <!-- see the README for more examples and configuration -->
  <appender name="stash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>127.0.0.1:4560</destination>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder" />
  </appender>

  <root level="info">
    <appender-ref
  </root>
</included>
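Because the include file is reloaded every 5 seconds, a malformed edit takes effect almost immediately. A small helper like the hedged Python sketch below (not part of GoCD) can confirm the file is at least well-formed XML before you drop it into the config directory; it does not validate logback semantics, and the path is a placeholder.

import sys
import xml.etree.ElementTree as ET

# Placeholder path; point this at the logback-include.xml you are editing.
path = sys.argv[1] if len(sys.argv) > 1 else "logback-include.xml"

try:
    root = ET.parse(path).getroot()
except ET.ParseError as err:
    sys.exit(f"{path} is not well-formed XML: {err}")

if root.tag != "included":
    print(f"warning: root element is <{root.tag}>, expected <included>")

print(f"{path} parsed OK: "
      f"{len(root.findall('logger'))} logger(s), "
      f"{len(root.findall('appender'))} appender(s)")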
https://docs.gocd.org/current/advanced_usage/logging.html
2017-12-11T05:55:13
CC-MAIN-2017-51
1512948512208.1
[]
docs.gocd.org
Tips on How to Find a 4 Bedrooms + 4 Bathrooms Home for Sale When it comes to purchasing a house in Tallahassee, to be as specific as you could is something that needs to be catered accordingly throughout to avoid making wrong investments and selection down the line. You will actually see that there will be a plethora of things that needs to be considered when it comes to purchasing a house, which is why being specific about knowing what matters is very important.. To be as specific as you could throughout basically is what you need to have made and considered and this is because of the fact that such purchases most likely is among the largest investment you will have made. To ensure that you will get all of the possible 4 bedroom home for sale Tallahassee options, considering online results is one of the best things you could do. It will be easier for you to end up having a list of properties you could find and there should be a number of websites you could find down the line that has listings you could rely. As much as possible, you will need to not just rely on online results or driving by the area since there are still other means for you to get to increase such option. As much as possible, you will want to be as specific as you could when it comes to gathering options from classifieds since this should give you the very advantage of being able to choose the right 4 bedroom home for sale Tallahassee. Do not forget that you will have to consider making a purchase from a developer.
http://docs-prints.com/2017/11/26/why-not-learn-more-about-experts/
2017-12-11T05:46:00
CC-MAIN-2017-51
1512948512208.1
[]
docs-prints.com
Decodes the JSON-encoded data structure and returns a Lua object (table) with the data. The return value is a Lua object when the data is successfully decoded or, in the case of an error, three values: nil, the position of the next character that doesn't belong to the object, and an error message. json.decode( data [, position [, nullval]] ) Value to be returned for items with a value of json.null (see json.encode()). This is useful if your data contains items which are "null" but you need to know of their existence (in Lua, table items with values of nil don't normally exist). local json = require( "json" ) local t = { ["name1"] = "value1", ["name2"] = { 1, false, true, 23.54, "a \021 string" }, name3 = json.null } local encoded = json.encode( t ) print( encoded ) --> {"name1":"value1","name3":null,"name2":[1,false,true,23.54,"a \u0015 string"]} local encoded = json.encode( t, { indent = true } ) print( encoded ) --> { --> "name1":"value1", --> "name3":null, --> "name2":[1,false,true,23.54,"a \u0015 string"] --> } -- Since this was just encoded using the same library, it's unlikely to fail, but it's good practice to handle errors anyway local decoded, pos, msg = json.decode( encoded ) if not decoded then print( "Decode failed at "..tostring(pos)..": "..tostring(msg) ) else print( decoded.name2[4] ) --> 23.54 end
https://docs.coronalabs.com/api/library/json/decode.html
2017-12-11T05:54:19
CC-MAIN-2017-51
1512948512208.1
[]
docs.coronalabs.com
This module provides a comprehensive interface for the bz2 compression library. It implements a complete file interface, one-shot (de)compression functions, and types for sequential (de)compression. in universal newlines mode.. BZ2File supports the with statement. Changed in version 3.1: Support for the with statement was added. Note This class does not support input files containing multiple streams (such as those produced by the pbzip2 tool). When reading such an input file, only the first stream will be accessible. If you require support for multi-stream files, consider using the third-party bz2file module (available from PyPI). This module provides a backport of Python 3.3’s BZ2File class, which does support multi-stream files. Close the file. Sets data attribute closed to true. A closed file cannot be used for further I/O operations. close() may be called more than once without error. Read. Provide more data to the decompressor object. It will return chunks of decompressed data whenever possible. If you try to decompress data after the end of stream is found, EOFError will be raised. If any data was found after the end of stream, it’ll be ignored and saved in unused_data attribute. One-shot.
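A short sketch tying these pieces together: writing and reading a file with BZ2File (using the with statement mentioned above), then feeding a compressed stream to a BZ2Decompressor in small chunks. The file name is arbitrary.

import bz2

# File round trip with BZ2File and the with statement.
with bz2.BZ2File("example.bz2", "w") as f:
    f.write(b"hello " * 1000)

with bz2.BZ2File("example.bz2", "r") as f:
    data = f.read()
print(len(data))                       # 6000

# Sequential decompression: feed the compressed stream in small chunks.
compressed = bz2.compress(b"hello " * 1000)
decomp = bz2.BZ2Decompressor()
out = bytearray()
for i in range(0, len(compressed), 64):
    out.extend(decomp.decompress(compressed[i:i + 64]))
print(len(out), decomp.unused_data)    # 6000 b''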
https://docs.python.org/3.2/library/bz2.html
2017-12-11T05:30:55
CC-MAIN-2017-51
1512948512208.1
[]
docs.python.org
Installation Instructions for OpenCart 2.x WarningIf your OpenCart is not a fresh installation, files and database backup is highly recommended. These installation instructions assume that you have either a fresh or a customized installation of whichever version of OpenCart 2. Step 1 Unzip the downloaded ZIP file into a new folder. Step 2 Login to your OpenCart admin panel and go to Extensions > Extension Installer. Upload the file comingsoon comingsoon.ocmod.zip again. If it does not work and you receive the same error "Could not connect as...", follow the next step. - Go to this link: and download and install the Quickfix for the OpenCart Extension Installer. After you are done, try uploading comingsoon.ocmod.zip again. Step 3 Go to Extensions > Modules > ComingSoon and click the Install button. Step 4 Go to Extensions > Modifications and click the blue Refresh button on the top right. Step 5 ComingSoon is now installed. You can access it from Extensions > Modules > ComingSoon. Step 6 If you are using the ComingSoon module on a live server, make sure to insert your license key in the Extensions > Modules > ComingSoon > Support. NoteIf you are using the module on a test server, feel free to use it without a license. The license is needed to grant you access to Premium iSenseLabs support and downloads for future updates of ComingSoon. Upgrade Instructions Follow the installation instructions above. When you are done, go to the module and click Save.
http://docs.isenselabs.com/comingsoon/installation_and_licensing_opencart_2
2017-12-11T05:42:25
CC-MAIN-2017-51
1512948512208.1
[]
docs.isenselabs.com
Client Software Distribution: Create software licenses and counters in Software Asset Management for all catalog items deployed from the service catalog.
https://docs.servicenow.com/bundle/geneva-it-operations-management/page/product/orchestration/concept/c_ClientSoftwareDistribution.html
2017-12-11T05:35:24
CC-MAIN-2017-51
1512948512208.1
[]
docs.servicenow.com
Configuring Encrypted Communication Between HiveServer2 and Client Drivers. Configuring TLS/SSL Encryption for HiveServer2 HiveServer2 can be configured to support TLS/SSL connections from JDBC/ODBC clients using the Cloudera Manager Admin Console (for clusters that run in the context of Cloudera Manager Server), or manually using the command line. Requirements and Assumptions $ See the appropriate How-To guide from the above list for more information. Using Cloudera Manager to Enable TLS/SSL - Log in to the Cloudera Manager Admin Console. - Click the Configuration tab. - Select Hive (Service-Wide) for the Scope filter. - Select Security for the Category filter. The TLS/SSL configuration options display. - Enter values for your cluster as follows: The entry field for certificate trust store password has been left empty because the trust store is typically not password protected—it contains no keys, only publicly available certificates that help establish the chain of trust during the TLS/SSL handshake. In addition, reading the trust store does not require the password. - Click Save Changes. - Restart the Hive service. Client Connections to HiveServer2 Over TLS/SSL Clients connecting to a HiveServer2 over TLS/SSL must be able to access the trust store on the HiveServer2 host system. The trust store contains intermediate and other certificates that the client uses to establish a chain of trust and verify server certificate authenticity. The trust store is typically not password protected. -or, - Set the path to the trust store one time in the Java system javax.net.ssl.trustStore property: java -Djavax.net.ssl.trustStore=/usr/java/jdk1.7.0_67-cloudera/jre/lib/security/jssecacerts \ -Djavax.net.ssl.trustStorePassword=extraneous MyClass \ jdbc:hive2://fqdn.example.com:10000/default;ssl=true Configuring SASL Encryption for HiveServer2 <property> <name>hive.server2.thrift.sasl.qop</name> <value>auth-conf</value> </property> Client Connections to HiveServer2 Using SASL beeline> !connect jdbc:hive2://fqdn.example.com:10000/default; \ principal=hive/[email protected];sasl.qop=auth-confThe _HOST is a wildcard placeholder that gets automatically replaced with the fully qualified domain name (FQDN) of the server running the HiveServer2 daemon process.
https://docs.cloudera.com/documentation/enterprise/6/6.2/topics/sg_hive_encryption.html
2020-02-17T03:04:58
CC-MAIN-2020-10
1581875141653.66
[]
docs.cloudera.com
New Articles for Tales from the Edge
We're happy to announce the publication of two more articles on the Tales from the Edge community site:
- Part 1 of a three-part series on using Network Monitor 3 to troubleshoot Firewall and TMG client traffic: Network Monitor 3.3 RWS Parser Basics, Part 1: Introduction to RWS Protocol Analysis
- A description of Forefront TMG packet management through TCP/IP re-injection: Understanding the Re-Injection Mechanism Improvement on Forefront TMG
We hope you find these useful, and please feel free to provide comments on them here.
Jim Harrison, Program Manager, Forefront TMG
https://docs.microsoft.com/en-us/archive/blogs/isablog/new-articles-for-tales-from-the-edge
2020-02-17T05:38:08
CC-MAIN-2020-10
1581875141653.66
[]
docs.microsoft.com
Migrating Between Versions
While we are committed to keeping Great Expectations as stable as possible, sometimes breaking changes are necessary to maintain our trajectory. This is especially true as the library has evolved from just a data quality tool to a slightly more opinionated framework.
Great Expectations provides a warning when the currently-installed version is different from the version stored in the expectation suite. Since expectation semantics are usually consistent across versions, there is little change required when upgrading Great Expectations, with some exceptions noted here.
Using the check-config Command
To facilitate this substantial config format change, starting with version 0.8.0 we introduced check-config to sanity check your config files. From your project directory, run:

>>> great_expectations check-config

If your config file is out of date, check-config will offer to create a new config file. The new config file will not have any customizations you made, so you will have to copy these from the old file. If you run into any issues, please ask for help on Slack. See the Data Docs Reference for more information.
Upgrading to 0.7.x
In version 0.7, GE introduced several new features, and significantly changed the way DataContext objects work.
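After migrating the configuration, it is worth confirming that the project still loads programmatically. The snippet below is a small sketch for the 0.7/0.8-era API; the exact helper methods available vary slightly between releases, so treat the calls shown here as assumptions.

# Minimal sketch: verify a migrated project loads (0.7/0.8-era API; details are assumptions).
import great_expectations as ge

# Reads the project configuration (great_expectations.yml) from the current project directory.
context = ge.data_context.DataContext()

# Listing configured datasources is a cheap way to confirm the config parsed cleanly.
print(context.list_datasources())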
https://docs.greatexpectations.io/en/latest/reference/migrating_versions.html
2020-02-17T03:04:26
CC-MAIN-2020-10
1581875141653.66
[]
docs.greatexpectations.io
By preparing custom classes that inherit from an appropriate base class, you can extend the functionality of Kentico. This approach allows you to implement the following types of objects:
- Integration connectors
- Marketing automation actions
- Notification gateways
- Payment gateways
- Scheduled tasks
- Custom Smart search components
- Translation services
- Workflow actions
If you create the custom classes in your project's App_Code folder (or CMSApp_AppCode -> Old_App_Code on web application installations), you do not need to integrate a new assembly into the web project. Code in the App_Code folder is compiled dynamically and automatically referenced in all other parts of the system.
Registering custom classes in the App_Code folder
To ensure that the system can load custom classes placed in App_Code, you need to register each class.
- Edit your custom class.
- Add a using statement for the CMS namespace.
- Add the RegisterCustomClass assembly attribute above the class declaration (for every App_Code class that you want to register).

using CMS;

// Ensures that the system loads an instance of 'CustomClass' when the 'MyClassName' class name is requested.
[assembly: RegisterCustomClass("MyClassName", typeof(CustomClass))]

...

public class CustomClass
{
    ...
}

The RegisterCustomClass attribute accepts two parameters:
- The first parameter is a string identifier representing the name of the class.
- The second parameter specifies the type of the class as a System.Type object.
When the system requests a class whose name matches the first parameter, the attribute ensures that an instance of the given class is provided.
Once you have registered your custom classes, you can use them as the source for objects in the system. When assigning App_Code classes to objects in the administration interface, fill in the following values:
- Assembly name: (custom_classes)
- Class: must match the value specified in the first parameter of the corresponding RegisterCustomClass attribute
https://docs.kentico.com/k9/custom-development/loading-custom-classes-from-app_code
2020-02-17T03:52:36
CC-MAIN-2020-10
1581875141653.66
[]
docs.kentico.com
In some cases, you might want to exclude certain Assets in your Project from publishing to the cloud. Collaborate uses a gitignore file, named .collabignore, to exclude files. To add your own exclusions to the .collabignore file: If you can’t see the .collabignore file, or the Asset files you want to exclude, your system may be configured to hide system files. Windows and Macs hide system files by default. To display hidden files in Windows 10 and above: For details, see Windows support. To display hidden files in MacOS: For details, see Macintosh support There are Project files and folders that you can never exclude from Collaborate using the .collabignore file. These are:
https://docs.unity3d.com/ja/current/Manual/UnityCollaborateIgnoreFiles.html
2020-02-17T05:05:45
CC-MAIN-2020-10
1581875141653.66
[]
docs.unity3d.com
Instructions on how to upgrade your Ushahidi deployment.
Make a backup copy of the current folder where you have installed the Ushahidi Platform. Make a backup copy of your database. The exact procedure to do this depends on your environment.
Download the latest .tar.gz file from our github releases page. Please note that pre-releases are not considered stable and you may find issues with them.
Uncompress the downloaded file in the same location where the Ushahidi Platform is currently installed. If you had made any changes to .htaccess files, application config files or similar after installation, restore those from your backup copy.
Re-run some of the installation steps (refer to the installation guide for more detailed instructions). In particular, re-run these two steps:
- Running database migrations
- Ensuring that log, cache and media/uploads under platform/application are owned by the proper user
From your local repository, fetch the latest code and run `npm install` to update your modules:
git pull
npm install
gulp build
The updated version should load when you reload your browser.
From your local repository, fetch the latest code and run `bin/update` or `bin/update --production` if you are running on a production environment:
git pull
bin/update
OR
bin/update --production
https://docs.ushahidi.com/platform-developer-documentation/development-and-code/upgrading-ushahidi/upgrading-to-latest-release
2020-02-17T03:03:27
CC-MAIN-2020-10
1581875141653.66
[]
docs.ushahidi.com
Installation

Change Encoding Issues

--log-count
This is the number of log rotations to keep around. You can either specify a number or None to keep all twistd.log files around. The default is 10.

--allow-shutdown
Can also be passed directly.
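The log rotation behaviour described above ends up as plain Python variables in the generated buildbot.tac file, so it can also be adjusted after the master has been created. The following is a hedged sketch of what that section of buildbot.tac typically looks like; treat the exact values and the surrounding lines as assumptions for your own installation.

# Sketch of the log-rotation section of a generated buildbot.tac (values are assumptions).
import os
from twisted.python.logfile import LogFile
from twisted.python.log import ILogObserver, FileLogObserver

basedir = '/var/lib/buildbot/master'
rotateLength = 10000000   # bytes written to twistd.log before rotating
maxRotatedFiles = 10      # corresponds to the --log-count option

logfile = LogFile.fromFullPath(os.path.join(basedir, "twistd.log"),
                               rotateLength=rotateLength,
                               maxRotatedFiles=maxRotatedFiles)
# 'application' is defined earlier in buildbot.tac; the hook-up line is shown only for context:
# application.setComponent(ILogObserver, FileLogObserver(logfile).emit)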
http://docs.buildbot.net/0.8.8/manual/installation.html
2020-02-17T04:54:56
CC-MAIN-2020-10
1581875141653.66
[]
docs.buildbot.net
Automatic Configured Problem Step Recorder (PSR)
Everyone who knows me will tell you that I advocate two tools more than any other. Matter of fact, some will say that I nag them about PSR the most: "Where is your PSR", "How come you haven't started your PSR". I always try to remember to start the Problem Step Recorder when I log into a system on which I am planning to perform troubleshooting, make a change, or even document something. One of the BIGGEST problems with the Problem Step Recorder is that once you launch the application you have to configure the settings: where to save the file and how many screen shots you want. If you don't do this, you'll only get 25 screen shots, and if you click more than 25 times, you lose the oldest screen shot with each click of the mouse. I created a script that will perform the following tasks each time it is launched:
- Specify the output file and path of the psr.exe recorded session; the default value is C:\Temp\PSR_(2-digit month).(2-digit day).(4-digit year).Hour-Minute(AM/PM).zip.
- Specify the maximum number of screen captures psr.exe will capture. Valid values are 1-999. Note that Windows 7, 2008 and earlier versions only allow a max value of 100.
Notes:
- You may need to change your "ExecutionPolicy" so that these scripts will run.
- If you want a different path to save the screen captures, you need to update the paths in "InvokePSR.psm1" and "PSR.ps1".
I placed the "PSR.ps1" file on my desktop; when it is executed, the following may occur the first time, but after that it should start the Problem Step Recorder automatically.
First the script is going to verify that the directory where the recording will be saved exists. If it does not, it will ask if you want it to be created.
Next the script is going to detect the operating system and it will configure the screen capture value to the maximum setting allowed.
You may or may not see the following; if you do, this information is only stating what the screen capture limit is set to and where the recording will be located when it is stopped. You will need to stop the PSR manually so that the recording is saved into the specified directory provided inside "InvokePSR.psm1".
Note: I recommend that you close the Problem Step Recorder and re-run the "PSR.ps1" script again so that you will get a new file name automatically and not overwrite the one you just created. Simple as that.
One of the things that I have configured in my lab is for the PSR to automatically launch when I log on to a system. The reasoning behind this is that when the PSR starts automatically as I log into a lab server, it is ready to capture screenshots. Sometimes I simply forget to start the PSR when I am working on something that needed to be captured. There is nothing like trying to remember what you did, what happened, or what needed to be documented, only to end up back at step one.
Open up the Task Scheduler and create a new task. On the "General" tab provide a name; you can name it whatever you like. On the "Triggers" tab, create a trigger to execute "At log on". On the "Actions" tab, input the following:
Action: Start a program
Program/script: PowerShell.exe
Arguments: -Command "Import-Module C:\Invoke-PSR\InvokePSR.psm1";" Invoke-PSR -Start -RunAsAdministrator"
Hit the "OK" button when done, and the next time you log in it will fire off the Problem Step Recorder. KEEP IN MIND that these files take up space and will fill up a drive over time, so you need to clean them out when no longer needed.
If you would like a copy of these scripts you can find them here:
Here is what the scripts look like inside.
PSR.ps1:

Import-Module "C:\Invoke-PSR\InvokePSR.psm1"
Invoke-PSR -Start -RunAsAdministrator

InvokePSR.psm1:

Function Invoke-PSR {
    <#
    .SYNOPSIS
    Starts psr.exe
    .DESCRIPTION
    Starts psr.exe with specified parameters
    .PARAMETER Output
    Specify the output file and path of the psr.exe recorded session, the default value is C:\Temp\PSR_(2-digit month).(2-digit day).(4-digit year).Hour-Minute(AM/PM).zip
    .PARAMETER MaxScreenCapture
    Specify the maximum number of screen captures psr.exe will capture. Valid values are 1-999. Note that on Windows 7 and earlier versions the max value is 100
    .PARAMETER Start
    Specify if you want to start psr.exe with the recording started
    .PARAMETER RunAsAdministrator
    Specify that you want to start psr.exe with elevated permissions
    .EXAMPLE
    Invoke-PSR -Start -RunAsAdministrator
    This will start an elevated psr.exe with the recording started, with a maximum of 999 screen captures, and save the recording to C:\Temp\PSR_09.07.2016.10-37PM.zip
    .EXAMPLE
    Invoke-PSR -Output C:\Temp\PSRRecording.zip -MaxScreenCapture 100 -Start
    This will start psr.exe with the recording started, with a maximum of 100 screen captures, and save the recording to C:\Temp\PSRRecording.zip
    .NOTES
    Authors and Contributors:
    Jeramy Evers
    Lynne Taggart
    Import-Module "C:\Invoke-PSR\InvokePSR.psm1"
    Invoke-PSR -Start -RunAsAdministrator
    .LINK
    Current Version: 1.0
    Date Created: 9/7/2016
    Last Modified: 9/7/2016
    - Version 4.0 created
    - Script edited and tested
    #>
    [CmdletBinding()]
    Param (
        [parameter(Mandatory=$False)][System.IO.FileInfo]$Output="C:\Temp\$(Get-Date -UFormat "PSR_%m.%d.%Y.%I-%M%p.zip")",
        [parameter(Mandatory=$False)][Int][ValidateRange(1,999)]$MaxScreenCapture=999,
        [parameter(Mandatory=$False)][Switch]$Start,
        [parameter(Mandatory=$False)][Switch]$RunAsAdministrator
    )

    # Windows 7 / Server 2008 R2 and earlier (kernel 6.1 or lower) cap psr.exe at 100 screen captures.
    $OSVersion = [System.Environment]::OSVersion.Version
    If ($OSVersion.Major -eq 6 -and $OSVersion.Minor -le 1 -and $MaxScreenCapture -gt 100) {
        Write-Warning "PSR.exe on Windows version $($OSVersion.ToString()) has a limitation of a maximum of 100 screen captures and $MaxScreenCapture was specified, so setting maximum screen captures to 100`n"
        $MaxScreenCapture = 100
    }

    # Offer to create the output directory if it does not exist yet.
    If ((Split-Path $Output -Parent | Test-Path) -eq $False) {
        Write-Warning "$(Split-Path $Output -Parent) doesn't exist. Do you want the directory created?"
        Do {
            $KeyPress = Read-Host -Prompt "Y/N"
        } Until ($KeyPress -eq "Y" -or $KeyPress -eq "N")
        If ($KeyPress -eq "Y") { New-Item $(Split-Path $Output -Parent) -Type Directory }
        If ($KeyPress -eq "N") { Break }
    }

    If ($Start) {
        Write-Host "Starting psr.exe with $MaxScreenCapture maximum screen captures, recording started and saving to $Output" -ForeGroundColor Green
        If ($RunAsAdministrator) {
            Start-Process psr.exe -ArgumentList "/output $Output /maxsc $MaxScreenCapture /start" -Verb RunAs
        }
        Else {
            Start-Process psr.exe -ArgumentList "/output $Output /maxsc $MaxScreenCapture /start"
        }
    }
    Else {
        Write-Host "Starting psr.exe with $MaxScreenCapture maximum screen captures and saving to $Output" -ForeGroundColor Green
        If ($RunAsAdministrator) {
            Start-Process psr.exe -ArgumentList "/output $Output /maxsc $MaxScreenCapture" -Verb RunAs
        }
        Else {
            Start-Process psr.exe -ArgumentList "/output $Output /maxsc $MaxScreenCapture"
        }
    }
}

Export-ModuleMember -Function Invoke-PSR

I hope you enjoy this script.
https://docs.microsoft.com/en-us/archive/blogs/allthat/automatic-configured-problem-step-recorder-psr
2020-02-17T05:09:45
CC-MAIN-2020-10
1581875141653.66
[]
docs.microsoft.com
Lab Ops Part 10–Scale Out File Servers:
- $StorageSubSsytem is actually an instance of a class, as you can see when I reference its unique ID as in $StorageSubSsytem.Unique!
- The use of the pipe command to pass objects along a process, and the simple where clause to filter objects by any one of their properties. BTW we can easily find the properties of any object with get-member as in .. .
The last few steps can of course be done in PowerShell as well, so here's how I do that, but note that this is my live demo script so some of the bits are slightly different.
https://docs.microsoft.com/en-us/archive/blogs/andrew/lab-ops-part-10scale-out-file-servers
2020-02-17T05:29:13
CC-MAIN-2020-10
1581875141653.66
[array(['https://msdnshared.blob.core.windows.net/media/TNBlogsFS/prod.evol.blogs.technet.com/CommunityServer.Blogs.Components.WeblogFiles/00/00/00/58/34/metablogapi/7455.tiered-storage6_thumb_36465E5E.png', 'tiered storage6 tiered storage6'], dtype=object) ]
docs.microsoft.com
Discovery Plugin
This guide describes how to install the discovery-plugin into orcharhino. The discovery-plugin is used to provision already existing hosts. This can be very useful when orcharhino is not able to create the hosts itself.
Installation
During the installation of orcharhino with the orcharhino installer (see Installation Guide), select the discovery-plugin at the plugin installation step. This will automatically install and configure the discovery-plugin. In that case the following installation steps aren't necessary.
For adding the discovery-plugin to an existing orcharhino installation, use the following command:
foreman-installer --enable-foreman-plugin-discovery
On a Smart Proxy use:
foreman-installer --enable-foreman-proxy-plugin-discovery
In case of installing the discovery-plugin manually, please use the following command to install the Foreman Discovery Image:
yum install orcharhino-fdi
Usage on Hosts
For hosts to be discovered by the plugin, they have to boot the Foreman Discovery Image. This image will automatically be added to the default network-boot-menu.
Important: Hosts already managed by orcharhino might not have this boot-menu entry. This is due to orcharhino creating a custom network-boot-menu for the managed hosts.
Note: Keep in mind that the host has to receive its IP address from orcharhino's DHCP server to be able to boot from the network.
If the Foreman Discovery Image entry is not present, you might have to re-generate the default menu by clicking Build PXE Default on the Provisioning Templates page.
Without intervention, the host configures the network interface via DHCP and automatically sends system information to orcharhino. If the automatic report does not work for your setup, reboot the Foreman Discovery Image and press any key on the welcome screen of the image (see screenshot). This provides extended settings, such as:
- IP configuration: e.g. manual IP setting, using a specific interface or adding a VLAN ID
- Credentials: defines the address of the orcharhino server to report to.
- Custom facts: set your own facts about the host. This can be used for easier identification of the system later or (with Discovery Rules) to automatically decide what to deploy on this host.
If this was successful, the host will show a Discovery Status window showing the results and a menu of further actions:
- Resend sends a new report to orcharhino
- Status shows general status information about the report
- Facts shows all the information reported to orcharhino
- Logs provides debug information
- SSH allows you to enable/disable SSH access to the host (also defines the SSH root password)
- Reboot reboots the host
Warning: If SSH access has been enabled and an SSH session exists, the session will not automatically be terminated if you disable the SSH access again.
Refreshing this data can be done in two ways: either from orcharhino by clicking on the Refresh facts button (the Provision button must be expanded for that), or from the host itself by clicking on the aforementioned Resend button provided by the booted Foreman Discovery Image.
At this stage the hosts are discovered, but not yet deployed. A discovered host can be deployed by clicking on the Provision button. In the upcoming pop-up you may specify a Host Group to provision the host, as well as the Organization and the Location. Then you can either click Create Host to start the provisioning (given there are no problems with the settings) or hit Customize Host to get to the Provision Host form, which is similar to the Create Host form. If a Host Group was selected, the Provision Host form is prefilled with the values defined in the Host Group.
There is also the possibility to automatically provision discovered hosts. It is essential that at least one Discovery rule has been defined and matches the host that should be provisioned. Discovery rules are defined on the discovery rules page (see also Discovery Rules from the Management UI chapter). They are used to provision a set of hosts that have similar facts with the same configuration. A Discovery rule defines the following data that is used to provision the hosts that match the rule's search criteria:
- Host Group: as defined on the Host Groups page
- Hostname: can be set dynamically using the eRuby template language
- Location
- Organization
The Hosts Limit value defines the maximum number of hosts that can be provisioned using this rule. A value of 0 disables the limitation. If multiple rules match the same discovered hosts, the rule with the lowest Priority value will take precedence.
You can review the discovered hosts that would match a Discovery Rule by clicking on the rule's Discovered Hosts button. To see the hosts that already have been provisioned with this rule, click on Associated Hosts. Furthermore, a rule can be temporarily disabled using the Disable button or deleted with the Delete button.
The Discovery Rules can be applied to the discovered hosts by clicking on one of the Auto Provision buttons on the Discovered Hosts page. This can be done for all discovered hosts (1) or for a single one (2) by initiating Auto Provision manually on the Discovered Hosts page. Another way to auto provision hosts is to let orcharhino initiate it automatically as soon as the hosts report themselves.
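Discovered hosts can also be inspected outside the web UI through the Foreman REST API that orcharhino exposes. The following is a small, hedged Python sketch; the server URL, credentials, certificate path, and the exact JSON layout are assumptions, and the /api/v2/discovered_hosts endpoint is only available when the discovery plugin is installed.

# Hedged sketch: list discovered hosts via the Foreman/orcharhino REST API.
# URL, credentials, and response layout below are assumptions.
import requests

BASE_URL = "https://orcharhino.example.com"
AUTH = ("admin", "changeme")

resp = requests.get(BASE_URL + "/api/v2/discovered_hosts",
                    auth=AUTH, verify="/etc/pki/tls/certs/ca.crt")
resp.raise_for_status()

for host in resp.json().get("results", []):
    # Each entry typically carries the generated name and the reported MAC address.
    print(host.get("name"), host.get("mac"))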
https://docs.orcharhino.com/sources/plugin_guides/discovery_plugin.html
2020-02-17T03:07:57
CC-MAIN-2020-10
1581875141653.66
[array(['../../_images/boot_menu.png', 'Discovery Boot Menu'], dtype=object) array(['../../_images/image_welcome.png', 'Welcome screen'], dtype=object) array(['../../_images/discovered_hosts.png', 'List of discovered hosts'], dtype=object) array(['../../_images/discovery_rules.png', 'Discovery rule specification'], dtype=object) array(['../../_images/hosts_auto_provision.png', 'Auto provision'], dtype=object) ]
docs.orcharhino.com
How Splunk Enterprise handles your data
Splunk Enterprise consumes data and indexes it, transforming it into searchable knowledge in the form of events. The data pipeline shows the main processes that act on the data during indexing. These processes constitute event processing. After the data is processed into events, you can associate the events with knowledge objects to enhance their usefulness.
The data pipeline
Incoming data moves through the data pipeline. Heavy forwarders can parse data locally and then forward the parsed data on to receiving indexers, where the final indexing occurs. Universal forwarders offer minimal parsing in specific cases such as handling structured data files. Additional parsing occurs on the receiving indexer.
https://docs.splunk.com/Documentation/Splunk/6.3.1/Data/WhatSplunkdoeswithyourdata
2020-02-17T03:39:22
CC-MAIN-2020-10
1581875141653.66
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
(Advanced) Custom Scripts
This section is for advanced users experienced in programming languages. You can insert custom scripts written in PHP, JavaScript, or VBScript in any page of your website.
To insert a script in a webpage:
Go to the Modules tab, select Script, and drag the module to the page.
Paste the code into the input field. For PHP, use the opening tag <?php. Make sure the code you insert into this field is correct, as Presence Builder does not validate it.
Click OK. Your code will be active only on the published website.
To remove a script:
Place the mouse pointer over the script block and click Remove.
https://docs.plesk.com/en-US/obsidian/administrator-guide/website-management/creating-sites-with-presence-builder/editing-websites/content-text-tables-images-video-forms-and-scripts/advanced-custom-scripts.69150/
2020-02-17T04:54:55
CC-MAIN-2020-10
1581875141653.66
[]
docs.plesk.com
A script interface for a projector component.
The Projector can be used to project any material onto the scene - just like a real world projector. The properties exposed by this class are an exact match for the values in the Projector's inspector. It can be used to implement blob or projected shadows. You could also project an animated texture or a render texture that films another part of the scene. The projector will render all objects in its view frustum with the provided material.
There is no shortcut property in GameObject or Component to access the Projector, so you must use GetComponent to do it:

function Start() {
    // Get the projector
    var proj : Projector = GetComponent (Projector);
    // Use it
    proj.nearClipPlane = 0.5;
}

using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour {
    void Start() {
        Projector proj = GetComponent<Projector>();
        proj.nearClipPlane = 0.5F;
    }
}

See Also: projector component.
https://docs.unity3d.com/530/Documentation/ScriptReference/Projector.html
2020-02-17T05:31:31
CC-MAIN-2020-10
1581875141653.66
[]
docs.unity3d.com
# Order
An order is an instruction from the account owner to the Matcher to buy or sell the token. The time is specified in milliseconds that have passed since the beginning of the Unix epoch.
# Order binary format
See the Order binary format page.
# Amount and price in order
In an order, the amount and price are represented in normalized (i.e. integer) format. To convert the amount to normalized format, it is multiplied by 10^amountAssetDecimals. To convert the price to normalized format, it is multiplied:
- in version 1, 2, 3 orders: by 10^(8 + priceAssetDecimals - amountAssetDecimals),
- in version 4 orders: by 10^8.
In version 4 orders, the price asset quantity in normalized format is calculated by the following formula:
spendAmount = amount × price × 10^(priceAssetDecimals - amountAssetDecimals - 8),
where
- spendAmount is the quantity of price asset in normalized format,
- amount is the amount in normalized format,
- price is the price in normalized format,
- priceAssetDecimals is the price asset's number of decimals,
- amountAssetDecimals is the amount asset's number of decimals.
# Calculation example for order v4
Let's review the purchase of 2.13 Tidex for 0.35 WAVES per Tidex. The amount asset is Tidex, so amountAssetDecimals equals 2. The price asset is WAVES, so priceAssetDecimals equals 8. The amount value in normalized format equals 2.13 × 10^amountAssetDecimals = 213. The price value in normalized format equals 0.35016774 × 10^8 = 35016774. The quantity of price asset in normalized format is calculated by the formula: 213 × 35016774 × 10^(8 - 2 - 8) = 74585728. As a result, 0.74585728 WAVES will be exchanged for 2.13 Tidex.
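The normalization rules above map directly onto integer arithmetic. The following Python sketch is illustrative only (it is not part of the Waves documentation) and reproduces the v4 worked example; real implementations should prefer exact decimal arithmetic over floats.

# Illustrative sketch of the order v4 normalization described above.
def normalize_amount(amount, amount_asset_decimals):
    # amount * 10^amountAssetDecimals; round() guards against float representation error.
    return round(amount * 10 ** amount_asset_decimals)

def normalize_price_v4(price):
    # In version 4 orders the price is always scaled by 10^8.
    return round(price * 10 ** 8)

def spend_amount_v4(norm_amount, norm_price, price_asset_decimals, amount_asset_decimals):
    # spendAmount = amount * price * 10^(priceAssetDecimals - amountAssetDecimals - 8)
    exponent = price_asset_decimals - amount_asset_decimals - 8
    raw = norm_amount * norm_price
    if exponent >= 0:
        return raw * 10 ** exponent
    return raw // 10 ** (-exponent)   # truncation matches the worked example

# Worked example: buying 2.13 Tidex at 0.35016774 WAVES per Tidex.
amount = normalize_amount(2.13, 2)        # 213
price = normalize_price_v4(0.35016774)    # 35016774
print(spend_amount_v4(amount, price, 8, 2))   # 74585728, i.e. 0.74585728 WAVES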
https://docs.wavesplatform.com/en/blockchain/order
2020-02-17T03:30:00
CC-MAIN-2020-10
1581875141653.66
[]
docs.wavesplatform.com
Note: Microsoft Windows XP Service Pack 2 enables Windows Firewall, which closes port 445 by default. Because Microsoft SQL Server communicates over port 445, you must reopen the port if SQL Server is configured to listen for incoming client connections using named pipes. For information on configuring a firewall, see "How to: Configure a Firewall for SQL Server Access" in SQL Server Books Online or review your firewall documentation.
Connecting to the Local Server
Verifying Your Connection Protocol
The following query will return the protocol used for the current connection.
SELECT net_transport FROM sys.dm_exec_connections WHERE session_id = @@SPID;
Examples
Connecting by server name to the default pipe:
Alias Name: <serveralias>
Pipe Name: <blank>
Protocol: Named Pipes
Server: <servername>
Connecting by IP Address to the default pipe:
Alias Name: <serveralias>
Pipe Name: <leave blank>
Protocol: Named Pipes
Server: <IPAddress>
Connecting by server name to a non-default pipe:
Alias Name: <serveralias>
Pipe Name: \\<servername>\pipe\unit\app
Protocol: Named Pipes
Server: <servername>
Connecting by server name to a named instance:
Alias Name: <serveralias>
Pipe Name: \\<servername>\pipe\MSSQL$<instancename>\SQL\query
Protocol: Named Pipes
Server: <servername>
Connecting to the local computer using localhost:
Alias Name: <serveralias>
Pipe Name: <blank>
Protocol: Named Pipes
Server: localhost
Connecting to the local computer using a period:
Alias Name: <serveralias>
Pipe Name: <left blank>
Protocol: Named Pipes
Server: .
Note: To specify the network protocol as a sqlcmd parameter, see "How to: Connect to the Database Engine Using sqlcmd.exe" in SQL Server Books Online.
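The same verification can be run from application code. Below is a heavily hedged Python sketch using pyodbc; the ODBC driver name, the pipe path, and the use of the "np:" prefix to request the Named Pipes protocol are assumptions that depend on your client stack, so treat it as a starting point rather than a reference configuration.

# Hedged sketch: requesting a named pipes connection from Python via pyodbc.
# Driver name, pipe path, and authentication settings below are assumptions.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    r"SERVER=np:\\servername\pipe\sql\query;"
    "DATABASE=master;"
    "Trusted_Connection=yes;")

row = conn.cursor().execute(
    "SELECT net_transport FROM sys.dm_exec_connections WHERE session_id = @@SPID;").fetchone()
print(row[0])   # expected to report the transport actually used for this session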
https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008/ms189307(v=sql.100)
2018-04-19T17:59:55
CC-MAIN-2018-17
1524125937015.7
[]
docs.microsoft.com
Getting OpenDCRE
OpenDCRE's source can be found on GitHub, where the OpenDCRE Docker image will need to be built. Alternatively, a pre-built OpenDCRE image can be downloaded from DockerHub.
Building from Source
Once the OpenDCRE source code is downloaded (either via git clone, or downloaded zip), a Docker image can be built. No additional changes are required to the source for a complete, functioning image, but customizations can be included in the image, e.g. the inclusion of site-specific TLS certificates, nginx configurations for authn/authz, etc.
The included dockerfile can be used to package up the distribution:
docker build -t opendcre:custom-v1.3.0 -f dockerfile/Dockerfile.x64 .
A Makefile recipe also exists to build the OpenDCRE image and tag it as vaporio/opendcre-<arch>:1.3, where <arch> specifies the architecture (e.g., x64). For the x64 architecture, this recipe is:
make x64
If building a custom image, apply whatever tag is most descriptive for that image. At this point, OpenDCRE can be tested (see Testing) and run (see Running OpenDCRE) to ensure the build was successful.
Downloading from DockerHub
If no changes are needed to the source, the pre-packaged version can be used. It can be downloaded from DockerHub simply with
docker pull vaporio/opendcre:1.3.0
http://opendcre-docs.readthedocs.io/en/latest/installation.html
2018-04-19T17:19:17
CC-MAIN-2018-17
1524125937015.7
[]
opendcre-docs.readthedocs.io
7.45. ( Developers ) - I want to help with IP Masquerade development. What can I do? Join the Linux IP Masquerading DEVELOPERS list and ask the developers there what you can do to help. For more details on joining the list, check out Section 7.5 FAQ section. Please DON'T ask NON-IP-Masquerade development related questions there!!!!
http://tldp.docs.sk/howto/linux-ip-masquerade/developers.html
2018-04-19T17:34:14
CC-MAIN-2018-17
1524125937015.7
[]
tldp.docs.sk
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. BatchPredictionthat includes detailed metadata, status, and data file information for a Batch Predictionrequest. Namespace: Amazon.MachineLearning Assembly: AWSSDK.dll Version: (assembly version) An ID assigned to the BatchPrediction at creation. .NET Framework: Supported in: 4.5, 4.0, 3.5
https://docs.aws.amazon.com/sdkfornet/latest/apidocs/items/MMachineLearningMachineLearningGetBatchPredictionStringNET35.html
2018-04-19T17:52:05
CC-MAIN-2018-17
1524125937015.7
[]
docs.aws.amazon.com
Configuring Zone Properties
Applies To: Windows Server 2008, Windows Server 2008 R2
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc753398(v=ws.11)
2018-04-19T18:34:24
CC-MAIN-2018-17
1524125937015.7
[]
docs.microsoft.com
DHCP Tab
Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2, Windows Server 2008, Windows Server 2008 R2
The client that is booting and the server communicate using Dynamic Host Configuration Protocol (DHCP) packets. The settings on the DHCP tab allow you to configure the server's DHCP settings. To view this tab, right-click the server in the MMC snap-in, and then click Properties. For more information, see Configuring DHCP ().
This tab contains the following settings:
- Do not listen on port 67. Select this check box if DHCP is running on the same server as Windows Deployment Services (that is, this server).
- Configure DHCP option 60 to indicate that this server is also a PXE server.
Note the following about this option:
- When you select this option, clients booting from the network are always notified that the PXE server is available, even if the server is not operational or has stopped.
- There are some scenarios (particularly those that require running a DHCP server) that do not support adding custom DHCP option 60 on the same physical computer as the Windows Deployment Services server. In these circumstances, it is possible to configure the server to bind to User Datagram Protocol (UDP) port 67 in nonexclusive mode by passing the SO_REUSEADDR option. For more information, see Using SO_REUSEADDR and SO_EXCLUSIVEADDRUSE ().
- If DHCP is installed on a server that is located in a different subnet, you will need to do one of the following: configure your IP Helper tables (recommended) or add DHCP options 66 and 67. For more information about these settings, see Managing Network Boot Programs ().
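To make the SO_REUSEADDR behaviour concrete, the sketch below shows what binding UDP port 67 in non-exclusive mode looks like at the socket level. It is a generic Python illustration of the socket option, not Windows Deployment Services code, and it requires administrative privileges because port 67 is privileged.

# Generic illustration of binding UDP port 67 non-exclusively with SO_REUSEADDR.
# Requires elevated privileges; not actual Windows Deployment Services code.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Allow another service (for example a DHCP server) to share the address binding.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("0.0.0.0", 67))

print("Listening for DHCP/PXE traffic on UDP 67 (non-exclusive bind)")
data, addr = sock.recvfrom(4096)   # blocks until a datagram arrives
print("Received %d bytes from %s" % (len(data), addr))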
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc772021(v=ws.11)
2018-04-19T18:40:14
CC-MAIN-2018-17
1524125937015.7
[]
docs.microsoft.com
PPI::Token::Whitespace - Tokens representing ordinary white space
NAME
PPI::Token::Whitespace - Tokens representing ordinary white space
INHERITANCE
PPI::Token::Whitespace isa PPI::Token isa PPI::Element
DESCRIPTION
As a full "round-trip" parser, PPI records every last byte in a file and ensures that it is included in the PPI::Document object. This even includes whitespace. In fact, Perl documents are seen as "floating in a sea of whitespace", and thus any document will contain vast quantities of PPI::Token::Whitespace objects.
For the most part, you shouldn't notice them. Or at least, you shouldn't have to notice them. This means doing things like consistently using the "S for significant" series of PPI::Node and PPI::Element methods. If you want the nth child element, you should be using schild rather than child, and likewise snext_sibling, sprevious_sibling, and so on and so forth.
METHODS
Again, for the most part you should really not need to do anything very significant with whitespace. But there are a couple of convenience methods provided, beyond those provided by the parent PPI::Token and PPI::Element classes.
null
Because PPI sees documents as sitting on a sort of substrate made of whitespace, there are a couple of corner cases that get particularly nasty if they don't find whitespace in certain places. Imagine walking down the beach to go into the ocean, and then quite unexpectedly falling off the side of the planet. Well it's somewhat equivalent to that, including the whole screaming death bit.
The null method is a convenience provided to get some internals out of some of these corner cases. Specifically it creates a whitespace token that represents nothing, or at least the null string ''. It's a handy way to have some "whitespace" right where you need it, without having to have any actual characters.
tidy
tidy is a convenience method for removing unneeded whitespace. Specifically, it removes any whitespace from the end of a line. Note that this doesn't include POD, where you may well need to keep certain types of whitespace. The entire POD chunk lives in its own PPI::Token::Pod object.
http://docs.activestate.com/activeperl/5.26/perl/lib/PPI/Token/Whitespace.html
2018-04-19T17:43:01
CC-MAIN-2018-17
1524125937015.7
[]
docs.activestate.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Using the optional Qualifier parameter, you can specify a specific function version for which you want this information. If you don't specify this parameter, the API uses the unqualified function ARN, which returns information about the $LATEST version of the Lambda function. For more information, see AWS Lambda Function Versioning and Aliases.
This operation requires permission for the lambda:GetFunction action.
Namespace: Amazon.Lambda
Assembly: AWSSDK.dll
Version: (assembly version)
Container for the necessary parameters to execute the GetFunction service method.
.NET Framework: Supported in: 4.5, 4.0, 3.5
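For comparison with the .NET signature above, the following is a minimal, hedged sketch of the same GetFunction call from Python with boto3; the function name and qualifier are placeholders.

# Hedged sketch: GetFunction from Python with boto3 (names are placeholders).
import boto3

lam = boto3.client("lambda")
response = lam.get_function(FunctionName="my-function", Qualifier="1")

# Configuration carries the function metadata; Code.Location is a pre-signed
# URL for downloading the deployment package.
print(response["Configuration"]["FunctionArn"])
print(response["Code"]["Location"])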
https://docs.aws.amazon.com/sdkfornet/latest/apidocs/items/MLambdaILambdaGetFunctionGetFunctionRequestNET35.html
2018-04-19T17:44:09
CC-MAIN-2018-17
1524125937015.7
[]
docs.aws.amazon.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Used by workers to get an ActivityTask from the specified activity taskList. This initiates a long poll, where the service holds the HTTP connection open and responds as soon as a task becomes available. The maximum time the service holds on to the request before responding is 60 seconds. If no task is available within 60 seconds, the poll will return an empty result. An empty result, in this context, means that an ActivityTask is returned, but that the value of taskToken is an empty string. If a task is returned, the worker should use its type to identify and process it correctly.
Container for the necessary parameters to execute the PollForActivityTask service method.
.NET Framework: Supported in: 4.5, 4.0, 3.5
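For comparison, here is a minimal, hedged sketch of the same long poll from Python with boto3; the domain, task list, and worker identity are placeholders.

# Hedged sketch: polling an activity task list from Python with boto3 (names are placeholders).
import boto3

swf = boto3.client("swf")
task = swf.poll_for_activity_task(
    domain="my-domain",
    taskList={"name": "default-tasks"},
    identity="worker-1")

# An empty taskToken means the 60-second long poll expired without a task.
if task.get("taskToken"):
    print("Got activity:", task["activityType"]["name"])
else:
    print("Long poll timed out with no task; poll again.")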
https://docs.aws.amazon.com/sdkfornet/latest/apidocs/items/MSimpleWorkflowISimpleWorkflowPollForActivityTaskPollForActivityTaskRequestNET45.html
2018-04-19T17:47:45
CC-MAIN-2018-17
1524125937015.7
[]
docs.aws.amazon.com
Kajiki provides an XML-based template language that is heavily inspired by Kid and Genshi, which in turn were inspired by a number of existing template languages, namely XSLT, TAL, and PHP. This document describes the template language and will be most useful as a reference to those developing Kajiki XML templates. Templates are XML files of some kind (such as XHTML) that include processing directives (elements or attributes identified by a separate namespace) that affect how the template is rendered, and template expressions that are dynamically substituted by variable data.
Directives are elements and/or attributes in the template that are identified by the namespace py:. They can affect how the template is rendered in a number of ways: Kajiki provides directives for conditionals and looping, among others. All directives can be applied as attributes, and some can also be used as elements. The if directive for conditionals, for example, can be used in both ways:
<html> ...
  <div py:
    <p>Bar</p>
  </div>
... </html>
This is basically equivalent to the following:
<html> ...
  <py:if ...
... </html>
py:if
py:switch
The py:switch directive, in combination with the directives py:case and py:else, provides advanced conditional processing for rendering one of several alternatives. The first matching py:case branch is rendered, or, if no py:case branch matches, the py:else branch is rendered. The nested py:case directives will be tested for equality to the parent py:switch value:
<div>
  <py:switch
    <span py:case="0">0</span>
    <span py:case="1">1</span>
    <span py:case="2">2</span>
  </py:switch>
</div>
This would produce the following output:
<div>
  <span>1</span>
</div>
Note: The py:switch directive can only be used as a standalone tag and cannot be applied as an attribute of a tag.
py:for
py:def
py:with
py:attrs
Note: This directive can only be used as an attribute.
py:content
This directive replaces any nested content with the result of evaluating the expression:
<ul>
  <li py:content="bar">Hello</li>
</ul>
Given bar='Bye' in the context data, this would produce:
<ul>
  <li>Bye</li>
</ul>
This directive can only be used as an attribute.
py:replace
py:strip
To reuse common snippets of template code, you can include other files using py:include and py:import. In the FileLoader you would use <py:include ..., whereas in the PackageLoader you would use <py:include ...
With py:import, you can make the functions defined in another template available without expanding the full template in-place. Suppose that we saved the following template in a file lib.xml:
<py:def
  <py:if ...>even</py:if><py:else>odd</py:else>
</py:def>
Then (using the FileLoader) we could write a template using the evenness function as follows:
<div>
  <py:import
  <ul>
    <li py:$i is ${lib.evenness(i)}</li>
  </ul>
</div>
Kajiki is a fast template engine which is 90% compatible with Genshi: all of Genshi's directives work in Kajiki too, apart from those involved in template inheritance, as Kajiki uses blocks instead of XInclude and XPath. Simple template hierarchies (like the one coming from a TurboGears quickstart) can be moved to Kajiki blocks in a matter of seconds through the gearbox patch command.
Note: Please note that this guide only works on version 2.3.6 and greater.
Note: It's suggested to try these steps on a newly quickstarted Genshi application and then test them on your real apps when you are confident with the whole process.
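Before diving into the TurboGears-specific migration, note that Kajiki templates can also be compiled and rendered directly from Python for quick experiments. The following is a small sketch using kajiki.XMLTemplate; the template source and context values are just examples, not taken from this guide.

# Small sketch: compiling and rendering a Kajiki template directly from Python.
from kajiki import XMLTemplate

Template = XMLTemplate(
    '<ul><li py:for="i in range(3)">$i is ${"even" if i % 2 == 0 else "odd"}</li></ul>')

# Instantiate the compiled template with a context dict and render it to a string.
print(Template(dict()).render())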
Enabling Kajiki support involves changing the base_config.default_renderer option in your app_cfg.py and adding kajiki to the renderers:

# Add kajiki support
base_config.renderers.append('kajiki')
# Set the default renderer
base_config.default_renderer = 'kajiki'

The only template we will need to adapt by hand is our master.html template, everything else will be done automatically. So the effort of porting an application from Genshi to Kajiki is the same regardless of the size of the application.
First of all we will need to remove the py:strip and xmlns attributes from the html tag:

<html xmlns="" xmlns:

should become:

<html>

Then let's adapt our head tag to make it so that the content from templates that extend our master gets included inside it:

<head py:
  <meta name="viewport" content="width=device-width, initial-scale=1.0"/>
  <meta charset="${response.charset}" />
  <title py:Your generic title goes here</title>
  <meta py:
  <link rel="stylesheet" type="text/css" media="screen" href="${tg.url('/css/bootstrap.min.css')}" />
  <link rel="stylesheet" type="text/css" media="screen" href="${tg.url('/css/style.css')}" />
</head>

should become:

<head>
  <meta name="viewport" content="width=device-width, initial-scale=1.0"/>
  <meta charset="${response.charset}" />
  <title py:Your generic title goes here</title>
  <py:block
  <link rel="stylesheet" type="text/css" media="screen" href="${tg.url('/css/bootstrap.min.css')}" />
  <link rel="stylesheet" type="text/css" media="screen" href="${tg.url('/css/style.css')}" />
</head>

Then we do the same with the body tag by disabling it as a block and placing a block with the same name inside of it:

<body py:
  <!-- Navbar -->
  [...]
  <div class="container">
    <!-- Flash messages -->
    <py:with
      <div class="row">
        <div class="col-md-8 col-md-offset-2">
          <div py:
        </div>
      </div>
    </py:with>
    <!-- Main included content -->
    <div py:
  </div>
</body>

Which should become:

<body>
  <!-- Navbar -->
  [...]
  <div class="container">
    <!-- Flash messages -->
    <py:with
      <div class="row">
        <div class="col-md-8 col-md-offset-2">
          <div py:
        </div>
      </div>
    </py:with>
    <!-- Main included content -->
    <py:block
  </div>
</body>

What we did is replace all the XPath expressions that led to inserting content from the child templates into head and body with two head and body blocks. So our child templates will be able to rely on those blocks to inject their content into the master.
The last important step is renaming the master template; as Kajiki in TurboGears uses the .xhtml extension, we will need to rename master.html to master.xhtml:

$ find ./ -iname 'master.html' -exec sh -c 'mv {} `dirname {}`/master.xhtml' \;

Note: The previous expression will rename the master file if run from within your project directory.
There are four things we need to do to upgrade all our child templates to Kajiki:
- Replace xi:include with py:extends
- Strip <html> tags to avoid a duplicated root tag
- Replace <head> tag with a kajiki block
- Replace <body> tag with a kajiki block
To perform those changes we can rely on a simple but helpful gearbox command to patch all our templates by replacing xi:include with py:extends, which is used and recognized by Kajiki. Just move inside the root of your project and run:

$ gearbox patch -R '*.html' 'xi:include href="master.html"' -r 'py:extends href="master.xhtml"'

You should get an output similar to:

6 files matching
! Patching /private/tmp/prova/prova/templates/about.html
! Patching /private/tmp/prova/prova/templates/data.html
! Patching /private/tmp/prova/prova/templates/environ.html
! Patching /private/tmp/prova/prova/templates/error.html
! Patching /private/tmp/prova/prova/templates/index.html
! Patching /private/tmp/prova/prova/templates/login.html

Which means that all our templates apart from master.html got patched properly and now correctly use py:extends. Now we can start adapting our tags to move them to kajiki blocks.
First of all we will need to strip the html tag from all the templates apart from master.xhtml to avoid ending up with a duplicated root tag:

$ gearbox patch -R '*.html' 'xmlns=""' -r 'py:strip=""'

Then we will need to do the same for the head tag:

$ gearbox patch -R '*.html' '<head>' -r '<head py:'

Then repeat for body:

$ gearbox patch -R '*.html' '<body>' -r '<body py:'

Now that all our templates got upgraded from Genshi to Kajiki, we must remember to rename them all, like we did for master:

$ find ./ -iname '*.html' -exec sh -c 'mv {} `dirname {}`/`basename {} .html`.xhtml' \;

Restarting your application now should lead to a properly working page equal to the original Genshi one. Congratulations, you successfully moved your templates from Genshi to Kajiki.
http://turbogears.readthedocs.io/en/latest/turbogears/kajiki-xml-templates.html
2018-04-19T17:06:50
CC-MAIN-2018-17
1524125937015.7
[]
turbogears.readthedocs.io
VPC Flow Logs
Flow logs can help you with a number of tasks; for example, to troubleshoot why specific traffic is not reaching an instance, which in turn helps you diagnose overly restrictive security group rules. You can also use flow logs as a security tool to monitor the traffic that is reaching your instance. There is no additional charge for using flow logs; however, standard CloudWatch Logs charges apply. For more information, see Amazon CloudWatch Pricing.
Topics
Flow Logs Basics
You can create a flow log for a VPC, a subnet, or a network interface. If you create a flow log for a subnet or VPC, each network interface in the VPC or subnet is monitored. Flow log data is published to a log group in CloudWatch Logs, and each network interface has a unique log stream. Log streams contain flow log records, which are log events consisting of fields that describe the traffic for that network interface. For more information, see Flow Log Records.
To create a flow log, you specify the resource for which to create the flow log, the type of traffic to capture (accepted traffic, rejected traffic, or all traffic), the name of a log group in CloudWatch Logs to which the flow log is published, and the ARN of an IAM role that has sufficient permissions to publish the flow log to the CloudWatch Logs log group. If you specify the name of a log group that does not exist, we attempt to create the log group for you. After you've created a flow log, it can take several minutes to begin collecting data and publishing to CloudWatch Logs. Flow logs do not capture real-time log streams for your network interfaces.
You can create multiple flow logs that publish data to the same log group in CloudWatch Logs.
If you launch more instances into your subnet after you've created a flow log for your subnet or VPC, then a new log stream is created for each new network interface as soon as any network traffic is recorded for that network interface.
You can create flow logs for network interfaces that are created by other AWS services; for example, Elastic Load Balancing, Amazon RDS, Amazon ElastiCache, Amazon Redshift, and Amazon WorkSpaces. However, you cannot use these services' consoles or APIs to create the flow logs; you must use the Amazon EC2 console or the Amazon EC2 API. Similarly, you cannot use the CloudWatch Logs console or API to create log streams for your network interfaces.
If you no longer require a flow log, you can delete it. Deleting a flow log disables the flow log service for the resource, and no new flow log records or log streams are created. It does not delete any existing flow log records or log streams for a network interface. To delete an existing log stream, you can use the CloudWatch Logs console. After you've deleted a flow log, it can take several minutes to stop collecting data.
Flow Log Limitations
To use flow logs, you need to be aware of the following limitations:
- You cannot enable flow logs for network interfaces that are in the EC2-Classic platform. This includes EC2-Classic instances that have been linked to a VPC through ClassicLink.
- You cannot enable flow logs for VPCs that are peered with your VPC unless the peer VPC is in your account.
- You cannot tag a flow log.
- After you've created a flow log, you cannot change its configuration; for example, you can't associate a different IAM role with the flow log. Instead, you can delete the flow log and create a new one with the required configuration.
None of the flow log API actions (ec2:*FlowLogs) support resource-level permissions. To create an IAM policy to control the use of the flow log API actions, you must grant users permissions to use all resources for the action by using the * wildcard for the resource element in your statement. For more information, see Controlling Access to Amazon VPC Resources.
If your network interface has multiple IPv4 addresses and traffic is sent to a secondary private IPv4 address, the flow log displays the primary private IPv4 address in the destination IP address field.
Flow logs do not capture all IP traffic. The following types of traffic are not logged:
- Traffic generated by instances when they contact the Amazon DNS server. If you use your own DNS server, then all traffic to that DNS server is logged.
- Traffic generated by a Windows instance for Amazon Windows license activation.
- Traffic to and from 169.254.169.254 for instance metadata.
- Traffic to and from 169.254.169.123 for the Amazon Time Sync Service.
- DHCP traffic.
- Traffic to the reserved IP address for the default VPC router. For more information, see VPC and Subnet Sizing.
- Traffic between an endpoint network interface and a Network Load Balancer network interface. For more information, see VPC Endpoint Services (AWS PrivateLink).
Flow Log Records
A flow log record represents a network flow in your flow log. Each record captures the network flow for a specific 5-tuple, for a specific capture window. A 5-tuple is a set of five different values that specify the source, destination, and protocol for an internet protocol (IP) flow. The capture window is a duration of time during which the flow logs service aggregates data before publishing flow log records. The capture window is approximately 10 minutes, but can take up to 15 minutes.
A flow log record is a space-separated string that has the following format:
version account-id interface-id srcaddr dstaddr srcport dstport protocol packets bytes start end action log-status
If a field is not applicable for a specific record, the record displays a '-' symbol for that entry. For examples of flow log records, see Examples: Flow Log Records.
You can work with flow log records as you would with any other log events collected by CloudWatch Logs. For more information about monitoring log data and metric filters, see Searching and Filtering Log Data in the Amazon CloudWatch User Guide. For an example of setting up a metric filter and alarm for a flow log, see Example: Creating a CloudWatch Metric Filter and Alarm for a Flow Log. You can export log data to Amazon S3 and use Amazon Athena, an interactive query service, to analyze the data. For more information, see Querying Amazon VPC Flow Logs in the Amazon Athena User Guide.
IAM Roles for Flow Logs
The IAM role that's associated with your flow log must have sufficient permissions to publish flow logs to the specified log group in CloudWatch Logs.
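Because each record is a fixed, space-separated set of fields, it is straightforward to parse programmatically. The following is a small illustrative Python sketch (not from the AWS documentation) that splits a record into named fields; the sample record is one of the examples shown later on this page.

# Illustrative sketch: parse a version 2 flow log record into named fields.
FIELDS = ["version", "account_id", "interface_id", "srcaddr", "dstaddr",
          "srcport", "dstport", "protocol", "packets", "bytes",
          "start", "end", "action", "log_status"]

def parse_flow_log_record(line):
    # A '-' value simply means the field was not applicable for this record.
    return dict(zip(FIELDS, line.split()))

sample = ("2 123456789010 eni-abc123de 172.31.16.139 172.31.16.21 "
          "20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK")
rec = parse_flow_log_record(sample)
print(rec["action"], rec["dstport"], rec["bytes"])   # ACCEPT 22 4249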
The IAM policy that's attached to your IAM role must include at least the following permissions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

You must also ensure that your role has a trust relationship that allows the flow logs service to assume the role (in the IAM console, choose your role, and then choose Edit trust relationship to view the trust relationship):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": { "Service": "vpc-flow-logs.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}

Alternatively, you can follow the procedures below to create a new role for use with flow logs.
Creating a Flow Logs Role
To create a flow logs role, create a new role in the IAM console, and then:
Select the name of your role. Under Permissions, choose Add inline policy. Choose the JSON tab. In the section IAM Roles for Flow Logs above, copy the first policy and paste it in the window. Choose Review policy. Enter a name for your policy, and then choose Create policy.
In the section IAM Roles for Flow Logs above, copy the second policy (the trust relationship), and then choose Trust relationships, Edit trust relationship. Delete the existing policy document, and paste in the new one. When you are done, choose Update Trust Policy.
On the Summary page, take note of the ARN for your role. You need this ARN when you create your flow log.
Controlling the Use of Flow Logs
By default, IAM users do not have permission to work with flow logs. You can create an IAM user policy that grants users the permissions to create, describe, and delete flow logs. To create a flow log, users must have permissions to use the iam:PassRole action for the IAM role that's associated with the flow log. The following is an example policy that grants users full permissions to create, describe, and delete flow logs, and view flow log records in CloudWatch Logs.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DeleteFlowLogs",
        "ec2:CreateFlowLogs",
        "ec2:DescribeFlowLogs",
        "logs:GetLogEvents"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [ "iam:PassRole" ],
      "Resource": "arn:aws:iam::account:role/flow-log-role-name"
    }
  ]
}

For more information about permissions, see Granting IAM Users Required Permissions for Amazon EC2 Resources in the Amazon EC2 API Reference.
Working With Flow Logs
You can work with flow logs using the Amazon EC2, Amazon VPC, and CloudWatch consoles.
Creating a Flow Log
You can create a flow log from the VPC page and the Subnet page in the Amazon VPC console, or from the Network Interfaces page in the Amazon EC2 console.
To create a flow log for a network interface:
Open the Amazon EC2 console. In the navigation pane, choose Network Interfaces. Select a network interface, choose the Flow Logs tab, and then choose Create Flow Log. In the dialog box, complete the following information. When you are done, choose Create Flow Log:
Filter: Select whether the flow log should capture rejected traffic, accepted traffic, or all traffic.
Role: Specify the name of an IAM role that has permissions to publish logs to CloudWatch Logs.
Destination Log Group: Enter the name of a log group in CloudWatch Logs to which the flow logs are to be published. You can use an existing log group, or you can enter a name for a new log group, which we create for you.
To create a flow log for a VPC or a subnet:
Open the Amazon VPC console.
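The console steps above can also be scripted. Below is a minimal, hedged boto3 sketch of the equivalent CreateFlowLogs call; the VPC ID, log group name, and role ARN are placeholders you would replace with your own values.

# Hedged sketch: create a flow log for a VPC with boto3 (IDs and ARNs are placeholders).
import boto3

ec2 = boto3.client("ec2")
response = ec2.create_flow_logs(
    ResourceIds=["vpc-1a2b3c4d"],
    ResourceType="VPC",                       # or 'Subnet' / 'NetworkInterface'
    TrafficType="ALL",                        # ACCEPT, REJECT, or ALL
    LogGroupName="my-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::123456789010:role/flow-log-role-name")

print(response["FlowLogIds"])                 # IDs of the flow logs that were created
print(response.get("Unsuccessful", []))       # any resources that could not be processed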
In the navigation pane, choose Your VPCs, or choose Subnets. Select your VPC or subnet, choose the Flow Logs tab, and then choose Create Flow Log.
Note: To create flow logs for multiple VPCs, choose the VPCs, and then select Create Flow Log from the Actions menu. To create flow logs for multiple subnets, choose the subnets, and then select Create Flow Log from the Subnet Actions menu.
In the dialog box, complete the following information. When you are done, choose Create Flow Log:
Filter: Select whether the flow log should capture rejected traffic, accepted traffic, or all traffic.
Role: Specify the name of an IAM role that has permission to publish logs to CloudWatch Logs.
Destination Log Group: Enter the name of a log group in CloudWatch Logs to which the flow logs will be published. You can use an existing log group, or you can enter a name for a new log group, which we'll create for you.
Viewing Flow Logs
You can view information about your flow logs in the Amazon EC2 and Amazon VPC consoles by viewing the Flow Logs tab for a specific resource. When you select the resource, all the flow logs for that resource are listed. The information displayed includes the ID of the flow log, the flow log configuration, and information about the status of the flow log.
To view information about your flow logs for your network interfaces:
Open the Amazon EC2 console. In the navigation pane, choose Network Interfaces. Select a network interface, and choose the Flow Logs tab. Information about the flow logs is displayed on the tab.
To view information about your flow logs for your VPCs or subnets:
Open the Amazon VPC console. In the navigation pane, choose Your VPCs, or choose Subnets. Select your VPC or subnet, and then choose the Flow Logs tab. Information about the flow logs is displayed on the tab.
You can view your flow log records using the CloudWatch Logs console. It may take a few minutes after you've created your flow log for it to be visible in the console.
To view your flow log records for a flow log:
Open the CloudWatch console. In the navigation pane, choose Logs. Choose the name of the log group that contains your flow log. A list of log streams for each network interface is displayed. Choose the name of the log stream that contains the ID of the network interface for which you want to view the flow log records. For more information about flow log records, see Flow Log Records.
Deleting a Flow Log
You can delete a flow log using the Amazon EC2 and Amazon VPC consoles.
Note: These procedures disable the flow log service for a resource. To delete the log streams for your network interfaces, use the CloudWatch Logs console.
To delete a flow log for a network interface:
Open the Amazon EC2 console. In the navigation pane, choose Network Interfaces, and then select the network interface. Choose the Flow Logs tab, and then choose the delete button (a cross) for the flow log to delete. In the confirmation dialog box, choose Yes, Delete.
To delete a flow log for a VPC or subnet:
Open the Amazon VPC console. In the navigation pane, choose Your VPCs, or choose your Subnets, and then select the resource. Choose the Flow Logs tab, and then choose the delete button (a cross) for the flow log to delete. In the confirmation dialog box, choose Yes, Delete.
Troubleshooting
Incomplete Flow Log Records
If your flow log records are incomplete, or are no longer being published, there may be a problem delivering the flow logs to the CloudWatch Logs log group.
In either the Amazon EC2 console or the Amazon VPC console, go to the Flow Logs tab for the relevant resource. For more information, see Viewing Flow Logs. The flow logs table displays any errors in the Status column. Alternatively, use the describe-flow-logs command, and check the value that's returned in the DeliverLogsErrorMessage field. One of the following errors may be displayed: Rate limited: This error can occur if CloudWatch logs throttling has been applied — when the number of flow log records for a network interface is higher than the maximum number of records that can be published within a specific timeframe. This error can also occur if you've reached the limit on the number of CloudWatch Logs log groups that you can create. For more information, see CloudWatch Limits in the Amazon CloudWatch User Guide. Access error: The IAM role for your flow log does not have sufficient permissions to publish flow log records to the CloudWatch log group. For more information, see IAM Roles for Flow Logs. Unknown error: An internal error has occurred in the flow logs service. Flow Log is Active, But No Flow Log Records or Log Group You've created a flow log, and the Amazon VPC or Amazon EC2 console displays the flow log as Active. However, you cannot see any log streams in CloudWatch Logs, or your CloudWatch Logs log group has not been created. The cause may be one of the following: The flow log is still in the process of being created. In some cases, it can take tens of minutes after you've created the flow log for the log group to be created, and for data to be displayed. There has been no traffic recorded for your network interfaces yet. The log group in CloudWatch Logs is only created when traffic is recorded. API and CLI Overview You can perform the tasks described on this page using the command line or API. For more information about the command line interfaces and a list of available API actions, see Accessing Amazon VPC. Create a flow log create-flow-logs (AWS CLI) New-EC2FlowLogs (AWS Tools for Windows PowerShell) CreateFlowLogs (Amazon EC2 Query API) Describe your flow logs describe-flow-logs (AWS CLI) Get-EC2FlowLogs (AWS Tools for Windows PowerShell) DescribeFlowLogs (Amazon EC2 Query API) View your flow log records (log events) get-log-events (AWS CLI) Get-CWLLogEvents (AWS Tools for Windows PowerShell) GetLogEvents (CloudWatch API) Delete a flow log delete-flow-logs (AWS CLI) Remove-EC2FlowLogs (AWS Tools for Windows PowerShell) DeleteFlowLogs (Amazon EC2 Query API) Examples: Flow Log Records Flow Log Records for Accepted and Rejected Traffic The following is an example of a flow log record in which SSH traffic (destination port 22, TCP protocol) to network interface eni-abc123de in account 123456789010 was allowed. 2 123456789010 eni-abc123de 172.31.16.139 172.31.16.21 20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK The following is an example of a flow log record in which RDP traffic (destination port 3389, TCP protocol) to network interface eni-abc123de in account 123456789010 was rejected. 2 123456789010 eni-abc123de 172.31.9.69 172.31.9.12 49761 3389 6 20 4249 1418530010 1418530070 REJECT OK Flow Log Records for No Data and Skipped Records The following is an example of a flow log record in which no data was recorded during the capture window. 2 123456789010 eni-1a2b3c4d - - - - - - - 1431280876 1431280934 - NODATA The following is an example of a flow log record in which records were skipped during the capture window. 
2 123456789010 eni-4b118871 - - - - - - - 1431280876 1431280934 - SKIPDATA Security Group and Network ACL Rules If you're using flow logs to diagnose overly restrictive or permissive security group rules or network ACL rules, then be aware of the statefulness of these resources. Security groups are stateful — this means that responses to allowed traffic are also allowed, even if the rules in your security group do not permit it. Conversely, network ACLs are stateless, therefore responses to allowed traffic are subject to network ACL rules. For example, you use the ping command from your home computer (IP address is 203.0.113.12) to your instance (the network interface's private IP address is 172.31.16.139). Your security group's inbound rules allow ICMP traffic and the outbound rules do not allow ICMP traffic; however, because security groups are stateful, the response ping from your instance is allowed. Your network ACL permits inbound ICMP traffic but does not permit outbound ICMP traffic. Because network ACLs are stateless, the response ping is dropped and will not reach your home computer. In a flow log, this is displayed as 2 flow log records: An ACCEPTrecord for the originating ping that was allowed by both the network ACL and the security group, and therefore was allowed to reach your instance. A REJECTrecord for the response ping that the network ACL denied. If your network ACL permits outbound ICMP traffic, the flow log displays two ACCEPT records (one for the originating ping and one for the response ping). If your security group denies inbound ICMP traffic, the flow log displays a single REJECT record, because the traffic was not permitted to reach your instance. Flow Log Record for IPv6 Traffic The following is an example of a flow log record in which SSH traffic (port 22) from IPv6 address 2001:db8:1234:a100:8d6e:3477:df66:f105 to network interface eni-f41c42bf in account 123456789010 was allowed. 2 123456789010 eni-f41c42bf 2001:db8:1234:a100:8d6e:3477:df66:f105 2001:db8:1234:a102:3304:8879:34cf:4071 34892 22 6 54 8855 1477913708 1477913820 ACCEPT OK Example: Creating a CloudWatch Metric Filter and Alarm for a Flow Log In this example, you have a flow log for eni-1a2b3c4d. You want to create an alarm that alerts you if there have been 10 or more rejected attempts to connect to your instance over TCP port 22 (SSH) within a 1 hour time period. First, you must create a metric filter that matches the pattern of the traffic for which you want to create the alarm. Then, you can create an alarm for the metric filter. To create a metric filter for rejected SSH traffic and create an alarm for the filter Open the CloudWatch console at. In the navigation pane, choose Logs, select the flow log group for your flow log, and then choose Create Metric Filter. In the Filter Pattern field, enter the following: [version, account, eni, source, destination, srcport, destport="22", protocol="6", packets, bytes, windowstart, windowend, action="REJECT", flowlogstatus] In the Select Log Data to Test list, select the log stream for your network interface. You can optionally choose Test Pattern to view the lines of log data that match the filter pattern. When you're ready, choose Assign Metric. Provide a metric namespace, a metric name, and ensure that the metric value is set to 1. When you're done, choose Create Filter. In the navigation pane, choose Alarms, and then choose Create Alarm. In the Custom Metrics section, choose the namespace for the metric filter that you created. 
Note It can take a few minutes for a new metric to display in the console. Select the metric name that you created, and then choose Next. Enter a name and description for the alarm. In the is fields, choose >= and enter 10. In the for field, leave the default 1 for the consecutive periods. Choose 1 Hour from the Period list, and Sum from the Statistic list. The Sumstatistic ensures that you are capturing the total number of data points for the specified time period. In the Actions section, you can choose to send a notification to an existing list, or you can create a new list and enter the email addresses that should receive a notification when the alarm is triggered. When you are done, choose Create Alarm.
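If you prefer to automate these steps rather than use the console, the flow log, metric filter, and alarm can also be created programmatically. The following is a minimal sketch using Python and boto3 that mirrors the console procedures above; the region, resource IDs, role ARN, log group name, and SNS topic ARN are placeholders that you would replace with your own values.
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')
logs = boto3.client('logs', region_name='us-east-1')
cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

# Create a flow log for a VPC (placeholder IDs and role ARN).
ec2.create_flow_logs(
    ResourceIds=['vpc-1a2b3c4d'],
    ResourceType='VPC',
    TrafficType='ALL',
    LogGroupName='my-flow-logs',
    DeliverLogsPermissionArn='arn:aws:iam::123456789010:role/flow-log-role-name',
)

# Metric filter that counts rejected SSH connection attempts,
# using the same filter pattern as the console procedure above.
logs.put_metric_filter(
    logGroupName='my-flow-logs',
    filterName='rejected-ssh',
    filterPattern='[version, account, eni, source, destination, srcport, destport="22", '
                  'protocol="6", packets, bytes, windowstart, windowend, action="REJECT", flowlogstatus]',
    metricTransformations=[{
        'metricName': 'RejectedSSHCount',
        'metricNamespace': 'FlowLogMetrics',
        'metricValue': '1',
    }],
)

# Alarm on 10 or more rejections within a 1-hour period.
cloudwatch.put_metric_alarm(
    AlarmName='rejected-ssh-alarm',
    MetricName='RejectedSSHCount',
    Namespace='FlowLogMetrics',
    Statistic='Sum',
    Period=3600,
    EvaluationPeriods=1,
    Threshold=10,
    ComparisonOperator='GreaterThanOrEqualToThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:123456789010:my-alarm-topic'],
)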
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/flow-logs.html
2018-04-19T17:59:04
CC-MAIN-2018-17
1524125937015.7
[]
docs.aws.amazon.com
Poster Companion Reference: Hyper-V Virtual Machine Mobility Applies To: Windows Server 2012 This document is part of a companion reference discussing the Windows Server® 2012 Hyper-V® Component Architecture Poster. This document refers to the poster section titled “Hyper-V Virtual Machine Mobility” and discusses new live migration features in Windows Server 2012, including live migration without shared storage (also known as “shared nothing live migration”), live migration with SMB shared storage, storage migration, and live migration with failover clusters. 1. Understanding Live Migration 2. New Functionality in Windows Server 2012 Live Migration 3. Windows Server 2012 Live Migration Requirements 4. Live Migration Without Shared Storage (“Shared Nothing Live Migration”) 5. Understanding Live Migration Without Shared Storage - 7. Understanding Storage Migration 8. Live Migration with Failover Clusters 9. Live Migration with SMB Shared Storage 10. Windows Server Component Architecture Posters You can obtain the Windows Server 2012 Hyper-V Component Architecture Poster from the Microsoft Download Center. This is a free resource. You can download it here: 1. Understanding Live Migration In Windows Server 2008 R2, live migration was introduced in Hyper-V—live migration with failover clusters. It provides you with the capability to move a virtual machine from one node in a Windows Server 2008 R2 failover cluster to another node without a perceived interruption in service. Live migration quickly became one of the most popular features in Hyper-V and one of the most useful tools for managing a virtualized environment. It is very helpful to be able to migrate a virtual machine off your server that needs maintenance, then later migrate it back when the maintenance is complete. You can also use live migration to manage high resource utilization periods in your environments by moving virtual machines to new servers that have greater performance or storage capacities. 2. New Functionality in Windows Server 2012 Live Migration In Windows Server 2012 Hyper-V, new functionality extends live migration beyond just live migration with failover clusters. Now it works without any requirement for a failover cluster (live migration without shared storage or “shared nothing” live migration), allows you to migrate your virtual disk storage (storage migration), and works with your new SMB 3.0 file shares (live migration with SMB shared storage). 3. Windows Server 2012 Live Migration Requirements All types of Windows Server 2012 live migrations have the following requirements: Two or more servers running Windows Server 2012 with the Hyper-V role installed that: Support hardware virtualization. Use processors from the same manufacturer (for example, all AMD or all Intel) Are part of the same Active Directory® domain. Virtual machines that are configured to use virtual hard disks or virtual Fibre Channel disks (no direct-attached storage). A private network for live migration network traffic. Live migration with failover clusters has the following additional requirements: The Failover Clustering feature is enabled and configured. Cluster Shared Volume (CSV) storage in the failover cluster is enabled. Live migration with SMB shared storage has the following additional requirements: All files on a virtual machine (such as virtual hard disks, snapshots, and configuration) are stored on a Server Message Block share. 
Permissions on the SMB share must be configured to grant access to the computer accounts of all servers running Hyper-V. Live migration without shared storage (“shared nothing” live migration) has no additional requirements. 4. Live Migration Without Shared Storage (“Shared Nothing Live Migration”) Live migration without shared storage (also known as “shared nothing” live migration) is new in Windows Server 2012. It enables you to migrate your virtual machines and their associated storage between servers running Hyper-V within the same domain. This kind of live migration requires only an Ethernet connection. Note that this type of live migration does not provide high availability—live migration with failover clusters offers this benefit with its shared storage. Live migration without shared storage offers the following benefits: It allows increased flexibility of your virtual machine placements. It increases administrator flexibility. It reduces downtime for migrations across cluster boundaries. Many administrators are using live migration without shared storage for more dynamically evolving and managing their infrastructure. For example, if you are experiencing hardware problems with your server running Hyper-V, you can use this kind of live migration to quickly move the entire virtual machine to another server while you troubleshoot the original server. Then, you can simply move it back later. There is no failover cluster and no complex detailed configuration required. Using live migration without shared storage, you can also migrate your virtual machines between clusters, and from a non-clustered computer into a failover cluster. Additionally, you can migrate virtual machines between different storage types. You can initiate live migration without shared storage using Windows PowerShell™. .jpeg) It is easy to complete a live migration without shared storage operation. You use Hyper-V Manager to select the virtual machine, and then choose the option to move the virtual machine. A wizard guides you through the process, but you need to make a few basic decisions as you migrate. For example, you must decide on how to store your virtual machine on the destination server running Hyper-V. You can move the virtual machine data to a single location, or select individual locations for each component of your virtual machine (for example virtual hard disks). You can also choose to move only the virtual machine. If you choose this option, make sure your virtual hard disks are on shared storage. 5. Understanding Live Migration Without Shared Storage During the operation of live migration without shared storage, your virtual machine continues to run while all of its storage is mirrored across to the destination server running Hyper-V. After the Hyper-V storage is synchronized, the live migration completes its remaining tasks. Finally, the mirroring stops and the storage on the source server is deleted. When you perform a live migration of a virtual machine between two computers that do not share storage, Hyper-V first performs a partial migration of the virtual machine’s storage. Hyper-V performs the following steps: .jpeg) Throughout most of the move operation, disk reads and writes go to the source virtual hard disk (VHD). While reads and writes occur on the source virtual hard disk, the disk contents are copied (or mirrored). 6. 
Storage Migration In just about every organization, having the flexibility to manage storage without affecting the availability of your virtual machine workloads is a key capability. IT administrators require this flexibility to perform maintenance on storage subsystems, upgrade storage appliance firmware and software, respond to dynamic requests for organization resources, and balance loads as capacity is consumed—all without shutting down virtual machines. In Windows Server 2008 R2, you could move a running instance of a virtual machine by using live migration—but you were not able to move the virtual machine storage while the virtual machine was running. However, in Windows Server 2012 Hyper-V, you are now able to move the virtual machine storage while it is running using the new functionality called storage migration. Hyper-V storage migration enables you to move virtual machine storage (virtual hard disks (VHDs)) without downtime, and enables several new usage scenarios. For example, you can add more physical disk storage to a non-clustered computer or a Hyper-V failover cluster, and then move the virtual machines to the new storage while the virtual machines continue to run. You have two options to implement storage migration. You can perform a storage migration using the Hyper-V Manager (using the GUI) or through Windows PowerShell and associated scripts. The requirements for storage migration are Windows Server 2012 with the Hyper-V role installed and virtual machines configured to use virtual hard disks for storage. Storage migration allows you to migrate a complete virtual machine to another location—on either your current server or on shared storage (for example, an SMB 3.0 file share). It offers you a large variety of migration options. These include moving your virtual machine data to a single location, moving your virtual machine data to different locations, or just moving your virtual machine’s virtual hard disks. If you choose to move different components, you can specify virtual hard disks, current configuration, snapshots, and Smart Paging. 7. Understanding Storage Migration An important thing to understand about storage migration is that you can perform storage migration when your virtual machine is running or when it is turned off. Note that storage migration moves the storage, but not your virtual machine state. When you move a running virtual machine’s storage (virtual hard disks), Hyper-V performs the following steps. .jpeg) synchronized, the virtual machine switches over to using the destination virtual hard disk. The source virtual hard disk is deleted. 8. Live Migration with Failover Clusters Failover clustering provides your organization with protection against application and service failure, system and hardware failure (such as CPUs, drives, memory, network adapters, and power supplies), and site failure (which could be caused by natural disaster, power outages, or connectivity outages). The Windows Server Failover Clustering feature enables high-availability solutions for many workloads, and has included Hyper-V support since Hyper-V was released. By clustering your virtualized workloads, you can increase reliability and availability, and can enable access to your server-based applications in times of planned or unplanned downtime. Earlier in this guide, section 5 discussed live migration without shared storage. This is outside of a failover cluster and does not deliver high availability. 
However, Windows Server 2012 continues to offer a high availability solution that uses Failover Clustering with Hyper-V. Hyper-V live migration with failover clusters (first introduced in Windows Server 2008 R2) enables you to move running virtual machines from one clustered server (node) running Hyper-V to another server (node) without any disruption or perceived loss of service. Note that live migration is initiated by the administrator and is a planned operation. In Windows Server 2012, you can improve on this migration process by selecting multiple virtual machines within the failover cluster and performing multiple simultaneous live migrations of those virtual machines. You can also select and queue live migrations of multiple virtual machines. It is important to remember that live migration queuing is only supported within a failover cluster. As with all types of live migration, you can initiate live migration with failover clusters using Windows PowerShell. 9. Live Migration with SMB Shared Storage Some customers do not need to invest in clustering hardware to meet their business objectives. In fact, many organizations already have existing investments in simple SMB storage that they would like to leverage. In Windows Server 2012, you can now use live migration outside a failover cluster when your virtual machine is stored on a Server Message Block (SMB) share. Live migration with SMB shared storage enables you to move your virtual machines between servers running Hyper-V within the same domain while your virtual machine storage remains on the SMB-based file server. This allows you to benefit from virtual machine mobility without having to invest in a failover clustering infrastructure. Hosting providers and similar environments frequently need this capability. Windows Server 2012 also supports concurrent live migrations. Depending on your organizational infrastructure, live migration operations can be very fast—you are not migrating your virtual storage, so you are not moving large virtual hard disks across your network. For live migration with SMB shared storage, your virtual hard disk (VHD) is on an SMB 3–based file server. Once you begin this migration, the actual running state of your virtual machine is migrated from one server to another. It is important to note that the connection to the SMB storage is migrated, but your virtual hard disk never moves. As with live migration without shared storage (“shared nothing” live migration), Windows Server 2012 makes it easy to migrate your virtual machine on an SMB file server. You can start the live migration using Hyper-V Manager (which is GUI-based), or you can initiate it using Windows PowerShell (see the Move-VM cmdlet). There are a few things to keep in mind if you want to use live migration with SMB shared storage. Apart from having servers running Hyper-V that are part of the same Active Directory domain, you should keep all the virtual machine files (such as virtual hard disks, snapshots, and configuration files) stored on an SMB 3–based file server. In addition, you need to make sure you configure permissions on your SMB file server correctly. This means that you need to grant access to the computer accounts of all your servers running Hyper-V. Also, you need to use Windows Server 2012 on servers that support hardware virtualization, and these servers need to use processors from the same manufacturer (AMD-V or Intel VT), the same requirement as in Windows Server 2008 and Windows Server 2008 R2. 10. Windows Server Component Architecture Posters
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn641214(v=ws.11)
2018-04-19T18:02:05
CC-MAIN-2018-17
1524125937015.7
[array(['images%5cdn641214.4a459350-e76d-4d07-870d-286c22a4e0f3(ws.11', None], dtype=object) array(['images%5cdn641214.07945dff-5426-432d-8ba9-e0b06484c5d5(ws.11', None], dtype=object) array(['images%5cdn641214.ca8fbc89-df1a-4804-b3c8-c4e4c0a327c5(ws.11', None], dtype=object) array(['images%5cdn641214.00af69c0-e664-41c2-b414-385605a9daec(ws.11', None], dtype=object) array(['images%5cdn641214.2187d26b-d319-4a1e-8fbd-0207ab2deb43(ws.11', None], dtype=object) array(['images%5cdn641210.8672fa8a-7f2b-4e0d-891e-a1c996ec0b9b(ws.11', None], dtype=object) array(['images%5cdn641210.79d8e27d-e855-46d3-98d2-7ef795e349b9(ws.11', None], dtype=object) array(['images%5cdn641210.ca80bf2c-b748-475e-b4d5-b2dcb6f1bff1(ws.11', None], dtype=object) ]
docs.microsoft.com
This guide focuses on the AWS SDK for PHP client for Amazon Elastic Compute Cloud. This guide assumes that you have already downloaded and installed the AWS SDK for PHP. See Installation for more information on getting started. First you need to create a client object using one of the following techniques. The easiest way to get up and running quickly is to use the Aws\Ec2\Ec2Client::factory() method and provide your credential profile (via the profile option), which identifies the set of credentials you want to use from your ~/.aws/credentials file (see Using the AWS credentials file and credential profiles). A region parameter is required. You can find a list of available regions using the Regions and Endpoints reference.
use Aws\Ec2\Ec2Client;
$client = Ec2Client::factory(array(
    'profile' => '<profile in your aws credentials file>',
    'region' => '<region name>'
));
You can provide your credential profile as in the preceding example, specify your access keys directly (via key and secret), or you can choose to omit any credential information if you are using AWS Identity and Access Management (IAM) roles for EC2 instances or credentials sourced from the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables. Note The profile option and AWS credential file support is only available for version 2.6.1 of the SDK and higher. We recommend that all users update their copies of the SDK to take advantage of this feature, which is a safer way to specify credentials than explicitly providing key and secret. A more robust way to connect to Amazon Elastic Compute Cloud is through the service builder, which loads credentials and other settings from a configuration file so they can be shared across all clients. You create the builder from your configuration file and then retrieve the client from the builder by namespace (Aws here is the Aws\Common\Aws service builder class):
$aws = Aws::factory('/path/to/my_config.php');
$client = $aws->get('Ec2');
For more information about configuration files, see Configuring the SDK. Please see the Amazon Elastic Compute Cloud Client API reference for details about all of the available methods, including descriptions of the inputs and outputs.
https://docs.aws.amazon.com/aws-sdk-php/v2/guide/service-ec2.html
2018-04-19T17:59:01
CC-MAIN-2018-17
1524125937015.7
[]
docs.aws.amazon.com
Installation Guide
This page gives instructions on how to build and install the xgboost package from scratch on various systems. It consists of two steps:
- First build the shared library from the C++ code (libxgboost.so for Linux/OS X and libxgboost.dll for Windows).
- Exception: for R-package installation please refer directly to the R package section.
- Then install the language packages (e.g. the Python Package).
Important: the newest version of xgboost uses submodules to maintain packages. So when you clone the repo, remember to use the recursive option as follows.
git clone --recursive
For Windows users who use the GitHub tools, you can open the Git shell and type the following commands.
git submodule init
git submodule update
Please refer to the Trouble Shooting section first if you have any problems during installation. If the instructions do not work for you, please feel free to ask questions at xgboost/issues, or even better, send a pull request if you can fix the problem.
Contents
Python Package Installation
The Python package is located at python-package. There are several ways to install the package:
Install system-wide, which requires root permission:
cd python-package; sudo python setup.py install
You will however need the Python distutils module for this to work. It is often part of the core Python package, or it can be installed using your package manager, e.g. in Debian use
sudo apt-get install python-setuptools
NOTE: If you recompiled xgboost, then you need to reinstall it again to make the new library take effect.
Only set the environment variable PYTHONPATH to tell Python where to find the library. For example, assume we cloned xgboost into the home directory ~. Then we can add the following line in ~/.bashrc. This is recommended for developers who may change the code: the changes will be immediately reflected once you pull the code and rebuild the project (no need to call setup again).
export PYTHONPATH=~/xgboost/python-package
Install only for the current user:
cd python-package; python setup.py develop --user
If you are installing the latest xgboost version which requires compilation, add MinGW to the system PATH:
import os
os.environ['PATH'] = os.environ['PATH'] + ';C:\\Program Files\\mingw-w64\\x86_64-5.3.0-posix-seh-rt_v4-rev0\\mingw64\\bin'
R Package Installation
You can install the R package from CRAN just like other packages, or you can install it from our weekly updated drat repo:
install.packages("drat", repos="")
drat:::addRepo("dmlc")
install.packages("xgboost", repos="", type = "source")
If you would like to use the latest xgboost version and have already compiled xgboost, use library(devtools); install('xgboost/R-package') to install the xgboost package manually (change the path accordingly to where you compiled xgboost).
For OS X users, the single-threaded version will be installed by default. To install the multi-threaded version, first follow Building on OSX to get the OpenMP-enabled compiler, then:
Set the Makevars file with the highest priority for R. The point is, there are three such files: ~/.R/Makevars, xgboost/R-package/src/Makevars, and /usr/local/Cellar/r/3.2.0/R.framework/Resources/etc/Makeconf (the last one obtained by running file.path(R.home("etc"), "Makeconf") in R), and SHLIB_OPENMP_CXXFLAGS is not set by default!! After trying, it seems that the first one has the highest priority (surprise!).
Then inside R, run
install.packages("drat", repos="")
drat:::addRepo("dmlc")
install.packages("xgboost", repos="", type = "source")
Due to the usage of submodules, install_github is no longer supported for installing the latest version of the R package. To install the latest version, run the following bash script:
git clone --recursive
cd xgboost
git submodule init
git submodule update
alias make='mingw32-make'
cd dmlc-core
make -j4
cd ../rabit
make lib/librabit_empty.a -j4
cd ..
cp make/mingw64.mk config.mk
make -j4
Trouble Shooting
Compile failed after git pull
Please first update the submodules, clean all, and recompile:
git submodule update && make clean_all && make -j4
Compile failed after config.mk is modified
Need to clean all first:
make clean_all && make -j4
Makefile: dmlc-core/make/dmlc.mk: No such file or directory
We need to recursively clone the submodule; you can do:
git submodule init
git submodule update
Alternatively, do another clone:
git clone --recursive
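After the Python package is installed by any of the methods above, a quick smoke test confirms that the module Python imports is the one you just built. The snippet below is a minimal sketch rather than part of the official guide; it exercises only the core training API on synthetic data, so the parameter values are illustrative, not recommendations.
import numpy as np
import xgboost as xgb

# Tiny synthetic binary-classification problem, just to exercise the library.
X = np.random.rand(100, 5)
y = (X[:, 0] > 0.5).astype(int)

dtrain = xgb.DMatrix(X, label=y)
params = {'objective': 'binary:logistic', 'max_depth': 2, 'eta': 0.3, 'silent': 1}
bst = xgb.train(params, dtrain, num_boost_round=10)

preds = bst.predict(dtrain)
print(preds[:5])  # predicted probabilities between 0 and 1
If the import fails or picks up a stale build, it will show up here, which is usually a sign that PYTHONPATH points at an old copy or that the shared library was not rebuilt.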
http://xgboost.readthedocs.io/en/latest/build.html
2017-01-16T21:40:12
CC-MAIN-2017-04
1484560279368.44
[]
xgboost.readthedocs.io
If you just want to give Datawrapper a try, here is a brief run-through. Principle: First the data, then the chart With Datawrapper you can create basic, embeddable charts in four steps. This brief overview provides all the information needed to do that. The basic principle is that first there needs to be some data before the best visualization is selected. Hint: When working with Datawrapper you can jump between the main steps. All changes you make in one step will still be there when you come back. This is useful, e.g. when you need to change the data or need to transpose it. Step 1: Preparation of data and insertion into Datawrapper - Copy data into Excel or other software for tables Just make sure to always have a column or row name as a description. - Prepare the data table in Excel or a similar software Data can come from a Google spreadsheet too. You can even try to simply copy an HTML table from the web directly. Hint: Best results are achieved with cleaned data sets. This means that your data should be free of formatting, dots or comma separators in the numbers, and even currency symbols. In bigger projects it is a good idea to focus on one aspect in every visualization. If what can be found in the data is very interesting, just make several charts and place them in one article. - Log in to Datawrapper when you are done - Copy the data into Datawrapper If you are just starting, simply use the drop field in the first step to do that; this way you ensure that the data contains just the information, with no hidden elements from HTML or other software which sometimes mess up a visualization. Step 2: Check the data - Just look at it: Is all your data there and displayed as intended? - Mark the headers: If the first row or column contains headers, mark them in Datawrapper (there is a checkbox to do that). Step 3: Visualize - Select a chart type: Datawrapper offers basic, versatile chart types. You can experiment to see which type is best for your data. - Header and description: Tell your story; help the user to understand what is displayed in the chart. - Mark up single lines or rows that are important (if needed). - Additional features: Depending on the chart type you use, there are a number of options to make sure that your chart is displayed correctly. Step 4: Publish and embed - In the last step Datawrapper generates an embed code. You can adapt the size freely and then embed your chart into any website, using either a content management system or weblog software. Supported Browsers Charts created with Datawrapper are optimized for HTML5. In modern browsers such as Firefox, Chrome, Safari and newer versions of Internet Explorer they will display correctly. For users with older browsers Datawrapper generates a static image, losing some of the interactive features. This image is created automatically.
http://docs.datawrapper.de/quick-start/
2017-01-16T21:41:11
CC-MAIN-2017-04
1484560279368.44
[]
docs.datawrapper.de
pyramid.chameleon_text These APIs will work against template files which contain simple ${Genshi}-style replacement markers. The API of pyramid.chameleon_text is identical to that of pyramid.chameleon_zpt; only its import location is different. If you need to import an API function from this module as well as from the pyramid.chameleon_zpt module within the same view file, use the as feature of the Python import statement, e.g.:
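The following is a sketch of such aliased imports rather than a quote from the API listing; it assumes both modules expose a render_template_to_response function (substitute whichever API functions you actually need, and check the pyramid.chameleon_zpt and pyramid.chameleon_text references for the exact names):
from pyramid.chameleon_zpt import render_template_to_response as render_zpt_template_to_response
from pyramid.chameleon_text import render_template_to_response as render_text_template_to_response

# Each alias now refers unambiguously to one module's function.
# The function name and template path below are illustrative only.
# response = render_text_template_to_response('templates/note.txt', name='World')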
http://docs.pylonsproject.org/projects/pyramid/en/1.2-branch/api/chameleon_text.html
2017-01-16T21:43:52
CC-MAIN-2017-04
1484560279368.44
[]
docs.pylonsproject.org
The storage All Paths Down (APD) handling on your ESXi host is enabled by default. When it is enabled, the host continues to retry non-virtual machine I/O commands to a storage device in the APD state for a limited time period. When the time period expires, the host stops its retry attempts and terminates any non-virtual machine I/O. You can disable the APD handling feature on your host. About this task If you disable the APD handling, the host will indefinitely continue to retry issued commands in an attempt to reconnect to the APD device. Continuing to retry is the same behavior as in ESXi version 5.0. This behavior might cause virtual machines on the host to exceed their internal I/O timeout and become unresponsive or fail. The host might become disconnected from vCenter Server.
https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.storage.doc/GUID-54F4B360-F32D-41B0-BDA8-AEBE9F01AC72.html
2017-07-20T18:55:01
CC-MAIN-2017-30
1500549423320.19
[]
docs.vmware.com
Follow all best practices for securing a vCenter Server system to secure your vCenter Server Appliance. Additional steps help you make your appliance more secure. Configure NTP Ensure that all systems use the same relative time source. This time source must be in sync with an agreed-upon time standard such as Coordinated Universal Time (UTC). Synchronized systems are essential for certificate validation. NTP also makes it easier to track an intruder in log files. Incorrect time settings make it difficult to inspect and correlate log files to detect attacks, and make auditing inaccurate. See Synchronize the Time in the vCenter Server Appliance with an NTP Server. Restrict vCenter Server Appliance network access Restrict access to components that are required to communicate with the vCenter Server Appliance. Blocking access from unnecessary systems reduces the potential for attacks on the operating system. See Required Ports for vCenter Server and Platform Services Controller and Additional vCenter Server TCP and UDP Ports. Follow the guidelines in VMware KB article 2047585 to set up your environment with firewall settings that are compliant with the DISA STIG.
https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.security.doc/GUID-6975426F-56D0-4FE2-8A58-580B40D2F667.html
2017-07-20T18:54:48
CC-MAIN-2017-30
1500549423320.19
[]
docs.vmware.com