Dataset columns:
- content: string (length 0 to 557k)
- url: string (length 16 to 1.78k)
- timestamp: timestamp[ms]
- dump: string (length 9 to 15)
- segment: string (length 13 to 17)
- image_urls: string (length 2 to 55.5k)
- netloc: string (length 7 to 77)
Using audible alerts in Opsview

We recommend you use the Events view for audible alerts, as it can show you which changes occurred. However, if you want audible alerts from the Nagios® Core CGI pages, such as status.cgi, you must configure Opsview to allow this. Add the following block to the opsview.conf file within /usr/local/nagios/etc/:

$overrides .= <<'EOF';
cgi_host_unreachable_sound=state_change.wav
cgi_host_down_sound=state_change.wav
cgi_service_critical_sound=state_change.wav
cgi_service_warning_sound=state_change.wav
cgi_service_unknown_sound=state_change.wav
EOF

You may make your own custom sounds if you wish; just put them in the /usr/local/nagios/share/media directory. Edit the Apache proxy configuration (if one is being used) and ensure the following line exists in the ProxyPass directives:

ProxyPass /media !

Restart Apache and reload the Opsview configuration. Alerts should now be enabled from the Nagios Core CGIs.
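If you manage several Opsview servers, the edit above can be scripted. The snippet below is a minimal sketch, assuming the opsview.conf path given on this page, root privileges, and that the block has not been added already; it is not an official Opsview tool.

```python
# Append the audible-alert overrides described above to opsview.conf.
# Minimal sketch: assumes the default path from this page and root privileges.
OPSVIEW_CONF = "/usr/local/nagios/etc/opsview.conf"

OVERRIDES = """$overrides .= <<'EOF';
cgi_host_unreachable_sound=state_change.wav
cgi_host_down_sound=state_change.wav
cgi_service_critical_sound=state_change.wav
cgi_service_warning_sound=state_change.wav
cgi_service_unknown_sound=state_change.wav
EOF
"""

with open(OPSVIEW_CONF, "r+", encoding="utf-8") as conf:
    existing = conf.read()
    if "cgi_host_down_sound" not in existing:  # avoid appending the block twice
        conf.write("\n" + OVERRIDES)
```

After running it, reload the Opsview configuration as described above so the Nagios Core CGIs pick up the new settings.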
https://docs.opsview.com/doku.php?id=opsview4.6:audible_alerts
2019-02-16T01:55:36
CC-MAIN-2019-09
1550247479729.27
[]
docs.opsview.com
On the Select a template page of the wizard, you select a template to deploy from the list. This page appears only if you opened the New Virtual Machine wizard from a nontemplate inventory object, such as a host or cluster. If you opened the Convert Template to Virtual Machine wizard from a template, this page does not appear. Procedure - Browse or search to locate a template. - Select the template. - Click Next.
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.vm_admin.doc/GUID-C53589E4-F301-431C-926F-7BEAB55A44B5.html
2019-02-16T01:06:29
CC-MAIN-2019-09
1550247479729.27
[]
docs.vmware.com
Chapter 20. Free Text Search Abstract Virtuoso provides a compact and efficient free text indexing capability for text and XML data. A free text index can be created on any character column, including wide and long data. The contains SQL predicate allows content based retrieval of textual data. This predicate takes a column and a text expression and is true if the pattern of words in the text expression occurs in the column value. There must exist a previously created text index of the column. The text expression can contain single words and phrases connected by boolean connectives or the proximity operator. Words can contain wildcards but must begin with at least three non-wildcard characters if a wildcard is to be used. While it is enough to declare a free text index on a column and then just use the contains predicate for many applications, Virtuoso offers a range of options for tailoring how the indexing works. If a certain application specific order of search results is desired more frequently than others, it is possible to specify a single or multipart key in the order of which hits will be returned from contains searches. Both ascending and descending order of the key is supported. To restart a search in the middle it is possible to specify a starting and ending key value. This works if the results are generated in the order of the application specific doc ID . If non-text criteria are often used to filter or sort results of contains searches, it is possible to cluster these non-text data inside the free text index for faster retrieval. It is often substantially faster to retrieve the extra data from inside the text index than to get them from the row referenced by the text index. Such data are called offband data , since they are not actually text but are stored similarly to text. It is possible to pre-process the text before it is indexed or unindexed. This feature can be used for data normalization and/or for adding content from other than the primary text field being indexed into the index. One example is adding the names of all newsgroups where an article appears to the index when indexing a news article. Thus when retrieving articles based on text and newsgroup, group can be used to very efficiently filter out the hits that are not in the group, even if the text indexed does not itself contain the group name. Another application of the same technique is adding text from multiple columns into the same index. If the column being indexed is XML data, this can be declared and enforced by the text index. XML data will be indexed specially to support efficient XPATH predicate evaluation with the xcontains predicate. Text Triggers is a feature that allows the storage of a large body of free text queries and automatically generating hits when documents matching the criteria are added to the index. This is useful for personalized data feeds, user profiles, content classification etc, which Virtuoso can send the results to in an email message. The conditions can be either free text expressions or XPATH expressions for XML content. The text index can be kept synchronous with the data being indexed, so that the index is updated in the same transaction as the data. The other possibility is to maintain the text index asynchronously as a scheduled task (batch mode), which can execute up to an order of magnitude faster. The asynchronous mode of operation offers substantially higher performance if changes of multiple entries are processed in one batch index refresh. Table of Contents - 20.1. 
Basic Concepts - 20.2. Creating Free Text Indexes - 20.2.1. The CREATE TEXT INDEX statement - 20.2.2. Choosing An Application Specific Document ID - 20.2.3. The composite Data Type - 20.2.4. Free Text Index Examples - 20.2.5. Pre-processing and Extending the Content Being Indexed - 20.2.6. Hit Scores - 20.2.7. Word Ranges - 20.2.8. Using Offband Data for Faster Filtering - 20.2.9. Order of Hits - 20.2.10. Noise Words - 20.3. Querying Free Text Indexes - 20.3.1. CONTAINS predicate - 20.3.2. Comments - 20.3.3. Text Expression Syntax - 20.4. Text Triggers - 20.4.1. Creating Text Triggers - 20.4.2. Created Database Objects - 20.5. Generated Tables and Internals - 20.5.1. Generated Tables and Procedures - 20.5.2. The procedures are: - 20.5.3. Tables and Procedures Created By Text Triggers - 20.6. Removing A Text Index - 20.7. Removing A Text Trigger - 20.7.1. vt_drop_ftt_dedup - 20.8. Internationalization & Unicode - 20.9. Performance - 20.9.1. Restrictions - 20.10. Free Text Functions - 20.10.1. vt_batch_dedup - 20.10.2. vt_batch_d_id_dedup - 20.10.3. vt_batch_feed_dedup - 20.10.4. vt_batch_feed_offband_dedup - 20.10.5. vt_batch_update_dedup - 20.10.6. vt_is_noise_dedup - 20.11.
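The concepts above (creating a text index on a character column, then querying it with the contains predicate) can be sketched from a client program. The following is only an illustrative sketch, assuming a Virtuoso ODBC DSN named VOS, dba credentials, and a hypothetical Articles table with an id and a body column; the exact CREATE TEXT INDEX options and the text expression syntax are covered in sections 20.2 and 20.3.

```python
# Minimal sketch: index a character column and query it with CONTAINS.
# Assumptions (not from this page): an ODBC DSN "VOS", dba/dba credentials,
# and a table Articles(id INTEGER PRIMARY KEY, body LONG VARCHAR).
import pyodbc

conn = pyodbc.connect("DSN=VOS;UID=dba;PWD=dba")
cur = conn.cursor()

# Build a free text index on the body column (see 20.2 for the full syntax
# and options such as application-specific doc IDs and offband data).
cur.execute("CREATE TEXT INDEX ON Articles (body)")
conn.commit()

# Content-based retrieval: single words and phrases joined by boolean
# connectives, as described above. Wildcarded words need at least three
# non-wildcard leading characters.
cur.execute(
    "SELECT id FROM Articles WHERE contains(body, ?)",
    ("'free text' AND (index OR indexing)",),
)
for (article_id,) in cur.fetchall():
    print(article_id)
```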
http://docs.openlinksw.com/virtuoso/ch-freetext/
2019-02-16T01:09:56
CC-MAIN-2019-09
1550247479729.27
[]
docs.openlinksw.com
Managing Targets¶
The targets page, also known as the target manager, presents a lot of information. It has several important features:
- A textarea to add new targets
- A targets table to search through targets
- A session manager to manage sessions
- A button to launch plugins against targets
- A button to export targets to a text file, which is helpful when you have a large number of targets in scope

New Targets¶
Just add the URLs separated by a new line and press the button to add targets. Multiple targets can be added at once.

Remove Targets¶
To present the information in an orderly fashion, all targets are shown in the form of a table. The labels beside each target name show the severity of any vulnerability discovered either by OWTF or by the user (yes, users can set their own rankings). All the targets in the present session are shown in the targets table. A search box can be used to search among the targets.
http://docs.owtf.org/en/develop/usage/targets.html
2019-02-16T02:15:26
CC-MAIN-2019-09
1550247479729.27
[array(['../_images/new_targets_textarea.png', '../_images/new_targets_textarea.png'], dtype=object) array(['../_images/target_table_filtered.png', '../_images/target_table_filtered.png'], dtype=object)]
docs.owtf.org
The page builder (Live Composer) is very slow when I am making changes to my website. Why is this happening? Most likely you are using the Chrome browser. This slow-down happens because of browser extensions/add-ons. These extensions inject their own JavaScript code into the page, slowing down the Live Composer page builder. There is an easy solution for that: Incognito/Private Window. Please try to work on your site in an incognito window and you will see the difference.
https://docs.lumbermandesigns.com/article/38-the-page-builder-live-composer-is-very-slow-when-i-am-making-changes-to-my-website-why-is-this-happening
2019-02-16T01:07:06
CC-MAIN-2019-09
1550247479729.27
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/54d0dd69e4b034c37ea8ceda/images/5511d5c0e4b0221aadf22b8e/file-cUC18Xx0X3.png', None], dtype=object) ]
docs.lumbermandesigns.com
Controlling Worklist¶
- work - When any plugin is launched against a target, it adds a (plugin, target) combination to the worklist. This combination is known as a work.
- worklist - The list consisting of all work that is yet to be assigned to a worker.

The worklist can be managed from the worklist manager. The worklist table provides useful information such as:
- The estimated time for which the plugin will run
- All details about the plugin and the target against which it is launched

Pausing Work¶
Individual works or the whole worklist can be paused. This stops the work from getting assigned to any worker. The interesting part is that the worklist is persistent, i.e. if you pause the whole worklist and exit OWTF, the works will still be there in a paused state when you start OWTF again.
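The behaviour described above - works as (plugin, target) pairs, pausing, and persistence across restarts - can be illustrated with a small toy model. This is not OWTF's actual implementation or API; the worklist.json file name and the plugin/target values are arbitrary.

```python
# Toy model of the worklist concept described above: a work is a
# (plugin, target) pair, works can be paused, and the list is persistent.
# Illustration only, not OWTF's real implementation.
import json
from pathlib import Path

STATE_FILE = Path("worklist.json")  # arbitrary file name used for persistence

class Worklist:
    def __init__(self):
        # each work: {"plugin": ..., "target": ..., "paused": bool}
        self.works = []
        if STATE_FILE.exists():
            self.works = json.loads(STATE_FILE.read_text())

    def add(self, plugin, target):
        self.works.append({"plugin": plugin, "target": target, "paused": False})
        self._save()

    def pause_all(self):
        for work in self.works:
            work["paused"] = True
        self._save()

    def next_unpaused(self):
        """Return the next work a worker could pick up, or None."""
        return next((w for w in self.works if not w["paused"]), None)

    def _save(self):
        STATE_FILE.write_text(json.dumps(self.works, indent=2))

if __name__ == "__main__":
    wl = Worklist()
    wl.add("example_plugin", "http://example.com")
    wl.pause_all()             # nothing gets assigned while paused
    print(wl.next_unpaused())  # -> None; the JSON state survives a restart
```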
http://docs.owtf.org/en/develop/usage/worklist.html
2019-02-16T02:15:05
CC-MAIN-2019-09
1550247479729.27
[]
docs.owtf.org
The gfxd utility provides several commands that use the DdlUtils 1.1 API to export and import database schemas and table data. You can use these gfxd commands with GemFire XD and other JDBC datasources. As with the import system procedures (see Exporting and Importing Bulk Data from Text Files), the GFXD commands for importing data can optionally use. When you migrate a third-party database schema to GemFire XD, use write-schema-to-sql and then modify the SQL statements to include GemFire XD-specific features such as table partitioning and replication. Then use an interactive gfxd session to execute the script in GemFire XD. See run. When you migrate a schema from one GemFire XD system to another, use write-schema-to-xml or use write-schema-to-sql with the -export-all option to include GemFire XD-specific extensions in the DDL commands. The sections that follow describe how to use the above gfxd commands to migrate a third-party database to GemFire XD.
http://gemfirexd.docs.pivotal.io/docs/1.3.1/userguide/tools/topics/ddlutils-gfxd-using.html
2019-02-16T02:10:32
CC-MAIN-2019-09
1550247479729.27
[]
gemfirexd.docs.pivotal.io
Disconnects from the database.

DISCONNECT [ …

If the Connect command was used without the AS clause, you can supply the name the system generated for the connection. If the current connection is the named connection, then when the command completes there will be no current connection, and you must issue a Set Connection or Connect command.

gfxd(PEERCLIENT)> disconnect peerclient;
gfxd>
http://gemfirexd.docs.pivotal.io/docs/1.4.0/userguide/reference/gfxd_commands/disconnect.html
2019-02-16T02:16:02
CC-MAIN-2019-09
1550247479729.27
[]
gemfirexd.docs.pivotal.io
Serves as a base for classes used to apply conditional formatting by modifying style settings. Gets whether the current FormatConditionStyleBase object is properly specified. Gets or sets the style settings used in the current format condition.
https://docs.devexpress.com/Dashboard/DevExpress.DashboardCommon.FormatConditionStyleBase._members
2019-02-16T01:12:42
CC-MAIN-2019-09
1550247479729.27
[]
docs.devexpress.com
This page lists known issues for this version of Opsview:

Security

Change directory to the one that contains the lib/ directory:
$ cd /opt/opsview/work/par-6e6167696f73/cache-6b70f1ad4fabaaf533b1e2d06dfeea687c47a070/inc

Apply the patch:
$ patch -p1 < /path/to/cve-patch-4.6.patch

Replace the dashboard file:
$ cp /path/to/Dashboard.pm-4.6 lib/Opsview/Web/Controller/Dashboard.pm

Restart Opsview:
$ /etc/init.d/opsview-web restart

The patch will now have been successfully applied to your Opsview Monitor system. This is fixed in Opsview version 4.6.4.162391051.
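For repeatable rollouts, the steps above can be wrapped in a small script. The sketch below simply runs the commands quoted on this page in order; the cache directory and the /path/to/... placeholders are taken verbatim from the page and must be adjusted for the system being patched.

```python
# Apply the CVE patch by running the commands from this page in order.
# The cache directory and /path/to/... placeholders come from the page
# above and must be adjusted before running on a real system.
import subprocess

WORK_DIR = "/opt/opsview/work/par-6e6167696f73/cache-6b70f1ad4fabaaf533b1e2d06dfeea687c47a070/inc"

steps = [
    "patch -p1 < /path/to/cve-patch-4.6.patch",
    "cp /path/to/Dashboard.pm-4.6 lib/Opsview/Web/Controller/Dashboard.pm",
    "/etc/init.d/opsview-web restart",
]

for cmd in steps:
    # shell=True so the '<' redirection in the patch step works as written
    subprocess.run(cmd, shell=True, cwd=WORK_DIR, check=True)
```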
https://docs.opsview.com/doku.php?id=opsview4.6:known_issues
2019-02-16T02:10:09
CC-MAIN-2019-09
1550247479729.27
[]
docs.opsview.com
Activate HTML sanitizer

The HTML sanitizer provides a property to enable or disable the sanitizer for all HTML fields in the system.

Before you begin: Role required: admin

About this task: By default, the property is set to true for new instances.

Procedure
1. In the navigation filter, enter sys_properties.list.
2. Set the properties glide.html.sanitize_all_fields and glide.translated_html.sanitize_all_fields to true. If the properties do not exist in the System Properties table, you can add them.
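The same properties can also be set programmatically through the ServiceNow REST Table API against the sys_properties table. The sketch below is only an illustration: the instance URL and credentials are placeholders, and it assumes the admin role mentioned above.

```python
# Set glide.html.sanitize_all_fields and glide.translated_html.sanitize_all_fields
# to true via the REST Table API. Instance URL and credentials are placeholders.
import requests

INSTANCE = "https://your-instance.service-now.com"  # placeholder
AUTH = ("admin", "password")                         # placeholder credentials
HEADERS = {"Accept": "application/json", "Content-Type": "application/json"}

for name in ("glide.html.sanitize_all_fields",
             "glide.translated_html.sanitize_all_fields"):
    # Look the property up by name in the System Properties table.
    r = requests.get(
        f"{INSTANCE}/api/now/table/sys_properties",
        params={"sysparm_query": f"name={name}",
                "sysparm_fields": "sys_id,name,value"},
        auth=AUTH, headers=HEADERS,
    )
    r.raise_for_status()
    results = r.json()["result"]

    if results:  # property exists: update its value
        sys_id = results[0]["sys_id"]
        requests.patch(
            f"{INSTANCE}/api/now/table/sys_properties/{sys_id}",
            json={"value": "true"}, auth=AUTH, headers=HEADERS,
        ).raise_for_status()
    else:        # property missing: add it, as the procedure allows
        requests.post(
            f"{INSTANCE}/api/now/table/sys_properties",
            json={"name": name, "value": "true"},
            auth=AUTH, headers=HEADERS,
        ).raise_for_status()
```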
https://docs.servicenow.com/bundle/jakarta-platform-administration/page/administer/security/task/t_ActivateHTMLSanitizer.html
2019-02-16T02:07:18
CC-MAIN-2019-09
1550247479729.27
[]
docs.servicenow.com
Disk store operation logs: when no directory exists that is within its capacity limits, how Geode handles this depends on whether automatic compaction is enabled. If auto-compaction is enabled, Geode … If the disk is full and rolling is disabled, no space can be created.
http://gemfire.docs.pivotal.io/90/geode/managing/disk_storage/operation_logs.html
2019-02-16T02:00:59
CC-MAIN-2019-09
1550247479729.27
[]
gemfire.docs.pivotal.io
Sigcheck v2.71 By Mark Russinovich Published: December 11, 2018 Download Sigcheck (514 KB) Introduction: - Client: Windows Vista and higher - Server: Windows Server 2008 and higher - Nano Server: 2016 and higher Learn More - Malware Hunting with the Sysinternals Tools In this presentation, Mark shows how to use the Sysinternals tools to identify, analyze and clean malware.
https://docs.microsoft.com/en-us/sysinternals/downloads/sigcheck
2019-02-16T00:47:38
CC-MAIN-2019-09
1550247479729.27
[]
docs.microsoft.com
List of Mining Pools¶
Unless you want to solo mine, which is unfeasible for many people, you will need a pool to mine towards. Make sure to choose the one closest to you! Here are some of the TurtleCoin mining pools (arranged alphabetically):

Definition of Fees¶
Rather simple; the pool operator takes a percentage of the reward of the block found for himself. Example:
- the fee is 0.1%
- the block reward is 30000 TRTL
- 30000 x 0.1% = 30
Therefore, the pool operator will take 30 TRTL for himself.

Definition of Different Types of Methods¶

Proportional¶
A proportional pool carries no risk to the pool operator, as miners are simply paid out when a block is found. No blocks, no payout! With a proportional pool the risk is all on the miners: if it takes longer than expected to find a block, the miners earn less. On the flip side, if the pool is lucky (they will all average out the same eventually) the miners get more. Example:
- A block is found after 100,000 shares
- You submitted 1,000 of those shares (you have 1% of the pool's total hash power)
- There's 30000 TRTL per block
Quite simply, you will get 1% of the block = 300 TRTL. Now if the pool has a bad round (a round is the time taken to find a block) and it takes 200,000 shares to find a block (twice as long), and you have submitted 2,000 shares (as you've been mining twice as long), you still only get 1% of the block = 300 TRTL. This can also work in the miners' favor: if it takes half the time (50,000 shares) to find a block and you submitted only 500 shares - again 1% - you get 300 TRTL. Basically, you always get a percentage of the block, and you win/lose depending on the "luck" of the pool. The drawbacks of a proportional pool are that there is often a fee (although some pool operators rely on donations only) and that you have to bear the variance of the block times and luck, unlike with a PPLNS pool. Proportional pools are also susceptible to "pool hoppers", where PPLNS pools are not.

PPLNS (Pay Per Last n Shares)¶
PPLNS does not pay out per block found; rather, it pays based on the number of shares you last submitted, which helps to dissuade pool hoppers. How it works:
- You start mining with a PPLNS pool.
- Rather than paying you out based on the number of shares you submitted since you started mining/the last block was found, it pays depending on how many shares you submitted in a period of time called the window, which is an estimate of the time in which the pool in question finds a block.
- So, after you start mining, it will take a few hours for you to earn your normal earnings - and since the effect of pool hoppers is lessened, you may make comparatively more than with other methods.
Basically, you get paid based on:
- the number of shares you submitted (how capable your mining hardware is)
- and how long you have been mining.
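The fee and proportional-payout arithmetic above can be checked with a few lines of Python, using the same numbers as the examples on this page.

```python
# Worked example of the pool fee and the proportional payout described above,
# using the numbers from this page.
BLOCK_REWARD = 30_000   # TRTL per block
POOL_FEE = 0.001        # 0.1%

# Pool fee: the operator's cut of one block
operator_cut = BLOCK_REWARD * POOL_FEE
print(operator_cut)     # 30.0 TRTL

def proportional_payout(your_shares, round_shares, block_reward=BLOCK_REWARD):
    # You are paid your fraction of the shares in the round, whatever the
    # round length was (the fee is ignored here, as in the page's example).
    return (your_shares / round_shares) * block_reward

print(proportional_payout(1_000, 100_000))  # normal round   -> 300.0 TRTL
print(proportional_payout(2_000, 200_000))  # unlucky round  -> 300.0 TRTL
print(proportional_payout(500, 50_000))     # lucky round    -> 300.0 TRTL
```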
https://docs.turtlecoin.lol/guides/mining/pools/
2019-02-16T01:33:50
CC-MAIN-2019-09
1550247479729.27
[]
docs.turtlecoin.lol
AWS Direct Connect provides a dedicated network connection from your premises to AWS and Amazon Virtual Private Cloud, bypassing Internet service providers in your network path. Direct Connect is unique in the following ways:

IPv6 is not supported.

You can access the China (Beijing) Region only from the following AWS Direct Connect locations that support it. These AWS Direct Connect locations cannot reach any other AWS Region. For more information, see Requesting Cross Connects at AWS Direct Connect Locations.
- Sinnet Jiuxianqiao IDC
- CIDS Jiachuang IDC

You can access the China (Ningxia) Region only from the following AWS Direct Connect locations that support it. These AWS Direct Connect locations cannot reach any other AWS Region. For more information, see Requesting Cross Connects at AWS Direct Connect Locations.
- Industrial Park IDC
- Shapotou ID.
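To check which Direct Connect locations are visible from a given Region programmatically, the Direct Connect DescribeLocations API can be used. The boto3 sketch below is illustrative only; it assumes credentials for the China partition are already configured and uses cn-north-1 (Beijing) as the Region name.

```python
# List the AWS Direct Connect locations visible from a China Region.
# Assumes AWS credentials for the China partition are already configured.
import boto3

client = boto3.client("directconnect", region_name="cn-north-1")  # Beijing

for location in client.describe_locations()["locations"]:
    print(location["locationCode"], "-", location["locationName"])
```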
http://docs.amazonaws.cn/en_us/aws/latest/userguide/directconnect.html
2019-02-16T01:34:35
CC-MAIN-2019-09
1550247479729.27
[]
docs.amazonaws.cn
Open Source Gitify .gitify file To define what to export, to where and how, we're using a .gitify file formatted in YAML. This file is located in the root of the project. An example .gitify may look like this: data_directory: _data/ backup_directory: _backup/ data: contexts: class: modContext primary: key context_settings: class: modContextSetting primary: - context_key - key content: type: content exclude_keys: - editedby - editedon categories: class: modCategory primary: category truncate_on_force: - modCategoryClosure templates: class: modTemplate primary: templatename extension: .html template_variables: class: modTemplateVar primary: name template_variables_access: class: modTemplateVarTemplate primary: - tmplvarid - templateid chunks: class: modChunk primary: name extension: .html snippets: class: modSnippet primary: name extension: .php plugins: class: modPlugin primary: name extension: .php plugin_events: class: modPluginEvent primary: - pluginid - event events: class: modEvent primary: name namespaces: class: modNamespace primary: name system_settings: class: modSystemSetting primary: key where: - 'editedon:!=': '0000-00-00 00:00:00' exclude_keys: - editedon extension_packages: class: modExtensionPackage primary: namespace exclude_keys: - created_at - updated_at fc_sets: class: modFormCustomizationSet primary: id fc_profiles: class: modFormCustomizationProfile primary: id fc_profile_usergroups: class: modFormCustomizationProfileUserGroup primary: - usergroup - profile fc_action_dom: class: modActionDom primary: - set - name mediasources: class: modMediaSource primary: id mediasource_elements: class: sources.modMediaSourceElement primary: - source - object_class - object - context_key dashboards: class: modDashboard primary: - id - name dashboard_widgets: class: modDashboardWidget primary: id dashboard_widget_placement: class: modDashboardWidgetPlacement primary: - dashboard - widget The .gitify file structure is real simple. There are root nodes for data_directory (the relative path where to store the files), backup_directory, data and packages. data contains an array of what we call "Partitions". These partitions are basically the name of the directory that holds all the files of that type, and can also be used in the Gitify extract and Gitify build commands. Each partition specifies either a type that has special processing going on (only content is available as type currently), or a class which specified the xPDOObject derivative that you want to use. The primary field determines the key to use in the name of the generated files. This defaults to id, but in many cases you may want to use the name as that is more human friendly. The primary is used for the file names and is also related to the automatic ID conflict resolution. By default files will be created with a .yaml extension, but if you want you can override that with a extension property. This can help with syntax highlighting in IDEs. Each partition can also specify a where property. This contains an array which can be turned into a valid xPDO criteria. When using GitifyWatch, there is also an environments root node in the gitify file, refer to the GitifyWatch documentation for more about that. Third party packages (models) When a certain class is not part of the core models, you can define a package as well. This will run $modx->addPackage before attempting to load the data. 
The path is determined by looking at a [package].core_path setting suffixed with model/, [[++core_path]]components/[package]/model/or a hardcoded package_path property. For example, you could use the following in your .gitify file to load ContentBlocks Layouts & Fields: data: cb_fields: class: cbField primary: name package: contentblocks cb_layouts: class: cbLayout primary: name As it'll load the package into memory, it's only required to add the package once. For clarify, it can't hurt to add it to each data entry that uses it. Dealing with Closures A Closure is a separate table in the database that a core or third party extra may use to keep information about a hierarchy in a convenient format. These are often automatically generated when creating a new object, which can result in a error messages and other issues when building, especially with the --force flag. To solve this, a truncate_on_force option was introduced in v0.6.0 that lets you define an array of class names that need their tables truncated on a force build. Truncating the closure table(s) before a forced build ensures that the model can properly create the rows in the closure table, without throwing errors. Here are two examples of using truncate_on_force: data: categories: class: modCategory primary: category truncate_on_force: - modCategoryClosure quip_comments: class: quipComment package: quip primary: [thread, id] truncate_on_force: - quipCommentClosure Composite Primary Keys When an object doesn't have a single primary key, or you want to get fancy with file names, it's possible to define a composite primary key, by setting the primary attribute to an array. For example, like this: data: chunks: class: modChunk primary: [category, name] extension: .html That would grab the category and the name as primary keys, and join them together with a dot in the file name. This is a pretty bad example, and you shouldn't really use it like this. Install MODX Packages You can also define packages to install with the Gitify package:install --all command. This uses the following format packages: modx.com: service_url: packages: - ace - wayfinder modmore.com: service_url: credential_file: '.modmore.com.key' packages: - clientconfig When a provider needs to be authenticated, like modmore.com the example above, you can provide a credential_file option which points to a file name. This file needs to contain a username and api_key value (in YAML format), like so: username: my_api_key_username api_key: some_api_key_password Alternatively, you can also define the username and api_key directly on the provider information in the .gitify file, but the credential_file approach is recommended to be able of keeping your authentication outside of the repository. For security, the key file needs to be kept out of the git repository using a .gitignore file, and you will also want to protect direct read access to it with your .htaccess file or keeping it out of the webroot. To install the packages that you added to the packages entry in the .gitify file, simply run the command Gitify package:install --all. That will attempt to install all packages that were mentioned, skipping any that are already installed.
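Because the .gitify file is plain YAML, it is easy to inspect before running Gitify. The sketch below is an illustration, not part of Gitify: it loads a minimal .gitify with PyYAML and prints each data partition with the class, primary and extension keys described above, falling back to the documented defaults of id and .yaml.

```python
# Illustration only: read a minimal .gitify (plain YAML) and list the data
# partitions with the keys described above (class, primary, extension).
import yaml  # PyYAML

MINIMAL_GITIFY = """
data_directory: _data/
backup_directory: _backup/
data:
  templates:
    class: modTemplate
    primary: templatename
    extension: .html
  chunks:
    class: modChunk
    primary: [category, name]
    extension: .html
  system_settings:
    class: modSystemSetting
    primary: key
"""

config = yaml.safe_load(MINIMAL_GITIFY)
print("data directory:", config["data_directory"])

for partition, spec in config["data"].items():
    primary = spec.get("primary", "id")          # Gitify defaults primary to id
    extension = spec.get("extension", ".yaml")   # and the extension to .yaml
    print(f"{partition}: class={spec['class']} primary={primary} extension={extension}")
```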
http://docs.modmore.com/en/Open_Source/Gitify/dot-gitify.html
2019-02-16T01:01:39
CC-MAIN-2019-09
1550247479729.27
[]
docs.modmore.com
Opsview Unix Agent Customisation

The Opsview unix agent package is based upon the NRPE daemon.

Communication Protocol
However, the NRPE version is patched to allow sending more data than regular NRPE (usually limited to 1K, but extended in our version to 16K) in a backward-compatible way. It is important that you use the check_nrpe plugin delivered with Opsview, as this can communicate with both a regular NRPE daemon and the Opsview Agent version.

Security
All NRPE communication is sent with encryption on by default. In Opsview 4.6.3, we added the ability to change the cipher used, as well as support for SSL certificates. It is possible to set an allowed_hosts variable to only allow connections from specific IP addresses, although this is only a rudimentary security check. We recommend you use firewall rules to determine which IP addresses are allowed to connect. If you set the allowed_hosts variable and then attempt to connect from an alternative IP, you will get the error:

CHECK_NRPE: Error - Could not complete SSL handshake

To set the allowed_hosts variable, create an override file in /usr/local/nagios/etc/nrpe_local/override.cfg. Ensure the nagios user owns the file. Add the line:

allowed_hosts=127.0.0.1,192.168.101.2

Restart the nrpe agent:
/etc/init.d/opsview-agent stop
/etc/init.d/opsview-agent start

Adding New Plugins
The unix agents can be modified to add extra plugins as described below. Newer agents allow supplementary packages to be created and installed over the top of the Opsview agent, which enables packaged customisations.

Agents 3.2.0 and newer
- Copy the new plugin into the /usr/local/nagios/libexec/nrpe_local directory and ensure it has the correct permissions to run as the nagios user.
- Amend the nrpe configuration file /usr/local/nagios/etc/nrpe_local/nrpe.cfg to include a line similar to command[<script_name>]=/usr/local/nagios/libexec/nrpe_local/<script_name> $ARG1$, e.g. command[my_check_script]=/usr/local/nagios/libexec/nrpe_local/my_check_script.sh $ARG1$
- Restart the nrpe agent:
/etc/init.d/opsview-agent stop
/etc/init.d/opsview-agent start

You can set up the service check within Opsview and assign it to hosts as normal. Any upgrade to the agent package should not lose any customisations.
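Registering a new plugin comes down to copying the script into nrpe_local and appending one command[...] line, as described above. The helper below is a hypothetical convenience script, not part of Opsview; it assumes the paths from this page, that it is run as root, and that the agent is restarted afterwards.

```python
# Hypothetical helper that installs an NRPE plugin the way this page
# describes: copy it into nrpe_local and append a command[...] line.
# Paths are the ones given above; run as root and restart the agent after.
import shutil
import stat
from pathlib import Path

PLUGIN_DIR = Path("/usr/local/nagios/libexec/nrpe_local")
NRPE_CFG = Path("/usr/local/nagios/etc/nrpe_local/nrpe.cfg")

def install_plugin(source_script: str, command_name: str) -> None:
    # 1. Copy the plugin and make it executable (it runs as the nagios user).
    dest = PLUGIN_DIR / Path(source_script).name
    shutil.copy(source_script, dest)
    dest.chmod(dest.stat().st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

    # 2. Append the command definition, e.g.
    #    command[my_check_script]=/usr/local/nagios/libexec/nrpe_local/my_check_script.sh $ARG1$
    line = f"command[{command_name}]={dest} $ARG1$\n"
    with NRPE_CFG.open("a", encoding="utf-8") as cfg:
        cfg.write(line)

if __name__ == "__main__":
    install_plugin("./my_check_script.sh", "my_check_script")
    print("Now restart the agent with /etc/init.d/opsview-agent stop && start")
```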
https://docs.opsview.com/doku.php?id=opsview4.6:unix_customise_agent
2019-02-16T02:09:31
CC-MAIN-2019-09
1550247479729.27
[]
docs.opsview.com
This page provides information on the Global DMC rollout in the V-Ray tab of the Render Settings.

Overview
The DMC sampler is based on a modification of Schlick sampling, introduced by Christophe Schlick in [1] (see the References section below for more information).

UI Path:
||Render Setup window|| > V-Ray tab > Global DMC rollout (when V-Ray Adv is the Production renderer) - V-Ray Adv Global DMC rollout
||Render Setup window|| > V-Ray RT tab > Global DMC rollout (when V-Ray RT is the Production renderer) - V-Ray RT Global DMC rollout

Global DMC Rollout
The settings for the DMC sampler are located in the Global DMC rollout. The actual number of samples for the DMC sampler's blurry values is determined by three factors:
- The subdivs value for a particular blurry effect. This is multiplied by the Global subdivs multiplier.
- More information on the remaining factors is available in the Adaptive Sampling section.

Default Parameters
The following parameters are visible from the Global Switches rollout when set to the Default Render UI Mode.

Lock noise pattern – When enabled, the sampling pattern will be the same from frame to frame in an animation. Since this may be undesirable in some cases, you can disable this option to make the sampling pattern change with time. Note that re-rendering the same frame will produce the same result in both cases.

Use local subdivs – When disabled, V-Ray will automatically determine subdivs values for sampling of materials, lights and other shading effects based on the Min shading rate parameter for the image sampler. When enabled, the subdivs values from the respective materials/lights are used.

Subdivs mult. – When Use local subdivs is enabled, this will multiply all subdivs values during rendering; you can use it to quickly increase or decrease sampling quality everywhere. This affects everything except the light cache, photon map, caustics and AA subdivs. Everything else (irradiance map, brute-force GI, area lights, area shadows, glossy reflections/refractions) is affected by this parameter.

Advanced Parameters
The following parameters are added to the list of visible settings available from the Global Switches rollout when set to the Advanced Render UI Mode.

Min samples – Determines the minimum number of samples that must be made before the early termination algorithm is used. Higher values will slow things down but will make the early termination algorithm more reliable. For most scenes, there is no need to adjust this parameter.

Adaptive amount – Controls the extent to which the number of samples depends on the importance of a blurry value. It also controls the minimum number of samples that will be taken. A value of 1.0 means full adaptation; a value of 0.0 means no adaptation. For most scenes there is no need to adjust this parameter.

Noise threshold – Controls V-Ray's judgement of when a shading value is "good enough" to be used. This directly translates to noise in the result. Smaller values mean less noise, more samples, and higher quality. A value of 0.0 means that no adaptation will be performed. For most scenes, there is no need to adjust this parameter.
https://docs.chaosgroup.com/pages/viewpage.action?pageId=16551032&spaceKey=VRAY3MAX
2019-02-16T01:32:22
CC-MAIN-2019-09
1550247479729.27
[]
docs.chaosgroup.com
ContentBlocks v1 Importing & Exporting

Through the ContentBlocks component (under Components or Apps in the top navigation) it is possible to create an XML export of Fields, Layouts and Templates.

Exporting Data
To export all data, simply click the "Export" button in the Field, Layout or Template grid toolbar. After a quick confirmation of what is about to happen, the system will generate an XML file with all Fields or Layouts and your browser should initiate the download. Since ContentBlocks 1.3.0, it is also possible to export a single field, layout or template by right-clicking it in the grid and choosing the Export option.

Importing Data
Importing from your XML export is a bit more tricky, as you will need to determine the right import mode. To start off, click the Import button on the Fields or Layouts tab and choose the file from your local computer. If you click the Import button on the Fields tab, only Field records will be imported, and if you use the Import button on the Layouts tab, only Layout records will be imported - regardless of the file's contents.

There are three import modes available, each with their own use cases and caveats:

- Insert
With the Insert mode, each of the fields or layouts will have their IDs unset during the import, which will cause them to essentially be appended to the existing fields or layouts. This has the benefit of being safe: there is no risk of breaking any existing content if only new fields/layouts are being added. However, it does mean that (if your export contains similar data to what already exists) you will end up with duplicate fields.

- Overwrite
In Overwrite mode, before a new field or layout is created, the script will check for a field/layout with the same ID as the current one. If it exists, it will be overwritten, but if it doesn't exist a new field/layout will be created. This compromise can limit the number of duplicates if your export is very similar to the existing data; however, it does introduce the risk that unrelated fields are overwritten, causing content issues. Example: if before the import the field with ID 5 was a snippet field, but after the import field 5 is suddenly a heading field, the heading field won't know what to do with the data from the snippet field and that data will be lost on the next save.

- Replace
The most nuclear option, Replace first clears the Field or Layout table and then imports the data from the file, keeping all of the IDs as defined in the export file. This has the highest chance of breaking content but is in some cases the most appropriate option.

If you need help figuring out the right import mode for your situation, please don't hesitate to get in touch with [email protected]
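The difference between the three import modes can be illustrated with a small, self-contained simulation. This is not ContentBlocks code; it simply models a table of rows keyed by ID and applies each strategy as described above.

```python
# Toy simulation of the three import modes described above (Insert,
# Overwrite, Replace). Not ContentBlocks code; rows are dicts keyed by ID.

def import_insert(table, rows):
    """IDs are unset: imported rows are appended with fresh IDs (may duplicate)."""
    result = dict(table)
    next_id = max(result, default=0) + 1
    for row in rows:
        result[next_id] = {**row, "id": next_id}
        next_id += 1
    return result

def import_overwrite(table, rows):
    """Same-ID rows are overwritten; missing IDs are created."""
    result = dict(table)
    for row in rows:
        result[row["id"]] = row
    return result

def import_replace(table, rows):
    """The table is cleared first; IDs are kept exactly as in the export file."""
    return {row["id"]: row for row in rows}

existing = {5: {"id": 5, "name": "Snippet field", "type": "snippet"}}
export = [{"id": 5, "name": "Heading field", "type": "heading"}]

print(import_insert(existing, export))     # snippet kept, heading added as ID 6
print(import_overwrite(existing, export))  # ID 5 becomes the heading field
print(import_replace(existing, export))    # only the rows from the export remain
```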
http://docs.modmore.com/en/ContentBlocks/v1.x/Import_Export.html
2019-02-16T01:02:27
CC-MAIN-2019-09
1550247479729.27
[]
docs.modmore.com
Vencap – Theme Documentation

Articles

Getting Started
- Welcome to Vencap
- Page Options

Menu
- How to Create a Menu
- How to Edit a Menu

Post
- How to Create a New Post
- How to Create a Category
- How to Create a Service
- How to Create a Team
- How to Create a Research

Widget
- How to Add Widget in Sidebar
- Recent Posts with image Widget

Advanced Setup
- Translate The Theme
- Child Theme

FAQs
- What are common Installation Issues
- How to Change Theme Color
- How to Change Logo
- How to Create a Contact Form
- How To Change GDPR Text & Link
- How to Customize CTA Section
https://www.docs.envytheme.com/docs/vencap-theme-documentation/
2020-07-02T18:56:54
CC-MAIN-2020-29
1593655879738.16
[]
www.docs.envytheme.com
Energy UK responds to Parliament's rejection of PM's Brexit deal

Responding to Parliament's rejection of the PM's Brexit deal, Lawrence Slade, chief executive of Energy UK, said:

"Tonight's vote means continued uncertainty for business. As the Parliamentary discussions continue, we reiterate our serious concerns over a possible no-deal Brexit which would be so damaging for the energy sector and its customers.

"It is critical we ensure the smooth functioning of markets, and the efficient flow of gas and electricity and cooperation on tackling climate change, in order to keep bills down for UK customers and businesses without compromising on protecting our environment. We will continue to work with the Government to ensure the best Brexit deal for the energy sector, its customers and the wider economy."
https://www.docs.energy-uk.org.uk/media-and-campaigns/press-releases/440-2019/6988-energy-uk-responds-to-parliament-s-rejection-of-pm-s-brexit-deal.html
2020-07-02T18:03:34
CC-MAIN-2020-29
1593655879738.16
[]
www.docs.energy-uk.org.uk
Message identity can be set by manipulating the NServiceBus.MessageId header in an Outgoing Message Mutator.

public class MessageIdFromMessageMutator : IMutateOutgoingMessages
{
    IBus bus;

    public MessageIdFromMessageMutator(IBus bus)
    {
        this.bus = bus;
    }

    public object MutateOutgoing(object message)
    {
        bus.SetMessageHeader(
            msg: message,
            key: "NServiceBus.MessageId",
            // GenerateIdForMessage is a user-supplied method that returns
            // a deterministic identifier for the given message.
            value: GenerateIdForMessage(message));
        return message;
    }
}
https://particular-docs.azurewebsites.net/nservicebus/messaging/message-identity?version=core_5
2020-07-02T17:45:25
CC-MAIN-2020-29
1593655879738.16
[]
particular-docs.azurewebsites.net
One of the most critical things about persistence of sagas is proper concurrency control. Sagas guarantee business data consistency across long-running processes using compensating actions. A failure in concurrency management that leads to the creation of an extra instance of a saga, instead of routing a message to an existing instance, could lead to business data corruption.

Default behavior
When simultaneously handling messages, conflicts may occur. See below for examples of the exceptions which are thrown. Saga concurrency explains how these conflicts are handled, and contains guidance for high-load scenarios.

Starting a saga
Example exception:

NHibernate.Exceptions.GenericADOException: could not execute batch command.[SQL: SQL not available] ---> System.Data.SqlClient.SqlException: Violation of UNIQUE KEY constraint 'UQ__OrderSag__C3905BCE71EF212B'. Cannot insert duplicate key in object 'dbo.OrderSagaData'. The duplicate key value is (e87490ba-bb56-4693-9c0a-cf4f95736e06).

Updating or deleting saga data
No exceptions will be thrown. Conflicts cannot occur because the persistence uses pessimistic locking. Pessimistic locking is achieved by performing a SELECT ... FOR UPDATE; see NHibernate Chapter 12, Transactions And Concurrency.

Custom behavior

Explicit version
The RowVersion attribute can be used to explicitly denote a property that should be used for optimistic concurrency control. An update will then compare against this single property instead of comparing it against the previous state of all properties, which results in a more efficient comparison and causes a concurrency violation error to be raised in case of concurrent updates. RowVersion does not disable the pessimistic locking optimization. To replace the default behavior, the optimistic-lock NHibernate attribute has to be specified in a custom mapping. The custom mapping sample explains how to override the default mapping with a custom one.
https://particular-docs.azurewebsites.net/persistence/nhibernate/saga-concurrency?version=nhibernate_6
2020-07-02T19:18:24
CC-MAIN-2020-29
1593655879738.16
[]
particular-docs.azurewebsites.net
Here in the UK, we rely on electricity to flow at the flick of a switch. So, it’s easy to overlook and underestimate the complexity of the market that allows it to happen. In order for the electricity sector to operate effectively, it is important industry and government work together putting in place long-term policy that meets the needs of today and supports the innovation needed to deliver the electricity for tomorrow. Now, and in the future, the energy sector needs to be in sync with all its customers. Nothing stays the same and, as needs and attitudes to energy change, so will demand. As electricity generation and supply needs long-term investment, it is vital the industry knows as much as it can about future demand and gets a clear signal of the focus, direction and speed of travel to 2030 and beyond. This spans more than the power sector alone. We need to take associated sectors - such as heat and transport – into account too. We cannot continue to look at these industries in isolation: a whole systems approach is needed. The power industry understands it will bear the bulk of the heavy-lifting to meet binding decarbonisation targets. As energy, transport and heat become much more intertwined, each will have its own impact on the power sector and the level of investment. As our future energy choices change, the country must consider - with every decision -. The country’s energy mix is changing as the government looks to remove all coal from the system by 2025. This will leave room for lower-carbon fuels, like new gas, biomass and wind, to play a greater role in a flexible energy system. We have the opportunity to see advancing technologies take up more of the mix – including storage, solar, wind and tidal. As the UK decarbonises the electricity system, there will be costs and challenges along the way. So the industry must take its customers with it. The goal is too important to be missed. The cheapest electricity will always be what we don't use. Energy efficiency remains the most cost-effective way to help cut energy bills and carbon emissions. Government needs to put policies in place that: encourage improvement; help people struggling with their bills; and kick-start the-able-to-pay market. The energy world is changing to meet the needs of our busy lives; by 2020 every home will have a smart meter allowing us to take control of the energy we use in ways not previously possible. And, while the future is an unknown land, one thing is certain – Britain’s energy sector is ready to take up the challenge and to work both with customers and other sectors. The ‘Pathways’ report sets out a vision and a pathway but Britain needs the policies in place to take it from the page to the practical future.
https://www.docs.energy-uk.org.uk/media-and-campaigns/energy-uk-blogs/5729-pathways-to-the-future.html
2020-07-02T17:46:05
CC-MAIN-2020-29
1593655879738.16
[]
www.docs.energy-uk.org.uk
View tree Close tree | Preferences | | Feedback | Legislature home | Table of contents Search Up Up Trans 401.08(2)(b)12.c. c. Management of overland flow at the selected site. Trans 401.08(2)(b)12.d. d. Trapping of sediment in channelized flow. Trans 401.08(2)(b)12.e. e. Staging construction to limit bare areas subject to erosion. Trans 401.08(2)(b)12.f. f. Protection of downslope drainage inlets where they occur. Trans 401.08(2)(b)12.g. g. Minimization of tracking at the site. Trans 401.08(2)(b)12.h. h. Clean up of off-site sediment deposits. Trans 401.08(2)(b)12.i. i. Proper disposal of building and waste material at the site. Trans 401.08(2)(b)12.j. j. Stabilization of drainage ways. Trans 401.08(2)(b)12.k. k. Installation of permanent stabilization practices as soon as possible after final grading. Trans 401.08(2)(b)12.L. L. Minimization of dust to the maximum extent practicable. Trans 401.08(2)(b)13. 13. An estimate of the starting and completion dates of construction activity. Trans 401.08(2)(b)14. 14. A description of the procedures to maintain, in good and effective operating condition, vegetation, best management practices and other protective measures. Trans 401.08(3) (3) Amendments. Subject to the written approval of the department, a prime contractor shall amend the ECIP whenever the project engineer determines: Trans 401.08(3)(a) (a) There is a change in design, construction, operation or maintenance at a project site or selected site that has the reasonable potential for a discharge to waters of the state and that has not been addressed in the ECIP. The department shall pay for changes under this paragraph that are necessitated by department action. The prime contractor shall pay for all other changes under this paragraph, unless the department agrees to pay for the change. Trans 401.08(3)(b) (b) The best management practices required by the plan fail to reduce adverse impacts to waters of the state caused by a discharge. Subject to s. Trans 401.12 , the department shall pay for changes under this paragraph. Trans 401.08(3)(c) (c) An amendment approved under this subsection supersedes any contradictory provisions of the erosion control plan. Trans 401.08 History History: Cr. Register, October, 1994, No. 466 , eff. 11-1-94; CR 02-081 : (2) (a) 2m. renum. from Trans 401.07 (2) (j) 5., r. (intro.), (1) (b) and (2) (b) 8., am. (1) (a), (c) to (h), (2) (a) 1. to 4., (b) (intro.), 1. to 7., 11. (intro.), e. and g., 12. (intro.) to c., 14. and (3) (intro.) to (b), cr. (1) (am), (ar) and (3) (c), r. and recr. (2) (b) 9., Register December 2002 No. 564 , eff. 1-1-03; corrections in (1) (a), (ar) 1. a. made under s. 13.92 (4) (b) 6. , Stats., Register February 2013 No. 686 . Trans 401.09 Trans 401.09 Maintenance of best management practices. Trans 401.09(1g) (1g) General responsibility. A prime contractor or utility person, as appropriate, shall implement, install and maintain best management practices at a site as required in the contract documents, as defined in s. Trans 401.12 (1) (a) . Trans 401.09(1m) (1m) Before and during construction or maintenance activity. Before and during the period of construction or maintenance activity, the prime contractor or utility person shall implement, install and maintain, or cause to be performed, all best management practices required by the erosion control plan, the ECIP and the requirements of this chapter. The prime contractor or utility person shall also implement any corrective action that is ordered under s. Trans 401.105 . 
A utility person shall notify the appropriate department representative at least 24 hours before the installation of best management practices. Trans 401.09(2) (2) After construction or maintenance activity. Trans 401.09(2)(a) (a) Upon the department's written acceptance of permanent best management practices at a site, or upon the department's granting of partial acceptance for a portion of work, the prime contractor's responsibility to maintain those accepted best management practices, or that portion of work for which partial acceptance is granted, shall cease except for any responsibility for defective work or materials or for damages caused by its own operations. Trans 401.09(2)(b) (b) In the case of a utility facility project, a utility person shall promptly notify the department upon completion of all construction or maintenance activities and the installation of all permanent best management practices at a project site. Within a reasonable time after that notification by the utility person, the department shall inspect the project site to ensure that the permanent best management practices are adequate and functioning properly. If the inspection of the project site reveals that the best management practices are not adequate or not functioning properly, the utility person, upon notification from the department or based on its own inspection, shall promptly take the appropriate corrective action. Where the utility person takes corrective action based on its own inspection of a project site, the utility person shall immediately notify the department of that corrective action. Trans 401.09 History History: Cr. Register, October, 1994, No. 466 , eff. 11-1-94; CR 02-081 : renum. (intro.) and (1) to be (1g) and (1m) and am., am. (2) Register December 2002 No. 564 , eff. 1-1-03. Trans 401.10 Trans 401.10 Inspections. Trans 401.10(1) (1) General. The project engineer or inspector shall inspect the project site and any selected site of a project described in s. Trans 401.03 (1) (a) or (c) . A utility person shall, and the department's authorized representative may, inspect the site of a utility facility project. The inspection shall determine whether best management practices for a project required by the erosion control plan, the ECIP and other contract documents, as defined in s. Trans 401.12 (1) (a) , are properly implemented, installed, and functioning, determine whether the best management practices for a project site or selected site are adequate for the purposes intended and for the site conditions, and identify any corrective action that is necessary. The project engineer or inspector shall invite the prime contractor, or his or her designee, to accompany the project engineer or inspector during inspections described in sub. (2) at least one hour before commencing the inspection. The project engineer or inspector is not required to wait more than one hour after such invitation, or past the time stated for the inspection, before commencing the inspection. A utility person shall allow a department representative to accompany the utility person during any inspection of a utility facility project. An inspector who inspects a site shall provide a copy of the completed inspection report form to the project engineer immediately following the inspection. Within 24 hours after completing an inspection, the person who performs the inspection shall deliver a copy of the completed inspection report to the appropriate department representative. 
Inspections shall continue at the frequency required in sub. (2) until the installation of permanent stabilization of disturbed areas is completed and the temporary best management practices are removed. Trans 401.10 Note Note: Inspectors are encouraged to provide reasonable advance notice. One hour is the minimum required advance notice. More time may be appropriate to provide the prime contractor a real opportunity to accompany an inspector. Trans 401.10(2) (2) When required. Inspections shall be conducted at least once per week during the time construction or maintenance activity is being pursued on a project site or selected site, and at all of the following times: Trans 401.10(2)(a) (a) Within 24 hours after every precipitation event that produces 0.5 inches of rain or more during a 24–hour period, or that results in any discharge, to determine the appropriate corrective action, if any. The department of transportation shall notify the department of natural resources within 24 hours after learning of any prohibited discharge from a project site or selected site into waters of the state. Trans 401.10(2)(b) (b) At each stage, as new portions of a project site or selected site are disturbed. Trans 401.10(2)(c) (c) Upon completing the installation of permanent best management practices to stabilize disturbed areas at a project site or selected site. Trans 401.10(2)(d) (d) At the completion of the project. The inspection to be performed at the completion of the project shall be made before the department provides the prime contractor with written notice of final acceptance of the project. Trans 401.10(4) (4) The department shall prescribe an inspection report form for documenting the findings of an erosion control inspection for use statewide on all projects directed and supervised by the department other than utility facility projects. The inspector shall document each inspection on the inspection report form. The inspection report is considered part of a project diary. The department shall publish the inspection report form in the construction and materials manual, and the form takes effect upon publication. The inspection report and any form required for use on utility facility projects shall contain all of the following: Trans 401.10(4)(a) (a) The date or dates of inspection. Trans 401.10(4)(ag) (ag) The names of the inspector, prime contractor or utility person, and erosion control subcontractor. Trans 401.10(4)(am) (am) The project identification number or permit number. Trans 401.10(4)(b) (b) Any comments concerning the effectiveness of in–place best management practices. Trans 401.10(4)(c) (c) Trans 401.10(4)(c)1. 1. A statement of whether each type of best management practice required by the ECIP complies with that plan. The inspection report shall specify the location and deficiency of any best management practices that do not comply with the erosion control plan, the ECIP and any other contract documents, as defined in s. Trans 401.12 (1) (a) . Trans 401.10(4)(c)2. 2. Any reasonable corrections needed to restore, maintain or increase the effectiveness of existing best management practices. Trans 401.10(4)(c)3. 3. The prime contractor is not required to make any corrections as a result of an inspection unless an erosion control order is issued under s. Trans 401.105 . Trans 401.10(4)(c)4. 4. 
A utility person shall take any corrective action that is consistent with the permit issued by the department and that is ordered, verbally or in writing, by the department or the department's authorized representative. Trans 401.10(4)(d) (d) Written notes commemorating any verbal communications between the project engineer, inspector, contractor or utility person regarding erosion control and storm water management. Trans 401.10(4m) (4m) Report available to contractor. Within 24 hours after completing an inspection, the project engineer or inspector shall post the completed inspection report prepared under sub. (4) on the site to which the report relates. Trans 401.10(5) (5) Review. The department shall make copies of the written inspection reports either separately or as part of the project diary, available for review by other agencies and the public. Trans 401.10(6) (6) Records. After a project is completed and the final inspection has been made, the department shall maintain copies of the written inspection reports and erosion control orders in the project's files, or with the project's permit application or approval document, if any, for a period of not less than 3 years after the date the department accepted the completed project. Trans 401.10 History History: Cr. Register, October, 1994, No. 466 , eff. 11-1-94; CR 02-081 : r. (intro.), (2) (intro.), (a) (intro.) and (b), am. (1), (4) (intro.), (b), (d), (5) and (6), renum. (2) (a) 1. to 3. and (4) (c) to be (2) and (4) (c) 2. and am., renum. (3) to be Trans 401.105 and am., cr. (4) (ag), (am), (c) 1., 3., 4. and (4m) Register December 2002 No. 564 , eff. 1-1-03. Trans 401.105 Trans 401.105 Corrective action. Trans 401.105(1) (1) Trans 401.105(1)(a) (a) An inspector who believes that changes or corrections are needed to best management practices may, by written order delivered to the prime contractor, temporarily suspend work until the project engineer is notified and decides all questions at issue. The prime contractor shall respond to the order in a manner consistent with the contract documents, as defined in s. Trans 401.12 (1) (a) . The project engineer shall, by written notice, inform the prime contractor whenever an inspection of a project site or selected site reveals the need for changes or corrections to best management practices. Trans 401.105(1)(b) (b) The department shall prescribe an erosion control order form for use whenever a corrective action is ordered on any project directed and supervised by the department. The department shall publish the form in the construction and materials manual. The project engineer shall include a copy of the completed inspection report with every erosion control order issued. Trans 401.105 Note Note: Erosion control order forms may be obtained upon request by writing to the Department's Division of Transportation Infrastructure Development, Bureau of Environment, P. O. Box 7965, Room 451, Madison, WI 53707-7965, or by calling (608) 267-3615. Trans 401.105(1m) (1m) An authorized representative of the department shall inform the utility person, verbally or in writing, whenever an inspection of the project site by the department reveals the need for changes or corrections to best management practices. A utility person shall comply with any corrective action order, written or verbal, issued by the department's authorized representative within the time specified in the order or, if no time is specified, within 24 hours after receiving the order. 
Upon completing the corrective action, the utility person shall notify the appropriate department representative of the corrective action taken and the date completed. Trans 401.105(2) (2) Upon receipt of an erosion control order form ordering changes or corrections to existing best management practices, the prime contractor shall implement, or cause to be implemented, the necessary corrective action within the time specified in the order or, if no time is specified, within 24 hours after receiving the order. The prime contractor shall deliver the erosion control order form to the project engineer upon completion of the corrective action and shall include on the form a description of the corrective action implemented and the date completed. Trans 401.105(3) (3) The department may approve or reject any completed corrective action by inspecting the affected area within 16 hours after the prime contractor or utility person delivers the completed erosion control order form to the project engineer or, for utility facility projects, to the department's authorized representative. The department shall consider all matters required in an erosion control order satisfactorily completed after that 16 hours has elapsed, or at 12 noon on the day the 16 hours expires, whichever is later, unless within the later of those 2 times the department has inspected and rejected the corrective action implemented. If a discharge occurs after the prime contractor or utility person delivers the erosion control order form under this section but before the later of those 2 times, the prime contractor or utility person shall have an opportunity to demonstrate that the corrective action was completed as required prior to the discharge. If the department does not reject any completed corrective action within the time specified in this subsection, the department may compel corrective action at the affected area only by issuing a new erosion control order. Trans 401.105(4) (4) Notwithstanding any time period permitted under this section for completing corrective action, a prime contractor is considered not in compliance with the contract documents, as defined in s. Trans 401.12 (1) (a) , for any area or matter described in the erosion control order form as requiring changes or corrections until such time as the change or correction is satisfactorily completed, as determined under sub. (3) . Trans 401.105(5) (5) Written notices are considered delivered to a prime contractor for purposes of this section when the written notice is presented to the head representative of the prime contractor then available on the project site or selected site, or when written notice is delivered to the prime contractor's principal place of business, whichever occurs earlier. Written notices are considered delivered to a project engineer or to the department when the written notice or form is presented to the project engineer or to the authorized department representative then available on the project site, or when written notice is delivered to the project engineer's principal place of business, whichever occurs earlier. Trans 401.105 History History: CR 02-081 : renum. from Trans 401.10 (3) and am., cr. (1m) Register December 2002 No. 564 , eff. 1-1-03. Trans 401.106 Trans 401.106 Post-construction performance standard. Trans 401.106(1) (1) Definitions. 
In this section: Trans 401.106(1)(a) (a) "Average annual rainfall" means the rainfall determined by the following year and location for the location nearest the project site: Madison, 1981 (Mar. 12-Dec. 2); Green Bay, 1969 (Mar. 29-Nov. 25); Milwaukee, 1969 (Mar. 28-Dec. 6); Minneapolis, 1959 (Mar. 13-Nov. 4); Duluth, 1975 (Mar. 24-Nov. 19). Trans 401.106(1)(b) (b) "TR-55" means the United States Department of Agriculture, Natural Resources Conservation Service (formerly Soil Conservation Service), Urban Hydrology for Small Watersheds, Second Edition, Technical Release 55, June 1986, or Technical Release 55 for Windows (Win TR-55), 2002. Trans 401.106 Note Note: TR-55 is on file with the offices of the Legislative Reference Bureau, the Secretary of State, and the Department of Transportation, Office of General Counsel. Copies may be obtained by writing to the U.S. Department of Agriculture, Natural Resources Conservation Service, Conservation Engineering Division, 14th and Independence Avenue, SW., Room 6136-S, Washington, DC 20250. The phone number for the division is: 202-720-2520, and the fax number is: 202-720-0428. TR-55 is available electronically at: Trans 401.106 Note Trans 401.106(2) (2) Plan. The department shall develop and implement a written plan that includes the requirements of subs. (3) to (10) for each transportation facility. This plan may be part of the erosion control plan. Trans 401.106(3) (3) Total suspended solids. Best management practices shall be designed, installed and maintained to control total suspended solids carried in runoff from the transportation facility as follows: Trans 401.106(3)(a) (a) For transportation facilities first constructed on or after January 1, 2003 by design, reduce the suspended solids load to the maximum extent practicable, based on an average annual rainfall, as compared to no runoff management controls. A reduction in total suspended solids by at least 80% meets the requirements of this paragraph. Trans 401.106(3)(b) (b) For highway reconstruction and non-highway redevelopment, by design, reduce to the maximum extent practicable the total suspended solids load by at least 40%, based on an average annual rainfall, as compared to no runoff management controls. A 40% or greater total suspended solids reduction shall meet the requirements of this paragraph. In this paragraph, "redevelopment" means the construction of residential, commercial, industrial or institutional land uses and associated roads as a substitute for existing residential, commercial, industrial or institutional land uses. Trans 401.106(3)(c) (c) Notwithstanding pars. (a) and (b) , if the design cannot achieve the applicable total suspended solids reduction specified, the design plan shall include a written and site-specific explanation why that level of reduction is not attained and the total suspended solids load shall be reduced to the maximum extent practicable. Trans 401.106(4) (4) Peak discharge. Trans 401.106(4)(a) (a) By design, BMPs shall be employed to maintain or reduce the peak runoff discharge rates, to the maximum extent practicable, as compared to pre-development site conditions for the 2-year 24-hour design storm or to the 2-year design storm with a duration equal to the time of concentration applicable to the transportation facility. Pre-development conditions shall assume "good hydrologic conditions" for appropriate land covers as identified in TR-55 or an equivalent methodology. 
The meaning of "hydrologic soil group" and "runoff curve number" are as determined in TR-55. However, when pre-development land cover is cropland, rather than using TR-55 values for cropland, the runoff curve numbers in Table 2 below shall be used. - See PDF for table Trans 401.106 Note Note: The curve numbers in Table 2 represent mid-range values for soils under a good hydrologic condition where conservation practices are used and are selected to be protective of the resource waters. Trans 401.106(4)(b) (b) This subsection does not apply to: Trans 401.106(4)(b)1. 1. A transportation facility where the change in hydrology due to development does not increase the existing surface water elevation at any point within the downstream receiving surface water by more than 0.01 of a foot for the 2-year 24-hour storm or for a 2-year design storm with a duration equal to the time of concentration. Trans 401.106 Note Note: Hydraulic models, such as HEC-2 or an equivalent methodology, may be used to determine the change in surface water elevations. Trans 401.106(4)(b)2. 2. A highway reconstruction site. Trans 401.106(5) (5) Infiltration. Trans 401.106(5)(a) (a) Except as provided in pars. (d) to (g) , BMPs shall be designed, installed and maintained to infiltrate runoff to the maximum extent practicable in accordance with one of the following: Trans 401.106(5)(a)1. 1. Infiltrate sufficient runoff volume so that the post-construction infiltration volume shall be at least 60% of the pre-construction infiltration volume, based on an average annual rainfall. However, when designing appropriate infiltration systems to meet this requirement, no more than 2% of the project site is required as an effective infiltration area. Trans 401.106(5)(a)2. 2. Infiltrate 10% of the post-development runoff volume from the 2-year 24-hour design storm with a type II distribution. Separate curve numbers for pervious and impervious surfaces shall be used to calculate runoff volumes and not composite curve numbers as defined in TR-55. However, when designing appropriate infiltration systems to meet this requirement, no more than 2% of the project site is required as an effective infiltration area. Trans 401.106(5)(b) (b) Pre-development condition shall be the same as specified in sub. (4) (a) . Trans 401.106(5)(c) (c) par. (g) . Pretreatment may include, but is not limited to, oil and grease separation, sedimentation, biofiltration, filtration, swales or filter strips. Trans 401.106 Note Note: To minimize potential groundwater impacts it is desirable to infiltrate the cleanest runoff. To achieve this, a design may propose greater infiltration of runoff from low pollutant sources such as roofs, and less from higher pollutant source areas such as parking lots. Trans 401.106(5)(d) (d) The following are prohibited from meeting the requirements of this subsection, due to the potential for groundwater contamination: Trans 401.106(5)(d)1. 1. Areas associated with tier 1 industrial facilities identified in s. NR 216.21 (2) (a) , including storage, loading, rooftop and parking. Trans 401.106(5)(d)2. 2. Storage and loading areas of tier 2 industrial facilities identified in s. NR 216.21 (2) (b) . Trans 401.106 Note Note: Runoff from tier 2 parking and rooftop areas may require pretreatment before infiltration. Trans 401.106(5)(d)3. 3. Fueling and vehicle maintenance areas. Trans 401.106(5)(d)4. 4. Areas within 1000 feet upgradient or within 100 feet downgradient of karst features. 
http://docs.legis.wisconsin.gov/code/admin_code/trans/401/10/2/b
2013-05-18T17:20:15
CC-MAIN-2013-20
1368696382584
[]
docs.legis.wisconsin.gov
Developing a Plugin on iOS Writing a plugin requires an understanding of the architecture of Cordova-iOS. Cordova-iOS consists of a UIWebView where intercept commands passed in as url changes. These plugins are represented as class mappings in the Cordova.plist file, under the Plugins key. A plugin is an Objective-C class that extends the CDVPlugin class.. The options parameter for the Objective-C plugin method is being deprecated, and it should not be used. For legacy reasons - the last JavaScript object passed in the args Array will be passed in as the options dictionary of the method in Objective-C. You must make sure that any JavaScript object that is passed in as an element in the args array occurs as the last item in the Array, if not it will throw off the array index of all subsequent parameters of the Array in Objective-C. Only one JavaScript object is supported for the options dictionary, and only the last one encountered will be passed to the native method. It is because of these error-prone reasons that they are being deprecated. The plugin must be added to Plugins key (a Dictionary) of the Cordova.plist file in your Cordova-iOS application's project folder. <key>service_name</key> <string>PluginClassName</string> The key service_name should match what you use in the JavaScript exec call, and the value will be the name of the Objective-C class of the plugin. Without this added, the plugin may compile but will not be reachable by Cordova. Writing an iOS Cordova Plugin We have JavaScript fire off a plugin request to the native side. We have the iOS Objective-C plugin mapped properly via the Cordova.plist file. So what does the final iOS Objective-C Plugin class look like? What gets dispatched to the plugin via JavaScript's exec function gets passed into the corresponding Plugin class's action method. 
Most method implementations look like this: - (void) myMethod:(NSMutableArray*)arguments withDict:(NSMutableDictionary*)options { NSString* callbackId = [arguments objectAtIndex:0]; CDVPluginResult* pluginResult = nil; NSString* javaScript = nil; @try { NSString* myarg = [arguments objectAtIndex:1]; if (myarg != nil) { pluginResult = [CDVPluginResult resultWithStatus:CDVCommandStatus_OK]; javaScript = [pluginResult toSuccessCallbackString:callbackId]; } } @catch (id exception) { pluginResult = [CDVPluginResult resultWithStatus:CDVCommandStatus_JSON_EXCEPTION messageAsString:[exception reason]]; javaScript = [pluginResult toErrorCallbackString:callbackId]; } [self writeJavascript:javaScript]; } Echo Plugin iOS Plugin We would add the following to the Plugins key (a Dictionary) of the project's Cordova.plist file: <key>Echo</key> <string>Echo</string> Then we would add the following files ( Echo.h and Echo.m) to the Plugins folder inside our Cordova-iOS application folder: /********* Echo.h Cordova Plugin Header *******/ #import <Cordova/CDVPlugin.h> @interface Echo : CDVPlugin - (void) echo:(NSMutableArray*)arguments withDict:(NSMutableDictionary*)options; @end /********* Echo.m Cordova Plugin Implementation *******/ #import "Echo.h" #import <Cordova/CDVPluginResult.h> @implementation Echo - (void) echo:(NSMutableArray*)arguments withDict:(NSMutableDictionary*)options { NSString* callbackId = [arguments objectAtIndex:0]; CDVPluginResult* pluginResult = nil; NSString* javaScript = nil; @try { NSString* echo = [arguments objectAtIndex:1]; if (echo != nil && [echo length] > 0) { pluginResult = [CDVPluginResult resultWithStatus:CDVCommandStatus_OK messageAsString:echo]; javaScript = [pluginResult toSuccessCallbackString:callbackId]; } else { pluginResult = [CDVPluginResult resultWithStatus:CDVCommandStatus_ERROR]; javaScript = [pluginResult toErrorCallbackString:callbackId]; } } @catch (NSException* exception) { pluginResult = [CDVPluginResult resultWithStatus:CDVCommandStatus_JSON_EXCEPTION messageAsString:[exception reason]]; javaScript = [pluginResult toErrorCallbackString:callbackId]; } [self writeJavascript:javaScript]; } @end Let's take a look at the code. At the top we have all of the necessary Cordova imports. Our class extends from CDVPlugin - very important. This plugin only supports one action, the echo action. First, we grab the callbackId parameter, which is always the 0th item in the arguments array. Next, we grab the echo string using the objectAtIndex method on our args, telling it we want to get the 1st parameter in the arguments array. We do a bit of parameter checking: make sure it is not nil, and make sure it is not a zero-length string. If it is, we return a PluginResult with an ERROR status. If all of those checks pass, then we return a PluginResult with an OK status, and pass in the echo string we received in the first place as a parameter. Then, we convert the PluginResult to JavaScript by calling either the toSuccessCallbackString (if it was OK) or toErrorCallbackString (if it was an error) methods. Finally we write the JavaScript back to the UIWebView, which will execute the JavaScript that will callback to success or failure callbacks of the exec method on the JavaScript side. If the success callback was called, it will pass the echo parameter as a parameter. Advanced Plugin Functionality See other methods that you can override in: For example, you can hook into the pause, resume, app terminate and handleOpenURL events. 
Debugging Plugins To debug the Objective-C side, you would use Xcode's built in debugger. For JavaScript, you can use Weinre, an Apache Cordova Project or iWebInspector, a third-party utility Common Pitfalls - Don't forget to add your plugin's mapping to Cordova.plist - if you forgot, an error will be printed to the Xcode console log - Don't forget to add any hosts you connect to in the whitelist - if you forgot, an error will be printed to the Xcode console log If you handle the resume event, and the app resumes, you can hang the app if you send out a JavaScript call that executes a native function, like alerts. To be safe, wrap your JavaScript call in a setTimeout call, with a timeout value of zero: setTimeout(function() { // do your thing here! }, 0);
http://docs.phonegap.com/en/2.0.0/guide_plugin-development_ios_index.md.html
2013-05-18T17:56:11
CC-MAIN-2013-20
1368696382584
[]
docs.phonegap.com
View tree Close tree | Preferences | | Feedback | Legislature home | Table of contents Search Up Up Ins 25.55(2)(b)5. 5. To underwrite insurance at the consumer's request or for any of the following purposes as they relate to a consumer's insurance: account administration, reporting, investigating or preventing fraud or material misrepresentation, processing premium payments, processing insurance claims, administering insurance benefits including utilization review activities, participating in research projects, workers compensation premium audits, workers' compensation first reports of injury, workers' compensation loss runs or as otherwise required or specifically permitted by federal or state law. Ins 25.55(2)(b)6. 6. In connection with any of the following: Ins 25.55(2)(b)6.a. a. The authorization, settlement, billing, processing, clearing, transferring, reconciling or collection of amounts charged, debited or otherwise paid using a debit, credit or other payment card, check or account number, or by other payment means. Ins 25.55(2)(b)6.b. b. The transfer of receivables, accounts or interests therein. Ins 25.55(2)(b)6.c. c. The audit of debit, credit or other payment information. Ins 25.55 History History: Cr. Register, June, 2001, No. 546 , eff. 7-1-01. Ins 25.60 Ins 25.60 Other exceptions to notice and opt out requirements for disclosure of nonpublic personal financial information. Ins 25.60(1) (1) Exceptions to opt out requirements. The requirements for initial notice to consumers in s. Ins 25.10 (1) (b) , the opt out in ss. Ins 25.17 and 25.30 , and service providers and joint marketing in s. Ins 25.50 do not apply when a licensee discloses nonpublic personal financial information under any of the following circumstances: Ins 25.60(1)(a) (a) With the consent or at the direction of the consumer, provided that the consumer has not revoked the consent or direction. Ins 25.60(1)(b) (b) Ins 25.60(1)(b)1. 1. To protect the confidentiality or security of a licensee's records pertaining to the consumer, service, product or transaction. Ins 25.60(1)(b)2. 2. To protect against or prevent actual or potential fraud or unauthorized transactions. Ins 25.60(1)(b)3. 3. For required institutional risk control or for resolving consumer disputes or inquiries. Ins 25.60(1)(b)4. 4. To persons holding a legal or beneficial interest relating to the consumer. Ins 25.60(1)(b)5. 5. To persons acting in a fiduciary or representative capacity on behalf of the consumer. Ins 25.60(1)(c) (c) To provide information to insurance rate advisory organizations, guaranty funds or agencies, agencies that are rating a licensee, persons that are assessing the licensee's compliance with industry standards, and the licensee's attorneys, accountants and auditors. Ins 25.60(1)(d) (d) To the extent specifically permitted or required under other provisions of law and in accordance with the federal Right to Financial Privacy Act of 1978 ( 12 USC. Ins 25.60(1)(e) (e) Ins 25.60(1)(e)1. 1. To a consumer-reporting agency in accordance with the federal Fair Credit Reporting Act ( 15 USC 1681 et seq.). Ins 25.60(1)(e)2. 2. Disclosure from a consumer report reported by a consumer-reporting agency. Ins 25.60(1)(f) (f) In connection with a proposed or actual sale, merger, transfer or exchange of all or a portion of a business or operating unit if the disclosure of nonpublic personal financial information concerns solely consumers of the business or unit. Ins 25.60(1)(g) (g) Ins 25.60(1)(g)1. 1. 
To comply with federal, state or local laws, rules and other applicable legal requirements. Ins 25.60(1)(g)2. 2. To comply with a properly authorized civil, criminal or regulatory investigation, or subpoena or summons by federal, state or local authorities. Ins 25.60(1)(g)3. 3. To respond to judicial process or government regulatory authorities having jurisdiction over a licensee for examination, compliance or other purposes as authorized by law. Ins 25.60(1)(h) (h) For purposes related to the replacement of a group benefit plan, a group health plan, a group welfare plan or a workers' compensation policy. Ins 25.60(2) (2) Example of revocation of consent. A consumer may revoke consent by subsequently exercising the right to opt out of future disclosures of nonpublic personal financial information as permitted under s. Ins 25.17 (6) . Ins 25.60(3) (3) Receivership. This chapter does not apply to a receiver for an insurer subject to a delinquency proceeding under ch. 645 , Stats. Ins 25.60 History History: Cr. Register, June, 2001, No. 546 , eff. 7-1-01; correction in (1) (intro.) made under s. 13.93 (2m) (b) 7., Stats., Register March 2004 No. 579 . subch. V of ch. Ins 25 Subchapter V — Health Information Ins 25.70 Ins 25.70 When authorization required for disclosure of nonpublic personal health information. Ins 25.70(1) (1) A licensee shall not disclose nonpublic personal health information about a consumer or customer unless an authorization is obtained from the consumer or customer whose nonpublic personal health information is sought to be disclosed or unless disclosure of the health information is permitted under ss. 51.30 , or 146.81 to 146.84 , Stats., or otherwise authorized by law. Ins 25.70(2) ; rate-making; workers' compensation premium audits; workers' compensation first reports of injury; workers' compensation loss runs; commissioner to the extent they are necessary for appropriate performance of insurance functions and are fair and reasonable to the interest of consumers. A licensee may apply for approval of, and the commissioner may approve additional specific insurance functions that are subject to this subsection if the commissioner finds inclusion is fair and reasonable to the interests of consumers. Ins 25.70 History History: Cr. Register, June, 2001, No. 546 , eff. 7-1-01. Ins 25.73 Ins 25.73 Authorizations. Ins 25.73(1) (1) A valid authorization to disclose nonpublic personal health information pursuant to this subchapter shall be in written or electronic form and shall contain all of the following: Ins 25.73(1)(a) (a) The identity of the consumer or customer who is the subject of the nonpublic personal health information. Ins 25.73(1)(b) (b) A general description of the types of nonpublic personal health information to be disclosed. Ins 25.73(1)(c) (c) General descriptions of the parties to whom the licensee discloses nonpublic personal health information, the purpose of the disclosure and how the information will be used. Ins 25.73(1)(d) (d) The signature of the consumer or customer who is the subject of the nonpublic personal health information or the individual who is legally empowered to grant authority and the date signed. Ins 25.73(1)(e) (e) Notice of the length of time for which the authorization is valid and that the consumer or customer may revoke the authorization at any time and the procedure for making a revocation. 
Ins 25.73(2) (2) An authorization for the purposes of this subchapter shall specify a length of time for which the authorization shall remain valid, which in no event shall be for more than the period permitted if the authorization were subject to s. 610.70 (2) (b) , Stats., or twenty-four months, whichever is longer. Ins 25.73(3) (3) A consumer or customer who is the subject of nonpublic personal health information may revoke an authorization provided pursuant to this subchapter at any time, subject to the rights of an individual who acted in reliance on the authorization prior to notice of the revocation. Ins 25.73(4) (4) A licensee shall retain the authorization or a copy thereof in the record of the individual who is the subject of nonpublic personal health information. Ins 25.73 History History: Cr. Register, June, 2001, No. 546 , eff. 7-1-01. Ins 25.75 Ins 25.75 Authorization request delivery. A request for authorization and an authorization form may be delivered to a consumer or a customer as part of an opt-out notice pursuant to s. Ins 25.25 , provided that the request and the authorization form are clear and conspicuous. An authorization form is not required to be delivered to the consumer or customer or included in any other notices unless the licensee intends to disclose protected health information pursuant to s. Ins 25.70 (1) . Ins 25.75 History History: Cr. Register, June, 2001, No. 546 , eff. 7-1-01. Ins 25.77 Ins 25.77 Relationship to federal rules. Irrespective of whether a licensee is subject to the federal Health Insurance Portability and Accountability Act privacy rule as promulgated by the U.S. Department of Health and Human Services, if a licensee complies with all requirements of that rule, regardless of whether it currently applies to the licensee, the licensee shall not be subject to the provisions of this subchapter. Ins 25.77 History History: Cr. Register, June, 2001, No. 546 , eff. 7-1-01. Ins 25.80 Ins 25.80 Insurers and agents compliance with s. 610.70, Stats. Ins 25.80(1) (1) An insurer that is subject to s. 610.70 , Stats., or an intermediary acting solely as an agent of an insurer subject to s. 610.70 , Stats., with respect to health information is not required to comply with this subchapter. An insurer is responsible for the acts or omissions of its agents that constitute violations of s. 610.70 , Stats. Ins 25.80(2) (2) For the purposes of s. 610.70 (1) (d) , Stats., "insurance that is primarily for personal, family or household purposes" includes group or individual health insurance policies and personal automobile, homeowners, disability and life policies. It does not include workers' compensation or commercial property and casualty policies. Ins 25.80(3) (3) Nothing in this chapter or s. 610.70 , Stats., restricts disclosure of nonpublic personal health information permitted under s. 102.13 , Stats. Ins 25.80 History History: Cr. Register, June, 2001, No. 546 , eff. 7-1-01. subch. VI of ch. Ins 25 Subchapter VI — Additional Provisions Ins 25.90 Ins 25.90 Nondiscrimination. Ins 25.90(1) (1) A licensee shall not unfairly discriminate against any consumer or customer because that consumer or customer has opted out from the disclosure of his or her nonpublic personal financial information pursuant to the provisions of this chapter. 
Ins 25.90(2) (2) A licensee shall not unfairly discriminate against a consumer or customer because that consumer or customer has not granted authorization for the disclosure of his or her nonpublic personal health information pursuant to the provisions of this chapter. Ins 25.90(3) (3) Failure to provide an insurance product or service based on usual and customary insurance underwriting practices and standards is not unfair discrimination under this section, even if such failure is the result of a consumer or customer's refusal to authorize the disclosure of his or her nonpublic personal information. Ins 25.90 History History: Cr. Register, June, 2001, No. 546 , eff. 7-1-01. Ins 25.95 Ins 25.95 Effective date. Ins 25.95(1) (1) Applicability. Enforcement under section 505 of the Gramm-Leach-Bliley Act (PL 102-106 ) is effective only on and after the effective date of this rule. Ins 25.95(2) (2) Phase in. Ins 25.95(2)(a) (a) Phased in notice requirement for consumers who are the licensee's customers on the compliance date. Beginning on the first day of the fourth month commencing after the after publication of this rule and by not later than June 30, 2002 a licensee shall provide an initial notice, as required by s. Ins 25.10 , to consumers who are the licensee's customers on the first day of the fourth month commencing after the after publication of this rule. Ins 25.95(2)(b) (b) Example. A licensee provides an initial notice to consumers who are its customers on the first day of the fourth month commencing after the after publication of this rule, if, by that date, the licensee has established a system for providing an initial notice to all new customers and if by June 30, 2002 the licensee has mailed the initial notice to all the licensee's existing customers. Ins 25.95 History History: Cr. Register, June, 2001, No. 546 , eff. 7-1-01; CR 03-083 : r. (3) Register March 2004 No. 579 , eff. 4-1-04. Next file: Chapter Ins 25 Appendix A /code/admin_code/ins/25 true administrativecode /code/admin_code/ins/25/V administrativecode/subch. V of ch. Ins 25 administrativecode/subch. V of ch. Ins?
http://docs.legis.wisconsin.gov/code/admin_code/ins/25/V
2013-05-18T18:08:58
CC-MAIN-2013-20
1368696382584
[]
docs.legis.wisconsin.gov
Synchronize your music from your computer to your smartphone or tablet The number of iTunes songs and playlists that you can synchronize depends on the amount of storage space on your media card or built-in media storage that is available for storing music files. Before you begin: To perform this task, on your BlackBerry® smartphone, mass storage mode must be turned on. - Connect your smartphone or BlackBerry® PlayBook™ tablet to your computer. - On your computer, in the Applications folder, click the BlackBerry Desktop Software icon. - In the Media section in the left pane, click Music. - Select the Sync Music check box. - Do any of the following: - To synchronize your entire music collection, select the Sync All Music check box. - To synchronize videos from iTunes, select the Include Videos check box. - To synchronize specific playlists or synchronize music by artist or genre, select the check box beside one or more playlists, artists, or genres. - To synchronize a random selection of your remaining iTunes songs that aren't in a playlist, select the Fill with Random Music check box. These songs appear in the Random Music playlist in the Music application on your smartphone or tablet. - Click Sync.
http://docs.blackberry.com/en/smartphone_users/deliverables/33930/Sync_music_2.1_Mac_1653522_11.jsp
2013-05-18T17:58:14
CC-MAIN-2013-20
1368696382584
[]
docs.blackberry.com
I have added a couple of applications in the example directory that can be used as templates for building other applications. Here is an extract from a basic TextEditor that shows how to use the JFace builder: Simple, isn't it? For the complete code, see Subversion or the attached snapshot.
http://docs.codehaus.org/pages/diffpages.action?pageId=76022033&originalId=228172706
2013-05-18T17:57:08
CC-MAIN-2013-20
1368696382584
[array(['/s/en_GB/3278/15/_/images/icons/emoticons/smile.png', '(smile)'], dtype=object) ]
docs.codehaus.org
Talk:Creating a custom form field type From Joomla! Documentation (Difference between revisions) Revision as of 06:47, 7 June 2011 Setting up a base class is not necessary. I have implemented a custom list field using JFormField as the parent class. There is no need for a class like the one specified in step 2; just write your own getInput() or getOptions().
http://docs.joomla.org/index.php?title=Talk:Creating_a_custom_form_field_type&diff=59360&oldid=37227
2013-05-18T17:58:04
CC-MAIN-2013-20
1368696382584
[]
docs.joomla.org
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Get-BAKBackupSelectionList
-BackupPlanId <String>
-MaxResult <Int32>
-NextToken <String>
-Select <String>
-PassThru <SwitchParameter>
-NoAutoIteration <SwitchParameter>
If a request is made to return maxResults number of items, NextToken allows you to return more items in your list starting at the location pointed to by the next token.
AWS Tools for PowerShell: 2.x.y.z
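The same listing can also be scripted outside of PowerShell. Below is a minimal sketch using Python and boto3; the backup plan ID is a placeholder, and the response field names reflect the AWS Backup ListBackupSelections API as commonly documented, so verify them against your SDK version.

```python
# Sketch: page through backup selections for one backup plan with boto3,
# following NextToken until the service stops returning one.
import boto3

client = boto3.client("backup")

def list_all_selections(backup_plan_id):
    """Collect every backup selection for the given plan."""
    selections = []
    kwargs = {"BackupPlanId": backup_plan_id, "MaxResults": 50}
    while True:
        response = client.list_backup_selections(**kwargs)
        selections.extend(response.get("BackupSelectionsList", []))
        token = response.get("NextToken")
        if not token:
            return selections
        kwargs["NextToken"] = token

# "your-backup-plan-id" is a placeholder -- substitute a real plan ID.
for item in list_all_selections("your-backup-plan-id"):
    print(item.get("SelectionName"), item.get("SelectionId"))
```

The manual NextToken loop is presumably what the cmdlet's auto-iteration handles for you when -NoAutoIteration is not specified.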
https://docs.aws.amazon.com/powershell/latest/reference/items/Get-BAKBackupSelectionList.html
2020-02-17T00:12:09
CC-MAIN-2020-10
1581875141460.64
[]
docs.aws.amazon.com
Using Zapier you can connect Real Geeks with many other lead generation services and CRMs. See their list of available integrations and how it works. First of all you need to create an account in Zapier. They have a free basic plan and more advanced and paid options. Real Geeks app inside Zapier has two options: you can use it as a Trigger or as an Action First create a new Zap Then you'll pick the trigger you want. Once you select the Real Geeks app the first step will be to authenticate with your Real Geeks account. Click “Connect a New Account” You will then get a pop-up window from Real Geeks asking you to sign in with your Real Geeks account. After you sign in, you will get sent back to your Zap where you now have your Real Geeks account connected. New Lead triggers when a lead is created in Real Geeks, be it a new sign up on the website, a lead added manually on the Lead Manager or even leads generated with custom sources you have configured before. When you are creating your Zap, after signing in with your Real Geeks account, the New Lead trigger will ask for your site. If you have more than one Real Geeks site and want to trigger leads from all of them, you’ll need one Zap per site. When you select your site and click Continue, Zapier will attempt to test your account by fetching one lead from your Lead Manager. You will need to have at least one existing lead in your Lead Manager, if you don’t have any please go ahead and create a lead manually in your Lead Manager before continuing. By clicking Fetch & Continue Zapier will connect to your Lead Manager, fetch the newest lead and ensure the connection with Real Geeks is working. Now your trigger is configured. Next step is to setup Actions and optionally Filters in between. You're all done! Create Lead action will create a new lead in Real Geeks. The lead will be added to your Lead Manager and also your website. If you have other integrations configured with Real Geeks this lead will also be sent to them. When you are creating your Zap, after signing in with your Real Geeks account, the Create Lead action will prompt you to fill up the lead fields. Lead details like name and phone number should have values that originated from the Trigger step of your Zap. The possible field values will vary depending on the Trigger you’re using. After you mapped those fields click Continue. You will be asked to test the integration by sending a lead from your Trigger to Real Geeks. The last step will ask to name your Zap and turn it on. You’re all done! If you are already using Zapier with our private beta the upgrade to our new app is really simple. First edit your Zap Note how the name of the App you're using is now Real Geeks 1.1 (Legacy) Make your edit and now pick the App called Real Geeks A new step will prompt you to choose your Real Geeks site this Zap is associated to. If you're using Create Lead Action the fields will have to be mapped again, just follow the normal Zap steps. We have some Zap templates to help you get started.
https://docs.realgeeks.com/zapier
2020-02-17T01:57:40
CC-MAIN-2020-10
1581875141460.64
[]
docs.realgeeks.com
Managing the Device¶ The FoundriesFactory provides you the ability to produce your own custom platform images, and a mechanism to securely deploy them to any of the registered devices. A couple of quick notes: The default user and password is fio; we recommend changing it now if you haven’t already. For dynamic host name resolution to work, your local network needs to support Zeroconf and the hostname must be otherwise unclaimed. Registering the Device with the OTA Server¶ Your Linux microPlatform image includes a tool, lmp-device-register that will register your device(s) via Foundries.io REST API. From a console on the device run this command to register the device into your factory: # -a - this allows you to specify an optional comma separated list of # Docker Apps to manage on the device. Check :ref:`tutorial-containers` # for more information. # -n - the name of the device as you'd like it to appear in fioctl and the UI sudo lmp-device-register -n device-name-01 There will be a challenge that needs to be pasted into the URL printed out on the screen. Enter the challenge into the text field on the website. Once the device has been registered, from a console on the device run the following command: sudo systemctl restart aktualizr-lite Now your device is registered, and any new platform builds produced will be pushed down to this device, and be applied. You can view all the devices registered to your factory at:. You can also work with your fleet using the fioctl command line tool. If for some reason you would like to re-provision a device, you will need to remove the credentials generated by lmp-device-register, and re-provision the device. Using the same device name twice is not supported: sudo -s systemctl stop aktualizr-lite rm -rf /var/sota/* lmp-device-register -a shellhttpd -n your_device_name systemctl start aktualizr-lite If you would like to monitor the update status on the device, here are a couple of useful commands: sudo journalctl -f -u aktualizr-lite sudo ostree admin status
https://docs.foundries.io/latest/customer-factory/managing.html
2020-02-17T01:40:38
CC-MAIN-2020-10
1581875141460.64
[]
docs.foundries.io
All content with label 2lcache+gridfs+gui_demo+hibernate+infinispan+installation+interactive+interceptor+repeatable_read+s+setup+xaresource. Related Labels: expiration, publish, datagrid, coherence, server, replication, recovery, transactionmanager, dist, release, query, deadlock, jbossas, lock_striping, nexus, demos, guide, schema, listener, cache, amazon, s3, grid, jcache, api, xsd, ehcache, maven, documentation, jboss, wcm, youtube, write_behind, ec2, 缓存, getting, aws, getting_started, custom_interceptor, clustering, eviction, ls, concurrency, out_of_memory, examples, jboss_cache, import, index, events, batch, hash_function, configuration, buddy_replication, loader, write_through, cloud, mvcc, tutorial, read_committed, xml, jbosscache3x, distribution, started, cachestore, data_grid, cacheloader, hibernate_search, resteasy, cluster, development, transaction, async, build, gatein, searchable, demo, scala, client, as7, non-blocking, filesystem, jpa, tx, eventing, client_server, testng, infinispan_user_guide, standalone, webdav, hotrod, snapshot, docs, batching, consistent_hash, store, jta, faq, as5, jsr-107, jgroups, lucene, locking, favourite, rest, hot_rod more » ( - 2lcache, - gridfs, - gui_demo, - hibernate, - infinispan, - installation, - interactive, - interceptor, - repeatable_read, - s, - setup, - xaresource ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/2lcache+gridfs+gui_demo+hibernate+infinispan+installation+interactive+interceptor+repeatable_read+s+setup+xaresource
2020-02-17T02:48:44
CC-MAIN-2020-10
1581875141460.64
[]
docs.jboss.org
Use NetworkInformation classes to get the list of listening ports on your machine
.NET Framework 2.0 adds a new namespace, System.Net.NetworkInformation, which provides a number of interesting classes for extracting network-related statistics and the state of the machine; it covers most of the functionality exposed by the native IPHelper APIs. Earlier I had shown a simple example for getting network availability event notification. Here is another short example, where your application can check all the listening TCP ports on the machine.

using System;
using System.Net;
using System.Net.NetworkInformation;

public class Test
{
    public static void Main()
    {
        // GetActiveTcpListeners() returns the local endpoints of all TCP listeners.
        IPGlobalProperties ipGlobal = IPGlobalProperties.GetIPGlobalProperties();
        IPEndPoint[] connections = ipGlobal.GetActiveTcpListeners();
        foreach (IPEndPoint ipe in connections)
            Console.WriteLine("Listening IPEndPoint = " + ipe.Port);
    }
}
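For comparison only (this is not part of the .NET API above), here is a rough Python equivalent using the third-party psutil package, handy for cross-checking the output on the same machine:

```python
# Rough cross-check of the C# sample: list local TCP ports in the LISTEN state
# using the third-party psutil package.
import psutil

listening_ports = sorted(
    {conn.laddr.port
     for conn in psutil.net_connections(kind="tcp")
     if conn.status == psutil.CONN_LISTEN}
)

for port in listening_ports:
    print("Listening port =", port)
```

Depending on the operating system, enumerating sockets owned by other users may require elevated privileges.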
https://docs.microsoft.com/en-us/archive/blogs/adarshk/use-networkinformation-classes-to-get-the-list-of-listening-ports-on-your-machine
2020-02-17T02:14:44
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
MSSQLSERVER error 823 SQL Server Azure SQL Database Azure Synapse Analytics (SQL DW) Parallel Data Warehouse Details Explanation SQL Server uses Windows APIs (for example, ReadFile, WriteFile, ReadFileScatter, WriteFileGather) to perform file I/O operations. After performing these I/O operations, SQL Server checks for any error conditions associated with these API calls. If the API calls fail with an Operating System error, then SQL Server reports Error 823. The 823 error message contains the following information: - The database file against which the I/O operation was performed - The offset within the file where the I/O operation was attempted. This is the physical byte offset from the start of the file. Dividing this number by 8192 will give you the logical page number that is affected by the error. - Whether the I/O operation is a read or write request - The Operating System error code and error description in parentheses Operating system error: A read or write Windows API call is not successful, and SQL Server encounters an operating system error that is related to the Windows API call. The following message is an example of an 823 error: Error: 823, Severity: 24, State: 2. 2010. Additional diagnostic information for 823 errors may be written to the SQL Server error log file when you use trace flag 818. For more information, see KB 826433: Additional SQL Server diagnostics added to detect unreported I/O problems Cause. In the case of a file read, SQL Server will have already retried the read request four times before it returns 823. If the retry operation succeeds, the query will not fail but message 825 will be written into the ERRORLOG and Event Log. User Action - Review the suspect_pages table in MSDB for other pages that are encountering this problem (in the same database or different databases). - Check the consistency of databases located on the same volume (as the one reported in the 823 message) using DBCC CHECKDB command. If you find inconsistencies from the DBCC CHECKDB command, use the guidance from How to troubleshoot database consistency errors reported by DBCC CHECKB. - Review the Windows Event logs for any errors or messages reported from the Operating System or a Storage Device or a Device Driver. If they are related to this error in some manner,. The SQLIOSim utility ships with SQL Server 2008 and later versions so there is no need for a separate download. You can typically find it in your C:\Program Files\Microsoft SQL Server\MSSQLxx.MSSQLSERVER\MSSQL\Binnfolder. - Work with your hardware vendor or device manufacturer to ensure - The hardware devices and the configuration conforms to the I/O requirements of SQL Server - The device drivers and other supporting software components of all devices in the I/O path are up to date - If the hardware vendor or device manufacturer provided you with any diagnostic utilities, Feedback
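The page-number arithmetic and the suspect_pages check described above are easy to script. The sketch below is illustrative only: the byte offset is a made-up placeholder (not taken from the truncated sample message), and the pyodbc driver and connection string are assumptions to replace with your own.

```python
# Sketch: convert an 823 error's byte offset into a logical page number,
# then list msdb.dbo.suspect_pages. Offset and connection details are placeholders.
import pyodbc

PAGE_SIZE = 8192                  # SQL Server pages are 8 KB
offset = 0x0000000001400000       # hypothetical offset copied from an 823 message

print(f"Offset {offset:#x} is logical page {offset // PAGE_SIZE}")

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=your-server;DATABASE=msdb;Trusted_Connection=yes;"
)
cursor = conn.cursor()
cursor.execute(
    "SELECT database_id, file_id, page_id, event_type, error_count, last_update_date "
    "FROM msdb.dbo.suspect_pages"
)
for row in cursor.fetchall():
    print(row)
```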
https://docs.microsoft.com/en-us/sql/relational-databases/errors-events/mssqlserver-823-database-engine-error?view=sql-server-ver15
2020-02-17T01:54:30
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
Full System Recovery of DB2 Database: Restoring the DB2 Database Restore and bring up each dropped database from its database backup image in the event of a disaster. Before You Begin Rebuild the Operating system Procedure - From the CommCell Browser, navigate to Client Computers | <Client> | DB2 | <Instance>. - Right-click the <Backup Set>, point to All Tasks, and then click Browse and Restore. - Click View Content. - In the right pane of the Browse window, click the <Backup Set> and select all the entities. - Click Recover All Selected. - Click OK. - Click OK to deactivate the database. You can monitor the progress of the restore job in the Job Controller. - Once the restore job has completed, right-click the agent and click View | Restore History and click OK. - You can right-click the job and view the following details: - View Restore Items You can view them as Successful, Failed, Skipped or All. - View Job Details - View Events of the restore job - View Log files of the restore job What To Do Next Once the database is restored, verify that the restored database and log files are available in the original location.
http://docs.snapprotect.com/netapp/v11/article?p=features/disaster_recovery/t_dr_restore_db2.htm
2020-02-17T02:04:35
CC-MAIN-2020-10
1581875141460.64
[]
docs.snapprotect.com
Backups SnapProtect backs up all of the instances identified on the Content tab for a subclient, including active EC2 instances and instance EBS volumes. Backups for Amazon are crash consistent. SnapProtect does not back up AMIs or instance store volumes. When multiple VSA proxies are available, a proxy in the same zone is given priority, or a proxy in the same region if no proxy in the same zone is available. A proxy in the same region must be available to perform backups. Notes: - You cannot perform a streaming backup of the root volume for an instance that is launched from AWS Marketplace. To back up instance data, include the instance as content for the subclient and create a filter to exclude root volumes. To perform a backup that includes the root volume, use SnapProtect for Amazon. - When an instance is discovered during backup, a client for the instance is created in the CommCell Console if one does not already exist. If an instance with spaces or special characters in its name is discovered during backup, the spaces or special characters are replaced with underscores ('_') when creating the instance client name that is displayed in the CommCell Console. The instance name with spaces or special characters is still displayed on the Instance Status tab for the backup job, in the backup job summary, and in reports that include the instance name. The following special characters are replaced: [ \ \ | ` ~ ! @ # $ % ^ & * + = < > ? , { } ( ) : ; ' \ " \ \ s / ] Backup Process The backup operation includes the following stages: - For each instance, create an AMI. This operation creates a crash consistent snapshots of instance volumes. - Create volumes from the snapshots. - Attach volumes to the VSA proxy. Up to 21 volumes can be attached to the VSA proxy during backups, occupying device slots xvdf - xvdz. This limit includes any volumes already attached to the proxy. If EBS is attached to slot xvdf, only 20 volumes can be attached. - Back up the volumes, using CRC for incremental changes and directing data through the MediaAgent for the storage policy associated with the backup. - Unmount, detach, delete the snapshots, and delete the AMI. - Deregister the AMI..
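As a small illustration of the name substitution described above, here is a Python sketch. It is not SnapProtect code; the character class is an approximation of the list quoted in the note.

```python
# Sketch: approximate the documented instance-name cleanup -- spaces and the
# listed special characters are replaced with underscores ('_').
import re

# Approximation of the character list quoted above (plus whitespace).
SPECIAL = re.compile(r"""[\\|`~!@#$%^&*+=<>?,{}():;'"\s/\[\]]""")

def clean_instance_name(name):
    return SPECIAL.sub("_", name)

print(clean_instance_name("web server #3 (prod)"))  # -> web_server__3__prod_
```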
http://docs.snapprotect.com/netapp/v11/article?p=products/vs_amazon/c_vamz_backup.htm
2020-02-17T02:04:14
CC-MAIN-2020-10
1581875141460.64
[]
docs.snapprotect.com
CRYENGINE comes with a localization system that allows text localization for the UI. The Localization System (moved to DOC2) is documented within the Asset Creation Manual. Besides string localization, it is also possible to use different font and glyph sets for each language. UI string translations are stored in .xml Excel sheets, which simply store labels (keys) and their translations. Translation tables for each language must be stored under Localization\<Language>\text_ui*.xml so that they are processed to the default locations used by the engine. For UI translation the tables must have the columns "KEY", "ORIGINAL TEXT" and "TRANSLATED TEXT". A label is also translated if it is passed as a string to a dynamic textfield via Code\FlowGraph\LUA. Two files are needed for per-language fonts: gfxfontlib.gfx and gfxfontlib_glyphs.gfx. Just create gfxfontlib.gfx and gfxfontlib_glyphs.gfx files for each language and place them under Localization\<language>\*.gfx
https://docs.cryengine.com/plugins/viewsource/viewpagesrc.action?pageId=19379522
2020-02-17T02:35:47
CC-MAIN-2020-10
1581875141460.64
[]
docs.cryengine.com
Advanced Setup ************** .. raw:: html Use the Privacy Dialog ====================== By default, this feature is disabled. When a crash occurs in any version of your app, this will ask your users if that crash should be reported. If you’d like to enable it, you can do it in your `app settings page
https://docs.fabric.io/android/_sources/crashlytics/advanced-setup.txt
2020-02-17T02:18:02
CC-MAIN-2020-10
1581875141460.64
[]
docs.fabric.io
Instance Modified by Unusual UserDescriptionDetects the first time a user modifies an existing instance.Use CaseAdvanced Threat DetectionCategoryAccount Compromise, IAM Analytics, SaaSSecurity ImpactThe risk that this detection intends to reduce is the compromise of an IaaS environment, where all of a sudden instance modification occurs by a user that isn't known to modify instances. Assuming that the user has not changed roles, and that new orchestration tools are not being used, this would suggest that credentials have been created or compromised, and are in control of an adversary. This could result in potential data leakage, data deletion, or cost run-up.Alert VolumeMedium (?)SPL DifficultyMediumJourneyStage 3MITRE ATT&CK TacticsPersistencePrivilege EscalationMITRE ATT&CK TechniquesValid AccountsMITRE Threat GroupsAPT18APT28APT3APT32APT33APT39APT41CarbanakDragonfly 2.0FIN10FIN4FIN5FIN6FIN8LeviathanNight DragonOilRigPittyTigerSoft CellStolen PencilSuckflyTEMP.VelesThreat Group-1314Threat Group-3390menuPassKill Chain Phases Actions on ObjectivesData SourcesAudit TrailGCPAzureAWS How to ImplementAssuming you use the ubiquitous AWS, GCP, or Azure Add-ons for Splunk to pull these logs in, this search should work automatically for you without issue. While implementing, make sure you follow the best practice of specifying the index for your data..For this use case, you probably don't care every time a new user starts modifying instances in a large environment, but you may care when they change resources in sensitive VPCs. Smaller organizations with a limited number of admins would likely care the minute that a new account is created. For all organizations, having this data around for context or to aggregate risk is extremely useful! How To RespondWhen this alert fires, call the user and see if they expected this behavior. If the user cannot attribute this activity, it is best to reset the keys and continue your investigation to see what occurred. HelpInstance Modified by Unusual User HelpThis example leverages the Detect New Values search assistant. Our example dataset is a collection of anonymized AWS CloudTrail logs, during which someone does something bad. Our live search looks for the same behavior using the very standardized index and sourcetypes for AWS CloudTrail, GCP and Azure audit, as detailed in How to Implement.SPL for Instance Modified by Unusual UserDemo Data| `Load_Sample_Log_Data(AWS CloudTrail)` First we bring in our basic demo dataset. In this case, anonymized AWS CloudTrail logs. We're using a macro called Load_Sample_Log_Data to wrap around | inputlookup, just so it is cleaner for the demo data.|searchThen we filter for individual APIs that we want to pay close attention to.| stats earliest(_time) as earliest latest(_time) as latest by user, eventName.AWS Dataindex=* sourcetype=aws:cloudtrailFirst we bring in our basic dataset. 
In this case, AWS CloudTrail logs, filtered for individual APIs that we want to pay close attention to.| stats earliest(_time) as earliest latest(_time) as latest by user, event.GCP Dataindex=* sourcetype=google:gcp:pubsub:message data.protoPayload.methodName=*compute.instance* NOT (data.protoPayload.methodName=*.compute.instances.*insert* OR data.protoPayload.methodName=*.compute.instances.*create* OR data.protoPayload.methodName=*.compute.instances.*run* OR data.protoPayload.methodName=*.compute.instance*.*list*)First we bring in our GCP Audit logs, filtered for Instance Creation.| stats earliest(_time) as earliest latest(_time) as latest by data.protoPayload.authenticationInfo.principalEmail, data.protoPayload.method.Azure Dataindex=* sourcetype=mscs:azure:audit operationName.localizedValue=Create* "claims."=user_impersonationFirst we bring in our Azure Audit logs. We weren't able to determine a difference between instance creation and instance modification, so this is the same SPL as Instance Creation by Unusual user, specifically for Azure.| stats earliest(_time) as earliest latest(_time) as latest by caller, operationName.localizedValueaInstance Created by Unusual User Large Web Upload
https://docs.splunksecurityessentials.com/content-detail/aws_instance_modified_by_unusual_user/
2020-02-17T00:56:07
CC-MAIN-2020-10
1581875141460.64
[]
docs.splunksecurityessentials.com
Food for thoughts¶ Simple plain text writing¶ What Is Markdown, and Why Is It Better for My To-Do Lists and Notes?¶ Long notes vs short notes¶ It seems that, if it makes sense, you should try to make long notes. One project should be one long note. You have everything in one place, and you can just scroll up or down and use the table-of-contents sidebar to get where you want; it's a real time saver! Not having to click and go to a different note is really fun and helps you focus on your work. That's why Word doc files don't work for me: it's too hard to find your place easily, and for very big files Word is just super slow! However, if you have a note that is clearly self-contained, separate from everything else, use a new note. It will be faster to read and/or edit on mobile devices, and easier to print. Git/Github your notes¶ We developed a plugin to automatically git your notes. The script can be added to your crontab. geekbook/plugins/ContentAutoCommit/git-auto-commit.sh (a Python sketch of the same idea appears at the end of this note) Magnus: I realized that I prefer to commit changes to my notes myself. I usually improve some new information, fix some notes, etc. So I developed the script, but I'm not really using it right now. Images (external)¶ In some applications it's also very useful to keep images separate from your notes. You can have dynamic notes, where your images live in various places and you provide Markdown links to them. You can also grab any image into Gimp, edit it and just save; the image in the note will then be updated. You can edit images in batch. Styles¶ Compared to Word, Geekbook is very easy to style however you want :-) It's just HTML, so you can do whatever you want using CSS, etc. Version control of your notes¶ If you use git, you can keep all versions of your notes and track the whole history, in a similar way to how you deal with your code. Super-flexible¶ This system is super flexible. You can use whatever editor you like; you can edit your notes on your phone, or on a cluster using Vi/Nano/etc. It's a text file, so you will always be able to open it in the future. Cool alternatives¶ - Geeknote - Work with Evernote from the command line - KeepNote - Notes
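Here is the sketch referred to in the Git/Github section above. It is not the plugin's actual shell script (geekbook/plugins/ContentAutoCommit/git-auto-commit.sh), just the same idea written in Python, assuming the notes folder is already a git repository; the path is a placeholder.

```python
# Sketch of the auto-commit idea: stage everything in the notes repo and
# commit with a timestamped message. Run it from cron, just like the shell script.
import subprocess
from datetime import datetime
from pathlib import Path

NOTES_DIR = Path.home() / "notes"   # placeholder -- point this at your notes repo

def auto_commit():
    subprocess.run(["git", "add", "-A"], cwd=NOTES_DIR, check=True)
    # "git commit" exits non-zero when there is nothing to commit,
    # so we deliberately skip check=True here.
    message = f"auto-commit {datetime.now():%Y-%m-%d %H:%M}"
    subprocess.run(["git", "commit", "-m", message], cwd=NOTES_DIR)

if __name__ == "__main__":
    auto_commit()
```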
https://geekbook.readthedocs.io/en/latest/thoughts.html
2020-02-17T02:12:48
CC-MAIN-2020-10
1581875141460.64
[array(['_images/gitgui-notes.png', '_images/gitgui-notes.png'], dtype=object) ]
geekbook.readthedocs.io
Pulling and running a container from Docker Hub¶ After installing Docker, you can verify your installation by running a container from Docker Hub. Docker Hub is service that allows you to store and share containers. When building your own container, you will usually start from a a pre-existing container. For example, the Ubuntu Docker page on Docker Hub hosts official Ubuntu operating system containers. These are minimal installations of Ubuntu you can customize to build a new container, without installing an operating system. Tip Docker Hub hosts Official Images, which are generally more secure than unverified images (anyone with a Docker Hub account can host and image). Official images are also more likely to be configured correctly. We will test your installation by pulling and running the “Hello World” image: - Run the following command:docker run hello-world You will get the following output if Docker is installed and configured successfully:Unable to find image 'hello-world:latest' locally: - As suggested, we can run an instance of Ubuntu using the following command. Notice, we explicitly retrieve the container, use the -it (interactive) option and start the bash shell within the container:docker run -it ubuntu bash Try a few commands; you can use the exit command to exit the container. - You can see a list of container images you have downloaded to your computer using the info command:docker imagesREPOSITORY TAG IMAGE ID CREATED SIZE ubuntu latest d131e0fa2585 2 weeks ago 102MB hello-world latest fce289e99eb9 4 months ago 1.84kB - You can see the status of your containers using the using the ps command:docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES In this case, no containers are running. - Let’s try a more complex container - using this command we will pull a container running RStudiodocker run -e PASSWORD=<YOUR_PASS> -p 8787:8787 rocker/rstudio # -e passes an environment variable - change <YOUR_PASS>, e.g. 12345 # -p maps a port inside the docker container to one on the host machineYou will now have an RStudio instance running at port 8787 of your computer. To access open a web browser and go to your machine’s IP address + “:8787” (this will generally be 127.0.0.1:8787). Username will be “rstudio” and password will be whatever you chose in the command above Tip A full list of Docker commands and their explanation can be found here: Docker child commands. Fix or improve this documentation - On Github: Github Repo Link - Send feedback: [email protected]
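Going back to the walkthrough above: the same steps can also be driven from a script. This is a sketch using the third-party Docker SDK for Python (pip install docker); the RStudio password is a placeholder, exactly as in the CLI example.

```python
# Sketch: repeat the CLI walkthrough with the Docker SDK for Python.
import docker

client = docker.from_env()

# Step 1: run hello-world and print its output (pulls the image if needed).
print(client.containers.run("hello-world").decode())

# Step 3: list locally available images, like `docker images`.
for image in client.images.list():
    print(image.tags)

# Step 5: start RStudio detached, mapping container port 8787 to the host.
rstudio = client.containers.run(
    "rocker/rstudio",
    detach=True,
    ports={"8787/tcp": 8787},
    environment={"PASSWORD": "change-me"},   # placeholder password
)
print("RStudio container id:", rstudio.short_id)
```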
https://cyverse-creating-docker-containers-quickstart.readthedocs-hosted.com/en/latest/step2.html
2020-02-17T00:30:23
CC-MAIN-2020-10
1581875141460.64
[]
cyverse-creating-docker-containers-quickstart.readthedocs-hosted.com
All content with label cloud+concurrency+data_grid+documentation+infinispan+installation+jcache+jsr-107+listener+mvcc+non-blocking+read_committed+repeatable_read+store+transactionmanager. Related Labels: podcast, publish, datagrid, coherence, interceptor, server, replication, dist, release, query, deadlock, intro, contributor_project, archetype, lock_striping, nexus, guide, schema, cache, amazon, s3, memcached, grid, test, api, xsd, ehcache, maven, wcm, youtube, userguide, ec2, 缓存, s, streaming, hibernate, getting, aws, interface, custom_interceptor, clustering, setup, eviction, large_object, gridfs, out_of_memory, examples, jboss_cache, import, index, events, hash_function, batch, configuration, buddy_replication, loader, write_through, remoting, tutorial, notification, presentation, xml, jbosscache3x, distribution, started, cachestore, cacheloader, hibernate_search, resteasy, cluster, development, websocket, async, transaction, interactive, xaresource, build, gatein, searchable, demo, scala, client, migration, jpa, tx, user_guide, gui_demo, eventing, student_project, client_server, testng, infinispan_user_guide, standalone, hotrod, webdav, docs, consistent_hash, batching, jta, faq, as5, 2lcache, lucene, jgroups, locking, rest, hot_rod more » ( - cloud, - concurrency, - data_grid, - documentation, - infinispan, - installation, - jcache, - jsr-107, - listener, - mvcc, - non-blocking, - read_committed, - repeatable_read, - store, - transactionmanager ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/cloud+concurrency+data_grid+documentation+infinispan+installation+jcache+jsr-107+listener+mvcc+non-blocking+read_committed+repeatable_read+store+transactionmanager
2020-02-17T02:00:59
CC-MAIN-2020-10
1581875141460.64
[]
docs.jboss.org
EDITBIN Reference

The Microsoft COFF Binary File Editor (EDITBIN.EXE) modifies Common Object File Format (COFF) binary files. You can use EDITBIN to modify object files, executable files, and dynamic-link libraries (DLL).

Note: You can start this tool only from the Visual Studio command prompt. You cannot start it from a system command prompt or from File Explorer.

EDITBIN is not available for use on files produced with the /GL compiler option. Any modifications to binary files produced with /GL will have to be achieved by recompiling and linking.

See also: Additional MSVC Build Tools
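To give a sense of typical usage, here is a small hedged sketch of two common invocations from a Visual Studio developer command prompt; MyApp.exe is just a placeholder name, and you should run editbin /? to confirm which options your toolset supports:

REM mark an executable as able to handle addresses larger than 2 GB
editbin /LARGEADDRESSAWARE MyApp.exe

REM raise the stack reserve size to 8 MB
editbin /STACK:8388608 MyApp.exe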
https://docs.microsoft.com/en-us/cpp/build/reference/editbin-reference?redirectedfrom=MSDN&view=vs-2019
2020-02-17T02:36:54
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
Installation¶ Stable release¶ To install Lambda Utils, run this command in your terminal: $ pip install lambda_utils This is the preferred method to install Lambda Utils, as it will always install the most recent stable release. If you don’t have pip installed, this Python installation guide can guide you through the process. From sources¶ The sources for Lambda Utils can be downloaded from the Github repo. You can either clone the public repository: $ git clone git://github.com/CloudHeads/lambda_utils $ curl -OL Once you have a copy of the source, you can install it with: $ python setup.py install
https://lambda-utils.readthedocs.io/en/latest/installation.html
2020-02-17T01:54:30
CC-MAIN-2020-10
1581875141460.64
[]
lambda-utils.readthedocs.io
4.9. Upgrading SIMP¶ This section describes both the general, recommended upgrade procedures as well as any version-specific upgrade procedures. Important To minimize upgrade problems in your production environment, we strongly recommend that you: - Carefully read the CHANGELOG for the SIMP version to which you are upgrading, as well as the Changelogs for any interim versions you are skipping over. - Test your upgrades in a development environment before deploying to a production environment. - Backup any critical server data/configurations prior to executing the upgrade to a production environment. - On each managed server, ensure you have a local user with suand sshprivileges to prevent lockout. - 4.9.1. General Upgrade Instructions - 4.9.2. Version-Specific Upgrade Instructions
https://simp.readthedocs.io/en/6.3.3/user_guide/Upgrade_SIMP.html
2020-02-17T01:59:25
CC-MAIN-2020-10
1581875141460.64
[]
simp.readthedocs.io
Deploying Exchange 2007 on VMware?

VMware Podcasts – VMware Infrastructure 3 Podcast: Disaster Recovery (DR) for Exchange using VMware @ by Scott Salyer, VMware Technical Solutions Architect

- Using VMware infrastructure we can help increase the flexibility of your high availability solution for Exchange
- Reduce cost
- Enhance availability of Exchange through VMware and MS Exchange features

Scott makes a lot of the inflexibility of a non-VMware Exchange design – you have to stick to your design decisions – using VMware is more flexible if your original design requirements change. ..would still argue that getting your requirements straight to begin with and sticking to an environment lifecycle is pretty key to a successful Exchange deployment regardless of platform. ..good podcast and worth a listen, although I'd love to see more detail about recovery processes for different scenarios and some expectations about data loss.

Secure and Consolidated 16,000 Exchange Users Solution on a VMware/EMC Environment @ (published May last year)

"The purpose of this white paper is to validate the building-block guidelines for virtualizing an Exchange 2007 Mailbox server role using a real-world deployment scenario. VMware ESX 2.5 was used to host the Exchange Server 2007 virtual machines. All peripheral (AD, HUB, and CAS) server roles were also hosted on VMware virtual machines. EMC CLARiiON CX3-80 storage was used to host the Exchange database and log storage, and EMC Replication Manager software was used to test backup/restore functionality for the virtualized Mailbox servers."

16,000 users, 0.32 IOPS per mailbox – Loadgen and Jetstress were used to test the deployment.

"Conclusion: The solution validated the building-block approach to virtualizing an Exchange 2007 Mailbox server with VMware and EMC CLARiiON storage"…

Good results, but it sounds like a pretty expensive solution for Exchange 2007, which would negate a lot of the benefits of deploying Exchange 2007 on VMware in the first place. To me it doesn't make a good enough case against making the most of your hardware and dedicating it to Exchange with more cost-effective storage.

Deploy Exchange on a Dynamic Platform @

"Increase the Capacity of Physical Servers by 100%: Double the number of mailboxes supported per physical host from 8,000 to 16,000 heavy mailbox users. Without VMware, a single Exchange instance supports 8,000 heavy mailbox users per physical server. With VMware, Exchange can be scaled out on 8 Virtual Machines, each supporting 2,000 heavy mailbox users, to support 16,000 users on one physical server. This performance advantage will amplify over time with the introduction of larger multicore systems. Without VMware, Exchange will not be able to use the additional capacity of these servers. With VMware, Exchange will scale out linearly to efficiently use the additional capacity."

Virtualization Performance Basics @

"By running multiple virtual machines simultaneously, a physical server can be driven to much higher utilizations, albeit with some performance overhead."

"Virtualization does not decrease the amount of RAM required to run an application and its host operating system, and like any software, the virtualization layer requires its own portion of RAM…"

When multiple virtual machines are consolidated on a single physical server, they can impact I/O performance with their combined file size and simultaneous need for rapid access to stored data.

"VMware solutions help to improve I/O performance through the VMware vStorage VMFS, which provides virtual machines with simultaneous access to shared data stores. Centralized storage helps reduce latency and increase throughput, and provides the foundation for unique capabilities such as live migration and consolidated backup."

Where are the up-to-date performance benchmarks? They still don't seem to exist.

..and some other links I'm sure you've already seen:

- Should You Virtualize Your Exchange 2007 SP1 Environment?
- Microsoft Support Policies and Recommendations for Exchange Servers in Hardware Virtualization Environments
- Windows Server Virtualization Validation Program
- Exchange 2007 System Requirements
- Server Virtualization with Advanced Management (SVAM) Service Offering
https://docs.microsoft.com/en-us/archive/blogs/douggowans/deploying-exchange-2007-on-vmware
2020-02-17T02:30:29
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
British Computer Society Agile SCM Event This week I presented a session at the British Computer Society Configuration Management Specialist Group's Agile SCM Event. It was an excellent day with a number of speakers from various backgrounds talking about the challenges and best practice of Agile development but what really struck me while listening to the speakers was that Visual Studio Team System is going in the right direction to address those issues. My presentation was on "SCM and Agile Practices in the Microsoft Developer Division". The slides should be available soon on the BCS site but if anyone is interested then I will post them here. If you attended the event and have any feedback or if you need any further information then please get in touch. Regards, Richard
https://docs.microsoft.com/en-us/archive/blogs/ukvsts/british-computer-society-agile-scm-event
2020-02-17T02:15:14
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
Asynchronous Field Resolvers

Lacinia supports asynchronous field resolvers: resolvers that run in parallel within a single request. This can be very desirable: different fields within the same query may operate on different databases or other backend data sources, for example. Alternately, a single request may invoke multiple top-level operations which, again, can execute in parallel.

It's very easy to convert a normal synchronous field resolver into an asynchronous field resolver: instead of returning a normal value, an asynchronous field resolver returns a special kind of ResolverResult, a ResolverResultPromise. Such a promise is created by the resolve/resolve-promise function.

The field resolver function returns immediately, but will typically perform some work in a background thread. When the resolved value is ready, the deliver! method can be invoked on the promise.

(require '[com.walmartlabs.lacinia.resolve :as resolve])

(defn ^:private get-user
  [connection user-id]
  ...)

(defn resolve-user
  [context args _]
  (let [{:keys [id]} args
        {:keys [connection]} context
        result (resolve/resolve-promise)]
    (.start (Thread.
              #(try
                 (resolve/deliver! result (get-user connection id))
                 (catch Throwable t
                   (resolve/deliver! result nil
                                     {:message (str "Exception: " (.getMessage t))})))))
    result))

The promise is created and returned from the field resolver function. In addition, as a side effect, a thread is started to perform some work. When the work is complete, the deliver! method on the promise will inform Lacinia, at which point Lacinia can start to execute selections on the resolved value (in this example, the user data).

On normal queries, Lacinia will execute as much as it can in parallel. This is controlled by how many of your field resolvers return a promise rather than a direct result. Despite the order of execution, Lacinia ensures that the order of keys in the result map matches the order in the query.

For mutations, the top-level operations execute serially. That is, Lacinia will execute one top-level operation entirely before starting the next top-level operation.

Timeouts

Lacinia does not enforce any timeouts on the field resolver functions, or the promises they return. If a field resolver fails to deliver! to a promise, then Lacinia will block, indefinitely. It's quite reasonable for a field resolver to enforce some kind of timeout on its own, and deliver nil and an error message when a timeout occurs.

Exceptions

Uncaught exceptions in an asynchronous resolver are especially problematic, as they mean that ResolverResultPromises are never delivered. In the example above, any thrown exception is converted to an error map.

Warning: Not catching exceptions will lead to promises that are never delivered, and that will cause Lacinia to block indefinitely.

Thread Pools

By default, calls to deliver! invoke the callback (provided to on-deliver!) in the same thread. This is not always desirable; for example, when using Clojure core.async, this can result in considerable processing occurring within a thread from the dispatch thread pool (the one used for go blocks). There are typically only eight threads in that pool, so a callback that does a lot of processing (or blocks due to I/O operations) can have a damaging impact on overall server throughput.

To address this, an optional executor can be provided, via the dynamic com.walmartlabs.lacinia.resolve/*callback-executor* var.
When a ResolverResultPromise is delivered, the executor (if non-nil) will be used to execute the callback; Java thread pools implement this interface.
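As a rough sketch of how that might be wired up, the snippet below creates a small fixed-size pool and binds it as the callback executor around query execution. The pool size of 4, the use of binding, and the execute-query placeholder are all assumptions made for illustration rather than guidance from the Lacinia documentation:

(require '[com.walmartlabs.lacinia.resolve :as resolve])
(import '(java.util.concurrent Executors))

;; A small dedicated pool for resolver callbacks; size 4 is arbitrary here.
(def callback-pool (Executors/newFixedThreadPool 4))

;; execute-query stands in for however your application invokes Lacinia.
(binding [resolve/*callback-executor* callback-pool]
  (execute-query compiled-schema query-string))

Java ExecutorService instances, such as the one returned by newFixedThreadPool, satisfy the executor interface mentioned above.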
https://lacinia.readthedocs.io/en/latest/resolve/async.html
2020-02-17T01:03:01
CC-MAIN-2020-10
1581875141460.64
[]
lacinia.readthedocs.io
Mole is a log analyzer which parses your log files (any kind of log) using specified definitions (usually regular expressions) and magically interprets some fields (numbers, dates...). Mole provides you with a set of functions to analyze that data.

In this example we will use an access log file generated by Apache (or any other HTTP server). Let's suppose that this file is located in /var/log/apache/access.log.

Note: Don't worry about log rotations, mole can handle it.

Edit /etc/mole/input.conf, just adding:

[apache_log]
type = tail
source = /var/log/apache/access.log

We are defining a new input called apache_log, of type tail (which means we read new lines from the file as they are written and handle rotated logs), pointing to our log file in /var/log/apache/access.log.

Edit /etc/mole/index.conf, just adding:

[apache_log]
path = /var/db/mole/apache_log

We are defining a new index. The index is the mole database where logs will be stored in a proper format, so we can perform faster searches.

The mole pipeline is responsible for reading log items from a source, processing them (and transforming them if required) and, finally, returning an output. If the output is not explicitly defined, the best output format for the current console is used (serialized when going over the network, just printed when in a console).

There are a few components which are interesting to know:

input: The input is responsible for reading the log source; sources can be of different kinds, such as normal files, network streams, index files and so on.
plotter: The plotter's main function is to split the source into logical lines. In a normal log file each line is usually a new log entry, but some other logs may use a couple of lines to define the same logical entry (e.g. Java exceptions usually span a number of lines).
parser: Once the logical line is obtained, you need to know the meaning of each field. The parser just assigns names to fields, using regular expressions for that.
actions: The actions are transformations, filters and, in general, any other action to take over the log dataset.
output: The output just encapsulates the results of the actions in a human (or machine) readable form. You can think of the output as some kind of serialization.

So, the final pipeline in mole is something like this:

<input> | <plotter> | <parser> | <action> | <action> ... | <output>

Mole is composed of three different daemons (for now); mole is the client, which can query the mole-seeker.

To start mole, you need to configure the server. You have an example in the configuration directory of the source code. The configuration directory contains one file per mole component. Once your server is configured, start both mole-indexer and mole-seeker. Finally, perform your query using mole.

In the configuration directory, you can find a different file for each mole component, e.g. index.conf to set up indexes. The indexes are a special storage format.

Count the lines of an input (in this case the input will be an access_log of an Apache server):

$ mole 'input apache_log | count *'
count(*)=3445

Perform the same query, but grouping by source IP:

$ mole 'input apache_log | count * by src_ip'
src_ip=127.0.0.1 count=121
src_ip=192.168.0.21 count=1203

Calculate the average transfer size in the Apache log, grouped by URL, and get only the top three:

$ mole 'input apache_log | avg bytes by path | top 3'
path=/ avg(bytes)=12343
path=/login avg(bytes)=6737
path=/logout avg(bytes)=2128

$ mole 'input apache_log | search path=*login* | count *'
count(*)=3838

The Mole code is stored in GitHub, and you can download it using git, as usual:

$ git clone git://github.com/ajdiaz/mole

The basic design of mole is a linear pipeline which includes the following components: inputs can be normal files (or tails of files) or special files called "indexes". An index contains the raw data plus a time pointer.

To open bugs or enhancement proposals, please use the GitHub issues tool. If you have any suggestions, do not hesitate to contact me.
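Going back to the query examples above, the same operators can be chained further. The status field name below is an assumption; it only works if your parser definition assigns that name to the HTTP status code field:

$ mole 'input apache_log | search status=404 | count * by path | top 5'

This would list the five paths that most often return 404, which is a quick way to spot broken links.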
https://mole.readthedocs.io/en/latest/
2020-02-17T01:48:05
CC-MAIN-2020-10
1581875141460.64
[]
mole.readthedocs.io
Load Balancing Load balancing is a technique to spread work between multiple back-end servers in order to obtain optimal resource utilization, throughput and response times. Requests are directed to the load balancing hardware or software and then forwarded to the appropriate server. Load Balancing by IP Address A variety of algorithms are used by load balancers to determine which back-end server to send a request to. One such method is load balancing by IP address. With this method requests are balanced across the back-end servers by the IP address of the machine from which the request originates. An injector machine will typically have a single IP address and so all virtual users (VUs) generated from the injector will posses this same IP address. This will cause the load balancer to forward all requests from the injector to the same back-end server. There are two strategies that can be employed to overcome this problem. Strategy 1: Multiple Injectors One way of running a performance test against a system under test that utilizes IP load balancing is to use multiple load injectors. If load balancing occurs across four back-end servers then the solution is to match this with four injectors; resulting in four source IP addresses. This will ensure that the load is distributed amongst all four back-end servers. All requests from each injector will be directed to one of the servers. Strategy 2: IP Spoofing If enough injectors cannot be sourced in order to pursue Strategy 1 then it may be appropriate to spoof the IP addresses of the VUs within the test. IP spoofing is a method of assigning an IP address to an individual VU. For IP spoofing: - You will need a range of unused IP addresses from the Network Administrator. These are the IP addresses that will be assigned to the VUs. Again, if there are four back-end servers participating in load balancing, the minimum number of IP addresses required for the virtual user population is also four. - The network routing must also be capable of handling a PC with multiple IP addresses. Typically, the environment should be capable of working with static, rather than dynamic, IP addresses. The network administrator should be able to advise you on this. To configure a Windows Injector for multiple local IP addresses, do the following: - Open Network Connections. - Right-click the required network connection and select Properties. - Under the General tab select Internet Protocol (TCP/IP) and then Properties. - Select Use the following IP address. - Select Advanced. - Now assign the IP addresses to your network card under the IP Settings tab. - You must also set the address of the DNS server on your network under the DNS tab.
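If you prefer to script the Windows configuration above rather than clicking through the dialogs, the following commands (run from an elevated command prompt) should achieve the same result. The interface name and addresses are placeholders; check netsh interface ipv4 show interfaces for the real interface name on your injector:

netsh interface ipv4 add address "Local Area Connection" 192.168.1.51 255.255.255.0
netsh interface ipv4 add address "Local Area Connection" 192.168.1.52 255.255.255.0
netsh interface ipv4 add address "Local Area Connection" 192.168.1.53 255.255.255.0

Repeat the command once for each spoofed IP address you have been allocated.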
http://docs.eggplantsoftware.com/epp/9.0.0/ePP/ip_spoofing.htm
2020-02-17T00:23:41
CC-MAIN-2020-10
1581875141460.64
[]
docs.eggplantsoftware.com
Security Detection Basics Table of Contents Introduction When you’re getting started with Security Detections, you don’t need to be overwhelmed with everything that Splunk can do. This guide helps you view the key content that drives the most value, and also suggests pages and docs that provide the context to help accelerate success.. Data. Data Source Check The data source check dashboard will look in your environment not just for the expected data, but also for the actual field extractions used by the free searches in Splunk Security Essentials, and provide you a list of checkboxes for what searches you can use. The Data Source Check dashboard tells you what searches would be ready to run in your environment. Click Start Searches to get started. The dashboard will launch 60+ pre-req tests. Each is really quick – the whole set should take less than 10 minutes and won’t overwhelm your Splunk. As the searches run, you will get back Green Checks or Red Explanation Points. A green check indicates that the pre-req test found the exact data, sourcetypes, and fields that the detection is expecting. If you’ve run the dashboard checks in the past, you can always re-run them on your current data, or you can click Retrieve Result to pull back your last result. Security Posture Dashboards Once you complete the Data Source Check, you can click “Create Posture Dashboards” in the upper right corner. That will let you create up to 50 dashboard panels looking at your actual data, and following Splunk best practices! The Security Posture dashboards only run on the data you have in your system, so make sure you run the Data Source Check searches first (or if you’ve run them before, click Retrieve Last Result. Once the checks are in place, you can click Create Posture Dashboards. There are three dashboards you can choose. Within each, some panels are enabled by default, some disabled, and some unavailable as you don’t have the required data. If you want to see the intended result, you can click Use Demo Datasets and all the dashboards will use CSV demo data. After clicking Create Dashboards, you will get a link to each dashboard. They’ll also be added to navigation. These are SimpleXML dashboards using Splunk best practices (with post-processing and using accelerated data models if possible). That makes them easy to customize, or copy-paste into your dashboards.
https://docs.splunksecurityessentials.com/user/content/basic_users/
2020-02-17T01:33:21
CC-MAIN-2020-10
1581875141460.64
[]
docs.splunksecurityessentials.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.

New-OPSApp
  -StackId <String>
  -Name <String>
  -Shortname <String>
  -Description <String>
  -Type <AppType>
  -Attribute <Hashtable>
  -SslConfiguration_Certificate <String>
  -SslConfiguration_Chain <String>
  -DataSource <DataSource[]>
  -Domain <String[]>
  -EnableSsl <Boolean>
  -Environment <EnvironmentVariable[]>
  -AppSource_Password <String>
  -SslConfiguration_PrivateKey <String>
  -AppSource_Revision <String>
  -AppSource_SshKey <String>
  -AppSource_Type <SourceType>
  -AppSource_Url <String>
  -AppSource_Username <String>
  -Select <String>
  -PassThru <SwitchParameter>
  -Force <SwitchParameter>

For the app source credentials, set Username to the appropriate IAM access key ID and Password to the appropriate IAM secret access key, or set Username to the user name and Password to the password for the repository. In responses, these values are returned as *****FILTERED***** instead of the actual value.

Domain accepts a list of virtual host names separated by commas, for example: ', example.com'.

An app can have a maximum of 20 KB of EnvironmentVariable objects. This limit should accommodate most if not all use cases. Exceeding it will cause an exception with the message, "Environment: is too large (maximum is 20KB)." If you have specified one or more environment variables, you cannot modify the stack's Chef version.

AWS Tools for PowerShell: 2.x.y.z
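As a hedged illustration of the parameters above, the call below creates a simple app from a Git repository. The stack ID, names and URL are invented placeholders, and the parameter set shown is minimal rather than exhaustive:

New-OPSApp -StackId "11111111-2222-3333-4444-555555555555" `
           -Name "MyWebApp" `
           -Shortname "mywebapp" `
           -Type rails `
           -AppSource_Type git `
           -AppSource_Url "git://github.com/example/mywebapp.git"

Add -EnableSsl, -Domain, -Environment and the other parameters as your application requires.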
https://docs.aws.amazon.com/powershell/latest/reference/items/New-OPSApp.html
2020-02-17T00:47:06
CC-MAIN-2020-10
1581875141460.64
[]
docs.aws.amazon.com
Errors

All errors will return the most specific, appropriate HTTP response code from the following table:

Return Codes

Standard Response Format

All errors will have a standard response body with two fields, code and message. More fields can be used but they are optional.

Examples

This is the simplest form of an error's return body; clients can rely on these fields being present in every error.

Error Responses - Example 1 (Baseline)

{
  "code": 429,
  "message": "You have exceeded your number of requests. Please try again later"
}

The following example has, in addition to the standard fields, a cause field and an errors array describing the individual validation errors associated with this specific request.

Error Responses - Example 2 (More Complex)

{
  "code": 422,
  "message": "Unprocessable entity",
  "cause": "Validation failed",
  "errors": [
    {
      "resource": "Issue",
      "field": "title",
      "code": "missing_field"
    },
    ...
  ]
}
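Because every error carries at least code and message, a client can handle them uniformly. The sketch below uses Python with the requests library and an invented /v1/campaigns endpoint; neither the URL nor the library choice comes from this API's documentation, and the point is only how the standard fields are read:

import requests

resp = requests.get("https://api.example.com/v1/campaigns")
if resp.status_code >= 400:
    body = resp.json()
    # 'code' and 'message' are always present in error responses
    print(f"API error {body['code']}: {body['message']}")
    # optional fields such as 'cause' and 'errors' may add detail
    for item in body.get("errors", []):
        print(f"  {item.get('resource')}.{item.get('field')}: {item.get('code')}")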
https://docs.exads.com/docs/api-manual-errors/
2020-02-17T01:03:51
CC-MAIN-2020-10
1581875141460.64
[]
docs.exads.com
The Fourth Industrial Revolution – making disruption work for you The Fourth Industrial Revolution. It's a big topic. Big enough for it to have played a central role in the World Economic Forum's Industry Strategy Meeting in June, and for it to be a key topic in the UK Regional session at Microsoft Inspire. But what is the Fourth Industrial Revolution (4IR)? Let's start by pointing out that it's not IoT or big data. It's not Factory 4.0 or the smart factory. It's all of them - and more. The first industrial revolution mechanised production. The second created mass production. The third automated it. Now, the fourth is taking that digital model even further - faster than anyone could have expected. Along the way, it's disrupting every industry. Companies that used to be leaders in their sectors are now also-rans; start-ups, such as OnDeck and LendFriend, have become successful in a matter of months instead of years. Change is here... and this time it's good. The World Economic Forum puts it best: "The change brought by the Fourth Industrial Revolution is inevitable, not optional. And the possible rewards are staggering". It so happens that a lot of businesses have already started to cotton on to that fact. According to IDC, 70% of the top 500 global companies will have dedicated digital transformation and innovation teams by the end of this year. It all adds up to a UK market opportunity worth around $55 million - and 4IR's just getting started, so it's likely that the future will hold even more opportunity. So, where can you expect the big wins to come from? Probably from three main areas. Better use of information and smarter collaboration are set to give customers better experiences. The same elements should also sharpen up companies' approaches to innovation and creativity, so it's likely that there will be a change in the way they design and market products and services. In addition, these factors - as well as creative use of IT - are improving logistics, the supply chain and time-to-market. Companies expect the world... Some research by PwC suggests that companies agree. They expect big things of 4IR. Around 80% expect it to improve planning and control, while over 60% believe it'll improve customer satisfaction and make production more flexible. Plus, around half say it will reduce time to market . Many industry experts believe that 4IR will have a positive effect on society across the world as well. Let's give them the stars as well That all sounds like good news for Microsoft partners. To quote Judson Althoff, our Executive Vice President, Worldwide Commercial Business: "Devices, intelligence, the cloud, the rich cloud services that we bring to market together...these combine to drive digital transformation. It's a new world of opportunity for all of us." And together, we're in a great position to go after this opportunity. As our CEO Satya Nadella says, "We'll capture this by coming together to address customer needs through four digital transformation outcomes: modern workplace; business applications; data and AI; and infrastructure and apps." This means that you'll be able to deliver the exciting new apps and services that 4IR will depend on, confident that you can help people to work together anywhere, yet still be secure. We'll show you how we can all take advantage of the 4IR opportunity The Fourth Industrial Revolution is an exciting prospect, with its potential to transform your customers - and your own businesses. 
Over the next few weeks, this series of blogs will bring you insights into its likely impact and how we can help you, our partners, turn it to your advantage. Watch out for the next in the series, and download our Fourth Industrial Revolution infographic for more information.
https://docs.microsoft.com/en-us/archive/blogs/mpn_uk/the-fourth-industrial-revolution-making-disruption-work-for-you
2020-02-17T02:29:53
CC-MAIN-2020-10
1581875141460.64
[array(['https://msdnshared.blob.core.windows.net/media/2017/10/The-Fourth-Industrial-Revolution-is-Here--1024x470.jpg', None], dtype=object) ]
docs.microsoft.com
Crate fastdivide

Yes, but what is it really?

Division is a very costly operation for your CPU (probably between 10 and 40 cycles). You may have noticed that when the divisor is known at compile time, your compiler transforms the operation into a cryptic combination of a multiplication and a bitshift.

Fastdivide is about doing the same trick your compiler uses, but when the divisor is unknown at compile time. Of course, it requires preprocessing a datastructure that is specific to your divisor, and using it only makes sense if this preprocessing is amortized over a high number of divisions (with the same divisor).

When is it useful?

You should probably use fastdivide if you do a lot (> 10) of divisions with the same divisor, and these divisions are a bottleneck in your program. This is for instance useful to compute histograms.

Example

use fastdivide::DividerU64;

fn histogram(vals: &[u64], min: u64, interval: u64, output: &mut [usize]) {
    // Preprocessing a datastructure dedicated
    // to dividing `u64` by `interval`.
    //
    // This preprocessing is not cheap.
    let divide = DividerU64::divide_by(interval);

    // We reuse the same `Divider` for all of the
    // values in vals.
    for &val in vals {
        if val < min {
            continue;
        }
        let bucket_id = divide.divide(val - min) as usize;
        if bucket_id < output.len() {
            output[bucket_id as usize] += 1;
        }
    }
}
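To complement the histogram example, here is a minimal standalone sketch that exercises the same two calls, divide_by and divide; the concrete divisor and values are arbitrary and chosen only so the assertions are easy to check by hand:

use fastdivide::DividerU64;

fn main() {
    // Pay the preprocessing cost once for the divisor 7...
    let by_seven = DividerU64::divide_by(7);

    // ...then reuse the precomputed divider for many dividends.
    for &v in [14u64, 15, 700].iter() {
        assert_eq!(by_seven.divide(v), v / 7);
    }
    println!("all fast quotients matched plain division");
}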
https://docs.rs/fastdivide/0.2.0/fastdivide/
2020-02-17T01:43:00
CC-MAIN-2020-10
1581875141460.64
[]
docs.rs
Slack is an enterprise software platform that allows teams to communicate effectively through a messaging application. Slack also allows users to communicate with applications like ThoughtSpot through chat. Spot is a ThoughtSpot integration with Slack. Does your Slack have Spot? Spot has to be integrated with your Slack team before you can use it. Your team admin or ThoughtSpot admin can do this. To test if your Slack team has a Spot integration, mention @spot and see if he barks back: In this particular channel, @spot is there for you but like his brothers @spot-east-credit is not in the channel. If @spot doesn’t come when you “call” you are spotless. Ask your administrator to see if you can get one. Related Information Go to How to use Spot to get started using Spot. For information on setting up Spot, see Slack Integration in the Administration Guide.
https://docs.thoughtspot.com/5.0/end-user/slack/intro.html
2020-02-17T00:37:21
CC-MAIN-2020-10
1581875141460.64
[array(['/5.0/images/slack-0.png', 'Check for slack'], dtype=object)]
docs.thoughtspot.com
Running an existing .NET Framework-based application in a Windows container doesn't require any changes to your app. To run your app in a Windows container you create a Docker image containing your app and start the container. This topic explains how to take an existing ASP.NET MVC application and deploy it in a Windows container. You start with an existing ASP.NET MVC app, then build the published assets using Visual Studio. You use Docker to create the image that contains and runs your app. You'll browse to the site running in a Windows container and verify the app is working. This article assumes a basic understanding of Docker. You can learn about Docker by reading the Docker Overview. The app you'll run in a container is a simple website that answers questions randomly. This app is a basic MVC application with no authentication or database storage; it lets you focus on moving the web tier to a container. Future topics will show how to move and manage persistent storage in containerized applications. Moving your application involves these steps: - Creating a publish task to build the assets for an image. - Building a Docker image that will run your application. - Starting a Docker container that runs your image. - Verifying the application using your browser. The finished application is on GitHub. Prerequisites The development machine must be running - Windows 10 Anniversary Update (or higher) or Windows Server 2016 (or higher). - Docker for Windows - version Stable 1.13.0 or 1.12 Beta 26 (or newer versions) - Visual Studio 2017. Important If you are using Windows Server 2016, follow the instructions for Container Host Deployment - Windows Server. After installing and starting Docker, right-click on the tray icon and select Switch to Windows containers. This is required to run Docker images based on Windows. This command takes a few seconds to execute: Publish script Collect all the assets that you need to load into a Docker image in one place. You can use the Visual Studio Publish command to create a publish profile for your app. This profile will put all the assets in one directory tree that you copy to your target image later in this tutorial. Publish Steps - Right click on the web project in Visual Studio, and select Publish. - Click the Custom profile button, and then select File System as the method. - Choose the directory. By convention, the downloaded sample uses bin\Release\PublishOutput. Open the File Publish Options section of the Settings tab. Select Precompile during publishing. This optimization means that you'll be compiling views in the Docker container, you are copying the precompiled views. Click Publish, and Visual Studio will copy all the needed assets to the destination folder. Build the image Define your Docker image in a Dockerfile. The Dockerfile contains instructions for the base image, additional components, the app you want to run, and other configuration images. The Dockerfile is the input to the docker build command, which creates the image. You will build an image based on the microsft/aspnet image located on Docker Hub. The base image, microsoft/aspnet, is a Windows Server image. It contains Windows Server Core, IIS and ASP.NET 4.6.2. When you run this image in your container, it will automatically start IIS and installed websites. The Dockerfile that creates your image looks like this: # The `FROM` instruction specifies the base image. You are # extending the `microsoft/aspnet` image. 
FROM microsoft/aspnet # The final instruction copies the site you published earlier into the container. COPY ./bin/Release/PublishOutput/ /inetpub/wwwroot There is no ENTRYPOINT command in this Dockerfile. You don't need one. When running Windows Server with IIS, the IIS process is the entrypoint, which is configured to start in the aspnet base image. Run the Docker build command to create the image that runs your ASP.NET app. To do this, open a PowerShell window in the directory of your project and type the following command in the solution directory: docker build -t mvcrandomanswers . This command will build the new image using the instructions in your Dockerfile, naming (-t tagging) the image as mvcrandomanswers. This may include pulling the base image from Docker Hub, and then adding your app to that image. Once that command completes, you can run the docker images command to see information on the new image: REPOSITORY TAG IMAGE ID CREATED SIZE mvcrandomanswers latest 86838648aab6 2 minutes ago 10.1 GB The IMAGE ID will be different on your machine. Now, let's run the app. Start a container Start a container by executing the following docker run command: docker run -d --name randomanswers mvcrandomanswers The -d argument tells Docker to start the image in detached mode. That means the Docker image runs disconnected from the current shell. In many docker examples, you may see -p to map the container and host ports. The default aspnet image has already configured the container to listen on port 80 and expose it. The --name randomanswers gives a name to the running container. You can use this name instead of the container ID in most commands. The mvcrandomanswers is the name of the image to start. Verify in the browser Note With the current Windows Container release, you can't browse to. This is a known behavior in WinNAT, and it will be resolved in the future. Until that is addressed, you need to use the IP address of the container. Once the container starts, find its IP address so that you can connect to your running container from a browser: docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" randomanswers 172.31.194.61 Connect to the running container using the IPv4 address, in the example shown. Type that URL into your browser, and you should see the running site. Note Some VPN or proxy software may prevent you from navigating to your site. You can temporarily disable it to make sure your container is working. The sample directory on GitHub contains a PowerShell script that executes these commands for you. Open a PowerShell window, change directory to your solution directory, and type: ./run.ps1 The command above builds the image, displays the list of images on your machine, starts a container, and displays the IP address for that container. To stop your container, issue a docker stop command: docker stop randomanswers To remove the container, issue a docker rm command: docker rm randomanswers
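While the container from this walkthrough is running, two additional commands are handy for troubleshooting. Both are standard Docker CLI commands and reuse the randomanswers name given above; running powershell with exec is just one possible choice of command:

docker logs randomanswers
docker exec -it randomanswers powershell

The first streams the container's console output; the second opens an interactive PowerShell session inside the running Windows container.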
https://docs.microsoft.com/en-us/aspnet/mvc/overview/deployment/docker-aspnetmvc
2017-10-17T04:09:45
CC-MAIN-2017-43
1508187820700.4
[array(['media/aspnetmvc/switchcontainer.png', 'Switch to Windows Container Windows Container'], dtype=object) array(['media/aspnetmvc/publishconnection.png', 'Publish to File System Publish Connection'], dtype=object) array(['media/aspnetmvc/publishsettings.png', 'Publish Settings Publish Settings'], dtype=object)]
docs.microsoft.com
Bootstrap is the most popular HTML, CSS, and JS framework for developing responsive, mobile first projects on the web. Currently v3.2.0 Bootstrap is the most popular HTML, CSS, and JS framework for developing responsive, mobile first projects on the web. Currently v3.2.0 Bootstrap makes front-end web development faster and easier. It's made for folks of all skill levels, devices of all shapes, and projects of all sizes. Bootstrap ships with vanilla CSS, but it's. Bootstrap is open source. It's hosted, developed, and maintained on GitHub.View the GitHub project Millions of amazing sites across the web are being built with Bootstrap. Get started on your own with our growing collection of examples or by exploring some of our favorites. We showcase dozens of inspiring projects built with Bootstrap on the Bootstrap Expo.Explore the Expo
https://bootstrapdocs.com/v3.2.0/docs/
2017-10-17T03:47:21
CC-MAIN-2017-43
1508187820700.4
[]
bootstrapdocs.com
Table of Contents This section is mainly for superusers ( root) people with high security demands, or simply technically interested people. It is not necessary to read this if you only use Linux® at home for yourself, although you may learn a thing or two in any case. A system administrator might want to restrict access as to who is allowed to use KPPP. There are two ways to accomplish this. Create a new group (you might want to name it dialout or similar), and put every user that should be allowed to use KPPP into that group. Then type at the prompt: # chown root.dialout /opt/kde/bin/kppp # chmod 4750 /opt/kde/bin/kppp This assumes that KDE was installed in /opt/kde/ and that your new group is named dialout. Before doing anything, KPPP checks if there is a file named /etc/kppp.allow. If such a file exists, only users named in this file are allowed to dial out. This file must be readable by everyone (but of course NOT writable.) Only login names are recognized, so you cannot use UID's in this file. Here is a short example: # /etc/kppp.allow # comment lines like this are ignored # as well as empty lines fred karl daisy In the example above, only the users fred, karl and daisy are allowed to dial out, as well as every user with a UID of 0 (so you don't have to explicitly list root in the file).
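For the first method described above, the group itself is created with the standard Linux user-management tools. The sketch below assumes the group name dialout and the user fred from the examples, and that the commands are run as root:

# groupadd dialout
# usermod -a -G dialout fred

After that, apply the chown and chmod commands shown above so that only members of dialout can execute KPPP.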
https://docs.kde.org/stable4/en/kdenetwork/kppp/security.html
2017-10-17T03:47:17
CC-MAIN-2017-43
1508187820700.4
[array(['/stable4/common/top-kde.jpg', None], dtype=object)]
docs.kde.org
Building the AGL Demo Platform for QEMU To build the QEMU version of the AGL demo platform use machine qemux86-64 along with features agl-demo and agl-devel: source meta-agl/scripts/aglsetup.sh -f -m qemux86-64 agl-demo agl-devel bitbake agl-demo-platform By default, the build will produce a compressed vmdk image in tmp/deploy/images/qemux86-64/agl-demo-platform-qemux86-64.vmdk.xz Deploying the AGL Demo Platform for QEMU Prepare an image for boot Decompress the agl-demo-platform-qemux86-64.vmdk.xz image to prepare it for boot. Linux cd tmp/deploy/images/qemux86-64 xz -d agl-demo-platform-qemux86-64.vmdk.xz Windows Download 7-Zip and select agl-demo-platform-qemux86-64.vmdk.xz to be decompressed. Boot an image QEMU Install Note: if an AGL crosssdk has been created, it will contain a qemu binary for the host system. This SDK qemu binary has no graphics support and cannot currently be used to boot an AGL image. Arch: sudo pacman -S qemu Debian/Ubuntu: sudo apt-get install qemu-system-x86 Fedora: sudo yum install qemu-kvm Boot Boot the agl-demo-platform-qemux86-64.vmdk image in qemu std -show-cursor \ -device virtio-rng-pci \ -serial mon:stdio -serial null \ -soundhw hda \ -net nic,vlan=0 \ -net user,hostfwd=tcp::2222-:22 VirtualBox Install Download and install VirtualBox Boot Boot the agl-demo-platform-qemux86-64.vmdk image in VirtualBox: - - Ensure that the newly created AGL QEMU machine is highlighted and click Start VMWare Player Install Download and install VMWare Player Boot Boot the agl-demo-platform-qemux86-64.vmdk image in VMWare Player: - Start VMWare Player - Select File and Create a New Virtual Machine - Select I will install the operating system later and click Next - Select Linux as the Guest Operating System, Other Linux 3.x kernel 64-bit as the Version, and click Next - Enter AGL QEMU as the Name and click Next - Leave disk capacity settings unchanged and click Next - Click Finish - Select/highlight AGL QEMU and click Edit virtual machine settings - Select/highlight Memory and click 2 GB - Select/highlight Hard Disk (SCSI) and click Remove - Click Add - Select Hard Disk and click Next - Select SCSI (Recommended) and click Next - Select Use an existing virtual disk and click Next - Browse and select the agl-demo-platform-qemux86-64.vmdk image - Click Finish - Click Keep Existing Format - Click Save - Ensure that the newly created AGL QEMU machine is highlighted and click Power On
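Once the image is booted with the hostfwd=tcp::2222-:22 option shown in the QEMU command, the guest's SSH service is reachable through the forwarded port on the host. Logging in as root is an assumption that depends on how the image's accounts are configured:

ssh -p 2222 root@localhost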
http://docs.automotivelinux.org/docs/getting_started/en/dev/reference/machines/qemu.html
2017-10-17T03:48:55
CC-MAIN-2017-43
1508187820700.4
[]
docs.automotivelinux.org
Install with Terraform

To install Portworx with Terraform, please use the Terraform Portworx Module.

Upgrading Portworx

If you have installed Portworx with Terraform, Portworx needs to be upgraded through the CLI on a node-by-node basis. Please see the upgrade instructions.

Last edited: Tuesday, Nov 2, 2021
Questions? Visit the Portworx forum.
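As a rough sketch of what consuming such a module can look like, the HCL below wires it into a Terraform configuration. The module source and the input variable names are placeholders invented for illustration; check the Terraform Portworx Module's own documentation for the real address and inputs:

module "portworx" {
  # placeholder source - substitute the published module address
  source = "github.com/portworx/terraform-portworx-module"

  # example inputs; real variable names come from the module's documentation
  cluster_name = "px-demo"
  node_count   = 3
}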
https://docs.portworx.com/install-with-other/nomad/installation/install-with-terraform/
2022-01-16T21:35:53
CC-MAIN-2022-05
1642320300244.42
[]
docs.portworx.com
Administrator documentation¶ Contents - Installation - Step by step installation - uwsgi - Install with nginx - Install with apache - Docker installation - How to update - How to inspect & debug - Engines & Settings - Administration API - Architecture - How to protect an instance - How to setup result proxy - Plugins builtin - Buildhosts
https://docs.searxng.org/admin/index.html
2022-01-16T22:57:56
CC-MAIN-2022-05
1642320300244.42
[]
docs.searxng.org
Manage the number of threats and anomalies in your environment The Offline Rule Executor in Splunk UBA runs nightly to process the scheduled anomaly and threat rules, and also performs threat revalidation in real time when there are rule changes, anomalies are removed from the system, or anomaly scores are changed. Threat revalidation can take a long time and cause memory issues on your system depending on a variety of factors, including the types and age of the anomalies involved in the threat, the number or anomalies and entities involved in the threat, and any custom threat rules active in the system. Perform regular maintenance of your Splunk UBA deployment by managing the number of threats and anomalies in your system. When deleting large number of anomalies, do not delete more than 200,000 anomalies at a time. - Perform regular cleanup of anomalies more than 90 days old. See Delete anomalies in Splunk UBA. - Close unwanted threats. See Close threats in Splunk UBA. - Monitor the total number of anomalies in your environment. - If your deployment is fewer than 10 nodes, do not exceed 800,000 anomalies. - If your deployment is 10 nodes or more, do not exceed 1.5 million anomalies. - Monitor the number of rule-based threats in your environment. - If your deployment is fewer than 10 nodes, do not exceed 1,000 rule-based threats. - If your deployment is 10 nodes or more, do not exceed 2,000 rule-based threats. The Offline Rule Executor times out in 15 minutes, meaning that if a threat rule takes longer than 15 minutes to complete, or threat revalidation takes longer than 15 minutes, some computations are lost and not generated in Splunk UBA. If a threat rule is taking longer than 15 minutes to complete, you can edit the rule parameters to try to shorten the time. See Monitor policy violations with custom threats. This documentation applies to the following versions of Splunk® User Behavior Analytics: 5.0.0, 5.0.1, 5.0.2, 5.0.3, 5.0.4, 5.0.4.1, 5.0.5, 5.0.5.1 Feedback submitted, thanks!
https://docs.splunk.com/Documentation/UBA/5.0.5/Admin/Threatandanomalylimits
2022-01-16T22:51:28
CC-MAIN-2022-05
1642320300244.42
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
Note: The documentation you're currently reading is for version 3.6.0.

System Requirements

StackStorm requires Ubuntu, RHEL or CentOS Linux. It is not supported on any other Linux distributions. The table below lists the supported Linux versions, along with the Vagrant Boxes and Amazon AWS instances we use for testing. See below for more details about our Linux distribution support policy.

If you are installing from ISO, perform a minimal installation. For Ubuntu, use the "Server" variant, and only add OpenSSH Server to the base set of packages. All other dependencies will be automatically added when you install StackStorm.

Note: Please note that only 64-bit architecture is supported.

This is the recommended minimum sizing for testing and deploying StackStorm:

Note: If you are planning to add the DC Fabric Automation Suite to your system later, you will need additional RAM. Check the DC Fabric Automation Suite System Requirements.

If you split your filesystem into multiple partitions and mount points, ensure you have at least 1GB of free space in /var and /opt. RabbitMQ and MongoDB may not operate correctly without sufficient free space.

By default, StackStorm and related services use these TCP ports:

- nginx (80, 443)
- mongodb (27017)
- rabbitmq (4369, 5672, 25672)
- redis (6379) or zookeeper (2181, 2888, 3888)
- st2auth (9100)
- st2api (9101)
- st2stream (9102)

If any other services are currently using these ports, StackStorm may fail to install or run correctly.

Linux Distribution Support Policy

StackStorm only supports Ubuntu and RHEL/CentOS Linux distributions. In general, it is supported on the two most recent major supported releases for those distributions. Specifically:

- Ubuntu: Current LTS releases are supported. Today this is 18.04 and 20.04.
- RHEL/CentOS: We currently support RHEL/CentOS 7.x and 8.x. In general, we recommend using the most recent version in that series, but any version may be used.

Support for RHEL/CentOS 6.x has been removed. StackStorm 3.2 is the last release that supported RHEL/CentOS 6.x. Support for Ubuntu 16.04 has been removed. StackStorm 3.4 is the last release that supported Ubuntu 16.04.
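Before installing, a quick way to confirm the free-space requirement mentioned above is to check the relevant mount points directly. df is a standard Linux utility, so nothing StackStorm-specific is involved:

df -h /var /opt

If either filesystem shows less than roughly 1GB available, grow the partition before proceeding.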
https://docs.stackstorm.com/install/system_requirements.html
2022-01-16T21:38:03
CC-MAIN-2022-05
1642320300244.42
[]
docs.stackstorm.com
Note The documentation you're currently reading is for version 3.6.0. Click here to view documentation for the latest stable version. CLI Reference¶ The StackStorm command line client (CLI) st2 allows you to operate your StackStorm system using the command line. It communicates with the StackStorm processes using public APIs. Installation¶ The CLI client is installed by default on your StackStorm system. It can also be installed on a client system using pip: pip install st2client Configuration¶ The command line client can be configured using one or more of the approaches listed below: Configuration file ( ~/.st2/config) Environment variables ( ST2_API_URL, etc.) Command line arguments ( st2 --cacert=... action list, etc.) Command line arguments have highest precedence, followed by environment variables, and then configuration file values. If the same value is specified in multiple places, the value with the highest precedence will be used. For example, if api_url is specified in the configuration file and in an environment variable ( $ST2_API_URL), the environment variable will be used. Configuration File¶ The CLI can be configured through an ini-style configuration file. By default, st2 will use the file at ~/.st2/config. If you want to use configuration from a different file (e.g. you have one config per deployment or environment), you can select which file to use using the ST2_CONFIG_FILE environment variable or the --config-file command line argument. For example (environment variable): ST2_CONFIG_FILE=~/.st2/prod-config st2 action list For example (command line argument): st2 --config-file=~/.st2/prod-config action list An example configuration file with all the options and the corresponding explanations # or authenticate with an api key. # api_key = <key> [api] url = # or for a remote instance # url = [auth] url = # or for a remote instance #url = [stream] url = # or for a remote instance # url = If you want the CLI to skip parsing of the configuration file, you can do that by passing the --skip-config flag to the CLI: st2 --skip-config action list Authentication and Auth Token Caching¶ The previous section showed an option for storing your password in plaintext in your configuration file. The st2 login command offers an alternative that does not store the password in plaintext. Similar to st2 auth, you must provide your username and password: st2 login st2admin --password 'Password1!' This command caches your authentication token, but also modifies the CLI configuration to include the referenced username. This way, future commands will know which cached token to use for authentication, since tokens are cached using the token-<username> format. The password itself does not need to be stored in the config file. Warning st2 login will overwrite the “credentials” section of the configuration. It will overwrite the configured username and will remove any configured password. These auth tokens are by default cached on the local filesystem (in the ~/.st2/token-<username> file) and re-used for subsequent requests to the API service. You will need to re-login once the generated token has expired, or use of the --write-password flag, which writes the password to the config. The st2 whoami command will tell you who is the currently authenticated user. You can switch between users by re-running the st2 login command. Any existing. You can still use the “old” method of supplying both username and password in the configuration file. 
If both a username and password are present in the configuration, then the client will automatically try to authenticate with these credentials. If you want to disable auth token caching and want the CLI to retrieve a new auth token on each invocation, set cache_token to False: [cli] cache_token = False The CLI will by default also try to retrieve a new token if an existing one has expired. If you have manually deleted or revoked a token before expiration you can clear the cached token by removing the ~/.st2/token file. If the configuration file has an API key as authentication credentials, the CLI will use that as the primary method of authentication instead of auth token. Using Debug Mode¶ The command line tools accepts the --debug flag. When this flag is provided, debug mode will be enabled. Debug mode consists of the following: On error/exception, full stack trace and client settings (API URL, auth URL, proxy information, etc.) are printed to the console. The equivalent curlcommand in in Scripts¶ If you want to authenticate and obtain an authentication token inside your (shell) scripts, you can use the st2 auth CLI command in combination with the -t flag. This flag will cause the command to only print the token to stdout on successful authentication. This means you don’t need to deal with parsing JSON or CLI output format. Example: st2 action list --pack=slack +--------------------+-------+--------------+-------------------------------+ | ref | pack | name | description | +--------------------+-------+--------------+-------------------------------+ | slack.post_message | slack | post_message | Post a message to the Slack | | | | | channel. | +--------------------+-------+--------------+-------------------------------+ If you want a raw JSON result as returned by the API (e.g. you are using the CLI as part of your script and you want the raw result which you can parse), you can pass the -j flag: st2 action list -j --pack=slack [ { "description": "Post a message to the Slack channel.", "name": "post_message", "pack": "slack", "ref": "slack.post_message" } ] Only Displaying a Particular Attribute¶ By default, when retrieving the action execution result using st2 execution get, the whole result object will be printed: a specific result attribute, use the -k <attribute name> flag: st2 execution get -k stdout 54d8c52e0640fd1c87b9443f Mon Feb 9 14:33:18 UTC 2015 If you only want to retrieve and print out a specific¶ When you use local and remote actions (e.g. core.local, core.remote, etc.), you need to wrap cmd parameter values in a single quote or escape the variables. Otherwise, the shell variables will be expanded locally which is something you usually don’t want. Using single quotes: st2 run core.local env='{"key1": "val1", "key2": "val2"}' cmd='echo "ponies ${key1} ${key2}"'. Example 1 - Simple Case (Array of Strings)¶ st2 run mypack.myaction parameter_name="value 1,value2,value3" In this case, the parameter_name value would get passed to the API as a list (JSON array) with three items - ["value 1", "value2", "value3"]. Example 2 - Complex Case (Array of Objects)¶ When you want to pass more complex type (e.g. 
arrays of objects) value to an action, you can do it like this: st2 run --auto-dict mypack.set_interfaces \ nic_info="target:eth0,ipaddr:192.168.0.10,netmask:255.255.255.0,mtu:1454" \ nic_info="target:eth1,ipaddr:192.168.0.11,netmask:255.255.255.0,mtu:2000" In this case, the nic_info value passed to the mypack.set_interfaces action would be parsed and look like this: [{'netmask': '255.255.255.0', 'ipaddr': '192.168.0.10', 'target': 'eth0', 'mtu': 1454}, {'netmask': '255.255.255.0', 'ipaddr': '192.168.0.11', 'target': 'eth1', 'mtu': 2000}] Note The st2 cli option --auto-dict is required to use this functionality. When you run action without this option, each colon separated parameters are not parsed as dict object but just string. And this option and its functionality will be deprecated in the next release in favor of a more robust conversion method. To parse each value in the object as an expected type, you need to specify the type of each value in the action metadata, like this. parameters: nic_info: type: array properties: target: type: string ipaddr: type: string netmask: type: string mtu: type: integer Or you can use JSON notation: st2 run mypack.myaction parameter_name='[{"Name": "MyVMName"}]' Note When using JSON string notation, parameter value needs to be wrapped inside single quotes (e.g. parameter='{"Key": "Value"}'), otherwise quotes ( ") inside the JSON string need to be escaped with a backslash ( \ - e.g. parameter="{\"Key\": \"Bar\"}"). Specifying Parameters with Type “object”¶ When running an action using st2 run command, you can specify the value of parameters with type object using two different approaches: JSON String Notation¶ For complex objects, you should use JSON notation: st2 run core.remote hosts=localhost env='{"key1": "val1", "key2": "val2"}' cmd="echo ponies \${key1} \${key2} Note When using JSON string notation, parameter value needs to be wrapped inside single quotes (e.g. parameter='{"Key": "Value"}'), otherwise quotes ( ") inside the JSON string need to be escaped with a backslash ( \ - e.g. parameter="{\"Key\": \"Bar\"}"). Reading Parameter Values From a File¶ The CLI also supports the special @parameter notation which makes it read parameter values st2 have completed. To cancel an execution, run: st2 execution cancel <existing execution id> Passing Environment Variables to Runner as env Parameter¶ Local, remote and Python runners support the env parameter. This parameter tells the runner which environment variables should be accessible to the action which is being executed. User can specify environment variables manually using the env parameter in the same manner as other parameters: the action.py file. For example: st2 run --inherit-env core.remote cmd=...
https://docs.stackstorm.com/reference/cli.html
2022-01-16T22:36:38
CC-MAIN-2022-05
1642320300244.42
[]
docs.stackstorm.com
To show or hide columns in the Xsheet view, do one of the following: - In the Column List section, select the columns to display and deselect the columns to hide. - In the Xsheet toolbar, click the Hide Selected Column button (you may have to customize the toolbar to display it).
https://docs.toonboom.com/help/harmony-14/advanced/timing/show-hide-column.html
2022-01-16T21:38:54
CC-MAIN-2022-05
1642320300244.42
[array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/HAR/Stage/Layers/HAR11/HAR11_timing_hidecolumns.png', None], dtype=object) ]
docs.toonboom.com
Interface patterns give you an opportunity to explore different interface designs. Be sure to check out How to Adapt a Pattern for Your Application. Use the navigation patterns to help orient users and enable them to easily navigate pages and content. This page explains how you can use this pattern in your interface, and walks through the design structure in detail. There are four options for navigation patterns which provide a way to structure a group of pages with icon/text-based left navigation. When an icon/title is selected, the corresponding page is displayed. All four patterns have similar functionality, design, and data sets. They're all are located under the PATTERNS tab under PALETTE in your interface. Test out all four to see which option works best for you. When you're adapting one of these patterns, be sure to follow UX design best practices, like adding a descriptive caption to the icons and using accessibilityText to help screen reader users know when icons or cards are selected. Let's take a closer look at the "Navigation (Collapsible)" pattern. The main components in this pattern are a Column that changes between "EXTRA-NARROW" and "NARROW" widths, Card Layouts with links, and a set of Styled Icons and titles. When you drag and drop the collapsible navigation pattern onto your interface, 203 lines of expressions will be added to the section where you dragged it. At the beginning of the expression, three local variables are set up: local!collapseNav, which stores whether or not the side navigation is collapsed, local!activeCollapsibleNavSection, which stores the current selection, and local!collapsibleNavSections, which stores the list of page names and their respective icons. local!activeCollapsibleNavSection is initialized to the first page by default. The left navigation is constructed in two different ways depending on whether or not it's collapsed. When expanded, the left navigation is built using card layouts with Dynamic Links and Rich Text inside a NARROW column. When collapsed, the left navigation is built using LARGE styled icons with dynamic links inside an EXTRA-NARROW column. Inside of the first column, we iterate over each section defined in local!collapsibleNavSections using a!forEach(). This is where we then determine what size navigation to show by checking the value of local!collapseNav. The components used for each size vary, but both use fv!index to save the index of the selected section to local!activeCollapsibleNavSection in the dynamic link. There are a couple of small details that have been added in for better functionality and user experience. We set showBorder parameter to false for the card layouts and used a styled icon with a dynamic link to change the width of the navigation at the bottom of the list of icons. Once the user clicks on an icon or a card in the left navigation pane, the selected section will render in the right column. In this pattern, we have only configured a basic section layout to display. You'll want to add your components to the contents of the section or replace the section with an interface object for the associated section. To create a better division between our navigation and section contents, we've used the showDividers parameter in our Columns Layout. In addition to the "Navigation (Collapsible)" pattern described above, the "Navigation", "Navigation (Stamp)", and "Navigation (Subsections)" patterns are also available in your interface. They're located in PALETTE, under the PATTERNS tab. 
This pattern is a simplified version of the "Navigation (collapsible)" pattern, using only a "NARROW" column with the card layouts. When you drag and drop the navigation pattern onto your interface, 152 lines of expressions will be added to the section where you dragged it. This pattern is a lightweight version of the "Navigation (collapsible)" pattern. Instead of using a combination of icons and titles, this pattern uses stamps with dynamic links to create an easy to use and visually pleasing side navigation. When you drag and drop the stamp navigation pattern onto your interface, 129 lines of expressions will be added to the section where you dragged it. This pattern is another variation of the "Navigation" pattern. It follows a similar style, but allows you to group sections for better organization. When you drag and drop the subsection navigation pattern onto your interface, 206 lines of expressions will be added to the section where you dragged it. On This Page
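The collapsible pattern's behaviour reduces to two pieces of local state: a collapsed flag and the index of the active section. Below is a rough, framework-neutral sketch of that state logic in plain Python rather than Appian SAIL; the section names are placeholders.
from dataclasses import dataclass, field

@dataclass
class CollapsibleNav:
    # Placeholder section names; in the pattern these come from local!collapsibleNavSections.
    sections: list = field(default_factory=lambda: ["Section A", "Section B", "Section C"])
    collapsed: bool = False    # analogue of local!collapseNav
    active_index: int = 0      # analogue of local!activeCollapsibleNavSection

    def toggle_collapsed(self):
        # The styled icon at the bottom of the column flips this flag.
        self.collapsed = not self.collapsed

    def select(self, index):
        # Each card/icon is a dynamic link that saves its index as the active section.
        self.active_index = index

    def column_width(self):
        return "EXTRA-NARROW" if self.collapsed else "NARROW"

nav = CollapsibleNav()
nav.select(1)
nav.toggle_collapsed()
print(nav.column_width(), nav.sections[nav.active_index])  # EXTRA-NARROW Section B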
https://docs.appian.com/suite/help/21.2/navigation-patterns.html
2022-01-16T22:15:48
CC-MAIN-2022-05
1642320300244.42
[]
docs.appian.com
Arranging Nodes Snapping - Snap Toggle snapping mode for moving nodes around. - Snap Node Element Selector This selector provides the following node elements for snapping: - Grid Snap to grid background. - Node X Snap to left/right node border. - Node Y Snap to top/bottom node border. - Node X/Y Snap to any node border. - Snap Target Which part to snap onto the target. - Closest Snap closest point onto target. - Center Snap center onto target. - Median Snap median onto target. - Active Snap active onto target. See also Example Video: Auto-Offset. A workflow enhancement for Blender’s node editors.
https://docs.blender.org/manual/ru/dev/interface/controls/nodes/arranging.html
2022-01-16T22:20:10
CC-MAIN-2022-05
1642320300244.42
[array(['../../../_images/interface_controls_nodes_arranging_auto-offset.png', '../../../_images/interface_controls_nodes_arranging_auto-offset.png'], dtype=object) ]
docs.blender.org
Important! The information provided in this video is for general guidance only. The information is provided on "AS IS" basis, with no guarantee of completeness, accuracy or timeliness, and without warranty or representations of any kind, expressed or implied. In no event will CloudEndure and/or its subsidiaries and/or their employees or service providers be liable to you or anyone else for any decision made or action taken in reliance on the information provided above or for any direct, indirect, consequential, special or similar damages (including any kind of loss), even if advised of the possibility of such damages. CloudEndure is not responsible for the update, validation or support of sample videos. The Machines page enables you to monitor the progress and status of the replication of your SourceThe location of the Source machine; Currently either a specific Region or Other Infrastructure. machines, and to perform actions at the machineA physical or virtual computer. level. The Machines page is divided into various actions on the top and the machineA physical or virtual computer. list view at the bottom. The actions you can perform at the machineA physical or virtual computer. level are done using the Search box, FILTERS, MACHINE ACTIONS and LAUNCH TARGET MACHINES buttons. These actions affect the machines within the machineA physical or virtual computer. view list, which shows all of your current machines. You can utilize the Search field to search for specific machines. Note that the search filters include any value, whether visible in the Machines list view or underlying (such as Cloud ID, progress percent, etc.) Enter any value into the Search box and the system will automatically show relevant results in the Machine list view. Ex. Only one of our three machines is shown after entering the first four digits of its Cloud ID in the Search box. To cancel a search, click the gray "x" within the Search box. Machine Labels provide an unstructured way to organize machines into groups within a ProjectA Project is the basic organizational unit for running a CloudEndure solution.. Labels can be used to identify applications, operating systems, machineA physical or virtual computer. types, migration waves, etc. You can add labels to machines through the Machines page on the Machines List View. You can add Machine Labels by selecting a one or more machines by checking the box to the left of the machineA physical or virtual computer. name and then selecting the MACHINE ACTIONS button and clicking on the Modify Machine Labels for X Machines option. To add a new label to the selected machineA physical or virtual computer.(s), input your desired label into the New label field and click the gray plus icon . Click SAVE. You will receive a notification that labels have been added successfully. Once you have added labels, you can easily filter machines by labels by clicking the FILTERS option. The Filter by Labels menu will appear. To filter your machines, check the box to the left of each label you want to filter by. The User ConsoleCloudEndure SaaS User Interface. A web-based UI for setting up, managing, and monitoring the Migration and Disaster Recovery solutions. will automatically apply the label filters to the Machine List View. Note: Labels that are not assigned to any machine will not show up as filtering options. If you have many labels, you can search for individual labels by using the Search bar within the Filter by Labels menu. Note: The label filters match any machines selected. 
If you filter by "Label A" and/or "Label B", the Machine List view will show machines that are tagged "Label A" and machines that are tagged "Label B" and not only machines that are tagged with both Label A and Label B. Once a label filter is applied, the FILTERS button will display an orange circle next to it in order to let you know that the Machines List View is being filtered and that not all machines are showing. The number of machines showing will be displayed in gray to the right of the FILTERS button. If you select a machineA physical or virtual computer. (by checking the box to its left), the message next to the FILTERS button will indicate the amount of machines shown and the amount of machines selected. The MACHINE ACTIONS menu enables you to perform the following actions: Note: Most of these actions can be performed on one Source machine or on multiple machines simultaneously. Modify Machines' Labels - Add/edit/delete Machine Labels from the selected Machine(s). Learn more about the Launch TargetThe location where the Replication Server will be located and where Target machines will be created (as a result of Test, Cutover or Recovery). Machines menu in this video. The LAUNCH TARGET MACHINES menu includes different actions according to the solution type of the selected ProjectA Project is the basic organizational unit for running a CloudEndure solution.. Clicking on a specific machineA physical or virtual computer. on the Machines page, opens the Machine Details view. The Machine Details page provides you with additional information and settings for the selected Source machineThe computer, physical or virtual machine that needs to be protected by replication (Disaster Recovery) or migrated (Migration) The CloudEndure Agent is installed on the Source machine. and the Target machineThe Machine created during Test, Cutover or Recovery. that is launched for it. The Machine Details page is divided into two parts: MACHINE DASHBOARD and additional information and settings tabs. The MACHINE DASHBOARD section includes detailed information about the selected Source machineThe computer, physical or virtual machine that needs to be protected by replication (Disaster Recovery) or migrated (Migration) The CloudEndure Agent is installed on the Source machine., and enables you to perform actions on this machineA physical or virtual computer. using the ACTIONS and LAUNCH TARGET MACHINE buttons. Ex. MigrationThe CloudEndure solution that allows you to move data, applications, and other business elements from an onsite network or a cloud environment to another physical location or cloud environment. machineA physical or virtual computer. dashboard Note: The ACTIONS and LAUNCH TARGET MACHINE buttons are the same as the buttons on the Machines pages, but their actions will be applied only to the displayed Source machine in its current state. This section includes the following tabs: The collected information on the Source machineThe computer, physical or virtual machine that needs to be protected by replication (Disaster Recovery) or migrated (Migration) The CloudEndure Agent is installed on the Source machine. appears in this tab. Note: The information displayed in the Source tab is updated regularly. Note: The available options in this tab are different for each selected Target infrastructure.<![CDATA[ ]]> ©2020 COPYRIGHT CloudEndure - Terms of Service - Privacy Policy - AWS Vulnerability Reporting Guidelines - Report a Security Issue
https://docs.cloudendure.com/Content/Getting_Started_with_CloudEndure/Exploring_the_CloudEndure_Console/Machines/Machines.htm
2022-01-16T22:34:11
CC-MAIN-2022-05
1642320300244.42
[]
docs.cloudendure.com
- numeric_val Literal numeric value, a function that returns a numeric value, or the. You can re-type the OrderId column to String without issue. If you retype the other three columns, all values are mismatched. You can use the following transforms to remove the currency and percentage notation. The first transform removes the trailing % sign from every value across all columns using a You can use a similar one to remove the $ sign at the beginning of values: When both are applied, you can see that the data types of each column is updated to a numeric type: Integer or Decimal. Now, you can perform the following computations: You can use the new SubTotal column as the basis for computing the DiscountedTotal column, which factors in discounts: The Total column applies the tax to the DiscountedTotal column: Because of the math operations that have been applied to the original data, your values might no longer look like dollar information. You can now apply price formatting to your columns. The following changes the number format for the SubTotal column: Note that the leading $ was not added back to the data, which changes the data type to String. You can apply this transform to the Price, DiscountedTotal, and Total columns. The Discount and TaxRate values should be converted to Decimals. The following adjusts the Discount column: Results: The output data should look like the following:
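The same cleanup and calculations can be sketched outside of the Wrangle language; below is a rough pandas equivalent. The Quantity column and the exact formulas for SubTotal, DiscountedTotal, and Total are assumptions, since the original recipe steps are not reproduced here.
import pandas as pd

df = pd.DataFrame({
    "OrderId": ["1001", "1002"],
    "Price":   ["$25.00", "$9.99"],
    "Quantity": [2, 3],            # assumed column for the SubTotal calculation
    "Discount": ["10%", "0%"],
    "TaxRate":  ["8%", "8%"],
})

# Strip the trailing % and leading $ so the columns can be retyped as numeric.
for col in ["Discount", "TaxRate"]:
    df[col] = df[col].str.rstrip("%").astype(float) / 100   # percentages become decimals
df["Price"] = df["Price"].str.lstrip("$").astype(float)

# Assumed formulas for the derived columns described in the text.
df["SubTotal"] = df["Price"] * df["Quantity"]
df["DiscountedTotal"] = df["SubTotal"] * (1 - df["Discount"])
df["Total"] = df["DiscountedTotal"] * (1 + df["TaxRate"])

# Re-apply price formatting for display (this turns the values back into strings).
for col in ["Price", "SubTotal", "DiscountedTotal", "Total"]:
    df[col] = df[col].map("${:,.2f}".format)
print(df)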
https://docs.trifacta.com/pages/diffpagesbyversion.action?pageId=181681932&selectedPageVersions=27&selectedPageVersions=28
2022-01-16T22:59:06
CC-MAIN-2022-05
1642320300244.42
[]
docs.trifacta.com
Note: Address Manager adds the DNS host record directly to the DNS/DHCP Server so that the individual host record is made live instantly. This is done through the Address Manager to DNS/DHCP Server communication service (Command Server) and does NOT require a standard Address Manager deployment. API method: POST /v1/addDeviceInstance
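A minimal sketch of calling this method from Python with requests follows. The host, base path, authentication header, and parameter names are placeholders rather than values from the Address Manager API Guide; check the guide for the exact signature your version expects.
import requests

BAM_HOST = "https://bam.example.com"                                 # placeholder Address Manager host
HEADERS = {"Authorization": "<your Address Manager auth header>"}    # placeholder auth

def add_device_instance(params):
    # POST /v1/addDeviceInstance with caller-supplied query parameters.
    resp = requests.post(
        BAM_HOST + "/v1/addDeviceInstance",
        params=params,
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    return resp

# Illustration only -- these parameter names are made up.
add_device_instance({"configName": "default", "deviceName": "edge-device-01"})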
https://docs.bluecatnetworks.com/r/Address-Manager-API-Guide/Add-device-instance/9.3.0
2022-01-16T22:41:18
CC-MAIN-2022-05
1642320300244.42
[]
docs.bluecatnetworks.com
Set Up the Prisma Cloud Role for AWS—Manual To monitor your AWS account, create the roles (manually) and authorize the permissions for Prisma Cloud. If you do not want to use the guided onboarding flow that automates the process of creating the roles required for Prisma™ Cloud to monitor or monitor and protect your accounts on AWS, you must create the roles manually. In order to monitor your AWS account, you must create a role that grants Prisma Cloud access to your flow logs and read-only access (to retrieve and view the traffic log data) or a limited read-write access (to retrieve traffic log data and remediate incidents). To authorize permission, you must copy the policies from the relevant template and attach it to the role. Event logs associated with the monitored cloud account are automatically retrieved on Prisma Cloud. - Log in to the AWS Management Console to create a role for Prisma Cloud.Refer to the AWS documentation for instructions. Create the role in the same region as your AWS account, and use the following values and options when creating the role: - Type of trusted entity:Another AWS Accountand enter the Account ID*:188619942792 - SelectRequire external ID, which is a unique alphanumeric string. You can generate a secure UUIDv4 at. - Do not enable MFA. Verify thatRequire MFAis not selected. - ClickNextand add the AWS Managed Policy for Security Audit.Then, add a role name and create the role. In this workflow, later, you will create the granular policies and edit the role to attach the additional policies. - Get the granular permissions from the AWS CloudFormation template for your AWS environment.The Prisma Cloud S3 bucket has read-only templates and read-and-write templates for the public AWS, AWS GovCloud, and AWS China environments. - Download the template you need. - Identify the permissions you need to copy.To create the policy manually, you will need to add the required permissions inline using the JSON editor. From the read-only template you can get the granular permissions for thePrismaCloud-IAM-ReadOnly-Policy, and the read-write template lists the granular permissions for thePrismaCloud-IAM-ReadOnly-Policyand thePrismaCloud-IAM-Remediation-Policy.For AWS accounts you onboard to Prisma Cloud, if you do not use the host, serverless functions, and container capabilities enabled with Prisma Cloud Compute, you do not need the permissions associated with these roles: - PrismaCloud-ReadOnly-Policy-Computerole—CFT used for Monitor mode, includes additional permissions associated with this new role to enable monitoring of resources that are onboarded for Prisma Cloud Compute. - PrismaCloud-Remediation-Policy-Computerole—CFT used for Monitor & Protect mode, includes additional permissions associated with this new role to enable read-write access for monitoring and remediating resources that are onboarded for Prisma Cloud Compute. - Open the appropriate template using a text editor. - Find the policies you need and copy it to your clipboard.Copy the details for one or both permissions, and make sure to include the open and close brackets for valid syntax, as shown below. - Create the policy that defines the permissions for the Prisma Cloud role.Both the read-only role and the read-write roles require the AWS Managed PolicySecurityAudit Policy. 
In addition, you will need to enable granular permissions for thePrismaCloud-IAM-ReadOnly-Policyfor the read-only role, or for the read-write role add thePrismaCloud-IAM-ReadOnly-Policyand the limited permissions forPrismaCloud-IAM-Remediation-Policy. - SelectIAMon the AWS Management Console. - In the navigation pane on the left, choose.Access ManagementPoliciesCreate policy - Select theJSONtab.Paste the JSON policies that you copied from the template within the square brackets for Statement.If you are enabling read and read-write permissions, make sure to append the read-write permissions within the same Action statement. - Review and create the policy. - Required only if you want to use the same role to access your CloudWatch log groupUpdate the trust policy to allow access to the CloudWatch log group.Edit theTrust Relationshipsto add the permissions listed below. This allow you to ensure that your role has a trust relationship for the flow logs service to assume the role and publish logs to the CloudWatch log group.{ "Effect": "Allow", "Principal": { "Service": "vpc-flow-logs.amazonaws.com" }, "Action": "sts:AssumeRole" }Copy theRole ARN.Resume with the account onboarding flow at Paste the Role ARN in Add an AWS Cloud Account on Prisma Cloud Recommended For You Recommended Videos Recommended videos not found.
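For teams that prefer scripting over the console, the role-creation steps above can be sketched with boto3 as follows. The role name and external ID are placeholders, and the granular read-only/remediation policies from the CFT still have to be added separately as inline policies.
import json
import boto3

iam = boto3.client("iam")

ROLE_NAME = "PrismaCloudReadOnlyRole"                     # placeholder role name
EXTERNAL_ID = "11111111-2222-3333-4444-555555555555"      # generate your own UUIDv4

# Trust policy: allow Prisma Cloud's account (188619942792, per the text) to assume
# the role, and require the external ID.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::188619942792:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": EXTERNAL_ID}},
    }],
}

role = iam.create_role(
    RoleName=ROLE_NAME,
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Read-only role assumed by Prisma Cloud",
)
# Attach the AWS-managed SecurityAudit policy required by both role types.
iam.attach_role_policy(
    RoleName=ROLE_NAME,
    PolicyArn="arn:aws:iam::aws:policy/SecurityAudit",
)
print("Role ARN:", role["Role"]["Arn"])   # paste this into the onboarding flow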
https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin/connect-your-cloud-platform-to-prisma-cloud/onboard-your-aws-account/set-up-your-prisma-cloud-role-for-aws-manual.html
2022-01-16T22:23:10
CC-MAIN-2022-05
1642320300244.42
[]
docs.paloaltonetworks.com
Why use a private instance?¶ “Is it worth to run my own instance?” .. is a common question among SearXNG users. Before answering this question, see what options a SearXNG user has. Public instances are open to everyone who has access to its URL. Usually, these are operated by unknown parties (from the users’ point of view). Private instances can be used by a select group of people. It is for example a SearXNG of group of friends or a company which can be accessed through VPN. Also it can be single user one which runs on the user’s laptop. To gain more insight on how these instances work let’s dive into how SearXNG protects its users. How does SearXNG protect privacy?¶ SearXNG protects the privacy of its users in multiple ways regardless of the type of the instance (private, public). Removal of private data from search requests comes in three forms: - removal of private data from requests going to search services - not forwarding anything from a third party services through search services (e.g. advertisement) - removal of private data from requests going to the result pages Removing private data means not sending cookies to external search engines and generating a random browser profile for every request. Thus, it does not matter if a public or private instance handles the request, because it is anonymized in both cases. IP addresses will be the IP of the instance. But SearXNG can be configured to use proxy or Tor. Result proxy is supported, too. SearXNG does not serve ads or tracking content unlike most search services. So private data is not forwarded to third parties who might monetize it. Besides protecting users from search services, both referring page and search query are hidden from visited result pages. What are the consequences of using public instances?¶ If someone uses a public instance, they have to trust the administrator of that instance. This means that the user of the public instance does not know whether their requests are logged, aggregated and sent or sold to a third party. Also, public instances without proper protection are more vulnerable to abusing the search service, In this case the external service in exchange returns CAPTCHAs or bans the IP of the instance. Thus, search requests return less results. I see. What about private instances?¶ If users run their own instances, everything is in their control: the source code, logging settings and private data. Unknown instance administrators do not have to be trusted. Furthermore, as the default settings of their instance is editable, there is no need to use cookies to tailor SearXNG to their needs. So preferences will not be reset to defaults when clearing browser cookies. As settings are stored on their computer, it will not be accessible to others as long as their computer is not compromised. Conclusion¶ Always use an instance which is operated by people you trust. The privacy features of SearXNG are available to users no matter what kind of instance they use. If someone is on the go or just wants to try SearXNG for the first time public instances are the best choices. Additionally, public instance are making a world a better place, because those who cannot or do not want to run an instance, have access to a privacy respecting search service.
https://docs.searxng.org/user/own-instance.html
2022-01-16T21:29:10
CC-MAIN-2022-05
1642320300244.42
[]
docs.searxng.org
Adding Local Rules¶ NIDS¶ You can add NIDS rules in /opt/so/saltstack/local/salt/idstools/local.rules on your manager. Within 15 minutes, Salt should then copy those rules into /opt/so/rules/nids/local.rules. The next run of idstools should then merge /opt/so/rules/nids/local.rules into /opt/so/rules/nids/all.rules which is what Suricata reads from. If you don’t want to wait for these automatic processes, you can run them manually from the manager (replacing $SENSORNAME_$ROLE as necessary): sudo salt-call state.highstate sudo so-rule-update sudo salt $SENSORNAME_$ROLE state.apply suricata For example: Let’s add a simple rule to /opt/so/saltstack/local/salt/idstools/local.rulesthat’s really just a copy of the traditional id check returned rootrule: alert ip any any -> any any (msg:"GPL ATTACK_RESPONSE id check returned root 2"; content:"uid=0|28|root|29|"; classtype:bad-unknown; sid:7000000; rev:1;) From the manager, tell Salt to update: sudo salt-call state.highstate Update rules: sudo so-rule-update Restart Suricata (replacing $SENSORNAME_$ROLEas necessary): sudo salt $SENSORNAME_$ROLE state.apply suricata If you built the rule correctly, then Suricata should be back up and running. You can then run curl the node to generate traffic which should cause this rule to alert (and the original rule that it was copied from, if it is enabled). YARA¶ Default YARA rules are provided from Florian Roth’s signature-base Github repo at. Local Rules:¶ To add local YARA rules, create a directory in /opt/so/saltstack/local/salt/strelka/rules, for example localrules. Inside of /opt/so/saltstack/local/salt/strelka/rules/localrules, add your YARA rules. After adding your rules, update the configuration by running so-strelka-restart on all nodes running Strelka. Alternatively, run salt -G 'role:so-sensor' cmd.run "so-strelka-restart" to restart Strelka on all sensors at once. Remotely Managed Rules:¶ To have so-yara-update pull YARA rules from a Github repo, copy /opt/so/saltstack/local/salt/strelka/rules/, and modify repos.txt to include the repo URL (one per line). Next, run so-yara-update to pull down the rules. Finally, run so-strelka-restart to allow Strelka to pull in the new rules.
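If you script rule management on the manager, the manual steps above can be wrapped in a short helper like the sketch below. The paths and commands come from this section; the sensor target is a placeholder for your $SENSORNAME_$ROLE.
import subprocess
from pathlib import Path

LOCAL_RULES = Path("/opt/so/saltstack/local/salt/idstools/local.rules")
SENSOR_TARGET = "sensor1_standalone"   # replace with your $SENSORNAME_$ROLE

rule = ('alert ip any any -> any any (msg:"GPL ATTACK_RESPONSE id check returned root 2"; '
        'content:"uid=0|28|root|29|"; classtype:bad-unknown; sid:7000000; rev:1;)')

# Append the rule to local.rules, then run the documented update commands.
with LOCAL_RULES.open("a") as f:
    f.write(rule + "\n")

for cmd in (
    ["sudo", "salt-call", "state.highstate"],
    ["sudo", "so-rule-update"],
    ["sudo", "salt", SENSOR_TARGET, "state.apply", "suricata"],
):
    subprocess.run(cmd, check=True)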
https://docs.securityonion.net/en/2.3/local-rules.html
2022-01-16T21:51:30
CC-MAIN-2022-05
1642320300244.42
[]
docs.securityonion.net
Creates or modifies the PublicAccessBlock configuration for an Amazon Web Services account. For more information, see Using Amazon S3 block public access. Related actions include: See also: AWS API Documentation See 'aws help' for descriptions of global parameters. put-public-access-block --public-access-block-configuration <value> --account-id <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>] --public-access-block-configuration (structure) The PublicAccessBlock configuration that you want to apply to the specified Amazon Web Services account. BlockPublicAcls -> (boolean) Specifies whether Amazon S3 should block public access control lists (ACLs) for buckets in this account. This is not supported for Amazon S3 on Outposts. RestrictPublicBuckets -> (boolean) Specifies whether Amazon S3 should restrict public bucket policies for buckets in this account. --account-id (string) The account ID for the Amazon Web Services account whose PublicAccessBlock configuration you want to set.
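The equivalent call through boto3 looks like the following sketch; the account ID is a placeholder, and the four flags are the standard PublicAccessBlock settings.
import boto3

s3control = boto3.client("s3control")

# Apply an account-level public access block (all four protections enabled).
s3control.put_public_access_block(
    AccountId="123456789012",    # placeholder account ID
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)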
https://docs.aws.amazon.com/ja_jp/cli/latest/reference/s3control/put-public-access-block.html
2022-01-16T22:55:01
CC-MAIN-2022-05
1642320300244.42
[]
docs.aws.amazon.com
Metadata Base. Has Changed Property Definition Important Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here. Gets whether the item of metadata has changed. public: property Nullable<bool> HasChanged { Nullable<bool> get(); void set(Nullable<bool> value); }; [System.Runtime.Serialization.DataMember(Order=60)] public bool? HasChanged { get; set; } [<System.Runtime.Serialization.DataMember(Order=60)>] member this.HasChanged : Nullable<bool> with get, set Public Property HasChanged As Nullable(Of Boolean) Property Value Type: Nullable<Boolean>true if metadata item has changed since the last query; otherwise, false. - Attributes - Remarks When metadata is retrieved using RetrieveMetadataChangesRequest, with a valid ClientVersionStamp parameter, the RetrieveMetadataChangesResponse. EntityMetadata property contains an EntityMetadataCollection of all the changed metadata. When a child item has changed but a parent item has not, the HasChanged property of the child will be true but for the parent it will be false. It is not valid to use this attribute as part of a query because the value is calculated. The value will be null except in the results of a query that returns information about changes from a previous query.
https://docs.microsoft.com/en-us/dotnet/api/microsoft.xrm.sdk.metadata.metadatabase.haschanged?view=dynamics-general-ce-9
2022-01-16T22:47:06
CC-MAIN-2022-05
1642320300244.42
[]
docs.microsoft.com
This tutorial describes how to add a new report to a WPF Application at design time within Visual Studio. Do the following to create a new WPF Application in Microsoft Visual Studio 2010, 2012, 2013, 2015 or 2017: Press CTRL+SHIFT+N, or select FILE | New | Project... in the main menu. In the New Project dialog that is invoked, expand the Installed category, and select a programming language (Visual C# or Visual Basic) in the Templates section. Then switch to the Windows Desktop section and select WPF Application. Specify the application name and click OK. Do the following to add a new report to your WPF application: In Visual Studio, press CTRL+SHIFT+A, or select PROJECT | Add New Item... in the main menu. In the Add New Item dialog that is invoked, switch to the Reporting directory, select the DevExpress v19.1 Report item, specify the report name and click Add. This invokes the Report Wizard where you can choose a report type. Next Step: Create a Simple Data-Aware Report
https://docs.devexpress.com/XtraReports/8304/get-started-with-devexpress-reporting/add-a-report-to-your-net-application/add-a-new-report-to-a-wpf-application
2019-09-15T13:15:13
CC-MAIN-2019-39
1568514571360.41
[]
docs.devexpress.com
What is Ponydocs? Ponydocs is Splunk's MediaWiki-powered documentation platform. We build and deliver our official product documentation on this platform (you're looking at it right now), and have made it available to the community at large as an open source project. Check out this blog post about the Ponydocs beta release by our lead developer and Web Dev manager, Ashley for an overview. You can find the public source for Ponydocs at Note: Installation and configuration information is currently packaged with the code. Why open source? We are making Ponydocs available to the open source community because we think our docs platform is awesome, and also because we want to take advantage of the open source community's powers of bug detection and resolution (through issue tracking), testing (through others eating our cooking) and potential future hire determination (through excellent individuals providing quality patches to the project). This documentation applies to the following versions of Ponydocs: 1.0 Feedback submitted, thanks!
https://docs.splunk.com/Documentation/Ponydocs/1.0/Content/WhatisPonydocs
2019-09-15T13:01:11
CC-MAIN-2019-39
1568514571360.41
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Win an XBox One! For IT Pro Newsletter and through the twitter handle @IEITPRO. 6. PRIZE: A maximum of eight.
https://docs.microsoft.com/en-us/archive/blogs/niallsblog/win-an-xbox-one
2020-08-03T21:30:55
CC-MAIN-2020-34
1596439735833.83
[]
docs.microsoft.com
<bindingRedirect> Element Redirects one assembly version to another. <configuration> <runtime> <assemblyBinding> <dependentAssembly> <bindingRedirect> Syntax <bindingRedirect oldVersion="existing assembly version" newVersion="new assembly version"/> Attributes and Elements The following sections describe attributes, child elements, and parent elements. Attributes Child Elements Parent Elements Remarks You can also redirect from a newer version to an older version of the assembly. Explicit assembly binding redirection in an application configuration file requires a security permission. This applies to redirection of .NET Framework assemblies and assemblies from third parties. The permission is granted by setting the BindingRedirects flag (SecurityPermissionFlag.BindingRedirects) on the SecurityPermission. For more information, see Assembly Binding Redirection Security Permission. Example The following example shows how to redirect one assembly version to another.
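A minimal sketch of such a configuration is shown below, written as a small Python script that emits the file; the assembly name, public key token, culture, and version numbers are hypothetical placeholders.
from pathlib import Path

# Writes a minimal application configuration file containing a bindingRedirect,
# following the <configuration>/<runtime>/<assemblyBinding>/<dependentAssembly>
# hierarchy listed above.
APP_CONFIG = """<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="SomeAssembly"
                          publicKeyToken="32ab4ba45e0a69a1"
                          culture="neutral" />
        <bindingRedirect oldVersion="1.0.0.0" newVersion="2.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
"""

Path("app.config").write_text(APP_CONFIG, encoding="utf-8")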
https://docs.microsoft.com/en-us/dotnet/framework/configure-apps/file-schema/runtime/bindingredirect-element
2020-08-03T21:57:42
CC-MAIN-2020-34
1596439735833.83
[]
docs.microsoft.com
Description Data sharing makes it possible to have multiple applications open the same network stream. When multiple applications open the same network stream they share the underlying host buffer, however, each application still has its own entity indicating where it is in the host buffer. Data sharing makes it possible for multiple applications to work on overlapping data. For example, application 1 wants to get all IP frames and application 2 wants all TCP frames – meaning that all TCP frames go to both applications but all other IP frames only go to application 1. This is possible by using the packet-based interface. In Napatech Software Suite, both applications see all packets and application 2 must do some post-filtering to only get TCP frames. It is however possible to configure a host buffer allowance in the NT_NetRxOpen function (see DN-0449), which enables different applications to share host buffers in such a way that a slow application does not occupy all of the host buffers and thereby causes a faster application to lose packets. The NT_NetRxOpen function is defined in the stream_net.h file.
https://docs.napatech.com/reader/7hwAqHSO4_DITKXtasI8yw/rG9h2tZF9rMoVt8im5zbcQ
2020-08-03T20:54:25
CC-MAIN-2020-34
1596439735833.83
[]
docs.napatech.com
Learning More¶ Hopefully this User Guide gets you started with the basics of using Bokeh. Many more examples can be found in the Gallery on the main documentation site, as well as the Bokeh NBViewer Gallery. For questions and technical assistance, come join the Bokeh mailing list, or visit the Gitter chat channel. Visit the Bokeh GitHub repository and try the examples. Be sure to follow us on Twitter @bokehplots and YouTube!
https://docs.bokeh.org/en/0.12.11/docs/user_guide/info.html
2020-08-03T20:40:04
CC-MAIN-2020-34
1596439735833.83
[]
docs.bokeh.org
As a merchant, who uses Amazon MCF as a 3PL, in a world without Amazon MCF - NetSuite Integration App, you will manually: - Create orders in Amazon MCF (Your Amazon Seller Central account) to match the ones received in NetSuite. The procedure becomes more complex and cumbersome if you have multiple sales channels (Web Store/Marketplace), such as Shopify, Walmart, Magento, and eBay. - For each order, perform the most critical task of fulfillment tracking. - Update the canceled orders in Amazon MCF. The manual procedure is always prone to errors due to human intervention and is not ideal when you are receiving bulk orders from multiple sales channels. Amazon MCF - NetSuite Integration App is here for your rescue. It is a pre-built Integration App designed to streamline your backend logistics. The Integration App comes with pre-built data flows that syncs your orders, fulfillment, and cancellation data in an automated way. This helps your sales ecosystem to be in sync all the time. Eventually, the Integration App reduces the time and effort drastically as compared to the manual tracking of each sales order. Most importantly, it reduces the number of errors that occur due to human intervention during order processing. Amazon MCF - NetSuite Integration App flows: - NetSuite Order to Amazon (MCF) Order Add: syncs all the eligible order line items (pending fulfillment) from NetSuite to Amazon. This flow creates the order line items in Amazon so that they can be fulfilled using the Amazon Multi-Channel Fulfillment service. For each line item, the shipment information is automatically generated inside Amazon that Amazon has to fulfill. Note: If you attempt to create a fulfillment order for which you do not have sufficient inventory in Amazon's fulfillment network, the service returns an error. - Amazon (MCF) Shipment to NetSuite Fulfillment Add: syncs the shipment information from Amazon into NetSuite. This flow creates Item Fulfillments in NetSuite for the items to be fulfilled using Amazon MCF. - NetSuite Cancellation to Amazon (MCF) Cancellation Add: syncs all the canceled orders from NetSuite into Amazon. See Also: Please sign in to leave a comment.
https://docs.celigo.com/hc/en-us/articles/360013876191-Amazon-MCF-NetSuite-integration-app-overview
2020-08-03T20:31:10
CC-MAIN-2020-34
1596439735833.83
[array(['/hc/article_attachments/360012802592/AMZ_MCF_MANNUAL.png', 'AMZ_MCF_MANNUAL.png'], dtype=object) array(['/hc/article_attachments/360012802632/AMZ_MCF_CONNECTOR_OVERVIEW.png', 'AMZ_MCF_CONNECTOR_OVERVIEW.png'], dtype=object) array(['/hc/article_attachments/360012816452/AMZ_MCF_DATA_FLOW.png', 'AMZ_MCF_DATA_FLOW.png'], dtype=object) ]
docs.celigo.com
Examples The following example removes removeButton from the Control.ControlCollection if it is a member of the collection. Remarks This method enables you to determine whether a Control is a member of the collection before attempting to perform operations on the Control. You can use this method to confirm that a Control has been added to or is still a member of the collection.
https://docs.microsoft.com/en-us/dotnet/api/system.windows.forms.control.controlcollection.contains?view=netframework-4.8
2020-08-03T22:07:41
CC-MAIN-2020-34
1596439735833.83
[]
docs.microsoft.com
JIRA Software Configuring the JIRA Software enables you to create and update a JIRA Software ticket from an open Moogsoft AIOps Situation. You can enable auto-assign so new JIRA issues created from Moogsoft AIOps are automatically assigned to the logged in user. See JIRA Software Integration Workflow for more information. See the JIRA documentation for information on JIRA components. Before you begin The JIRA Software integration has been validated with JIRA Software v7 and JIRA Cloud. Before you start to set up your integration, ensure you have met the following requirements: You have the URL for your JIRA Software system. You have created an integration user in JIRA Software with access to the project where the system opens issues. You have the username (typically the email address) and password of the JIRA Software integration user. If you are using JIRA with Atlassian Cloud, their password needs to be an API token. For instructions on how to create an API token, see the Atlassian Cloud documentation. You have created a JIRA Software project using either the 'Bug Tracking' (formerly Basic Software Development), 'Scrum', or 'Kanban' project template. If you want to enable auto-assign, you have created user accounts with the same names in both Moogsoft AIOps and JIRA Software. Configure the JIRA Software integration To configure the JIRA Software integration: Navigate to the Integrations tab. Click JIRA Software in the Ticketing section. Follow the instructions to create an integration name and enter the other details relating to your JIRA instance. Configure JIRA Software Log in to JIRA to create the webhook to send event data. For more help, see theJIRA documentation. Open the JIRA site administration console and create a webhook. Add a name, set the status to 'Enabled' and enter the URL for this integration: Select only 'updated' issues and 'created' comments as your webhook events. After you complete the JIRA Software integration, you can right-click a Situation and select Open JIRA Issue from the contextual menu. Moogsoft AIOps maintains a link to the JIRA ticket and updates it with your comments and status changes. This integration prefixes JIRA tickets with 'Moogsoft AIOps Situation [number]'. Do not remove this prefix as it is needed to synchronize comments, status changes and descriptions.
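If you prefer to register the webhook programmatically instead of through the JIRA site administration console, a rough sketch using JIRA's webhook REST API is shown below. The JIRA base URL, credentials, the Moogsoft integration URL, and the exact event identifiers are placeholders/assumptions and can differ between JIRA Server and JIRA Cloud.
import requests

JIRA_BASE = "https://jira.example.com"                                   # placeholder JIRA host
MOOGSOFT_WEBHOOK_URL = "https://<your-moogsoft-host>/<integration-path>" # placeholder integration URL

payload = {
    "name": "Moogsoft AIOps",
    "url": MOOGSOFT_WEBHOOK_URL,
    # Only 'updated' issues and 'created' comments, as described above.
    "events": ["jira:issue_updated", "comment_created"],
    "excludeBody": False,
}

resp = requests.post(
    JIRA_BASE + "/rest/webhooks/1.0/webhook",
    json=payload,
    auth=("integration-user@example.com", "<api-token-or-password>"),
    timeout=30,
)
resp.raise_for_status()
print("Webhook registered:", resp.json().get("self"))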
https://docs.moogsoft.com/AIOps.7.3.0/jira-software.html
2020-08-03T20:25:19
CC-MAIN-2020-34
1596439735833.83
[]
docs.moogsoft.com
The SynoISCSIDriver volume driver allows Synology NAS to be used for Block Storage (cinder) in OpenStack deployments. Information on OpenStack Block Storage volumes is available in the DSM Storage Manager. The Synology driver has the following requirements: Note The DSM driver is available in the OpenStack Newton release. Edit the /etc/cinder/cinder.conf file on your volume driver host. Synology driver uses a volume in Synology NAS as the back end of Block Storage. Every time you create a new Block Storage volume, the system will create an advanced file LUN in your Synology volume to be used for this new Block Storage volume. The following example shows how to use different Synology NAS servers as the back end. If you want to use all volumes on your Synology NAS, add another section with the volume number to differentiate between volumes within the same Synology NAS. [default] enabled_backends = ds1515pV1, ds1515pV2, rs3017xsV3, others [ds1515pV1] # configuration for volume 1 in DS1515+ [ds1515pV2] # configuration for volume 2 in DS1515+ [rs3017xsV1] # configuration for volume 1 in RS3017xs Each section indicates the volume number and the way in which the connection is established. Below is an example of a basic configuration: [Your_Section_Name] # Required settings volume_driver = cinder.volume.drivers.synology.synology_iscsi.SynoISCSIDriver iscs_protocol = iscsi iscsi_ip_address = DS_IP synology_admin_port = DS_PORT synology_username = DS_USER synology_password = DS_PW synology_pool_name = DS_VOLUME # Optional settings volume_backend_name = VOLUME_BACKEND_NAME iscsi_secondary_ip_addresses = IP_ADDRESSES driver_use_ssl = True use_chap_auth = True chap_username = CHAP_USER_NAME chap_password = CHAP_PASSWORD DS_PORT driver_use_ssl = True. DS_IP DS_USER DS_PW DS_USER. DS_VOLUME volume[0-9]+, and the number is the same as the volume number in DSM. Note If you set driver_use_ssl as True, synology_admin_port must be an HTTPS port. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.
https://docs.openstack.org/newton/config-reference/block-storage/drivers/synology-dsm-driver.html
2022-06-25T14:22:33
CC-MAIN-2022-27
1656103035636.10
[]
docs.openstack.org
There is very little information about this function so I thought I'd add a few notes I found while trying to get this working. First make sure your version of PHP is above 4.3.2 I spent an hour searching goggles 13000+ mirrors of this same page and finally found the info I needed at AltaVista, there is a bug in PHP 4.3.2 that makes this none functional. if your creating the base image you need to create it with imageCreateTrueColor() if your using a PNG with transparency, I found even nullifying the PNG's transparency with GD doesn't work. the tiling PNG has to be created without transparency to work with imageCreate(). but from what I've seen imageCreateFromXXX() can use transparent and nonetransparent PNG's. here is an example. <?php $diagramWidth = 300; $diagramHeight = 50; $image = imageCreateTrueColor ($diagramWidth, $diagramHeight); $imagebg = imageCreateFromPNG ('tile.png'); // transparent PNG imageSetTile ($image, $imagebg); imageFilledRectangle ($image, 0, 0, $diagramWidth, $diagramHeight, IMG_COLOR_TILED); $textcolor1 = imageColorAllocate ($image, 80, 80, 80); $textcolor2 = imageColorAllocate ($image, 255, 255, 255); imageString ($image, 3, 10, 20, 'Transparent PNG Tile Test...', $textcolor1); imageString ($image, 3, 9, 19, 'Transparent PNG Tile Test...', $textcolor2); Header("Content-type: image/png"); imagePNG ($image); imagedestroy ($image); imagedestroy ($imagebg); ?> hope this helps someone else! Aquilo
http://docs.php.net/manual/uk/function.imagesettile.php
2022-06-25T13:09:42
CC-MAIN-2022-27
1656103035636.10
[]
docs.php.net
User Principal. Get Authorization Groups Method Definition Important Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here. Returns a collection of principal objects that contains all the authorization groups of which this user is a member. This function only returns groups that are security groups; distribution groups are not returned. public: System::DirectoryServices::AccountManagement::PrincipalSearchResult<System::DirectoryServices::AccountManagement::Principal ^> ^ GetAuthorizationGroups(); public System.DirectoryServices.AccountManagement.PrincipalSearchResult<System.DirectoryServices.AccountManagement.Principal> GetAuthorizationGroups (); member this.GetAuthorizationGroups : unit -> System.DirectoryServices.AccountManagement.PrincipalSearchResult<System.DirectoryServices.AccountManagement.Principal> Public Function GetAuthorizationGroups () As PrincipalSearchResult(Of Principal) Returns A collection of Principal objects that contain the groups of which the user is a member, or null if the user does not belong to any groups. Exceptions The attempt to retrieve authorization groups failed. The retrieval of authorization groups is not supported by this operating system. Remarks This method searches all groups recursively and returns the groups in which the user is a member. The returned set may also include additional groups that system would consider the user a member of for authorization purposes. The groups that are returned by this method may include groups from a different scope and store than the principal. For example, if the principal is an AD DS object that has a DN of "CN=SpecialGroups,DC=Fabrikam,DC=com, the returned set can contain groups that belong to the "CN=NormalGroups,DC=Fabrikam,DC=com.
https://docs.microsoft.com/en-us/dotnet/api/system.directoryservices.accountmanagement.userprincipal.getauthorizationgroups?view=netframework-4.8
2022-06-25T13:08:40
CC-MAIN-2022-27
1656103035636.10
[]
docs.microsoft.com
Projects Supported Project Types One of the main goals of Projectile is to operate on a wide range of project types without the need for any configuration. To achieve this it contains a lot of project detection logic and project type specific logic. Broadly speaking, Projectile identifies projects like this: Directories that contain the special .projectilefile Directories under version control (e.g. a Git repo) Directories that contain some project description file (e.g. a Gemfilefor Ruby projects or pom.xmlfor Java maven-based projects) While Projectile aims to recognize most project types out-of-the-box, it’s also extremely flexible configuration-wise, and you can easily alter the project detection") :project-file "package.json" :compile "npm install" :test "npm test" :run "npm start" :test-suffix ".spec") What this does is: add your own type of project, in this case npmpackage. add a list of files and/or folders in a root of the project that helps to identify the type, in this case it is only package.json. add project-file, which is typically the primary project configuration file. In this case that. Let’s see a couple of more complex examples. ;; Ruby + RSpec (projectile-register-project-type 'ruby-rspec '("Gemfile" "lib" "spec") :project-file "Gemfile" :compile "bundle exec rake" :src-dir "lib/" :test "bundle exec rspec" :test-dir "spec/" :test-suffix "_spec") ;; Ruby + Minitest (projectile-register-project-type 'ruby-test '("Gemfile" "lib" "test") :project-file "Gemfile" :compile"bundle exec rake" :src-dir "lib/" :test "bundle exec rake test" :test-suffix "_test") ;; Rails + Minitest (projectile-register-project-type 'rails-test '("Gemfile" "app" "lib" "db" "config" "test") :project-file "Gemfile" :compile "bundle exec rails server" :src-dir "lib/" :test "bundle exec rake test" :test-suffix "_test") ;; Rails + RSpec (projectile-register-project-type 'rails-rspec '("Gemfile" "app" "lib" "db" "config" "spec") :project-file "Gemfile" :compile "bundle exec rails server" :src-dir "lib/" :test "bundle exec rspec" :test-dir "spec/" :test-suffix "_spec") All those projects are using Gemfile ( bundler's project file), but they have different directory structures. Bellow is a listing of all the available options for projectile-register-project-type: The :test-prefix and :test-suffix will work regardless of file extension or directory path should and be enough for simple projects. The projectile-other-file-alist variable can also be set to find other files based on the extension. For fine-grained control of implementation/test toggling, the :test-dir option of a project may take a function of one parameter (the implementation directory absolute path) and return the directory of the test file. This in conjunction with the options :test-prefix and :test-suffix will then be used to determine the full path of the test file. This option will always be respected if it is set. Similarly, the :src-dir option, the analogue of :test-dir, may also take a function and exhibits exactly the same behaviour as above except that its parameter corresponds to the directory of a test file and it should return the directory of the corresponding implementation file. It’s recommended that either both or neither of these options are set to functions for consistent behaviour. Alternatively, for flexible file switching across a range of projects, the :related-files-fn option set to a custom function or a list of custom functions can be used. 
The custom function accepts the relative file name from the project root and it should return related file information as a plist with the following optional key/value pairs: For each value, following type can be used: Notes: For a big project consisting of many source files, returning strings instead of a function can be fast as it does not iterate over each source file. There is a difference in behaviour between no key and nilvalue for the key. Only when the key does not exist, other project options such as :test_prefixor projectile-other-file-alistmechanism is tried. If the :test-diroption is set to a function, this will take precedence over any value for :related-files-fnset when projectile-toggle-between-implementation-and-testis called.) Editing Existing Project Types You can also edit specific options of already existing project types: (projectile-update-project-type 'sbt :related-files-fn (list (projectile-related-files-fn-test-with-suffix "scala" "Spec") (projectile-related-files-fn-test-with-suffix "scala" "Test"))) This will keep all existing options for the sbt project type, but change the value of the related-files-fn option. :test-dir/ :src-dir vs :related-files-fn Whilst setting the :test-dir and :src-dir to strings is sufficient for most purposes, using functions can give more flexibility. As an example consider (also using f.el): (defun my-get-python-test-file (impl-file-path) "Return the corresponding test file directory for IMPL-FILE-PATH" (let* ((rel-path (f-relative impl-file-path (projectile-project-root))) (src-dir (car (f-split rel-path)))) (cond ((f-exists-p (f-join (projectile-project-root) "test")) (projectile-complementary-dir impl-file-path src-dir "test")) ((f-exists-p (f-join (projectile-project-root) "tests")) (projectile-complementary-dir impl-file-path src-dir "tests")) (t (error "Could not locate a test file for %s!" impl-file-path))))) (defun my-get-python-impl-file (test-file-path) "Return the corresponding impl file directory for TEST-FILE-PATH" (if-let* ((root (projectile-project-root)) (rel-path (f-relative test-file-path root)) (src-dir-guesses `(,(f-base root) ,(downcase (f-base root)) "src")) (src-dir (cl-find-if (lambda (d) (f-exists-p (f-join root d))) src-dir-guesses))) (projectile-complementary-dir test-file-path "tests?" src-dir) (error "Could not locate a impl file for %s!" test-file-path))) (projectile-update-project-type 'python-pkg :src-dir #'my-get-python-impl-dir :test-dir #'my-get-python-test-dir) This attempts to recognise projects using both test and tests as top level directories for test files. An alternative using the related-files-fn option could be: (projectile-update-project-type 'python-pkg :related-files-fn (list (projectile-related-files-fn-test-with-suffix "py" "_test") (projectile-related-files-fn-test-with-prefix "py" "test_"))) In fact this is a lot more flexible in terms of finding test files in different locations, but will not create test files for you. Customizing Project Detection Project detection is pretty simple - Projectile just runs a list of project detection functions ( projectile-project-root-functions) until one of them returns a project directory. This list of functions is customizable, and while Projectile has some defaults for it, you can tweak it however you see fit. 
Let’s take a closer look at projectile-project-root-functions: (defcustom projectile-project-root-functions '(projectile-root-local projectile-root-bottom-up projectile-root-top-down projectile-root-top-down-recurring) "A list of functions for finding project roots." :group 'projectile :type '(repeat function)) The important thing to note here is that the functions get invoked in their order on the list, so the functions earlier in the list will have a higher precedence with respect to project detection. Let’s examine the defaults: projectile-root-locallooks for project path set via the buffer-local variable projectile-project-root. Typically you’d set this variable via .dir-locals.eland it will take precedence over everything else. projectile-root-bottom-upwill start looking for a project marker file/folder(e.g. .projectile, .hg, .git) from the current folder (a.k.a. default-directoryin Emacs lingo) up the directory tree. It will return the first match it discovers. The assumption is pretty simple - the root marker appear only once, at the root folder of a project. If a root marker appear in several nested folders (e.g. you’ve got nested git projects), the bottom-most (closest to the current dir) match has precedence. You can customize the root markers recognized by this function via projectile-project-root-files-bottom-up projectile-root-top-downis similar, but it will return the top-most (farthest from the current directory) match. It’s configurable via projectile-project-root-filesand all project manifest markers like pom.xml, Gemfile, project.clj, etc go there. projectile-root-top-down-recurringwill look for project markers that can appear at every level of a project (e.g. Makefileor .svn) and will return the top-most match for those. The default ordering should work well for most people, but depending on the structure of your project you might want to tweak it. Re-ordering those functions will alter the project detection, but you can also replace the list. Here’s how you can delegate the project detection to Emacs’s built-in function vc-root-dir: ;; we need this wrapper to match Projectile's API (defun projectile-vc-root-dir (dir) "Retrieve the root directory of the project at DIR using `vc-root-dir'." (let ((default-directory dir)) (vc-root-dir))) (setq projectile-project-root-functions '(projectile-vc-root-dir)) Similarly, you can leverage the built-in project.el like this: ;; we need this wrapper to match Projectile's API (defun projectile-project-current (dir) "Retrieve the root directory of the project at DIR using `project-current'." (cdr (project-current nil dir))) (setq projectile-project-root-functions '(projectile-project-current)) Ignoring files. If you would like to include comment lines in your .projectile file, you can customize the variable projectile-dirconfig-comment-prefix. Assigning it a non-nil character value, e.g. #, will cause lines in the .projectile file starting with that character to be treated as")))) By default, compilation buffers are not writable, which allows you to e.g. press g to restart the last command. Setting projectile-<cmd>-use-comint-mode (where <cmd> is configure, compile, test, install, package, or run) to a non-nil value allows you to make projectile compilation buffers interactive, letting you e.g. test a command-line program with projectile-run-project. (setq projectile-comint-mode t) Configure a Project’s Lifecycle Commands There are a few variables that are intended to be customized via .dir-locals.el. 
for configuration - projectile-project-configure-cmd for compilation - projectile-project-compilation-cmd for testing - projectile-project-test-cmd for installation - projectile-project-install-cmd for packaging - projectile-project-package)
https://docs.projectile.mx/projectile/projects.html
2022-06-25T13:23:36
CC-MAIN-2022-27
1656103035636.10
[]
docs.projectile.mx
Introduction A button is a clickable icon that opens an email contact form as a popup window on top of your Simplebooklet. You can set the email address that receives the completed contact form. Available on all plans. Benefits The email button is a great way to make it fast and convenient for a customer to reach out and contact you with any questions they may have. Walkthrough Here is a two-minute video showing you how to create an email button: Step-by-Step Guide Below is a walkthrough showing you how to create an email button: Click EDIT. Tap Buttons from the toolbar on the left. Select Email. Write the email address you would like your customers to contact in the blank space under Email address that will receive messages. Write a personalized message that will appear in your email popup under Greeting to Your Customer. Click SAVE. Click and drag your button to the position on your page where you want it to appear. You now have an email button on your Simplebooklet! Updated on: 02 / 02 / 2022
https://docs.simplebooklet.com/en/article/button-email-dvgxs8/
2022-06-25T14:16:26
CC-MAIN-2022-27
1656103035636.10
[]
docs.simplebooklet.com
Disable Password Authentication (SaaS)

Sysdig Platform supports disabling password-based authentication on both SaaS and on-prem deployments. As an administrator (super administrator for on-prem), you can use either the Authentication option in the UI or the API to achieve this. This configuration applies to those who use single sign-on. For on-prem environments, see Disable Password Authentication.

Using the UI

You can use the UI to disable password authentication only for the SAML and OpenID authentication methods. For Google OAuth, use the API method described below.

As an administrator, perform the following:

- Log in to Sysdig Monitor or Sysdig Secure as administrator and select Settings.
- Click Authentication.
- Choose your authentication method. Disabling password authentication through the UI is not supported for Google OAuth.
- Use the "Disable username and password login" slider to turn off password authentication.
- Click Save to save the settings.

Using the API

As an administrator, perform the following:

- Get the Sysdig Platform settings. See SaaS Regions and IP Ranges and identify the correct domain URL associated with your Sysdig application and region. For example, for Sysdig Monitor on US East: GET
  For other regions, the format is https://<region>.app.sysdig.com/api/auth/settings. Replace <region> with the region where your Sysdig application is hosted. For example, for Sysdig Monitor in the EU, you use.
- Find the ID of the active SSO setup: GET
- Retrieve the specific settings associated with the SSO setup: GET
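The concrete example URLs did not survive extraction, so here is a hedged sketch of what the first call might look like with curl, based only on the endpoint format the page does give; the Bearer-token header and the SYSDIG_API_TOKEN variable are assumptions about common Sysdig API usage, not details stated on this page:

curl -s -H "Authorization: Bearer $SYSDIG_API_TOKEN" \
  "https://<region>.app.sysdig.com/api/auth/settings"

Replace <region> as described above; the follow-up calls for the SSO setup ID and its settings would follow the same pattern against the endpoints this page references.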
https://docs.sysdig.com/en/docs/administration/administration-settings/authentication-and-authorization-saas/disable-password-authentication-saas/
2022-06-25T13:03:08
CC-MAIN-2022-27
1656103035636.10
[]
docs.sysdig.com
GameCenterPlatform

class in UnityEngine.SocialPlatforms.GameCenter / Implemented in: UnityEngine.GameCenterModule
Implements interfaces: ISocialPlatform

iOS GameCenter implementation for network services.

An application bundle ID must be registered on iTunes Connect before it can access GameCenter. This ID must be properly set in the iOS player properties in Unity. When debugging you can use the GameCenter sandbox (a text displaying this is shown when logging on). You must log on in the application to get into sandbox mode; logging on in the GameCenter application will always use the production version.

When using the GameCenterPlatform class in C# you need to include the UnityEngine.SocialPlatforms.GameCenter namespace.

Some things to be aware of when using the generic API:

Authenticate()

If the user is not logged in, a standard GameKit UI is shown where they can log on or create a new user. It is recommended this is done as early as possible.

Achievement descriptions and Leaderboards

The achievement descriptions and leaderboard configurations can be configured in the iTunes Connect portal. Achievements get unique identifiers and the leaderboards use category names as identifiers.

GameCenter Sandbox

Development applications use the GameCenter Sandbox. This is a separate GameCenter from the live one; nothing is shared between them. It is recommended that you create a separate user for testing with the GameCenter Sandbox; you should not use your real Apple ID for this. If you are not using the sandbox, you will be logged on to the real GameCenter. If the application has not been submitted to Apple already then this will probably result in an error. To fix this, all that needs to be done is to delete the app and redeploy with Xcode. To make another Apple ID a friend of a sandbox user, it needs to be a sandbox user as well. If you start getting errors when accessing GameCenter stating that the application is not recognized, you'll need to delete the application completely and re-deploy. Make sure you are not logged on when starting the newly installed application again.
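Since the page recommends calling Authenticate() as early as possible, here is a minimal, hedged C# sketch of doing that through Unity's generic Social API; the component name and log messages are illustrative, and it assumes GameCenter is the active social platform on iOS (the default):

using UnityEngine;
using UnityEngine.SocialPlatforms.GameCenter;

public class GameCenterBootstrap : MonoBehaviour
{
    void Start()
    {
        // Shows the standard GameKit login UI if the user is not already logged in.
        Social.localUser.Authenticate(success =>
        {
            Debug.Log(success ? "Game Center: authenticated" : "Game Center: authentication failed");
        });
    }
}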
https://docs.unity3d.com/ScriptReference/SocialPlatforms.GameCenter.GameCenterPlatform.html
2022-06-25T13:46:48
CC-MAIN-2022-27
1656103035636.10
[]
docs.unity3d.com
Sunless Shadows

An Iranian correctional facility for teenage girls convicted of crimes against their attackers: their fathers, husbands, and brothers.

In his latest film, Mehrdad Oskouei, one of the most outstanding documentary filmmakers of our time, takes on the topic of violence against women. The protagonists in Sunless Shadows are teenagers serving prison sentences in a correctional facility in Iran for crimes against their attackers: their fathers, husbands, and brothers. Oskouei observes with terrifying precision the accursed cycle of systemic oppression that is passed down from generation to generation, the only reprieve from which, paradoxically, is a prison term. The director masterfully shows the everyday life of adolescent girls—filled with sadness, but also with the joy and hope that come from sisterhood. Sunless Shadows is undoubtedly one of the most universal portraits of our time, as both viewers and juries have claimed at many of the world’s leading film festivals.

Tadeusz Strączek

Zagrebdox 2020 – Jury’s Special Mention
CPH:DOX 2020
DOK.fest München 2020
DokuFest 2020
IDFA 2019 – Best Director Award
https://watchdocs.pl/en/watch-docs/2020/filmy/cienie-bez-slonca,7798
2022-06-25T13:03:28
CC-MAIN-2022-27
1656103035636.10
[array(['/upload/thumb/2020/11/cienie-duz-y_auto_800x900.png', 'Sunless Shadows'], dtype=object) array(['/upload/2020/12/image955.png', None], dtype=object)]
watchdocs.pl
google.cloud.gcp_compute_firewall_info module – Gather info for GCP Firewall

Synopsis

Gather info for GCP Firewall.

Requirements

The below requirements are needed on the host that executes this module.

- python >= 2.6
- requests >= 2.18.4
- google-auth >= 1.3.0

Parameters

Notes

Note:

Examples

- name: get info on a firewall
  gcp_compute_firewall_info:
    filters:
    - name = test_object
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"

Return Values

Common return values are documented here; the following are the fields unique to this module:

Collection links

- Homepage
- Repository (Sources)
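The example above only gathers the data; a natural follow-up is to register the result and use it later in the play. The sketch below is an illustrative assumption rather than part of the module documentation: the registered variable name is arbitrary, and it assumes the module returns its matches under a resources key, as the gcp_*_info modules generally do:

- name: get info on a firewall
  google.cloud.gcp_compute_firewall_info:
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
  register: firewall_info

- name: show the names of the returned firewalls
  ansible.builtin.debug:
    msg: "{{ firewall_info.resources | map(attribute='name') | list }}"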
https://docs.ansible.com/ansible/latest/collections/google/cloud/gcp_compute_firewall_info_module.html
2022-06-25T14:33:43
CC-MAIN-2022-27
1656103035636.10
[]
docs.ansible.com
Aspect Bracket Handler

The Aspect Bracket Handler allows you to retrieve a Thaumcraft Aspect Stack in case you need one.

Aspects are referenced in the Aspect Bracket Handler this way:

<aspect:name>
<aspect:ignis>

If the Aspect is found, this will return a CTAspectStack object with a stack size of 1. Please refer to the respective Wiki entry for further information on what you can do with these.
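As a quick illustration of using the returned stack in a script, here is a hedged ZenScript sketch; the use of the * operator to change the amount mirrors how other CraftTweaker stacks are scaled and is an assumption rather than something stated on this page:

// a single unit of the Ignis aspect (stack size 1)
val fire = <aspect:ignis>;

// assumed: scale the aspect amount the same way item stacks are scaled
val moreFire = <aspect:ignis> * 4;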
https://docs.blamejared.com/1.12/en/Mods/Modtweaker/Thaumcraft/Brackets/Bracket_Aspect
2022-06-25T14:20:32
CC-MAIN-2022-27
1656103035636.10
[]
docs.blamejared.com
conan source

$ conan source [-h] [-sf SOURCE_FOLDER] [-if INSTALL_FOLDER] path

Calls your local conanfile.py 'source()' method. Usually downloads and uncompresses the package sources.

positional arguments:
  path                  Path to a folder containing a conanfile.py or to a recipe file, e.g. my_folder/conanfile.py

optional arguments:
  -h, --help            show this help message and exit
  -sf SOURCE_FOLDER, --source-folder SOURCE_FOLDER
                        Destination directory. Defaulted to current directory
  -if INSTALL_FOLDER, --install-folder INSTALL_FOLDER
                        Directory containing the conaninfo.txt and conanbuildinfo.txt files (from a previous 'conan install'). Defaulted to --build-folder. Optional: the source method will run without the information retrieved from conaninfo.txt and conanbuildinfo.txt, which is only required when using a conditional source() based on settings, options, env_info and user_info

The source() method might use the (optional) --install-folder.

Examples:

Call a local recipe's source method. In user space, the command will execute a local conanfile.py source() method, in the src folder in the current directory:

$ conan new lib/1.0@conan/stable
$ conan source . --source-folder mysrc

In case you need the settings/options or any info from the requirements, perform first an install:

$ conan install . --install-folder mybuild
$ conan source . --source-folder mysrc --install-folder mybuild
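For context, here is a minimal, hypothetical conanfile.py whose source() method this command would run; the download URL is an illustrative placeholder, not something taken from the Conan documentation:

from conans import ConanFile, tools

class LibConan(ConanFile):
    name = "lib"
    version = "1.0"

    def source(self):
        # Download and unpack the upstream sources into the source folder.
        # The URL below is a placeholder for your project's real source archive.
        tools.get("https://example.com/lib-1.0.tar.gz")

Running "conan source . --source-folder mysrc" against such a recipe would execute this method with mysrc as the working directory.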
https://docs.conan.io/en/1.31/reference/commands/development/source.html
2022-06-25T14:29:22
CC-MAIN-2022-27
1656103035636.10
[]
docs.conan.io
How to host and share Data Docs on Azure Blob Storage

This guide will explain how to host and share Data Docs on Azure Blob Storage. Data Docs will be served using an Azure Blob Storage static website with restricted access.

Prerequisites: This how-to guide assumes you have already:

- Set up a working deployment of Great Expectations
- Have permission to create and configure an Azure storage account

Steps

1. Create an Azure Blob Storage static website.

   - Create a storage account.
   - In Settings, select Static website to display the configuration page for static websites.
   - Select Enabled to enable static website hosting for the storage account.
   - Write "index.html" in Index document.
   - Note the Primary endpoint URL. Your team will be able to view your Data Docs at this URL when you have finished this guide. You could also map a custom domain to this endpoint.

   A container called $web should have been created in your storage account.

2. Configure the config_variables.yml file with your Azure storage credentials.

   Get the connection string of the storage account you have just created. We recommend that Azure storage credentials be stored in the config_variables.yml file, which is located in the uncommitted/ folder by default and is not part of source control. The following line adds Azure storage credentials under the key AZURE_STORAGE_CONNECTION_STRING. Additional options for configuring the config_variables.yml file or additional environment variables can be found here.

   AZURE_STORAGE_CONNECTION_STRING: "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=<YOUR-STORAGE-ACCOUNT-NAME>;AccountKey=<YOUR-STORAGE-ACCOUNT-KEY==>"

3. Add a new Azure site to the data_docs_sites section of your great_expectations.yml.

   data_docs_sites:
     local_site:
       class_name: SiteBuilder
       show_how_to_buttons: true
       store_backend:
         class_name: TupleFilesystemStoreBackend
         base_directory: uncommitted/data_docs/local_site/
       site_index_builder:
         class_name: DefaultSiteIndexBuilder
     az_site:  # this is a user-selected name - you may select your own
       class_name: SiteBuilder
       store_backend:
         class_name: TupleAzureBlobStoreBackend
         container: \$web
         connection_string: ${AZURE_STORAGE_WEB_CONNECTION_STRING}
       site_index_builder:
         class_name: DefaultSiteIndexBuilder

   You may also replace the default local_site if you would only like to maintain a single Azure Data Docs site.

   Note: Since the container is called $web, if we simply set container: $web in great_expectations.yml then Great Expectations would try, and fail, to find a variable called web in config_variables.yml. We use an escape character \ before the $ so the substitute_config_variable method will allow us to reach the $web container.

   You may also configure Great Expectations to store your expectations and validations in this Azure storage account. You can follow the documentation from the guides for expectations and validations, but ensure you set container: \$web in place of any other container name.

4. Build the Azure Blob Data Docs site.

   You can create or modify a suite and this will build the Data Docs website. Or you can use the following CLI command: great_expectations docs build --site-name az_site.

   > great_expectations docs build --site-name az_site

   The following Data Docs sites will be built:
   - az_site: https://<your-storage-account>.blob.core.windows.net/$web/index.html
   Would you like to proceed? [Y/n]: y
   Building Data Docs...
   Done building Data Docs

   If successful, the CLI will provide the object URL of the index page.
You may secure access to the website using an IP filtering mechanism, for example by limiting access to your company's network.
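If you prefer to trigger the build from code rather than the CLI, the following is a hedged sketch using the DataContext API from this (legacy) version of Great Expectations; it assumes the az_site name configured above and that it is run from the directory containing your great_expectations/ folder:

import great_expectations as ge

context = ge.data_context.DataContext()
# Build only the Azure-hosted site configured in great_expectations.yml.
context.build_data_docs(site_names=["az_site"])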
https://legacy.docs.greatexpectations.io/en/latest/guides/how_to_guides/configuring_data_docs/how_to_host_and_share_data_docs_on_azure_blob_storage.html
2022-06-25T13:23:58
CC-MAIN-2022-27
1656103035636.10
[]
legacy.docs.greatexpectations.io
Optional: Customize your deployment

At this point, you have your first, working local deployment of Great Expectations. You’ve also been introduced to the foundational concepts in the library: Data Contexts, Datasources, Expectations, Profilers, Data Docs, Validation, and Checkpoints. Congratulations! You’re off to a very good start.

The next step is to customize your deployment by upgrading specific components of your deployment. Data Contexts make this modular, so that you can add or swap out one component at a time. Most of these changes are quick, incremental steps—so you can upgrade from a basic demo deployment to a full production deployment at your own pace and be confident that your Data Context will continue to work at every step along the way.

This last section of this tutorial is designed to present you with clear options for upgrading your deployment. For specific implementation steps, please check out the linked how-to guides.

Components

Here’s an overview of the components of a typical Great Expectations deployment:

- Great Expectations configs and metadata
- Integrations to related systems

Options for storing Great Expectations configuration

The simplest way to manage your Great Expectations configuration is usually by committing great_expectations/great_expectations.yml to git. However, it’s not usually a good idea to commit credentials to source control. In some situations, you might need to deploy without access to source control (or maybe even a file system). Here’s how to handle each of those cases:

Options for storing Expectations

Many teams find it convenient to store Expectations in git. Essentially, this approach treats Expectations like test fixtures: they live adjacent to code and are stored within version control. git acts as a collaboration tool and source of record.

Alternatively, you can treat Expectations like configs, and store them in a blob store. Finally, you can store them in a database.

Options for storing Validation Results

By default, Validation Results are stored locally, in an uncommitted directory. This is great for individual work, but not good for collaboration. The most common pattern is to use a cloud-based blob store such as S3, GCS, or Azure blob store. You can also store Validation Results in a database.

Options for customizing generated notebooks

Great Expectations generates and provides notebooks as interactive development environments for expectation suites. You might want to customize parts of the notebooks to add company-specific documentation, or change the code sections to suit your use-cases.

Additional Datasources

Great Expectations plugs into a wide variety of Datasources, and the list is constantly getting longer. If you have an idea for a Datasource not listed here, please speak up in the public discussion forum.

- How to configure a Pandas/filesystem Datasource
- How to configure a Pandas/S3 Datasource
- How to configure a Redshift Datasource
- How to configure a Snowflake Datasource
- How to configure a BigQuery Datasource
- How to configure a Databricks Azure Datasource
- How to configure an EMR Spark Datasource
- How to configure a Databricks AWS Datasource
- How to configure a self managed Spark Datasource

Options for hosting Data Docs

By default, Data Docs are stored locally, in an uncommitted directory. This is great for individual work, but not good for collaboration. A better pattern is usually to deploy to a cloud-based blob store (S3, GCS, or Azure blob store) configured to share a static website, as in the sketch below.
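To make that concrete, here is a hedged sketch of what an S3-hosted Data Docs site could look like in great_expectations.yml; the site name and bucket are illustrative assumptions, and the exact options available depend on your version of Great Expectations:

data_docs_sites:
  s3_site:                          # illustrative site name
    class_name: SiteBuilder
    store_backend:
      class_name: TupleS3StoreBackend
      bucket: my-data-docs-bucket   # assumed bucket name
    site_index_builder:
      class_name: DefaultSiteIndexBuilder

See the corresponding how-to guide for the bucket configuration needed to serve it as a static website.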
Additional Validation Operators and Actions

Most teams will want to configure various Validation Actions as part of their deployment.

- How to update Data Docs as a Validation Action
- How to store Validation Results as a Validation Action
- How to trigger Slack notifications as a Validation Action
- How to trigger Email as a Validation Action

If you also want to modify your Validation Operator, you can learn how here:

Options for triggering Validation

There are two primary patterns for deploying Checkpoints. Sometimes Checkpoints are executed during data processing (e.g. as a task within Airflow). From this vantage point, they can control program flow. Sometimes Checkpoints are executed against materialized data. Great Expectations supports both patterns (see the sketch below).

There are also some rare instances where you may want to validate data without using a Checkpoint.
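As a concrete illustration of the first pattern, here is a hedged sketch of running a Checkpoint from a pipeline task and failing the task when validation fails; the checkpoint name is a placeholder and the exact shape of the result object may differ slightly between Great Expectations versions:

import great_expectations as ge

context = ge.data_context.DataContext()
result = context.run_checkpoint(checkpoint_name="my_checkpoint")  # placeholder name

if not result.success:
    # Stop the surrounding pipeline task (e.g. fail the Airflow task) when validation fails.
    raise RuntimeError("Great Expectations validation failed")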
https://legacy.docs.greatexpectations.io/en/latest/guides/tutorials/getting_started/customize_your_deployment.html
2022-06-25T13:25:56
CC-MAIN-2022-27
1656103035636.10
[]
legacy.docs.greatexpectations.io