content: string, length 0 to 557k
url: string, length 16 to 1.78k
timestamp: timestamp[ms]
dump: string, length 9 to 15
segment: string, length 13 to 17
image_urls: string, length 2 to 55.5k
netloc: string, length 7 to 77
Lessons From and For Hiring Managers: - Have a strategy for getting more referrals, whether that is actively asking employees or having an overly generous referral bonus. Referrals are probably your top source of candidates, so figuring out how to get more will save you time and money in the end. Then do the same for your other best recruiting channels. - Reach passive candidates. - Recruiting for diversity is a focus for hiring managers. Put together a concrete plan for getting more diverse candidates and meeting your diversity goals. Engaging with local developer communities and nonprofits and encouraging referrals worked well for the companies we talked to. - Get more signal in your application process by adding a step such as a cover letter, a quiz, or simply a couple of questions folks have to answer. Making candidates spend even one extra minute on their application can save you hours filtering resumes. Why is Read the Docs tackling hiring? Hasn't that been done? While we didn't initially think about Read the Docs ads for recruiting, some unexpected past successes got us interested. Hiring managers told us it takes roughly 50-100 hours of engineering time to make a hire. No huge revelation here, but companies make a massive investment in hiring. Top hiring channels: It was no big surprise to hear that referrals were the top hiring channel. Reaching people who aren't looking is critical: Joel Spolsky had it right when he said top developers "barely ever apply for jobs at all. That's because they already have jobs." Recruiters can be worth it. Full disclosure: Triplebyte is an advertiser on Read the Docs. We're fans. Diversity and inclusivity in hiring: Of the companies with hard diversity goals, most told us that their strategy entailed syndicating their job openings to diversity-focused job sites or, in some cases, engaging directly with relevant interest groups. In terms of efficacy, reviews of these various job boards were mixed. On the more positive side, hiring managers cited Girl Develop It, Black Girls Code, and PyLadies as fantastic channels both for increasing candidate diversity and for high-quality applicants. Two companies also talked about trying to decrease bias in their job postings using Textio and similar services. Remember, sourcing is the bigger problem: Aline Lerner, founder of interviewing.io, said it best: "Engineering hiring isn't a filtering problem. It's a sourcing problem." The reason we heard about filtering so much is that it is a very time-consuming task that hiring managers deal with directly. The real problem here is that the number of applicants to a job post is a vanity metric. Thanks: get in touch. Check back soon for our next post in this series, which covers tips for candidates based on the same interviews! Update: This blog was updated to mention our new post in the series. Ready to hire your next developer, fast? Get in front of passive candidates already using your tech by promoting your job openings with Read the Docs.
https://blog.readthedocs.com/lessons-from-hiring-manager-interviews/
2021-05-06T06:45:52
CC-MAIN-2021-21
1620243988741.20
[]
blog.readthedocs.com
Purging deleted entities You can use Atlas REST API calls to remove entities from Atlas. Only entities that have been deleted in the source system and marked as deleted in Atlas can be purged. When a data asset is deleted, such as after a DROP TABLE command in Hive, Atlas continues to retain the asset's entity, including metadata, lineage, and audit record. The status of the entity is set to "deleted"; deleted entities show up in search results when the checkbox to Show historical entities is checked. Deleted entities appear dimmed-out in the lineage graph. In some cases, it may be appropriate to remove entities for deleted assets from Atlas. For example, in a development or test environment, you may choose to clean out specific entities rather than clearing the entire Atlas database. Be careful not to purge entities in a production environment without understanding the impact of removing entities on compliance processes in your organization. Deleted entities can be removed completely from Atlas by using the REST API call PUT /admin/purge. - The entity is removed from Atlas. - Related, dependent entities are also removed. For example, when purging a deleted Hive table, the deleted entities for the table columns, DDL, and storage description are also purged. - The entity is no longer available in search results, even with Show historical entities enabled. - Lineage relationships that include the purged entities are removed, which breaks lineages that depend upon a purged entity to show connections between ancestors and descendants. - Classifications propagated across the purged entities are removed in all descendant entities. - Classifications assigned to the purged entities and set to propagate are removed from all descendant entities. Note that classifications can propagate to an entity from more than one source; if one source is purged, the classification will remain on the entity as propagated from the other source. Purged entities cannot be restored. Atlas retains an audit record of purge operations, which is available through a REST API call that allows you to retrieve the list of entities purged in a given time interval. In addition, a tab in the Atlas UI records entities that were successfully purged.
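For a concrete picture of the call, here is a minimal Python sketch of invoking the purge endpoint. The host, the admin credentials, and the assumption that the request body is a JSON list of GUIDs of already-deleted entities are placeholders to adapt; confirm the exact contract against the Atlas REST API reference.

```python
# Hedged sketch: base URL, credentials, and GUIDs are placeholders, and the exact
# request body expected by PUT /admin/purge should be confirmed in the Atlas docs.
import requests

ATLAS_BASE = "https://atlas.example.com:31443/api/atlas"
guids = ["example-guid-0000"]  # GUIDs of entities already marked "deleted" in Atlas

resp = requests.put(
    f"{ATLAS_BASE}/admin/purge",
    json=guids,                        # assumed payload: a JSON list of entity GUIDs
    auth=("admin", "admin-password"),  # an Atlas admin user is required
)
resp.raise_for_status()
print(resp.json())                     # response shape varies; see the Atlas REST reference
```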
https://docs.cloudera.com/runtime/7.2.9/atlas-reference/topics/atlas-reference-purging-deleted-entities.html
2021-05-06T06:48:03
CC-MAIN-2021-21
1620243988741.20
[array(['../images/atlas-deleted-entity.png', None], dtype=object)]
docs.cloudera.com
RangeAreaSeriesLabel Class Defines label settings for range area series. Namespace: DevExpress.XtraCharts Assembly: DevExpress.XtraCharts.v19.1.dll Declaration public class RangeAreaSeriesLabel : SeriesLabelBase Public Class RangeAreaSeriesLabel Inherits SeriesLabelBase Remarks The RangeAreaSeriesLabel class provides label functionality for series of the range area view type. At the same time, the RangeAreaSeriesLabel class serves as a base for the RangeArea3DSeriesLabel class. In addition to the common label settings inherited from the base SeriesLabelBase class, the RangeAreaSeriesLabel class declares a range area specific setting that allows you to specify the label’s kind (RangeAreaSeriesLabel.Kind), as well as angles that define a series label’s position (RangeAreaSeriesLabel.MaxValueAngle and RangeAreaSeriesLabel.MinValueAngle). An instance of the RangeAreaSeriesLabel class can be obtained via the SeriesBase.Label property of a series whose view type is RangeAreaSeriesView.
https://docs.devexpress.com/CoreLibraries/DevExpress.XtraCharts.RangeAreaSeriesLabel?v=19.1
2021-05-06T07:57:28
CC-MAIN-2021-21
1620243988741.20
[]
docs.devexpress.com
JDBC, the Java Database Connectivity API, is a standardized way for Java-based applications to interact with a wide range of databases and data sources. A JDBC driver enables Ignition to connect to, and use data from, a particular database system.
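Ignition itself manages JDBC drivers through its Gateway configuration rather than through user code, but as a generic, hedged illustration of what a JDBC connection involves, the sketch below uses the third-party jaydebeapi Python package; the driver class, URL, credentials, and jar path are all placeholders.

```python
# Generic JDBC illustration (not Ignition-specific). Requires a JVM and
# `pip install jaydebeapi`; every connection detail below is a placeholder.
import jaydebeapi

conn = jaydebeapi.connect(
    "org.postgresql.Driver",                       # JDBC driver class
    "jdbc:postgresql://db.example.com:5432/mydb",  # JDBC connection URL
    ["db_user", "db_password"],                    # credentials
    "/opt/jdbc/postgresql.jar",                    # jar that provides the driver
)
curs = conn.cursor()
curs.execute("SELECT 1")
print(curs.fetchall())
curs.close()
conn.close()
```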
https://docs.inductiveautomation.com/pages/diffpagesbyversion.action?pageId=1709750&selectedPageVersions=6&selectedPageVersions=7
2021-05-06T07:28:20
CC-MAIN-2021-21
1620243988741.20
[]
docs.inductiveautomation.com
Remove-OutboundConnector This cmdlet is available only in the cloud-based service. Use the Remove-OutboundConnector cmdlet to delete an Outbound connector from your cloud-based organization. Note: We recommend that you use the Exchange Online PowerShell V2 module to connect to Exchange Online PowerShell. For instructions, see Connect to Exchange Online PowerShell. For information about the parameter sets in the Syntax section below, see Exchange cmdlet syntax. Syntax: Remove-OutboundConnector [-Identity] <OutboundConnectorIdParameter> [-Confirm] [-WhatIf] [<CommonParameters>] Example: Remove-OutboundConnector "Contoso Outbound Connector" This example deletes the Outbound connector named Contoso Outbound Connector. The Identity parameter specifies the Outbound connector you want to remove.
https://docs.microsoft.com/en-us/powershell/module/exchange/remove-outboundconnector?view=exchange-ps
2021-05-06T07:50:18
CC-MAIN-2021-21
1620243988741.20
[]
docs.microsoft.com
Prime's projected development is segmented into three separate phases, each with its own characteristics, objectives, and strategies. These phases form the scaffolding of an ever-evolving strategy that guides and aligns stakeholders. Although decentralized autonomous organizations (DAOs) like Prime are architecturally decentralized, they still require leadership and orchestration to accomplish common goals and generate value. Without a shared strategy, and skilled communicators to herd value creators, Prime's ecosystem participants can't be expected to understand where and why to focus their energy. This type of confusion is a form of organizational myopia called the tyranny of structurelessness. The Incubation phase did not start with the announcement of PrimeDAO, but began in Q2 2020 with the creation of a working group at ETHDenver. The initial stakeholders collectively created the litepaper and aligned on a general outline of how to initialize Prime. This founding working group has coordinated via private group chats, private meetings, and alignment through mutual contribution. With the public reveal of Prime, we take the first step from off-chain governance to on-chain governance by deploying the first version of Prime to the xDAI chain. This deployment creates an on-chain forum for its ecosystem governors to collaboratively align and legislate agreements. This first DAO is deployed on the DAOstack framework and utilizes holographic consensus for governance. At this point, there still isn't a PRIME token, but only Reputation (REP), a non-transferable voting right, as the source of governance truth. The step from off-chain to on-chain governance is a big one. As an on-chain entity, Prime governs a treasury, from which funds can be paid to workers or service providers for their efforts. It can govern smart contract protocols and interfaces, where parameters can be adjusted, contracts can be updated, or decentralized ENS websites and applications can have their content managed. Or it can simply align on proposals that formalize a social consensus, such as norms and organizational protocols, like a code of conduct. Prime can do other things, such as call smart contracts, but let's let this sit for now. In this phase, we must develop a number of tactical governable primitives for the PrimeDAO. Development of the primitives will be matched by the DAO's governance to an ecosystem of development teams according to their strengths and capabilities, with a designated team for dynamically maintaining and reporting a PrimeDAO ecosystem roadmap. This Growth phase is projected to start in Q4 2020 with the deployment and launch of PrimeDAO governance and resource allocation. The PrimeDAO will utilize the Alchemy platform by DAOstack throughout this phase. In this phase, the dual primary objectives are to mature PrimeDAO by expanding its suite of products in order to achieve economic sustainability, and to design and improve Decentralized Governance and Operations specific to the needs and requirements of PrimeDAO. We anticipate that the former will accelerate adoption in the DeFi space and the latter will require DAO fractalization and delegation mechanisms. That is: the capacity to designate and deploy sub-DAOs and joint DAO ventures, form nested organizational hierarchies, and designate budgets and special product or protocol privileges to designated administrators.
PrimeDAO will launch its first product, primedao.eth/trade, a DEX aggregator that uses Totle's DEX smart routing technology to create a PrimeDAO-native DEX aggregation product that is fully integrated into PrimeDAO. Afterward, PrimeDAO can decide on other promising DeFi business models that contribute to the advancement of the industry and strengthen PrimeDAO. The PrimeDAO now consists of a network of independent stakeholders that coordinate publicly and transfer value through PRIME tokens. In this phase, the function of the PrimeDAO Development Foundation will be limited and the vast majority of interactions will be executed directly by PrimeDAO or through independent third parties. All infrastructure, including the smart contracts, will become increasingly decentralized during this phase. Once stakeholders have agreed to the specification of a new PrimeDAO governance and tokenomics for the next phase (including the speculative initiation of a continuous fundraising mechanism, such as a PRIME bonding curve), the network will transition to its Maturity phase. The Maturity phase is projected to start in Q1 2022 with the launch of the native PrimeDAO with full on-chain governance and fully integrated tokenomics. Through continuous improvement, PrimeDAO should continue to evolve into a completely decentralized autonomous organization that is able to self-fund, self-govern, and self-organize. At this point, PrimeDAO should have developed into a global network of decentralized financial applications and interfaces that allows anybody, without being limited by a lack of resources of any kind, to utilize and benefit from PrimeDAO's integrated financial offering. PrimeDAO as a public good will help improve coordination and effective use of resources and contribute greatly to a more thriving global society. The 'completion' of the Maturity phase depends on the state of the industry and the evolution of PrimeDAO; however, cryptoeconomic simulation tools such as cadCAD can help create models to determine when PrimeDAO has reached a point of resiliency and economic sustainability.
https://docs.primedao.io/primedao/roadmap
2021-05-06T07:03:51
CC-MAIN-2021-21
1620243988741.20
[]
docs.primedao.io
scipy.special.i0e scipy.special.i0e(x) = <ufunc 'i0e'> Exponentially scaled modified Bessel function of order 0. Defined as: i0e(x) = exp(-abs(x)) * i0(x). Parameters: x : array_like, argument (float). Returns: I : ndarray, value of the exponentially scaled modified Bessel function of order 0 at x. Notes: The range is partitioned into the two intervals [0, 8] and (8, infinity). Chebyshev polynomial expansions are employed in each interval. The polynomial expansions used are the same as those in i0, but they are not multiplied by the dominant exponential factor. This function is a wrapper for the Cephes [1] routine i0e. References: [1] Cephes Mathematical Functions Library,
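A quick numerical check of the scaling relationship above, assuming NumPy and SciPy are installed:

```python
import numpy as np
from scipy.special import i0, i0e

x = np.array([-10.0, -1.0, 0.0, 0.5, 8.0, 50.0])
print(np.allclose(i0e(x), np.exp(-np.abs(x)) * i0(x)))  # True: same quantity

# The point of the scaling: i0 overflows for large arguments, i0e stays finite.
print(i0(1000.0), i0e(1000.0))
```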
https://docs.scipy.org/doc/scipy-1.4.1/reference/generated/scipy.special.i0e.html
2021-05-06T07:30:32
CC-MAIN-2021-21
1620243988741.20
[]
docs.scipy.org
Exporting Keyboard Shortcuts You can export your current keyboard shortcut configuration into an XML file. You can reimport this file later to restore your configuration, or share it with project collaborators so that you all use the same keyboard shortcuts. NOTE: When you export your keyboard shortcuts configuration, only the currently selected keyboard shortcuts set is exported. Do one of the following to open the Keyboard Shortcuts dialog: - Windows or GNU/Linux: In the top menu, select Edit > Keyboard Shortcuts. - macOS: In the top menu, select Harmony Premium > Keyboard Shortcuts. - In the Keyboard Shortcuts: drop-down, make sure the keyboard shortcut set you want to export is selected. - At the right of the Keyboard Shortcuts: drop-down, click on the Save... button. A save dialog appears. - Browse to the location where you want to save your keyboard shortcut file. - Type in the desired name for your keyboard shortcut file. - Click on Save. The currently selected keyboard shortcut set has been exported as an XML file in the selected location, with the file name you gave it.
https://docs.toonboom.com/help/harmony-20/premium/keyboard-shortcuts/export-keyboard-shortcut-set.html
2021-05-06T06:52:05
CC-MAIN-2021-21
1620243988741.20
[array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.toonboom.com
You can edit the DHCP settings of an isolated organization VDC network. The DHCP service of an organization VDC network provides IP addresses from its address pool to VM NICs that are configured to request an address from DHCP. The service provides the address when the virtual machine powers on. Prerequisites - These operations require the predefined organization administrator or system administrator roles, or a role that includes an equivalent set of rights. - Verify that your network is an isolated organization virtual data center network. Procedure - On the Virtual Datacenters dashboard screen, click the card of the virtual data center you want to explore, and select Networks from the left panel. - Click the name of the network that you want to edit. - Click the IP Management tab. - Select DHCP. The DHCP settings display on the right. - To enable DHCP, click Edit on the right of DHCP Pools Service. - Toggle on the DHCP Pools Service and click Save. Addresses requested by DHCP clients are pulled from a DHCP pool. - Create a DHCP pool for the network. - Click Add. - Enter an IP address range for the pool. The IP address range that you specify cannot overlap with the static IP address pool for the organization virtual data center. - Specify the default lease time for the DHCP addresses in seconds. The default value is 3,600 seconds. - Specify the maximum lease time for the DHCP addresses in seconds. This is the maximum length of time that the DHCP-assigned IP addresses are leased to the virtual machines. The default value is 7,200 seconds. - Click Save.
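The overlap constraint on the pool range can be sanity-checked ahead of time; below is a small sketch using Python's standard ipaddress module with hypothetical ranges standing in for the network's static pool and the proposed DHCP pool.

```python
import ipaddress

def overlaps(a_start, a_end, b_start, b_end):
    """True if two inclusive IPv4 ranges overlap."""
    a0, a1 = int(ipaddress.ip_address(a_start)), int(ipaddress.ip_address(a_end))
    b0, b1 = int(ipaddress.ip_address(b_start)), int(ipaddress.ip_address(b_end))
    return max(a0, b0) <= min(a1, b1)

static_pool = ("192.168.10.10", "192.168.10.99")    # hypothetical static IP pool
dhcp_pool = ("192.168.10.100", "192.168.10.199")    # proposed DHCP pool
print(overlaps(*static_pool, *dhcp_pool))           # False: safe to use this range
```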
https://docs.vmware.com/en/VMware-Cloud-Director/10.0/com.vmware.vcloud.tenantportal.doc/GUID-24AB3A28-F4F9-465A-8439-B2BDA1B23A1E.html
2021-05-06T07:22:27
CC-MAIN-2021-21
1620243988741.20
[]
docs.vmware.com
You can use the installation wizard to create a private cloud environment and install vRealize Suite products. Prerequisites - Configure Product Binaries for the products to install. See Configure Product Binaries. - Ensure that you have added a vCenter server to the data center with valid credentials and that the request is complete. - Generate a single SAN certificate with host names for each product to install from the Certificate tab in the UI. Verify that your system meets the hardware and software requirements for each of the vRealize Suite products you want to install. See the following product documentation for system requirements. - vRealize Automation SaltStack Config (formerly known as SaltStack Enterprise) offers two setup options: - vRealize Automation SaltStack Config vRA-Integrated: This setup is introduced as a part of vRealize Automation 8.3.0. SaltStack Config (SSC) is a single-node setup, which does not support multiple-node setup or vertical scale-up options. Prior to installing SaltStack Config vRA-Integrated, ensure that the supported version of vRealize Automation is installed. After vRealize Automation is installed, if multiple tenancy is not enabled, the SaltStack instance associates with the base tenant of vRealize Automation. When multi-tenancy is enabled in vRealize Automation, SaltStack Config vRA-Integrated associates with the newly added tenants, and then proceeds with the installation. When vRealize Automation is imported, the SaltStack Config vRA-Integrated instances which are associated with vRealize Automation are also imported. - vRealize Automation SaltStack Config Standalone: This setup has no dependency on vRealize Automation. You can install vRealize Automation SaltStack Config Standalone with a dedicated vRealize Automation Standard Plus license. - Install Microsoft .NET Framework 3.5. - Install Microsoft .NET Framework 4.5.2 or later. A copy of .NET is available from any vRealize Automation appliance. If you use Internet Explorer for the download, verify that Enhanced Security Configuration is disabled. Navigate to res://iesetup.dll/SoftAdmin.htm on the Windows server. - On all of the Windows IaaS machines used in the vRealize Automation deployment, log in to the Windows machine at least once as a domain user. If you do not log in at least once to the IaaS machines, then the following error appears: Private key is invalid: Error occurred while decoding private key. The computer must be trusted for delegation and the current user must be configured to allow delegation. - Ensure that the IaaS nodes do not have any vRealize Automation components already installed. Follow the steps in the KB article 58871 to uninstall any vRealize Automation components in the IaaS node. - Verify that the TLS 1.0 and 1.1 values are not present in the IaaS Windows machine registry path HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols. - Alternatively, the vRealize Automation install precheck provides a script, which can be executed on all Windows and database servers to perform the above operations. - If you are importing an existing vRealize Operations Manager installation, set a root password for that installation.
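The registry check for leftover TLS 1.0/1.1 keys can be scripted; this is only a hedged illustration using Python's winreg module on the IaaS Windows machine, not the precheck script that vRealize Suite Lifecycle Manager provides.

```python
import winreg  # Windows only

BASE = r"SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols"

for proto in ("TLS 1.0", "TLS 1.1"):
    try:
        key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, BASE + "\\" + proto)
        winreg.CloseKey(key)
        print(proto, "key present under SCHANNEL\\Protocols -- remove before installing")
    except FileNotFoundError:
        print(proto, "not present")
```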
https://docs.vmware.com/en/VMware-vRealize-Suite-Lifecycle-Manager/8.4/com.vmware.vrsuite.lcm.8.4.doc/GUID-A59E7B07-21FC-45B6-BCC3-1FF2979300C2.html
2021-05-06T07:29:14
CC-MAIN-2021-21
1620243988741.20
[]
docs.vmware.com
Compiler configuration Specifying configuration You can specify configuration parameters by passing a configuration object when creating a Cottle document.
https://cottle.readthedocs.io/en/stable/page/04-configuration.html
2021-05-06T07:36:34
CC-MAIN-2021-21
1620243988741.20
[]
cottle.readthedocs.io
Cloudera Machine Learning Release Notes Cloudera Machine Learning, Cloudera’s platform for machine learning and AI, is now available on CDP Private Cloud. Cloudera Machine Learning unifies self-service data science and data engineering in a single, portable service as part of an enterprise data cloud for multi-function analytics on data anywhere. Organizations can now build and deploy machine learning and AI capabilities for business at scale, efficiently and securely. Cloudera Machine Learning on Private Cloud is built for the agility and power of cloud computing, but operates inside your private and secure data center. You can download the User Guide.
https://docs.cloudera.com/machine-learning/1.2/index.html
2021-05-06T06:54:51
CC-MAIN-2021-21
1620243988741.20
[array(['ml-index-page.png', 'Guide to ML Documentation'], dtype=object)]
docs.cloudera.com
Overview Get familiar with Streams Replication Manager and its components. Streams Replication Manager consists of two main components: the Stream Replication Engine and the Stream Replication Management Services. - Stream Replication Engine The Stream Replication Engine is a next-generation multi-cluster and cross-datacenter replication engine for Kafka clusters. - Stream Replication Management Services Stream Replication Management Services are services powered by open source Cloudera extensions which utilize the capabilities of the Stream Replication Engine. These services provide: - Easy installation - Lifecycle management - Management and monitoring of replication flows across clusters The Stream Replication Management Services include the following custom extensions: - Cloudera SRM Driver The Cloudera SRM Driver is a small wrapper around the Stream Replication Engine that adds the extensions provided by Cloudera. It provides the ability to spin up SRM clusters and has a metrics reporter. The driver is managed by Cloudera Manager and is represented by the Streams Replication Manager Driver role. - Cloudera SRM Client The Cloudera SRM Client provides users with command line tools that enable replication management for topics and consumer groups. The command line tool associated with the Cloudera SRM Client is called srm-control. - Cloudera SRM Service The Cloudera SRM Service consists of a REST API and a Kafka Streams application to aggregate and expose cluster, topic, and consumer group metrics. The service is managed by Cloudera Manager and is represented by the Streams Replication Manager Service role.
https://docs.cloudera.com/runtime/7.2.9/srm-overview/topics/srm-replication-overview.html
2021-05-06T06:57:11
CC-MAIN-2021-21
1620243988741.20
[array(['../images/srm-product-overview-in-csp.png', None], dtype=object)]
docs.cloudera.com
Our API gives you programmatic access to your winemaking data. Append accept-types=xml to the end of the query string in the URL and you will get the vessels you are looking at in XML format. Example: $filter=Volume gt 1000&accept-types=xml returns the first 20 vessels with a volume greater than 1000, in XML format. Example: $filter=Volume le 1000&$orderby=BatchCode&$top=50&accept-types=json filters vessels with the Volume less than or equal to 1000, then orders them by BatchCode and returns the first 50, in JSON format.
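A hedged Python sketch of such a query; the endpoint URL is hypothetical and authentication is omitted, so adapt both to the real Vinsight API before use.

```python
import requests

VESSELS_URL = "https://api.example-vinsight.test/vessels"  # hypothetical endpoint
params = {
    "$filter": "Volume le 1000",
    "$orderby": "BatchCode",
    "$top": "50",
    "accept-types": "json",
}
resp = requests.get(VESSELS_URL, params=params)  # add the required auth for the real API
resp.raise_for_status()
print(resp.json())  # the 50 matching vessels, ordered by BatchCode
```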
https://docs.vinsight.net/api/api-introduction/
2021-05-06T06:47:33
CC-MAIN-2021-21
1620243988741.20
[]
docs.vinsight.net
You can view the details of configured network services for an enterprise. To view the details of network services: - In the Enterprise portal, click the Open New Orchestrator UI option available at the top of the window. - Click Launch New Orchestrator UI in the pop-up window. The UI opens in a new tab displaying the monitoring options. - Click Network Services. You can view the configuration details of the following network services:
https://docs.vmware.com/en/VMware-SD-WAN/4.0/VMware-SD-WAN-by-VeloCloud-Administration-Guide/GUID-B171F85F-1AB7-4C6D-B23F-4958C0E09672.html
2021-05-06T08:16:05
CC-MAIN-2021-21
1620243988741.20
[]
docs.vmware.com
public CacheCollector::set($key, $value) Implements \Drupal\Core\Cache\CacheCollectorInterface::set(). This is not persisted by default. In practice this means that setting a value will only apply while the object is in scope and will not be written back to the persistent cache. This follows a similar pattern to static vs. persistent caching in procedural code. Extending classes may wish to alter this behavior, for example by adding a call to persist(). Overrides CacheCollectorInterface::set

public function set($key, $value) {
  $this->lazyLoadCache();
  $this->storage[$key] = $value;
  // The key might have been marked for deletion.
  unset($this->keysToRemove[$key]);
  $this->invalidateCache();
}

© 2001–2016 by the original authors Licensed under the GNU General Public License, version 2 and later. Drupal is a registered trademark of Dries Buytaert.
https://docs.w3cub.com/drupal~8/core-lib-drupal-core-cache-cachecollector.php/function/cachecollector-set/8.1.x
2021-05-06T07:52:30
CC-MAIN-2021-21
1620243988741.20
[]
docs.w3cub.com
To configure your blog style you have to go to Appearance->Configuration and then click on Tiam: Blog Settings. In this page you can configure all settings of the blog global page. Let's see them in order. Blog Posts Information In this page you can configure the position of the title and the visibility of the other information fields. Blog Pagination Here you can configure the pagination type (normal pagination or load more button) and its alignment. Sidebar In this configuration screen you can choose the default sidebar for your blog section. In the first dropdown box you can choose which sidebar you want to display (and also whether you want to display no sidebar on your blog page). If you choose a sidebar to display, the configuration panel fills with some other options where you can define where to display the sidebar, the mobile visibility, and the color configurations. Blog Layout Here you can configure the main layout of your blog. In the first dropdown box you can choose between 4 different styles: - Grid Classic - Grid Masonry - Grid Gallery - List Then you can choose the number of columns you want to display (2, 3 or 4 columns), the continue reading label, the animation effects (you can choose from 30+ animation effects) and the style of each single "post box" with colour, margin etc. Blog Layout | First Post Here you can define the first post's appearance. You can choose between: - As Blog Layout (the first post will be displayed normally, as defined in Blog Layout) - List Fullwidth - Gallery Fullwidth - Classic Fullwidth When you select one of these options the configuration panel will change to give you the ability to configure all options for your first post. Archive/Categories Layout In this page you can configure the layout of Archive/Categories pages. In the first field you can choose the layout of the page as: - Grid Classic - Grid Masonry - Grid Gallery - List Then you can define the number of columns, the sidebar visibility and the colours of the title area. When you've finished with the customization of your settings, just click on the Publish button in the top left corner of the settings screen.
https://docs.emotionalthemes.com/blog-style-settings/
2019-01-16T07:55:48
CC-MAIN-2019-04
1547583657097.39
[array(['https://docs.emotionalthemes.com/wp-content/uploads/2018/03/Schermata-2018-03-10-alle-14.31.35-1024x653.png', None], dtype=object) array(['https://docs.emotionalthemes.com/wp-content/uploads/2018/03/Schermata-2018-03-10-alle-14.34.03-1024x653.png', None], dtype=object) array(['https://docs.emotionalthemes.com/wp-content/uploads/2018/03/Schermata-2018-03-10-alle-14.40.27-1024x653.png', None], dtype=object) array(['https://docs.emotionalthemes.com/wp-content/uploads/2018/03/Schermata-2018-03-10-alle-15.48.04-1024x653.png', None], dtype=object) array(['https://docs.emotionalthemes.com/wp-content/uploads/2018/03/Schermata-2018-03-11-alle-14.44.24-1024x653.png', None], dtype=object) array(['https://docs.emotionalthemes.com/wp-content/uploads/2018/03/Schermata-2018-03-11-alle-14.59.44-1024x653.png', None], dtype=object) array(['https://docs.emotionalthemes.com/wp-content/uploads/2018/03/Schermata-2018-03-11-alle-15.09.24-1024x653.png', None], dtype=object) array(['https://docs.emotionalthemes.com/wp-content/uploads/2018/03/Schermata-2018-03-11-alle-15.13.33-1024x653.png', None], dtype=object) ]
docs.emotionalthemes.com
Eucalyptus exposes a number of variables that can be configured using the euctl command. This topic explains what types of variables Eucalyptus uses, and lists the most common configurable variables. Eucalyptus uses two types of variables: ones that can be changed (as configuration options), and ones that cannot be changed (they are displayed as variables, but configured by modifying the eucalyptus.conf file on the CC). The following table contains a list of common Eucalyptus cloud variables.
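If you need to script these settings, the euctl command can be driven from any language; the sketch below shells out from Python, and the property name used is only a placeholder for one of the variables this page lists.

```python
import subprocess

prop = "some.section.some_property"  # placeholder; use a variable from the table above

# Read the current value (euctl NAME prints "NAME = value").
print(subprocess.run(["euctl", prop], capture_output=True, text=True).stdout)

# Change it with NAME=VALUE syntax.
subprocess.run(["euctl", f"{prop}=new-value"], check=True)
```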
https://docs.eucalyptus.cloud/eucalyptus/4.4.5/euca2ools-guide/euca-properties.html
2019-01-16T08:15:18
CC-MAIN-2019-04
1547583657097.39
[]
docs.eucalyptus.cloud
Release Log Version 9 TIP Backwards compatible with V8. v9.1-65.1 - Fix missing facial data from header when using face=true and no eyes are detected. v9.1-65.0 - Fix trim=color by reducing the color sensitivity level. v9.0-64.7 - Fix failed application workers setup on boot without AWS User Data. - Fix health check that shows healthy before services are ready. - Increase the number of rotated logs in /var/log/httpd to 24. - Add support for Write to Cache on Cluster Mode. - Add built-in tests to the /tests endpoint on the Admin API. - Add uploads to S3 support. - Add backendSync for async uploads to remote S3 gateway. - Add docker support for Imagizer. - Add new queue metrics: queueCopyJobsConsumed, queueCopyJobsProduced, queueCopyJobsFailed. Version 8 TIP Backwards compatible with V7. v8.7-63.1(p1) - Fix multi region S3 buckets. - Fix broken license validation from the previous build (v8.6-63.0). - Improve caching of S3 credentials and bucket regions. - Use signed URLs for S3 backend requests using the Imagizer fetcher rather than the AWS SDK. This modification improves the performance of image fetching from S3 buckets. v8.6-63.0 - Fix failed Consul call on boot. Consul will now be called on every boot, not just the first one after configuration. v8.5-62.2p2 - Fix an issue that may cause CloudWatch logging to fail. - Fix bucket region auto detect when the bucket parameter is used. - Add the region parameter to specify the AWS region along with the bucket parameter. v8.5-62.2p1 - Fix duplicates in the bucketsAllowed configuration. - Add new socket connection metrics: netTcpSock80, netTcpSock80Estab, netTcpSock81, netTcpSock81Estab, netTcpSock17001, netTcpSock17001Estab, netTcpSock17005, netTcpSock17005Estab, netTcpSock17007, netTcpSock17007Estab, netTcpSock, netTcpSockEstab, netTcpSockFetch, netTcpSockFetchEstab, netUnixSockFetch, netUnixSockProcess, netUnixSockFetchConnected, and netUnixSockProcessConnected. - Add the applicationWorkers configuration. - Increase the default number of application fetch workers. v8.5-62.2 - Fix Ganglia configuration. - Fix null Default Mobile Image Parameters from overriding Default Image Parameters. - Modify cacheLruMoved, cacheLruNuked, cacheHit, cacheMiss, cacheHitRate, and cacheMissRate stats to be per minute rather than a total count. See cacheHitTotalRate and cacheMissTotalRate for replacement total cache rates. - Modify the http200, http300, http400, and http500 stats. Make them per minute rather than per second. - Modify the cacheOriginHit, cacheOriginHitRate, cacheOriginHitTotalRate, cacheOriginMiss, cacheOriginMissRate, and cacheOriginMissTotalRate stats. - Add a bucket parameter to allow for overriding the configured S3 bucket. v8.5-62.1 - Add facial coordinates in response HTTP header when redeye parameter is true. - Add autorotate parameter to disable auto rotation on EXIF orientation. - Fix broken fetch time stats when the defaultImageParams or urlRewrites configuration is set. v8.5-62.0 - Fix the cache none option during configuration. - Fix temp directory overflow. - Fix broken fetch time stats. - Fix incorrect RPS stats. - Fix broken cluster node state on reload. - Fix temp directory cron job from killing cache every 10 days. - Revert cache memory increases. - Revert increases to the number of application workers. - Add resolve parameter.
v8.4-59.0 - Fix race condition which caused some instances to ignore user data on start up. - Fix issue which caused cluster configuration to fail to initialize. v8.3-58.2p1 - Fix bug which causes issues with ampersands in the layers parameter. v8.3-58.0 - Fix a pile up of tiff files in the /tmp directory on uploads. - Fix configuration for systems with non eth* named network interfaces. - Fix 502s on some corrupt images. - Alphabetize stats JSON output properties. - Alphabetize config JSON output properties. - Increase max number of requests per application worker. Improves performance. v8.2-57.5 - Add Cluster Mode for horizontal scaling of Imagizer instances. - Add auto-fill Address property to Consul config. - Add tags to Datadog integration. - Fix caching issues related to defaultImageParams configuration. - Fix some inconsistencies related to DPR and custom crop. - Fix inconsistent health checks during startup. - Reduce instance startup times. v8.1-55.6 - Add Datadog metric and logging integration. - Add support for AWS instance types m5.large, m5.xlarge, m5.2xlarge, m5.4xlarge. - Add support for registering with Consul agent on boot. - Fix errors on a GET request to /updates API. v8.0-55.7 - Major system upgrades. - Significant performance improvements on JPEG processing. - Minor performance improvements on all other image processing. Version 7 TIP Backwards compatible with V6. v7.2-54.5patch7 - Fix temp directory overflow. v7.2-54.5patch6 - Add POST requests to the access log. - Fix incorrect stat counts on internal requests and POST requests. v7.2-54.5patch3a - Increase size of temp directory. v7.2-54.5patch3 - Fix incorrect content type when using the format param on POST requests. - Fix caching with the origin parameter on POST requests. - Add configurable width for face/eye detection. (New config argument: faceDetectionWidth) 7.2-54.0 - Fix stats collector restart loop on non-AWS instances. - Fix redeye removal on POST requests. - Fix bug which caused processed images to be cached as original images on POST requests. - Add stats collector syslog logging. - Add cache_processed parameter to POST requests. 7.1-53.0 - Add required element to layer object. - Add source_url API. - Add better SVG image detection. 7.0-52.4 - Fix missing Content-Type header on some SVG images. - Increase max face size detection. 7.0-52.0 - Allow rotation on images with transparent backgrounds. - Add transparent padding API. - Fix unnecessary image conversion on images whose requested dimensions are larger than or equal to the original dimensions. - Fix bug in upscaling watermark images. Version 6 TIP Backwards compatible with V5. 6.1-51.1 - Fix AWS CloudWatch logging. - Fix Stats API on some VMs. - Fix cache zones configuration bug which caused Write to Cache to fail in some cases. - Fix malformed headers on default images. - Fix bug in AWS user data config import which caused user data to be ignored in some cases. - Add Include Image Data support to metadata requests. - Add batch processing/caching (cache_queries) support to GET_ASYNC calls. - Add batch processing/caching (cache_queries) support to POST requests. - Add Ignore Formats configuration. 6.0-49.0 - Fix last modified header on POST and GET_ASYNC. - Fix stats collector crash when missing IAM perms (AWS only).
- Fix a corner case which caused 500 errors on corrupt images. - Fix a bug in trim_color region of interest. - Fix temp disk (ram disk) overrun with very large png files. - Add original compression quality detection. Use original quality compression as default instead of 90. - Add support for auto rotation with ICC profiles. - Add support for ENI-enabled AMIs (AWS only). - Add new welcome page with enable/disable config. - Update the kernel to fix the Meltdown and Spectre vulnerabilities. Version 5 TIP Backwards compatible with V4. 5.4-47.2 - Add CloudWatch stats. - Fix crashes during entropy crop. 5.3-46.1p1 - Add Fallback Backend. 5.3-46.1 - Face detection optimizations. - Redeye removal improvements (reduction of blue artifacts). - Add async get calls (GET_ASYNC). 5.3-45.1 - Minor post to cache optimizations. 5.2-44.3 - Fix a crash during trim=color and CMYK colorspace. - Use ellipses instead of rectangles for redeye removal. - Minor object detection optimizations. - Add separateHostCache config option. - Return 415 error code on invalid image files. - Add image layers API. 5.1-43.0p1 - Modify error code on invalid images. Return 415 http code. - Add configuration to disable separate host caching. 5.1-43.0 - Fix 'Failed to purge' issue when purging images. - Fix http sum stats. They no longer include source requests. - Fix missing content-length header in certain situations. - Fix an issue with top/bottom crop. - Add Interlace (progressive) jpeg/png API. 5.0-42.0 - Fix 'Failed to purge' issue when purging images. - Fix http sum stats. They no longer include source requests. - Fix missing content-length header in certain situations. - Fix an issue with top/bottom crop. - Add interlace (progressive) jpeg/png API. Version 4 4.6-40.1p5 - Add fetch and process average times to stats/Ganglia - Add http codes per minute to stats/Ganglia - Remove /cache prefix from cache purge API - Add new Picture Adjustment - Revamp patching system. Allow batch patching. - Fix bug to allow spaces in watermark URLs - Fix bug to allow changing syslog facility config 4.6-40.1 - Fix bug in config API which failed to let the syslog facility setting be updated - Fix bug in watermark API.
Allow spaces in watermark URLs - Fix bug causing request per second stats to be misreported - Add fetch and process times to stats API and Ganglia integration - Add http status codes per minute stat to stats API and Ganglia integration - Modify Update API to allow batch patch updates 4.6-40.0 - Fix various bugs - Add mark_upscale API to allow upscale on watermarks - Image Meta Data API now returns original image meta data even when image parameters are present - Rework rotate API to allow for single degree increments - Add crop type zoom for use with rotation API - Add Max Image Dimensions configuration 4.5-36.0 - Add option to disable the origin/hostname param - Add option to disable origin image requests - Add support for CMYK colorspace - Add new Picture Adjustment vibrance API - Add Etag header - Fix malformed Last-Modified header 4.4-35.1 - Add Ganglia Stats integration - Add new Picture Adjustment - Add Host Header parameter to override host header on requests - Add Image Meta Data API - Add Post to Process feature - Replace the hostname parameter with a new origin parameter - Improve internal image caching (Update Varnish) - Pass through the expires header on all images - Improve AWS user data config importer - Fix bug in padding image with upscale enabled 4.3-33.3 - Add Default Images feature - Add Url Rewrites feature - Add Auto White Balance API - Add Cloud Watch Logging support - Add return 404 http error code from Size Check - Improve object detection with large number of concurrent connections - Do not allow return of larger image after 'quality' only request - Fix image step calculation when sharpening images 4.2-30.2 - Fix large png file sizes - Fix AWS S3 bucket region lookup - Allow auto_fix API to use boolean as argument - Add center rectangle crop API - Allow upscaling images. Disabled by default 4.1-28.3 - Add brightness adjust API - Add contrast adjust API - Add Auto Fix image brightness/contrast API - Add hostname white-list to size check 4.0-27.0 - Improve concurrent connection handling - Improve memory management of application workers - Fix bug in auto format. Do not convert image if not needed - Add write to image cache Version 3 3.4-26.0 - Fix padding on rotated images.
Pad the correct sides - Fix malformed Original-Filesize header - Fix dct scaling issue on a small number of image types - Add Original-Resolution header to images - Add specific fallback images for different file types on size check 3.3-25.0 - Cache domain name lookups internally - Increase number of application workers - Add wildcard matching to pass through headers - Add size check fallback image feature - Fix to use configured timeouts for signed AWS S3 fetching - Optimize non cached image requests - Increase patch file size allowed 3.2-24.1 - Add crop top/bottom API - Optimize decompression and processing when generating thumbnails 3.1-23.3 - Add the update service API for small software patches - Add fit=fill and bg API aliases - Fix logging for the admin API - Search deeper into the file when checking for Adobe RGB 3.0-22.5 - Add cpu steal stats API - Add enhanced stats API (cpu, cache, disks, network) - Add imagizer version on stats API - Add passThroughHeaders configuration feature - Add syslog support for access, error, and application logs - Add red-eye removal feature - Fix off by one error in crop - Fix off by one error when height is given - Update system packages for increased security - Update web server - Refactor eye detection - Allow users to completely disable caching - Increase system limits to optimize system operations Version 2 2.5-21.1 - Add API param aliases - Add improved check for Adobe RGB - Fix bug related to ICC profiles in JPEGs - Update ca-certificates package for managing SSL 2.4-19.1 - Add support for images with ICC profiles - Add network config API - Add auto crop/pad API - Add color trim API - Fix application layer to correctly clean up temp files for 40x/50x errors - Update temp file cleanup script to accommodate new filenames and to be a little more aggressive 2.3-18.1 - Add Image blur API - Add Image padding API - Add Watermark alpha API - Fix to ensure that an image is watermarked/blurred/sharpened without resizing - Update JPEG crop to be done in a separate step instead of during the image compression. This allows Imagizer to apply padding, blur, and watermark to the cropped image instead of a region of interest. 2.2-17.2 - Add access to logging - Add CacheControl override - Fix AWS user data import - Fix bug in the network configurator 2.1-15.1 - Add region lookup on s3 auth requests - Add processing of multiple network interfaces - Fix watermark fetches to use s3 backend if needed - Update AWS SDK Version 1 1.10-14.2 - Fix bug in watermarking - Fix mac address validation error - Clean up system libraries - Install open-vm-tools package (vmware only) 1.9-13.3 - Fix for entropy bug 1.8-12.3 - Add tiff format handling - Add improved nginx log rotation - Update application dependencies - Adjust image cache memory values to slightly smaller sizes for machines under 16GB of RAM - Adjust application workers and max number of requests for machines with less than 8GB of RAM or less than 4 CPU cores 1.7-10.4 - Revert change of the root disk volume type from "Provisioned IOPS (IO1)" back to "General Purpose (GP2)" - Update the config script to reset everything to config settings at boot - Disabled multithreading conversion on busy instances.
This increases performance. - Switch to the new versioning system using AWS marketplace versions as a basis; this version becomes 1.7-10.4 1.6 - Add improved memory management - Decrease the amount of RAM that image cache uses (in memory-based setups) - Decrease the application worker's lifetime from 1000 requests to 500 - Decrease the number of application workers from 128 to 64 - Update web service config to restrict extended_status page visibility 1.5 - Fix the default backend 1.4 - Enable filesystem creation on the disk cache SSDs (in disk-based setups) - Increase root disk size to 20 GB (due to temp dir overflow) - Add a cronjob to clean the conversion temp dir 1.3 - Add API call to allow setup of disk cache - Allow image cache to be configured and restarted via API
https://docs.imagizer.com/release_log/
2019-01-16T09:09:57
CC-MAIN-2019-04
1547583657097.39
[]
docs.imagizer.com
Control Which Browsers Your Organization Supports Applies To: Microsoft Dynamics CRM 2011, Microsoft Dynamics CRM Online Jim Daly Microsoft Corporation December, 2012 Summary This article describes a Microsoft Dynamics CRM 2011 managed solution that enables a Microsoft Dynamics CRM 2011 and Microsoft Dynamics CRM Online organization with Microsoft Dynamics CRM 2011 Update Rollup 12 or the Microsoft Dynamics CRM December 2012 Service Update to control which browsers are supported for their organization. The ControlBrowserSupportforOrganization_1_0_0_1_managed.zip file contains the Control Browser Support for Organization managed solution. Applies To Microsoft Dynamics CRM 2011 and Microsoft Dynamics CRM Online Update Rollup 12 and the December 2012 Service Update Introduction Update Rollup 12 or the December 2012 Service Update adds the long awaited capability to use Microsoft Dynamics CRM with browsers other than Internet Explorer. However, with this change there is potential for customizations using JavaScript to stop working if they were not written for the new set of browsers that Microsoft Dynamics CRM supports. The Control Browser Support for Organization managed solution provides a way for an organization to enforce a policy to restrict which browsers the organization chooses to support and reduce the risk that unexpected errors will occur. Details about the browsers that are supported is available at Microsoft Dynamics CRM 2011 web application and mobile device requirements. Some organizations may already have policies about which browsers they choose to support, to reduce costs or for other reasons. A website that is available to the public has to support the browsers that the public will ordinarily use to access the site. But most public websites do not try to support every browser. Developing and testing scripts for multiple browsers is expensive and time consuming. And, if the scripts were written for other browsers, because Microsoft Dynamics CRM has required using Internet Explorer, there has been no way to test customizations using other browsers. Also, even for organizations that do not have separate development and production environments, applying the Update Rollup 12 or the December 2012 Service Update on the production environment could cause errors when someone uses a browser other than Internet Explorer. The Control Browser Support for Organization managed solution allows an organization to specify which browsers they want to support for people using Microsoft Dynamics CRM. A JavaScript developer can modify this solution to enforce a policy to restrict the browser use to any of the browsers supported by Microsoft Dynamics CRM. This topic includes the following sections: What Does the Solution Do? Using the Solution How Does the Solution Work? Configure the Solution to Enforce Other Browser Support Policies What Does the Solution Do? When the Control Browser Support for Organization managed solution is installed and someone accesses Microsoft Dynamics CRM using a browser that is supported by Microsoft Dynamics CRM but is not supported by the organization, the user briefly sees the Microsoft Dynamics CRM application as the page loads. However, as soon as the browser support rules are applied, the user will be directed to a page that explains what browsers the organization supports. A system administrator can add a security role called Any Browser Allowed that is included in the solution to any user who should be exempt from this behavior. 
This allows for developers who are writing and testing scripts to use other browsers supported by Microsoft Dynamics CRM. Also, users who have the system administrator security role cannot be prevented from using Internet Explorer. This solution can only restrict access for pages that display the ribbon. This solution is not intended to protect data. This solution enables an organization to take steps to enforce policies about browsers they support. It is possible for a user to defeat this solution and access the application by using an unapproved browser. Many browsers include capabilities to spoof other browsers by changing the navigation.userAgent property to represent a different browser. Using the Solution The Control Browser Support for Organization solution is easy to install and configure. To Install and Configure the Solution Download the ControlBrowserSupportforOrganization_1_0_0_0_managed.zip solution. Install the solution by using the steps described in How to Import or Export a Solution. In the Solutions list, open the solution and view the Configuration page. The configuration page includes instructions about how to locate and edit the Organization Browser Support Message (sample_OrganizationBrowserSupportMessage.htm) HTML web resource. You must open this web resource from the default solution. You cannot access a web resource through a managed solution page. Edit the Organization Browser Support Message (sample_OrganizationBrowserSupportMessage.htm) HTML web resource using the Text Editor. Use the Source tab in the Text Editor when you set recommended links. The default HTML for this web resource is: <HTML><HEAD><TITLE>Browser Support Page</TITLE> <META charset=utf-8></HEAD> <BODY style="FONT-FAMILY: Tahoma, Verdana, Arial" contentEditable=true> <SCRIPT type=text/javascript> document.onselectstart = function () { return false }; </SCRIPT> <P>At [Organization Name] we only support using Internet Explorer version 7 or higher when accessing Microsoft Dynamics CRM. </P> <P>For more information or to request support for additional browsers contact <A href="mailto:[email protected]?subject=browser support for Microsoft CRM">[Administrator Name]</A>. </P></BODY></HTML> At a minimum, you should change the following parts: [Organization Name]and <A href="mailto:[email protected]?subject=browser support for Microsoft CRM">[Administrator Name]</A>, adding an actual email address for an administrator in your organization. (Optional) If you want to specify that certain users are exempt from the browser policy, you can add the Any Browser Allowed security role to their user record. If the solution does not meet your needs, or you want to remove browser restrictions, use the instructions in Delete a solution to uninstall it. How Does the Solution Work? The solution contains the solution components that are shown in the following table. The design uses the ribbon to display a custom button in a custom group. The command for the custom button contains an enable rule that executes a function that always returns false so the button is never enabled. The button must be displayed for the enable rule to be evaluated. The enable rule contains the code to detect the browser and enforce the policy. Note This solution uses an enable rule to execute code that is not related to enabling a ribbon control. Generally, this approach is not recommended. Enable rules can be called repeatedly and a function that requires time to process can damage performance. 
Evaluate other approaches before resorting to this technique. If you do use this approach, take great care to ensure that the functions called by the enable rule can complete very quickly. Each time the ribbon is evaluated, the enable rule for the button executes the SDK.ControlBrowserSupportForOrganization.enforceAllowedBrowsers function in the sample_ControlBrowserSupportforOrganization.js library. A description of the operations performed by library function follows the following code. //If the SDK namespace object is not defined, create it. if (typeof (SDK) == "undefined") { SDK = {}; } // Create Namespace container for functions in this library; if (typeof (SDK.ControlBrowserSupportForOrganization) == "undefined") { SDK.ControlBrowserSupportForOrganization = { status: "unknown", AdminAndCustomizerRoleIdValues: [], AnyBrowserAllowedRoleIdValues: [], getAllowedSecurityRoleIds: function () { SDK.ControlBrowserSupportForOrganization.querySecurityRoles("?$select=RoleId,Name&$filter=Name eq 'System Administrator' or Name eq 'System Customizer' or Name eq 'Any Browser Allowed'") }, enforceAllowedBrowsers: function () { switch (SDK.ControlBrowserSupportForOrganization.status) { //If the user has already been approved to use any browser // simply return false. case "approved": return false; break; default: var userRoles = Xrm.Page.context.getUserRoles(); // Begin enforcement of allowed browsers. // This example shows allowing only Internet Explorer versions that are not earlier than version 7. var isAllowed = false; //Control rules for your organization by changing the function that sets isAllowed. isAllowed = SDK.ControlBrowserSupportForOrganization.isIE(); if (!isAllowed) { // For Microsoft Dynamics CRM Update Rollup 12 or the December 2012 Update, System Administrators or System customizers must not be blocked // from accessing the application using Internet Explorer. if (SDK.ControlBrowserSupportForOrganization.AdminAndCustomizer.AdminAndCustomizerRoleIdValues.length; n++) { var adminOrCustomizerRole = SDK.ControlBrowserSupportForOrganization.AdminAndCustomizerRoleIdValues[n]; if ((userRole.toLowerCase() == adminOrCustomizerRole.toLowerCase()) && SDK.ControlBrowserSupportForOrganization.isIE()) { SDK.ControlBrowserSupportForOrganization.status = "approved"; return false; } } } } // Check whether the user has the 'Any Browser Allowed' security role. if (SDK.ControlBrowserSupportForOrganization.AnyBrowserAllowed.AnyBrowserAllowedRoleIdValues.length; n++) { var AnyBrowserAllowedRole = SDK.ControlBrowserSupportForOrganization.AnyBrowserAllowedRoleIdValues[n]; if (userRole.toLowerCase() == AnyBrowserAllowedRole.toLowerCase()) { SDK.ControlBrowserSupportForOrganization.status = "approved"; return false; } } } } //Redirect page to web resource explaining the organization policy. window.top.location.replace(Xrm.Page.context.getClientUrl() + "/WebResources/sample_OrganizationBrowserSupportMessage.htm"); } else { //Set the flag so the code above doesn't need to run again. 
SDK.ControlBrowserSupportForOrganization.status = "approved"; return false; } break; } }, isIE: function () { return ( (SDK.ControlBrowserSupportForOrganization.isIE7() || SDK.ControlBrowserSupportForOrganization.isIE8() || SDK.ControlBrowserSupportForOrganization.isIE9() || SDK.ControlBrowserSupportForOrganization.isIE10()) ); }, isIE7: function () { return SDK.ControlBrowserSupportForOrganization.testUserAgent("msie 7.0"); }, isIE8: function () { return SDK.ControlBrowserSupportForOrganization.testUserAgent("msie 8.0"); }, isIE9: function () { return SDK.ControlBrowserSupportForOrganization.testUserAgent("msie 9.0"); }, isIE10: function () { return SDK.ControlBrowserSupportForOrganization.testUserAgent("msie 10.0"); }, //Simple Examples to detect Chrome, Firefox, & Safari isChrome: function () { return SDK.ControlBrowserSupportForOrganization.testUserAgent("chrome"); }, isFirefox: function () { return SDK.ControlBrowserSupportForOrganization.testUserAgent("firefox"); }, //Chrome userAgent includes 'Safari' so to test for Safari the string must not include 'chrome' isSafari: function () { return (SDK.ControlBrowserSupportForOrganization.testUserAgent("safari") && (!SDK.ControlBrowserSupportForOrganization.testUserAgent("chrome"))); }, testUserAgent: function (string) { // Function to test whether a specific string is included within the navigator.userAgent. return new RegExp(string.toLowerCase()).test(navigator.userAgent.toLowerCase()); }, querySecurityRoles: function (queryString) { var req = new XMLHttpRequest(); var clientUrl = Xrm.Page.context.getClientUrl(); req.open("GET", clientUrl + "/XRMServices/2011/OrganizationData.svc/RoleSet" + queryString, true); req.setRequestHeader("Accept", "application/json"); req.setRequestHeader("Content-Type", "application/json; charset=utf-8"); req.onreadystatechange = function () { if (this.readyState == 4 /* complete */) { req.onreadystatechange = null; //Addresses memory leak issue with IE. if (this.status == 200) { var returned = window.JSON.parse(this.responseText).d; for (var i = 0; i < returned.results.length; i++) { if ((returned.results[i].Name == "System Administrator") || (returned.results[i].Name == "System Customizer")) { SDK.ControlBrowserSupportForOrganization.AdminAndCustomizerRoleIdValues.push(returned.results[i].RoleId); } else { SDK.ControlBrowserSupportForOrganization.AnyBrowserAllowedRoleIdValues.push(returned.results[i].RoleId); } } if (returned.__next != null) { //In case more than 50 results are returned. // This occurs if an organization has more than 16 business units. var queryOptions = returned.__next.substring((clientUrl + "/XRMServices/2011/OrganizationData.svc/RoleSet").length); SDK.ControlBrowserSupportForOrganization.querySecurityRoles(queryOptions); } else { //Now that the roles have been retrieved, try again. SDK.ControlBrowserSupportForOrganization.enforceAllowedBrowsers(); } } else { var errorText; if (this.status == 12029) { errorText = "The attempt to connect to the server failed."; } if (this.status == 12007) { errorText = "The server name could not be resolved."; } try { errorText = window.JSON.parse(this.responseText).error.message.value; } catch (e) { errorText = this.responseText } } } }; req.send(); }, __namespace: true }; } All functions and objects in this library use the SDK.ControlBrowserSupportForOrganization namespace. For brevity, the following description doesn’t include the namespace when it refers to specific objects or functions in this library. 
The key consideration in the design of this solution is that the enforceAllowedBrowsers function must not spend a long time processing, because that degrades performance. Actions that require data access are performed asynchronously and the function calls itself during the callback to complete processing. After the result is determined, it is cached in a global variable so that processing does not have to be performed again. The enforceAllowedBrowsers function follows this pattern:
- Check the value of status. The default value for this variable is "unknown". If the value is "approved", the function returns false and is completed. The ribbon may be evaluated several times while the page is opened. The goal of this design is to quickly determine whether the browser is approved and exit as quickly as possible. The status value persists as long as the page is open.
- When the value is not "approved", the enforceAllowedBrowsers function continues to verify whether the browser is allowed. The default behavior is to require Internet Explorer. Therefore, the isIE function is used to call several other functions that test the navigator.userAgent object to determine the browser. The isIE function actually calls several separate functions to test for different versions of Internet Explorer. If any one of them returns true, the value returned is true.
- If the browser is allowed, the value of status is set to "approved" and the function returns false.
- If the browser is not allowed, the script checks whether two arrays contain data: AdminAndCustomizerRoleIdValues and AnyBrowserAllowedRoleIdValues. If these arrays contain data, their security role ID values are compared to the security roles for the user, which are available using Xrm.Page.context.getUserRoles. If a matching security role ID is found, the status is set to "approved" and the function returns false.
- If the arrays do not contain data, which is expected the first time the enforceAllowedBrowsers function runs, the function initiates a process of retrieving the necessary security role ID values by using the getAllowedSecurityRoleIds function. This function contains the following code:
SDK.ControlBrowserSupportForOrganization.querySecurityRoles("?$select=RoleId,Name&$filter=Name eq 'System Administrator' or Name eq 'System Customizer' or Name eq 'Any Browser Allowed'")
The querySecurityRoles function uses the REST endpoint for web resources to asynchronously request a list of security role ID values for specific security roles. Each time a business unit is created, copies of the existing security roles are created for it and will have unique ID values. If an organization has more than 16 business units, the results may be more than the 50 records that are returned by default using the REST endpoint. Therefore, this function will look for the continuation token (__next) and it will call itself until all the results are returned. With each request, the values are added to the respective array. When all the results are returned, the enforceAllowedBrowsers function is called again. This time the AdminAndCustomizerRoleIdValues and AnyBrowserAllowedRoleIdValues arrays will have values and can be used to determine whether the user is a member of the security roles that are excluded from the rules. If the user's security role memberships do not exclude them from the rules, they are redirected to the sample_OrganizationBrowserSupportMessage.htm page.
Configure the Solution to Enforce Other Browser Support Policies
Changing the policy enforced by the solution requires a JavaScript developer to change some lines in the sample_ControlBrowserSupportforOrganization.js web resource. The first step is to access this web resource so that you can edit it. Because the Control Browser Support for Organization solution is managed, you cannot simply open the managed solution and then open the web resource for editing. You have to use an indirect approach of locating the sample_ControlBrowserSupportforOrganization.js JScript web resource in the default solution. From there you can open the web resource and edit it using the text editor in the application.
Note
Editing JavaScript code by using the text editor in the application is generally not recommended. At a minimum, copy the text out of the text editor and paste it into a code editor of your choice. After you finish editing you can paste it back into the text editor to save and publish. A better solution is to use the Developer Toolkit for Microsoft Dynamics CRM 2011 and Microsoft Dynamics CRM Online or a third-party tool such as the WebResources Manager for Microsoft Dynamics CRM 2011 or the CRM Solution Manager. Any of these options will provide support for editing JavaScript and help prevent syntax errors.
After you have the sample_ControlBrowserSupportforOrganization.js JScript web resource open for editing, locate the following line:
isAllowed = SDK.ControlBrowserSupportForOrganization.isIE();
If your requirements are for a policy that only supports the Google Chrome browser, you can use one of the existing example functions in the library. Just change the assignment of isAllowed using this code:
isAllowed = SDK.ControlBrowserSupportForOrganization.isChrome();
There are also example functions to check for Mozilla Firefox or Apple Safari. If these examples do not meet your requirements, you can apply your own logic to test the navigator.userAgent value provided by the browser. The default logic is a simple string match defined by the SDK.ControlBrowserSupportForOrganization.testUserAgent function in the following code:
testUserAgent: function (string) {
    // Function to test whether a specific string is included within the navigator.userAgent.
    return new RegExp(string.toLowerCase()).test(navigator.userAgent.toLowerCase());
},
As mentioned in the JavaScript Programming Best Practices section of the Microsoft Dynamics CRM SDK topic Use JavaScript with Microsoft Dynamics CRM 2011 and Microsoft Dynamics CRM Online, browser detection is generally not recommended for use in scripts. You should generally use feature detection. For more information, see How to Detect Features Instead of Browsers. However, because this solution is actually about detecting browsers, using a browser detection approach is appropriate.
Conclusion
If your organization doesn't want to support all the browsers that Microsoft Dynamics CRM 2011 and Microsoft Dynamics CRM Online support with Update Rollup 12 or the December 2012 Service Update, you may want a way to prevent people from using certain browsers when they use Microsoft Dynamics CRM. This article described the Control Browser Support for Organization managed solution, showing what it does, how to use it, and how it works. It also described how it can be configured to enforce policies other than the default policy, which is to only let users use Internet Explorer.
Send comments about this article to Microsoft.
https://docs.microsoft.com/en-us/previous-versions/dynamics-crm2011/developer-articles/jj860463(v=crm.6)
2019-01-16T08:19:47
CC-MAIN-2019-04
1547583657097.39
[]
docs.microsoft.com
Hong & Goda (2007)
class openquake.hazardlib.gsim.hong_goda_2007.HongGoda2007
Implements the GMPE developed for RotD100 ground motion as defined by Hong, H. P. and Goda, K. (2007), "Orientation-Dependent Ground Motion Measure for Seismic Hazard Assessment", Bull. Seism. Soc. Am. 97(5), 1525 - 1538
This is really an experimental GMPE in which the amplification term is taken directly from Atkinson & Boore (2006) rather than constrained by the records themselves. There may be a units issue, as the amplification function for AB2006 is in cm/s/s whereas the GMPE here is given in g.
DEFINED_FOR_INTENSITY_MEASURE_COMPONENT = 'Horizontal Maximum Direction (RotD100)'
The supported intensity measure component is RotD100
DEFINED_FOR_INTENSITY_MEASURE_TYPES = set([<class 'openquake.hazardlib.imt.SA'>, <class 'openquake.hazardlib.imt.PGV'>, <class 'openquake.hazardlib.imt.PGA'>])
The supported intensity measure types are PGA, PGV, and SA
DEFINED_FOR_STANDARD_DEVIATION_TYPES = set(['Total', 'Inter event', 'Intra event'])
The supported standard deviations are total, inter and intra event, see table 4.a, pages 22-23
DEFINED_FOR_TECTONIC_REGION_TYPE = 'Active Shallow Crust'
The supported tectonic region type is active shallow crust
REQUIRES_SITES_PARAMETERS = set(['vs30'])
The required site parameter is vs30, see equation 1, page 20.
get_mean_and_stddevs(sites, rup, dists, imt, stddev_types)
See superclass method for spec of input and result values. Implements equation 14 of Hong & Goda (2007)
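Because these support declarations are plain class attributes, they can be checked programmatically before the GSIM is used in a hazard calculation. A minimal sketch, using only the import path and attributes listed above:

from openquake.hazardlib.gsim.hong_goda_2007 import HongGoda2007
from openquake.hazardlib import imt

gsim = HongGoda2007()
# Confirm the tectonic region type and the site parameters the model needs.
print(gsim.DEFINED_FOR_TECTONIC_REGION_TYPE)    # 'Active Shallow Crust'
print(gsim.REQUIRES_SITES_PARAMETERS)           # set(['vs30'])
# Verify that a given intensity measure type is supported before requesting it.
print(imt.PGA in gsim.DEFINED_FOR_INTENSITY_MEASURE_TYPES)   # True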
https://docs.openquake.org/oq-hazardlib/0.21/gsim/hong_goda_2007.html
2019-01-16T07:47:26
CC-MAIN-2019-04
1547583657097.39
[]
docs.openquake.org
Monitor Block List
There are two ways you can cause the firewall to place an IP address on the block list:
- Configure a Vulnerability Protection profile with a rule to Block IP connections and apply the profile to a Security policy, which you apply to a zone.
- Configure a DoS Protection policy rule with the Protect action and a Classified DoS Protection profile, which specifies a maximum rate of connections per second allowed. When incoming packets match the DoS Protection policy and exceed the Max Rate, and if you specified a Block Duration and a Classified policy rule to include source IP address, the firewall puts the offending source IP address on the block list.
In the cases described above, the firewall automatically blocks that traffic in hardware before those packets use CPU or packet buffer resources. If attack traffic exceeds the blocking capacity of the hardware, the firewall uses IP blocking mechanisms in software to block the traffic. The firewall automatically creates a hardware block list entry based on your Vulnerability Protection profile or DoS Protection policy rule; the source address from the rule is the source IP address in the hardware block list. Entries on the block list indicate in the Type column whether they were blocked by hardware (hw) or software (sw).
The bottom of the screen displays:
- Count of Total Blocked IPs out of the number of blocked IP addresses the firewall supports.
- Percentage of the block list that the firewall has used.
To view details about an address on the block list, hover over a Source IP address and click the down arrow link. Click the Who Is link, which displays the Network Solutions Who Is feature, providing information about the address.
For information on configuring a Vulnerability Protection profile, see Customize the Action and Trigger Conditions for a Brute Force Signature. For more information on block list and DoS Protection profiles, see DoS Protection Against Flooding of New Sessions.
https://docs.paloaltonetworks.com/pan-os/8-0/pan-os-admin/monitoring/monitor-block-list
2019-01-16T08:39:49
CC-MAIN-2019-04
1547583657097.39
[]
docs.paloaltonetworks.com
References
The guides in this section cover all of the PhoneGap-specific tooling and are meant to be used as references. If you're new to PhoneGap, we recommend starting with the Getting Started guides first for a more complete understanding. If you're looking for information on the Cordova CLI or specific Cordova configuration, you should refer to the Official Apache Cordova Documentation.
http://docs.phonegap.com/references/
2018-02-18T03:25:14
CC-MAIN-2018-09
1518891811352.60
[]
docs.phonegap.com
Community file
The community file contains information about the campus, including the number of buildings and the number of floors for each building. The file naming standard is:
- Must begin with map
- Must contain -geojson-com-map-
For example, map-23641-mv-1-ev-1-geojson-com-map-fv-2.json
Campus information
Sample code for campus and map set properties.
Building information
Each drawing in the campus map file represents a building or campus overview. The campus overview is a map that shows the entire campus, and is included for multi-building campuses only.
Level information
Each building (drawing) has a list of levels. Each level is a map and represents one floor, though that is not a rule.
Related Tasks
Process map set files
Related Reference
Level geometry file
https://docs.servicenow.com/bundle/geneva-service-management-for-the-enterprise/page/product/facilities_service_management/reference/r_CommunityFile.html
2018-02-18T02:36:15
CC-MAIN-2018-09
1518891811352.60
[]
docs.servicenow.com
Data Source Components The Telerik Reporting Data Source Components allow you to connect report items (Report, Table/Crosstab/List and Chart) to different types of data sources such as database or middle-tier business objects, without requiring code. They are intended to specify declaratively how to retrieve data for Data Items but do not contain any data at all. Their purpose is only to specify the means how to obtain it (e.g. in the case of SqlDataSource - by executing a SQL query against a database, in the case of ObjectDataSource - by invoking a method/property of a custom business object, etc.). You can view the Data Source Components as wrapper for your data that can only read the data and cannot modify it. The Telerik Reporting engine includes data source objects: - SqlDataSource – Enables you to work with Microsoft SQL Server, MySQL, Oracle, OLE DB or ODBC databases. - CsvDataSource – Enables you to work with CSV data. - ObjectDataSource - Enables you to work with business objects or other classes and allows you to create reports that display data from the middle-tier. - EntityDataSource – Enables you to connect to the ADO.NET Entity Framework. - OpenAccessDataSource - Enables you to connect to Telerik Data Access. - CubeDataSource – Enables you to retrieve data from an OLAP cube using Microsoft Analysis Services. - OpenClientDataSource – Enables you to retrieve data from OpenEdge AppServer ABL procedures.
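As a concrete illustration, the SqlDataSource component can be configured entirely in code and assigned to a report. The snippet below is a hedged sketch: the connection string, query, and report class are placeholders, and the property names are based on the component's typical usage rather than spelled out on this page.

var salesDataSource = new Telerik.Reporting.SqlDataSource();
// Placeholder connection string and query; the data source component only reads this data.
salesDataSource.ConnectionString = "Data Source=.;Initial Catalog=AdventureWorks;Integrated Security=True";
salesDataSource.SelectCommand = "SELECT ProductName, OrderQty FROM SalesOrderDetail";

var report = new InvoiceReport();     // hypothetical report class in your project
report.DataSource = salesDataSource;  // the report item retrieves data through the component at processing time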
https://docs.telerik.com/reporting/connecting-to-data-data-source-components
2018-02-18T03:18:27
CC-MAIN-2018-09
1518891811352.60
[]
docs.telerik.com
Twonky Ref App Android
Improvements
- Changed dmr control activity so that surface is always visible. This improves support for Android N surface.
- turned off DMR Control toasts by default and added a switch option to General Settings screen to enable the toasts when needed.
Bug Fixes
- fixed a UI issue where a server that is not enabled for upload can be selected as a target
- fixed an issue where DMR queue list items were not updated after an item was removed from it and removed items were shown
- fixed an issue with list view not being updated correctly after deleting items from the queue
- fixed app crashes when attempting to play m3u8-video with Android N
- fixed crash when network changes
- fixed dtcp activation successful toast not shown
- fixed issues with queue reordering and removal
- fixed problem that caused video surface to be visible when viewing photos
- fixed upload goes to failed state and is not removed from the view after manually cancelling the upload
Known Issues
- "device not activated" message is shown and app does not change renderer even when LMP has already started
- DMR Control and DMR Queue views freeze until an attempt to add a non-functional URL to the queue fails
- DTCP content playback stops when user performs seek
- DTCP content playback stops when user resumes paused playback
- after manually cancelling upload the item goes to failed state and is not removed from the view
- already cleared queue item appears in the DMR Control screen after new queue plays to the end
- an error message is not shown when transcoded resource is not available for DTCP content
- app crashes when attempting to play m3u8-video with Android N
- app crashes when server was lost while browsing the contents
- bookmarked container is enabled even when device is offline
- cannot play protected PlayReady content
- error message is not shown when attempting to beam unsupported m3u8 content
- google cast sometimes crashes when beaming an item and the rendering device is in the wrong state
- playback does not always continue automatically to the end of the queue
- progress is not shown correctly for premium video
- seeking forward to the end of the song stops playback on Apple TV
- server does not leave the network or re-announce itself when Media Type Filter is changed
- server settings activity sometimes crashes on network change
- skip to next track does not work when beaming to Sony STR-DN1030
- there is an issue where the queue moved to the next item when PCSPlayer lost its surface, and an issue where setting the current item always started playback even if the player was not in a playing state
- transferring back beaming to local device doesn't work for premium video
- user cannot resume playing the content after sleep
Twonky Ref App iOS
Improvements
- turned off DMR Control toasts by default and added a switch option to General Settings screen to enable the toasts when needed.
Bug Fixes
- enabled skip forward and skip backward buttons in DMR Control for deleted media item
- fixed "Initializing" message that was stuck on the DMR control view after restoring application state
- fixed "Set Local Renderer Public" setting being forgotten when application re-started
- fixed an issue where queue view remained in loading state while trying to add an unsupported URL to the queue
- fixed broken Network Visibility setting that caused the server to remain visible even after being hidden
- fixed crash in player screen after rapidly pressing Skip Prev while beaming from external server
- fixed issue that photo carousel on DMR screen was not updated properly in some circumstances
- fixed issue that song duration sometimes showed incorrect values
- fixed issue where mp3 content could not be beamed to some external renderers
- fixed retry option for DTCP Move downloads
- fixed slide show transition that caused some pictures to be skipped
Known Issues
- DMR Queue and DMR Control display incorrect media items after editing a queue on the external renderer
- DTCP content playback stops when user performs seek
- DTCP content playback stops when user resumes paused playback
- already played songs are played again in shuffle mode and wrong song is displayed
- an error message is not shown when transcoded resource is not available for DTCP content
- app crashes when it downloads content after sleep
- app displays wrong status for download after app resumes
- app freezes if user taps [Select All] and [Clear Queue] continuously
- content is not shared when auto-share is disabled and a single device is manually enabled
- devices cannot be enabled manually if auto-enable new devices is turned OFF
- hidden server does not become accessible again after network visibility is turned back ON
- media items are displayed multiple times
- playback occasionally begins from the last item in the queue after queue is generated
- queue disappears when queue is transferred to a new device for the first time
- seeking forward to the end of the song stops playback on Apple TV
http://docs.twonky.com/display/TRN/Twonky+Ref+App+8.4
2018-02-18T02:52:13
CC-MAIN-2018-09
1518891811352.60
[]
docs.twonky.com
The reliability and robustness of SQLite is achieved in part by thorough and careful testing. As of version 3.20.0 (2017-08-01), the SQLite library consists of approximately 125.4 KSLOC of C code. (KSLOC means thousands of "Source Lines Of Code" or, in other words, lines of code excluding blank lines and comments.) By comparison, the project has 730 times as much test code and test scripts - 91616 KSLOC. (A further 25.3 KSLOC of C code is used to create the TCL interface through which many of these tests run.) The TCL test scripts are contained in 1135 files totaling 13.3MB in size and implement 39747 distinct test cases. The TH3 test harness comprises 57.3 MB or 782.3 KSLOC of C code implementing 42213 distinct test cases. TH3 tests are heavily parameterized, though, so a full-coverage test runs about 1.7 million test case instances. In addition to the three major test harnesses, there are several other small programs that implement specialized tests. About 142.6 thousand test cases are exercised in a standard test pass; the veryquick tests include most tests other than the anomaly, fuzz, and soak tests. During SQL fuzz testing, about 111.3 thousand fuzz SQL statements are generated and tested.
The American Fuzzy Lop or "AFL" fuzzer is a recent (circa 2014) innovation from Michal Zalewski. Unlike most other fuzzers that blindly generate random inputs, the AFL fuzzer instruments the program being tested (by modifying the assembly-language output from the C compiler) and uses that instrumentation to detect when an input causes the program to do something different - to follow a new control path or loop a different number of times. Inputs that provoke new behavior are retained and further mutated. In this way, AFL is able to "discover" new behaviors of the program under test, including behaviors that were never envisioned by the designers. AFL has proven remarkably adept at finding arcane bugs in SQLite. Most of the findings have been assert() statements where the conditional was false under obscure circumstances. But AFL has also found a fair number of crash bugs in SQLite, and even a few cases where SQLite computed incorrect results. Because of its past success, AFL became a standard part of the testing strategy for SQLite beginning with version 3.8.10 (2015-05-07). Both SQL statements and database files are fuzzed. Billions and billions of mutations have been tried, but AFL's instrumentation has narrowed them down to less than 50,000 test cases that cover all distinct behaviors. Newly discovered test cases are periodically captured and added to the TCL test suite where they can be rerun using the "make fuzztest" or "make valgrindfuzz" commands.
Beginning in 2016, a team of engineers at Google started the OSS Fuzz project. SQLite is one of many open-source projects that OSS Fuzz tests. The test/ossfuzz.c source file in the SQLite repository is SQLite's interface to OSS Fuzz.
The SQLite core, including the unix VFS, has 100% branch test coverage under TH3 in its default configuration as measured by gcov. Extensions such as FTS3 and RTree are excluded from this coverage measurement. Branch coverage in SQLite is currently measured using gcov with the "-b" option. First the test program is compiled using options "-g -fprofile-arcs -ftest-coverage" and then the test program is run. Then "gcov -b" is run to generate a coverage report. The coverage report is verbose and inconvenient to read, so the gcov-generated report is processed using some simple scripts to put it into a more human-friendly format. This entire process is automated using scripts, of course. Note that running SQLite with gcov is not a test of SQLite — it is a test of the test suite.
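In outline, the coverage-measurement steps just described look something like the following; the program and file names are illustrative rather than SQLite's actual build scripts:

$ gcc -g -fprofile-arcs -ftest-coverage -o testprog testprog.c sqlite3.c
$ ./testprog                 # running the tests writes the .gcda/.gcno counter files
$ gcov -b sqlite3.c          # the -b option adds branch statistics to the report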
The gcov run does not test SQLite because the -fprofile-arcs and -ftest-coverage options cause the compiler to generate different code. The gcov run merely verifies that the test suite provides 100% branch test coverage. The gcov run is a test of the test - a meta-test. After gcov has been run to verify 100% branch test coverage, then the test program is recompiled using delivery compiler options (without the special -fprofile-arcs and -ftest-coverage options) and the test program is rerun. This second run is the actual test of SQLite.
It is important to verify that the gcov test run and the second real test run both give the same output. Any differences in output indicate either the use of undefined or indeterminate behavior in the SQLite code (and hence a bug), or a bug in the compiler. Note that SQLite has, over the previous decade, encountered bugs in each of GCC, Clang, and MSVC. Compiler bugs, while rare, do happen, which is why it is so important to test the code in an as-delivered configuration.
SQLite strives to verify that every branch instruction makes a difference using mutation testing. Unfortunately, SQLite contains many branch instructions that help the code run faster without changing the output. Such branches generate false-positives during mutation testing. As an example, consider the following hash function used to accelerate table-name lookup:
static unsigned int strHash(const char *z){
  unsigned int h = 0;
  unsigned char c;
  while( (c = (unsigned char)*z++)!=0 ){ /*OPTIMIZATION-IF-TRUE*/
    h = (h<<3) ^ h ^ sqlite3UpperToLower[c];
  }
  return h;
}
To work around this problem, comments of the form "/*OPTIMIZATION-IF-TRUE*/" and "/*OPTIMIZATION-IF-FALSE*/" are inserted into the SQLite source code to tell the mutation testing script to ignore some branch instructions.
The developers of SQLite have found that full coverage testing is an extremely effective method for locating and preventing bugs. Because every single branch instruction in SQLite core code is covered by test cases, the developers can be confident that changes made in one part of the code do not have unintended consequences in other parts of the code. The many new features and performance improvements that have been added to SQLite in recent years would not have been possible without the availability of full-coverage testing.
Maintaining 100% MC/DC is laborious and time-consuming. The level of effort needed to maintain full-coverage testing is probably not cost effective for a typical application. However, we think that full-coverage testing is justified for a very widely deployed infrastructure library like SQLite, and especially for a database library which by its very nature "remembers" past mistakes.
Dynamic analysis refers to internal and external checks on the SQLite code which are performed while the code is live and running. Dynamic analysis has proven to be a great help in maintaining the quality of SQLite. The SQLite core contains 5285 assert() statements.
In the C programming language, it is very easy to write code that has "undefined" or "implementation defined" behavior. That means that the code might work during development, but then give a different answer on a different system, or when recompiled using different compiler options. There are many examples of undefined and implementation-defined behavior in ANSI C. Since undefined and implementation-defined behavior is non-portable and can easily lead to incorrect answers, SQLite works very hard to avoid it.
For example, when adding two integer column values together as part of an SQL statement, SQLite does not simply add them together using the C-language "+" operator. Instead, it first checks to make sure the addition will not overflow, and if it will, it does the addition using floating point instead.
To help ensure that SQLite does not make use of undefined or implementation defined behavior, the test suites are rerun using instrumented builds that try to detect undefined behavior. For example, test suites are run using the "-ftrapv" option of GCC. And they are run again using the "-fsanitize=undefined" option on Clang. And again using the "/RTC1" option in MSVC. Then the test suites are rerun using options like "-funsigned-char" and "-fsigned-char" to make sure that implementation differences do not matter either. Tests are then repeated on 32-bit and 64-bit systems and on big-endian and little-endian systems, using a variety of CPU architectures. Furthermore, the test suites are augmented with many test cases that are deliberately designed to provoke undefined behavior.
The SQLite developers use an on-line checklist to coordinate testing activity and to verify that all tests pass prior to each SQLite release. Past checklists are retained for historical reference. (The checklists are read-only for anonymous internet viewers, but developers can log in and update checklist items in their web browsers.) The use of checklists for SQLite testing and other development activities is inspired by The Checklist Manifesto.
The latest checklists contain approximately 200 items that are individually verified for each release. Some checklist items only take a few seconds to verify and mark off. Others involve test suites that run for many hours.
The release checklist is not automated: developers run each item on the checklist manually. We find that it is important to keep a human in the loop. Sometimes problems are found while running a checklist item even though the test itself passed. It is important to have a human reviewing the test output at the highest level, and constantly asking "Is this really right?"
The release checklist is continuously evolving. As new problems or potential problems are discovered, new checklist items are added to make sure those problems do not appear in subsequent releases. The release checklist has proven to be an invaluable tool in helping to ensure that nothing is overlooked during the release process.
Static analysis means analyzing source code at compile-time to check for correctness. Static analysis includes compiler warning messages and more in-depth analysis engines such as the Clang Static Analyzer. SQLite compiles without warnings on GCC and Clang using the -Wall and -Wextra flags on Linux and Mac and on MSVC on Windows. No valid warnings are generated by the Clang Static Analyzer tool "scan-build" either (though recent versions of clang seem to generate many false-positives.) Nevertheless, some warnings might be generated by other static analyzers. Users are encouraged not to stress over these warnings and to instead take solace in the intense testing of SQLite described above.
All of this testing is done in the hope of inspiring confidence that SQLite is suitable for use in mission-critical applications. SQLite is in the Public Domain.
http://docs.w3cub.com/sqlite/testing/
2018-02-18T02:56:02
CC-MAIN-2018-09
1518891811352.60
[]
docs.w3cub.com
TargetTrackingConfiguration
Represents a target tracking policy configuration.
Contents
- CustomizedMetricSpecification
A customized metric.
Type: CustomizedMetricSpecification object
Required: No
- DisableScaleIn
Indicates whether scale in by the target tracking policy is disabled. If scale in is disabled, the target tracking policy won't remove instances from the Auto Scaling group. Otherwise, the target tracking policy can remove instances from the Auto Scaling group. The default is disabled.
Type: Boolean
Required: No
- PredefinedMetricSpecification
A predefined metric. You can specify either a predefined metric or a customized metric.
Type: PredefinedMetricSpecification object
Required: No
- TargetValue
The target value for the metric.
Type: Double
Required: Yes
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the language-specific AWS SDK documentation.
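Put together, a target tracking configuration is a small JSON document. The sketch below aims to keep the group's average CPU utilization near 50 percent; the PredefinedMetricType field belongs to the PredefinedMetricSpecification object referenced above, and the values shown are illustrative:

{
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "ASGAverageCPUUtilization"
  },
  "TargetValue": 50.0,
  "DisableScaleIn": false
}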
https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_TargetTrackingConfiguration.html
2018-02-18T02:40:56
CC-MAIN-2018-09
1518891811352.60
[]
docs.aws.amazon.com
The snapd system
In traditional Linux distributions, software is made available in packages:
- that rely on the availability of services in the OS or other software.
- whose data isn't confined, so can be accessed by other software.
- that can be detrimentally affected by a system or software upgrade.
- that are complex to uninstall or downgrade.
- that rely on a small number of approved 'packagers' to add them to the distro repositories.
Creating and distributing software can therefore be a time-consuming process and the end result doesn't offer the user a high degree of security and manageability.
The snapd system aims to fix these challenges by offering:
- System components and applications as self-contained (except for the most basic OS features, such as network access), read-only images, called snaps.
- A confinement and security model that:
  - Offers snaps a secure storage area isolated from other snaps.
  - Enables snaps to make features available to other snaps and for other snaps to consume those features over defined interfaces.
- A store where developers can easily make their software directly available to users and from which devices can automatically pull updates on a daily basis.
- A simple transactional update system where snaps can be easily uninstalled (by deleting the snap package) or rolled back (by reverting to the previous snap image and private storage area).
On a snapd system these features are implemented by:
- snapd, a management environment that handles installing and updating snaps using the transactional system, as well as garbage collection of old versions of snaps.
- snap-confine, an execution environment for the applications and services delivered in snap packages.
The snapd system simplifies the development of devices and their software because, with the exception of a limited number of OS features, you're in control of all the components in your application. You simply add everything needed to the snap package. You then make the snap available using the Snap Store, or, if you are the device creator, create your own store.
OS snaps
The OS snap is a repacked rootfs that contains just 'enough' to run and manage snaps on a read-only file system. Generally there will also be basic features such as network services, libc, systemd, and others included. When you install a snap for the first time, the OS snap (ubuntu-core) gets installed first; it's used as the platform for subsequently installed application snaps. This way, you can be confident that a snap always runs on the same core stack, regardless of the Linux distribution.
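In day-to-day use, these install, update, and rollback capabilities surface as a handful of snap commands. For example (the package name is illustrative):

$ sudo snap install vlc      # installs the snap; the OS snap is pulled in first if needed
$ snap list                  # lists installed snaps and their revisions
$ sudo snap refresh vlc      # updates the snap to the latest revision in the store
$ sudo snap revert vlc       # rolls back to the previous revision and its private data
$ sudo snap remove vlc       # uninstalls the snap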
https://docs.snapcraft.io/core/snapd
2018-02-18T03:22:53
CC-MAIN-2018-09
1518891811352.60
[array(['../media/snap_in_snappy_system.png', 'Snaps in the Snapd System Snaps are self contained, confined applications that can make use of features in other snaps using Interfaces.'], dtype=object) ]
docs.snapcraft.io
Module vulkano::swapchain
There are 6 extensions that each allow you to create a surface from a type of window: VK_KHR_xlib_surface, VK_KHR_xcb_surface, VK_KHR_wayland_surface, VK_KHR_mir_surface, VK_KHR_android_surface, and VK_KHR_win32_surface. Creating a surface from a raw window handle in this way is unsafe. It is your responsibility to keep the window alive for at least as long as the surface exists.
Creating a swapchain
In order to create a swapchain, you will first have to enable the VK_KHR_swapchain extension on the device (and not on the instance like VK_KHR_surface). Then, you should query the capabilities of the surface with Surface::get_capabilities() and choose which values you are going to use. Then, call Swapchain::new.
TODO: add example here
Creating a swapchain not only returns the swapchain object, but also all the images that belong to it.
Acquiring and presenting images
Once you have created a swapchain and retrieved its images, you can acquire and present them.
TODO: add example here
loop {
    let index = swapchain::acquire_next_image(None).unwrap();
    draw(images[index]);
    swapchain::present(queue, index);
    final_future.flush().unwrap();   // TODO: PresentError?
}
https://docs.rs/vulkano/0.5.6/vulkano/swapchain/index.html
2018-02-18T03:25:55
CC-MAIN-2018-09
1518891811352.60
[]
docs.rs
How to contribute code to Apollo¶ Audience¶ These guidelines are for developers of Apollo software, whether internal or in the broader community. Basic principles of the Apollo-flavored GitHub Workflow¶ Principle 1: Work from a personal fork¶ - Prior to adopting the workflow, a developer will perform a one-time setup to create a personal Fork of apollo and will subsequently perform their development and testing on a task-specific branch within their forked repo. This forked repo will be associated with that developer’s GitHub account, and is distinct from the shared repo managed by GMOD. Principle 2: Commit to personal branches of that fork¶ - Changes will never be committed directly to the master branch on the shared repo. Rather, they will be composed as branches within the developer’s forked repo, where the developer can iterate and refine their code prior to submitting it for review. Principle 3: Propose changes via pull request of personal branches¶ - Each set of changes will be developed as a task-specific branch in the developer’s forked repo, and then create a pull request will be created to develop and propose changes to the shared repo. This mechanism provides a way for developers to discuss, revise and ultimately merge changes from the forked repo into the shared Apollo repo. Principle 4: Delete or ignore stale branches, but don’t recycle merged ones¶ - Once a pull request has been merged, the task-specific branch is no longer needed and may be deleted or ignored. It is bad practice to reuse an existing branch once it has been merged. Instead, a subsequent branch and pull-request cycle should begin when a developer switches to a different coding task. - You may create a pull request in order to get feedback, but if you wish to continue working on the branch, so state with “DO NOT MERGE YET”. Table of contents¶ - One Time Setup - Forking a Shared Repo - Typical Development Cycle - Refresh and clean up local environment - Create a new branch - Changes, Commits and Pushes - Reconcile branch with upstream changes - Submitting a PR (pull request) - Reviewing a pull request - Respond to TravisCI tests - Respond to peer review - Repushing to a PR branch - Merge a pull request - Celebrate and get back to work - GitHub Tricks and Tips - References and Documentation Typical Development Cycle¶ Once you have completed the One-time Setup above, then it will be possible to create new branches and pull requests using the instructions below. The typical development cycle will have the following phases: - Refresh and clean up local environment - Create a new task-specific branch - Perform ordinary development work, periodically committing to the branch - Prepare and submit a Pull Request (PR) that refers to the branch - Participate in PR Review, possibly making changes and pushing new commits to the branch - Celebrate when your PR is finally Merged into the shared repo. - Move onto the next task and repeat this cycle Refresh and clean up local environment¶ Git will not automatically sync your Forked repo with the original shared repo, and will not automatically update your local copy of the Forked repo. These tasks are part of the developer’s normal cycle, and should be the first thing done prior to beginning a new development effort and creating a new branch. 
In addition, this Step 1 - Fetch remotes¶ In the (likely) event that the upstream repo (the apollo shared repo) has changed since the developer last began a task, it is important to update the local copy of the upstream repo so that its changes can be incorporated into subsequent development. > git fetch upstream # Updates the local copy of shared repo BUT does not affect the working directory, it simply makes the upstream code available locally for subsequent Git operations. See step 2. Step 2 - Ensure that ‘master’ is up to date¶ Assuming that new development begins with branch ‘master’ (a good practice), then we want to make sure our local ‘master’ has all the recent changes from ‘upstream’. This can be done as follows: > git checkout master > git reset --hard upstream/master The above command is potentially dangerous if you are not paying attention, as it will remove any local commits to master (which you should not have) as well as any changes to local files that are also in the upstream/master version (which you should not have). In other words, the above command ensures a proper clean slate where your local master branch is identical to the upstream master branch. Some people advocate the use of git merge upstream/master or git rebase upstream/master instead of the git reset --hard. One risk of these options is that unintended local changes accumulate in the branch and end up in an eventual pull request. Basically, it leaves open the possibility that a developer is not really branching from upstream/master, but is branching from some developer-specific branch point. Create a new branch¶ Once you have updated the local copy of the master branch of your forked repo, you can create a named branch from this copy and begin to work on your code and pull-request. This is done with: > git checkout -b fix-feedback-button # This is an example name This will create a local branch called ‘fix-feedback-button’ and will configure your working directory to track that branch instead of ‘master’. You may now freely make modifications and improvements and these changes will be accumulated into the new branch when you commit. If you followed the instructions in Step 5 - Configure .bashrc to show current branch (optional), your shell prompt should look something like this: ~/MI/apollo fix-feedback-button $ Changes, Commits and Pushes¶ Once you are in your working directory on a named branch, you make changes as normal. When you make a commit, you will be committing to the named branch by default, and not to master. You may wish to periodically git push your code to GitHub. Note the use of an explicit branch name that matches the branch you are on (this may not be necessary; a git expert may know better): > git push origin fix-feedback-button # This is an example name Note that we are pushing to ‘origin’, which is our forked repo. We are definitely NOT pushing to the shared ‘upstream’ remote, for which we may not have permission to push. Reconcile branch with upstream changes¶ If you have followed the instructions above at Refresh and clean up local environment, then your working directory and task-specific branch will be based on a starting point from the latest-and-greatest version of the shared repo’s master branch. Depending upon how long it takes you to develop your changes, and upon how much other developer activity there is, it is possible that changes to the upstream master will conflict with changes in your branch. 
So it is a good practice to periodically pull down these upstream changes and reconcile your task branch with the upstream master branch. At the least, this should be performed prior to submitting a PR. Fetching the upstream branch¶ The first step is to fetch the update upstream master branch down to your local development machine. Note that this command will NOT affect your working directory, but will simply make the upstream master branch available in your local Git environment. > git fetch upstream Rebasing to avoid Conflicts and Merge Commits¶ Now that you’ve fetched the upstream changes to your local Git environment, you will use the git rebase command to adjust your branch > # Make that your changes are committed to your branch > # before doing any rebase operations > git status # ... Review the git status output to ensure your changes are committed # ... Also a good chance to double-check that you are on your # ... task branch and not accidentally on master > git rebase upstream/master The rebase command will have the effect of adjusting your commit history so that your task branch changes appear to be based upon the most recently fetched master branch, rather than the older version of master you may have used when you began your task branch. By periodically rebasing in this way, you can ensure that your changes are in sync with the rest of Apollo development and you can avoid hassles with merge conflicts during the PR process. Dealing with merge conflicts during rebase¶ Sometimes conflicts happen where another developer has made changes and committed them to the upstream master (ideally via a successful PR) and some of those changes overlap with the code you are working on in your branch. The git rebase command will detect these conflicts and will give you an opportunity to fix them before continuing the rebase operation. The Git instructions during rebase should be sufficient to understand what to do, but a very verbose explanation can be found at Rebasing Step-by-Step Advanced: Interactive rebase¶ As you gain more confidence in Git and this workflow, you may want to create PRs that are easier to review and best reflect the intent of your code changes. One technique that is helpful is to use the interactive rebase capability of Git to help you clean up your branch prior to submitting it as a PR. This is completely optional for novice Git users, but it does produce a nicer shared commit history. See squashing commits with rebase for a good explanation. Submitting a PR (pull request)¶ Once you have developed code and are confident it is ready for review and final integration into the upstream version, you will want to do a final git push origin ... (see Changes, Commits and Pushes above). Then you will use the GitHub website to perform the operation of creating a Pull Request based upon the newly pushed branch. See submitting a pull request. Reviewing a pull request¶ The set of open PRs for the apollo can be viewed by first visiting the shared apollo GitHub page at. Click on the ‘Pull Requests’ link on the right-side of the page: Note that the Pull Request you created from your forked repo shows up in the shared repo’s Pull Request list. One way to avoid confusion is to think of the shared repo’s PR list as a queue of changes to be applied, pending their review and approval. Respond to TravisCI tests near the bottom of the individual PR page, to the right of the Merge Request symbol: TBD - Something should be written about developers running tests PRIOR to TravisCI and the the PR. 
This may already be in the README.html, but should be cited. Respond to peer review Repushing to a PR branch¶ It’s likely that after created a Pull Request, you will receive useful peer review or your TravisCI tests will have failed. In either case, you will make the required changes on your development machine, retest your changes, and you can then push your new changes back to your task branch and the PR will be automatically updated. This allows a PR to evolve in response to feedback from peers. Once everyone is satisfied, the PR may be merged. (see below). Merge a pull request¶ One of the goals behind the workflow described here is to enable a large group of developers to meaningfully contribute to the Apollo codebase. The Pull Request mechanism encourages review and refinement of the proposed code changes. As a matter of informal policy, Apollo expects that a PR will not be merged by its author and that a PR will not be merged without at least one reviewer approving it (via a comment such as +1 in the PR’s Comment section). Celebrate and get back to work¶ You have successfully gotten your code improvements into the shared repository. Congratulations! The branch you created for this PR is no longer useful, and may be deleted from your forked repo or may be kept. But in no case should the branch be further developed or reused once it has been successfully merge. Subsequent development should be on a new branch. Prepare for your next work by returning to Refresh and clean up local environment. References and Documentation¶ - The instructions presented here are derived from several sources. However, a very readable and complete article is Using the Fork-and-Branch Git Workflow. Note that the article doesn’t make clear that certain steps like Forking are one-time setup steps, after which Branch-PullRequest-Merge steps are used; the instructions below will attempt to clarify this. - New to GitHub? The GitHub Guides are a great place to start. - Advanced GitHub users might want to check out the GitHub Cheat Sheet
http://genomearchitect.readthedocs.io/en/latest/Contributing.html
2018-02-18T02:42:27
CC-MAIN-2018-09
1518891811352.60
[array(['images/githubPullRequest.png', None], dtype=object) array(['images/githubTestProgress.png', None], dtype=object) array(['images/githubTestStatus.png', None], dtype=object)]
genomearchitect.readthedocs.io
Electron
Since Chromium is quite a large project, the final linking stage can take quite a few minutes, which makes it hard for development. In order to solve this, Chromium introduced the "component build", which builds each component as a separate shared library, making linking very quick but sacrificing file size and performance. If you are just building Electron for rebranding you are not affected.
Test that your changes conform to the project coding style using:
$ npm run lint
Test functionality using:
$ npm test
Whenever you make changes to Electron source code, you'll need to re-run the build before the tests:
$ npm run build && npm test
You can make the test suite run faster by isolating the specific test or block you're currently working on using Mocha's exclusive tests feature. Just append .only to any describe or it function call:
describe.only('some feature', function () {
  // ... only tests in this block will be run
})
Alternatively, you can use Mocha's grep option to only run tests matching the given regular expression pattern:
$ npm test -- --grep child_process
Tests that include native modules (e.g. runas) can't be executed with the debug build (see #2558 for details), but they will work with the release build. To run the tests with the release build use:
$ npm test -- -R
© 2013–2017 GitHub Inc. Licensed under the MIT license.
http://docs.w3cub.com/electron/development/build-system-overview/
2018-04-19T19:27:37
CC-MAIN-2018-17
1524125937016.16
[]
docs.w3cub.com
makes it easy to work with dynamic data and use it on your website. You can easily use Joelie Selectors to output a set of photos, list all blog posts on a web site or just find data faster when you are using the Joelie.org website. Joelie Selectors eliminate the need for writing SQL in most cases, which can speed up everyone from complete beginners to veteran programmers. Joelie Selectors allow you to describe data easily, whether you are describing it to a computer or another person. For example, the Joelie Selector statement for "all photos by user bob" is just one line of text: Here's the same basic statement in SQL, which is three lines at its simplest: You can use Joelie Selectors to Define, Select, Insert, Update or Delete data from Joelie. You can also use Real SQL in your Joelie code when you aren't able to express a certain query using Joelie Selectors.
http://www.docs.joelie.org/Selectors
2018-04-19T18:56:29
CC-MAIN-2018-17
1524125937016.16
[]
www.docs.joelie.org
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. If a connection ID is provided, the call returns only that particular connection. Namespace: Amazon.DirectConnect Assembly: AWSSDK.dll Version: (assembly version) Container for the necessary parameters to execute the DescribeConnections service method. .NET Framework: Supported in: 4.5, 4.0, 3.5
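A hedged sketch of invoking this operation with the .NET SDK is shown below; the client construction, region and credential handling, and object-initializer syntax follow the SDK's usual conventions and are not spelled out on this reference page:

using Amazon.DirectConnect;
using Amazon.DirectConnect.Model;

var client = new AmazonDirectConnectClient();   // credentials and region come from the application configuration

// Omit ConnectionId to describe all connections; set it to return a single connection.
var request = new DescribeConnectionsRequest { ConnectionId = "dxcon-example" };  // placeholder ID

DescribeConnectionsResponse response = client.DescribeConnections(request);
// The response carries the matching connection details; exact property paths vary between SDK versions.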
https://docs.aws.amazon.com/sdkfornet/latest/apidocs/items/MDirectConnectIDirectConnectDescribeConnectionsDescribeConnectionsRequestNET35.html
2018-04-19T19:54:27
CC-MAIN-2018-17
1524125937016.16
[]
docs.aws.amazon.com
How to: Connect Components in a Data Flow This procedure describes how to connect the output of components in a data flow to other components within the same data flow. To connect components in a data flow In Business Intelligence Development Studio,, indicated by red arrows, that you can connect in the same way. Note Some data flow components can have multiple outputs and you can connect each output to a different transformation or destination. To save the updated package, click Save Selected Items on the File menu. See Also Tasks How to: Add a Component to a Data Flow How to: Remove a Component from a Data Flow How to: Configure an Error Output in a Data Flow Component Concepts Data Flow Elements Data Flow How-to Topics (SSIS) Help and Information Getting SQL Server 2005 Assistance
https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2005/ms140249(v=sql.90)
2018-04-19T20:03:21
CC-MAIN-2018-17
1524125937016.16
[]
docs.microsoft.com
Best practices: Installing Office 365 ProPlus from CDN or DFS
The Best Practices Guide includes deployment recommendations and real-world examples from the Office 365 Product Group and delivery experts from Microsoft Services. For a list of all the articles, see Best practices.
When planning for and implementing an Office 365 ProPlus deployment, it is critical to understand the available deployment options. This article covers the high-level description of the different architecture components that are used in the CDN and DFS client deployment processes.
Platform architecture
Office 365 ProPlus is available in both 32-bit and 64-bit editions. You should understand the advantages and disadvantages before selecting a specific architecture. The following table details the advantages and disadvantages of selecting 64-bit Office:
Note
For detailed guidance, see Choose the 64-bit or 32-bit version of Office.
It is recommended that the 32-bit edition is used for both 32-bit and 64-bit operating systems if users in your organization depend on existing extensions to Office, such as Microsoft ActiveX® controls, third-party add-ins, in-house solutions built on previous versions of Office, or 32-bit versions of programs that interface directly with Office. If some users in your organization are Excel expert users who work with Excel spreadsheets that are larger than 4 GB, they can install the 64-bit edition of Office 365 ProPlus. In addition, if you have in-house solution developers, we recommend that those developers have access to the 64-bit edition of Office 365 ProPlus so that they can test and update your in-house solutions in the 64-bit edition of Office 365 ProPlus.
Click-to-Run technology
Click-to-Run is a Microsoft installation and virtualization technology that reduces the time that is required to install Office and helps you run multiple versions of Office on the same computer. The virtualization technology provides an isolated environment for Office to run on your computer. This isolated environment provides a separate location for the Office product files and settings to be stored so that they don't change other applications that are already installed on the computer. This lets you run the latest version of Office side-by-side with an earlier version of Office that is already installed on the computer.
Note
The earlier version of Office that is already installed on the computer must be one of the following versions of Office: Office 2010, Office 2007, or Office 2003. Microsoft only tests the N-1 version for side-by-side compatibility. The versions of Office that you install must be the same edition. For example, both installations of Office are the 32-bit edition.
Even though the Office product runs in a self-contained environment, the Office product can interact with the other applications that are installed on the computer. Macros, in-document automation, and cross-Office product interoperability will work. Click-to-Run is designed to allow locally-installed add-ins and dependent applications to work with it. However, some add-ins or other integration points with Office might behave differently or might not work when you are using Click-to-Run.
Click-to-Run is used to install and update Office products. These capabilities are based on technologies included in Microsoft Application Virtualization (App-V).
Feature selection
Due to the nature of Click-to-Run deployment, customizations and feature selections at the application level are not possible.
This is unlike Office MSI versions, where the Office Customization Tool (OCT) can be used to generate customized .msp files for use at install time. Customization After the Office configuration decisions have been made, an approach must be defined to apply these settings to the Office installation. Customization of Office deployment can be broken into two key approaches: During deployment Post deployment Deployment customization The Office 365 ProPlus installation architecture offers only one method to perform customization during deployment: using an XML definition file. Setup customization file (XML) The XML file configures the way that Setup will interact with the user and how Office 365 ProPlus is installed and maintained. You can specify the following options in the XML file: Languages to install. Platform architecture to install (32-bit or 64-bit). License activation feature. Installation logs usage. Display information. Update settings. Multiple language deployment When configuring the deployment of multiple languages, it is important to consider the approach to deploying these languages. By default, Setup installs only the language versions that match the elements that are defined in the XML customization file. You can install more than one language on a single computer without using more than one license. The first language that is defined in the XML configuration file is the default language. Each language is between 150 and 250 MB in size. Prepare to deploy The MSI version of Office allows for the Office Customization Tool (OCT) to customize the Office installation. The OCT cannot be leveraged for Office 365 ProPlus. Instead, customization is accomplished by using Group Policy. Review and record the customization settings in the legacy version of Office. Download the Office 2016 Administrative Template files. Choose the 32 or 64-bit template files. Compare the recorded customization settings with the Office 2016 Administrative Template Excel spreadsheet for Office 365 ProPlus Customization. Import the Office 2016 Administrative Template Files into Active Directory and configure the appropriate Group Policy settings. See Managing Group Policy ADMX Files Step-by-Step Guide. The Office 2016 Administrative Template files are updated with new settings to control the look and feel of Office 365 ProPlus. Languages It is a preferred practice that Office 365 ProPlus languages are installed during the initial installation. Additionally, languages can be deployed post installation. You can review the supported languages for Office 365 ProPlus at Language identifiers. Office applications The following applications are included with Office 365 ProPlus: Office 365 ProPlus. Although InfoPath is not included in Office 365 ProPlus, it is available for Office 365 ProPlus subscribers. The download for InfoPath 2013 for Office 365 ProPlus Subscription is located here. Review the applications. Decide which applications to deploy. Visio and Project 2016 Visio 2016 and Project 2016 are not included with Office 365 ProPlus, and require a separate license. You can install the volume licensed versions of Project and Visio 2016, with the requirements located at Use the Office Deployment Tool to install volume licensed editions of Visio 2016 and Project 2016. The installations leverage C2R technology, but activate by using a traditional MAK/KMS system.
Side by side or removal of legacy Office Side by Side - It is a preferred practice that the legacy versions of Office are removed during the installation of Office ProPlus. Side by side should only be used for limited testing during the pilot deployment. Removal of Legacy Office - The preferred practice for the removal of legacy versions of Office is to utilize OffScrub for the specific version of Office. OffScrub is fully supported by Premier, which should be contacted to obtain the required versions.
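For reference, the XML definition file described in the Customization section above typically looks something like the following minimal sketch. The product ID, source path, language IDs, and other values shown here are illustrative assumptions only; adjust them to your environment:
<Configuration>
  <!-- Assumed values: change OfficeClientEdition, SourcePath, and Language IDs to match your deployment -->
  <Add OfficeClientEdition="32" SourcePath="\\server\share\O365">
    <Product ID="O365ProPlusRetail">
      <Language ID="en-us" />
      <Language ID="de-de" />
    </Product>
  </Add>
  <Updates Enabled="TRUE" />
  <Display Level="None" AcceptEULA="TRUE" />
  <Logging Level="Standard" Path="%temp%" />
</Configuration>
With the Office Deployment Tool, a file like this is typically passed to setup.exe via the /download switch (to stage the source files, for example on a DFS share) and the /configure switch (to install on the client).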
https://docs.microsoft.com/en-us/DeployOffice/best-practices/best-practices-installing-office-365-proplus-from-cdn-or-dfs
2018-04-19T20:38:57
CC-MAIN-2018-17
1524125937016.16
[]
docs.microsoft.com
Getting Started Welcome to the Product Labels documentation. Whether you are new or an advanced user, you can find useful information here. Next steps: How to install extension How to create a new placeholder How to create new label
https://docs.mirasvit.com/doc/extension_cataloglabel/current/
2018-04-19T19:21:03
CC-MAIN-2018-17
1524125937016.16
[]
docs.mirasvit.com
This route class will transparently inflect the controller, action and plugin routing parameters, so that requesting /my-plugin/my-controller/my-action is parsed as ['plugin' => 'MyPlugin', 'controller' => 'MyController', 'action' => 'myAction'] This route class will transparently inflect the controller and plugin routing parameters, so that requesting /my_controller is parsed as ['controller' => 'MyController'] Plugin short route that copies the plugin param to the controller parameters. It is used for supporting /:plugin routes. Redirect route will perform an immediate redirect. Redirect routes are useful when you want to have Routing layer redirects occur in your application, for when URLs move. A single Route used by the Router to connect requests to parameter maps. © 2005–2017 The Cake Software Foundation, Inc. Licensed under the MIT License. CakePHP is a registered trademark of Cake Software Foundation, Inc. We are not endorsed by or affiliated with CakePHP.
http://docs.w3cub.com/cakephp~3.4/namespace-cake.routing.route/
2018-04-19T19:27:46
CC-MAIN-2018-17
1524125937016.16
[]
docs.w3cub.com
Request¶ Request object. Interface¶ - class wptserve.request. Authentication(headers)[source]¶ Object for dealing with HTTP Authentication The username supplied in the HTTP Authorization header, or None The password supplied in the HTTP Authorization header, or None - class wptserve.request. CookieValue(morsel)[source]¶ Representation of cookies. Note that cookies are considered read-only and the string value of the cookie will not change if you update the field values. However this is not enforced. The name of the cookie. The value of the cookie The expiry date of the cookie The path of the cookie The comment of the cookie. The domain with which the cookie is associated The max-age value of the cookie. Whether the cookie is marked as secure Whether the cookie is marked as httponly - class wptserve.request. MultiDict[source]¶ Dictionary type that holds multiple values for each key - class wptserve.request. Request(request_handler)[source]¶ Object representing a HTTP request. The local directory to use as a base when resolving paths Regexp match object from matching the request path to the route selected for the request. HTTP version specified in the request. HTTP method in the request. Request path as it appears in the HTTP request. The prefix part of the path; typically / unless the handler has a url_base set Absolute URL for the request. Parts of the requested URL as obtained by urlparse.urlsplit(path) Raw request line RequestHeaders object providing a dictionary-like representation of the request headers. raw_headers. Dictionary of non-normalized request headers. Request body as a string File-like object representing the body of the request. MultiDict representing the parameters supplied with the request. Note that these may be present on non-GET requests; the name is chosen to be familiar to users of other systems such as PHP. MultiDict representing the request body parameters. Most parameters are present as string values, but file uploads have file-like values. Cookies object representing cookies sent with the request with a dictionary-like interface. Object with username and password properties representing any credentials supplied using HTTP authentication. Server object containing information about the server environment. - class wptserve.request. RequestHeaders(items)[source]¶ Dictionary-like API for accessing request headers. get(key, default=None)[source]¶ Get a string representing all headers with a particular value, with multiple headers separated by a comma. If no header is found return a default value
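To make the interface above concrete, the following is a small hedged sketch of a handler that reads a few of the Request properties documented on this page, using the handlers and server modules that ship with wptserve. The route path, port, and response text are arbitrary examples, not part of wptserve itself:
import wptserve.handlers as handlers
import wptserve.server as server

@handlers.handler
def echo_handler(request, response):
    # Query-string parameters arrive in the GET MultiDict described above
    if "name" in request.GET:
        name = request.GET.first("name")
    else:
        name = "anonymous"
    # Headers, cookies, and auth are reached the same way, e.g. request.headers.get("User-Agent")
    return "method=%s path=%s name=%s" % (request.method, request.path, name)

routes = [("GET", "/echo", echo_handler)]
httpd = server.WebTestHttpd(host="127.0.0.1", port=8000, routes=routes)
httpd.start()
Requesting /echo?name=Julia on that port should then return the method, path, and parameter back as plain text.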
http://wptserve.readthedocs.io/en/latest/request.html
2018-04-19T19:38:22
CC-MAIN-2018-17
1524125937016.16
[]
wptserve.readthedocs.io
Infobuildstep. The GNUAutotoolsfactorypage can now take the maxsearchargumentcall. The start, restart, and reconfigcommands will now wait for longer than 10 seconds as long as the master continues producing log lines indicating that the configuration is progressing. Added new config option protocolswhichnowparameters in the build detail page (pull request 1061). - Fix failures where git cleanfailsand buildset_properties.property_value. 5.18.1.4.option.parameter).
https://buildbot.readthedocs.io/en/v2.5.0/relnotes/0.8.9.html
2019-12-05T22:05:37
CC-MAIN-2019-51
1575540482284.9
[]
buildbot.readthedocs.io
You can configure Testimonial Page Settings using this settings panel. You can either follow the video or follow the below steps to configure the Testimonial Page Settings. How to Create a Testimonial Page? Note: When you create a Testimonial page, Please select the Testimonial Page template in the page attributes to get the predefined Testimonial Page template. You need to create a testimonial page before configuring the testimonial page settings. - Login to your WordPress Admin panel. - Go to Pages > Add New - Enter the Page Title - Select the Template as Testimonial Page - Click on Publish How to Configure the Testimonial Section Template? Please follow the below steps to configure the testimonial section template. - Login to your WordPress admin panel - Go to Appearance> Customize> Testimonial Page Settings> Testimonial Template Section - Click on Add a Widget - Enter Name, Designation, and Testimonial - Upload the image of the person - Click on Publish How to Configure Testimonial Call to Action Section? Please follow the below settings to configure the testimonial call to action section. - Login to your WordPress admin panel - Go to Appearance> Customize> Testimonial Page Settings> Testimonial Call To Action Section - Click on Add a Widget & Select Blossom: Call To Action - Enter the Title & Description of the section - Select the Number of Call to Action buttons you want to display - Enter the Button Label & Button Link - Select the Button Alignment & Background color - Upload an image if you don’t want to display a single color - Click on Publish
https://docs.blossomthemes.com/docs/blossom-coach-pro/page-settings/testimonial-page-settings/
2019-12-05T23:08:44
CC-MAIN-2019-51
1575540482284.9
[array(['https://docs.blossomthemes.com/wp-content/uploads/2019/04/Create-a-testimonial-Page.png', 'Create a testimonial Page'], dtype=object) array(['https://docs.blossomthemes.com/wp-content/uploads/2019/04/Selct-testimonial-widget.png', 'Selct-testimonial-widget'], dtype=object) array(['https://docs.blossomthemes.com/wp-content/uploads/2019/04/Create-a-testimonial.png', 'Create-a-testimonial'], dtype=object) array(['https://docs.blossomthemes.com/wp-content/uploads/2019/04/select-blossom-call-to-action.png', 'select-blossom-call-to-action'], dtype=object) array(['https://docs.blossomthemes.com/wp-content/uploads/2019/04/Configure-Testimonial-call-to-action-section.png', 'Configure-Testimonial-call-to-action-section'], dtype=object) ]
docs.blossomthemes.com
AVEncCommonMeanBitRate Property AVEncCommonMeanBitRate Property Specifies the average bit rate, in bits per second. This property applies only to constant bit rate (CBR) and variable bit rate (VBR) control modes. Property GUID CODECAPI_AVEncCommonMeanBitRate Data Type UINT32 (VT_UI4) Possible Values Encoders can implement this property as an enumerated set or as a linear range. Remarks This property is read/write. Requirements Header: Include codecapi.h. See Also
https://docs.microsoft.com/en-us/previous-versions/ms779445%28v%3Dvs.85%29
2019-12-05T23:34:31
CC-MAIN-2019-51
1575540482284.9
[]
docs.microsoft.com
... - connect to the M&M Central with the Management Console / CLI / Web-Interface / SOAP-interface - add a DNS/DHCP Server (Controller) to the M&M Central. - auto-update the DNS/DHCP Server Controllers to a newer version (starting with version 6.2 for supported platforms) - SNMP discovery might fail/timeout - PING discovery might not work Ports The Men & Mice Suite uses two services, in addition to the DNS/DHCP service itself. ... The Men & Mice Central service listens on port 1231/TCP for inbound connections from the Management Console / CLI / Web-Interface / SOAP-Interface. These ports are officially registered by Men & Mice for the Men & Mice Suite modules. Of course, the DNS service itself listens on the standard port 53, both UDP and TCP. Besides the Men & Mice Suite-specific ports, you want to check if the following ports/services are open/allowed: - If you configure SNMP Profiles in the Men & Mice Suite for router/switch discovery of ARP information as well as subnet information, you want to check that the configured SNMP port (usually port 162/UDP) on the routers/switches is accessible by the machine that is executing the Men & Mice Central process/service. - The DNS/DHCP Controller service on Windows is utilizing RPC calls to communicate with the DNS/DHCP servers. Please make sure that the Controller service can access port 135/TCP and high ports (usually 49152-65535) on the managed DNS/DHCP server. If the Controller is directly installed on the DNS/DHCP server (Microsoft with Agent Installed), this should not be an issue as the communication is done locally. In case the Controller is used as a proxy (Microsoft Agent-Free) and is not directly installed on the DNS/DHCP server, the mentioned ports (135/tcp and high ports) on the managed DNS/DHCP servers must be accessible by the machine that runs the Men & Mice Controller. - If you configure PING sweeps (ICMP echo) on subnets in the Men & Mice Suite, you want to make sure that ICMP echo requests are allowed by the machine that runs the Men & Mice Central process/service. Overview The Men & Mice Management Console connects only to the Central service. The Central service connects to all the associated DNS/DHCP Server Controller services and to the updater service.
https://docs.menandmice.com/pages/diffpagesbyversion.action?pageId=6360971&selectedPageVersions=5&selectedPageVersions=6
2019-12-05T22:08:47
CC-MAIN-2019-51
1575540482284.9
[array(['/download/attachments/6360971/ports1.png?version=1&modificationDate=1419328012791&api=v2', None], dtype=object) ]
docs.menandmice.com
BigInteger Data Type Getting Started with AL Developing Extensions Feedback
https://docs.microsoft.com/en-us/dynamics365/business-central/dev-itpro/developer/methods-auto/biginteger/biginteger-data-type
2019-12-05T23:46:12
CC-MAIN-2019-51
1575540482284.9
[]
docs.microsoft.com
Deployments - Select the Provisioning link in the navigation bar. - Select the Deployments link in the sub-navigation bar. - Click the Add button. - Enter a Name for the deployment and a description (optional) - Click the Save Changes button to save. Add Version¶ Select the Provisioning link in the navigation bar. Select the Deployments link - Select the Provisioning link in the navigation bar. - Select the Deployments link in the sub-navigation bar. - Click the Edit Deployment icon on the row of the deployment you wish to edit. - Modify information as needed - Click the Save Changes button to save.
https://docs.morpheusdata.com/en/3.4.5/provisioning/deployments/deployments.html
2019-12-05T22:07:40
CC-MAIN-2019-51
1575540482284.9
[]
docs.morpheusdata.com
SPA Editor Overview Single. The SPA Editor is the recommended solution for projects that require SPA framework-based client-side rendering (e.g. React or Angular). Introduction: - SPA Blueprint for the technical requirements of an SPA - Getting Started with SPAs in AEM for a quick tour of a simple SPA Design. Page Model Management cq: - If the template is editable, add it to the page policy. - Or add the categories using Editor. Communication Data Type. Workflow You can understand the flow of the interaction between the SPA and AEM by thinking of the SPA. Basic SPA Editor Workflow Keeping in mind the key elements of the SPA Editor, the high-level workflow of editing a SPA within AEM appears to the author as follows. - SPA Editor loads. - SPA is loaded in a separate frame. - SPA requests JSON content and renders components client-side. - SPA Editor detects rendered components and generates overlays. - Author clicks overlay, displaying the component’s edit toolbar. - SPA Editor persists edits with a POST request to the server. - SPA Editor requests updated JSON, which is sent to the SPA with a DOM Event. - SPA re-renders the concerned component, updating its DOM. Keep in mind: - The SPA is always in charge of its display. - The SPA Editor is isolated from the SPA itself. - In production (publish), the SPA editor is never loaded. Client-Server Page Editing Workflow This is a more detailed overview of the client-server interaction when editing a SPA. Requirements & Limitations To enable the author to use the page editor to edit the content of an SPA, your SPA application must be implemented to interact with the AEM SPA Editor SDK. Please see the Getting Started with SPAs in AEM document for the minimum that you need to know to get yours running. Supported Frameworks The SPA Editor SDK supports the following minimal versions: - React 16.3 - Angular 6.x Previous versions of these frameworks may work with the AEM SPA Editor SDK, but are not supported. Additional Frameworks. Limitations The AEM SPA Editor SDK was introduced with AEM 6.4 service pack 2. It is fully supported by Adobe, and as a new feature it continues to be enhanced and expanded. The following AEM features are not yet supported: - Launch
https://docs.adobe.com/content/help/en/experience-manager-65/developing/headless/spas/spa-overview.html
2019-12-05T22:18:04
CC-MAIN-2019-51
1575540482284.9
[array(['/content/dam/help/experience-manager-65.en/help/sites-developing/assets/screen_shot_2018-08-20at144152.png', None], dtype=object) array(['/content/dam/help/experience-manager-65.en/help/sites-developing/assets/screen_shot_2018-08-20at143628.png', None], dtype=object) array(['/content/dam/help/experience-manager-65.en/help/sites-developing/assets/screen_shot_2018-08-20at144324.png', None], dtype=object) array(['/content/dam/help/experience-manager-65.en/help/sites-developing/assets/untitled1.gif', None], dtype=object) array(['/content/dam/help/experience-manager-65.en/help/sites-developing/assets/page_editor_spa_authoringmediator-2.png', None], dtype=object) ]
docs.adobe.com
Pricing Table on the homepage is a great way to attract your visitor’s attention and let them know about your business packages. You can either watch the video or follow the below steps to configure the Pricing Table Section. Please follow the below steps to configure the Pricing Table Section of your website. - Login to your WordPress Admin Panel - Go to Appearance > Customize > Front Page Settings > Pricing Table Section. - Click Add a Widget. - Add the “Blossom: Pricing Table” widget for the pricing section. - Choose a type of plan. The pricing table with the popular plan type gets highlighted. Enter the title, currency, price, and per value. Click Add item to add the feature list of the package. Enter the Featured Link and Label. Click Done. Add more widgets to enter more package details. - Click Publish
https://docs.blossomthemes.com/docs/blossom-coach-pro/homepage-settings/how-to-configure-pricing-table-section/
2019-12-05T21:48:29
CC-MAIN-2019-51
1575540482284.9
[array(['https://docs.blossomthemes.com/wp-content/uploads/2019/04/Pricing-Table-Section-demo.png', 'Pricing Table Section demo'], dtype=object) array(['https://docs.blossomthemes.com/wp-content/uploads/2019/04/Select-Blossom-Pricing-Table-Widget.png', 'Select Blossom Pricing Table Widget'], dtype=object) array(['https://docs.blossomthemes.com/wp-content/uploads/2019/04/Configure-Blossom-Coach-Pro-Pricing-Table-Section.png', 'Configure Blossom Coach Pro Pricing Table Section'], dtype=object) ]
docs.blossomthemes.com
NEW - Dashboard Settings The system now allows for more customization of the dashboard look and feel. The following options have been added into the Dashboard settings. - Widget filter icons: Toggle widget filters - Widget settings: Toggle widget settings - Actions: Toggle dashboard filters and sharing - Dashboard Title: Toggle Dashboard title - Widget title: Toggle widget titles - Borders: Toggle widget borders - Conditional formatting on the Data Grid Font color has now been added as a conditional formatting option on the Data Grid visualization type IMPROVEMENTS - Multiple Custom Map Layer Support The system has been enhanced to now support multiple custom WMS server map layers for the Geo-Cluster chart type - Query syntax change warning When creating a query, a warning message pop-up now appears if the Query Builder is accessed after a direct query has been entered - 2FA Code Entry For ease of use purposes, the cursor is now automatically positioned and active in the 2FA input field - Regression Results The initial results displayed for Regression machine learning models now use RMSE rather than Absolute Deviation. Absolute Deviation can be found by clicking into the statistical results per model run. FIXED - Fixed an issue with column sorting not putting zero at the bottom when sort ascending - Addressed an issue with the Bubble Chart time slider obscuring data when repositioning to another time slice
https://docs.knowi.com/hc/en-us/articles/360006469833-Release-Notes-Jun-29-2018
2019-12-05T23:03:08
CC-MAIN-2019-51
1575540482284.9
[]
docs.knowi.com
Integrate Freshdesk with a Smartloop bot Please follow the simple steps to integrate Freshdesk with a Smartloop bot. We will be using a pre-built template to jumpstart a few things. Ideally our goal is to do the following: Freshdesk Integration - Create a new ticket in Freshdesk. - View the ticket details and status. - Be notified when a ticket is updated in Freshdesk - Add comments to the ticket directly from a messenger bot. - Upload attachments. Install Template First, you'll need the Freshworks Template. Follow the instructions to install it in your account. We will be using it as a starting point to integrate a messenger bot with Freshdesk. Customize the Template To configure settings in the Smartloop bot, click on the "Configure" button. Click on "Connect". Take a note of the Freshdesk Webhook that you will use in the next step. In the Involves any of these events, include settings for responding to any reply and status update as shown: In Perform these actions, add a new Trigger Webhook with Request Type set to POST. In the URL, paste the Freshdesk Webhook you copied in the earlier step. Please include the following JSON in the custom headers: { "x-api-key": "<Smartloop API key>" } This API key is the one that was copied earlier from the API Access section of the Smartloop Configure section. To conclude, you would need to include the following two attributes in the Content section - Ticket ID and Triggered Event, as shown below: Click on Preview and save to save the settings on the Freshdesk page. You will now see a summary modal; click on Save and enable. You've now successfully connected your Freshdesk account to your Smartloop bot. Finally, publish this bot to Facebook.
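To make that last configuration step concrete, the Content section described above ends up holding a small body with those two attributes. The sketch below is hypothetical: the attribute names are illustrative only, and the double-curly placeholders stand in for whatever Ticket ID and Triggered Event placeholders the Freshdesk content editor offers in your account:
{
  "ticket_id": "{{ticket.id}}",
  "triggered_event": "{{triggered_event}}"
}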
https://docs.smartloop.ai/freshdesk-integration.html
2019-12-05T23:20:45
CC-MAIN-2019-51
1575540482284.9
[array(['/assets/img/freshdesk-events.a789f887.png', None], dtype=object) array(['/assets/img/freshdesk-actions.8dff4a75.png', None], dtype=object) array(['/assets/img/freshdesk-content.18fa5588.png', None], dtype=object)]
docs.smartloop.ai
About Effects Preview T-HFND-010-010 or use a Render Preview node for each effect.
https://docs.toonboom.com/help/harmony-15/premium/effects/about-effect-preview.html
2019-12-05T21:44:41
CC-MAIN-2019-51
1575540482284.9
[]
docs.toonboom.com
Compiler Messages When you compile a help project, HTML Help Workshop notifies you if there are basic problems in the project files by creating compiler messages. You can monitor these messages to see whether any problems, such as missing links or graphics, exist in your help files. HTML Help Workshop reports these compiler messages: Note A condition you should be aware of, which will probably not cause serious problems when you open your help file. Note error messages have a number range from 1000 through 2999. For example, a broken link causes this type of message: The file "c:\htmlhelp\httempex.htm" has a link to a non-existent file: "tmplstep.htm". Warning A condition that results in a defective help file. Warning error messages have a number range from 3000 through 4999. For example, an invalid DLL causes this type of message: HHW4000: Warning: Unable to initialize for full-text search. The .dll may not be installed or is invalid. Error A condition that prevents the help file from being built. Error messages have a number range from 5000 through 6999. Internal Error An error caused by the HTML Help Workshop program. Internal Error messages have a number greater than 7000. Note HTML Help Workshop can display up to 64K of messages on the screen in Microsoft Windows 95 and 1 MB in Windows 2000, but there is no practical limit to how many compiler messages can be saved to a file. Related topics About Compiling a Help Project
https://docs.microsoft.com/en-us/previous-versions/windows/desktop/legacy/ms670120(v=vs.85)
2020-11-24T04:57:41
CC-MAIN-2020-50
1606141171077.4
[]
docs.microsoft.com
Client Actions: Send Unsent State Messages What it does: This tool sends State Messages that are cached on the ConfigMgr client to the ConfigMgr server. How it does it: This tool completes this action by using remote WMI. Navigation: Navigate to the Send Unsent State Messages Tool by right-clicking on a device, selecting Recast RCT > Client Actions > Send Unsent State Messages: Screenshot: When the action is run, the following dialog box will open: Permissions: The Send Unsent State Messages tool requires the following permissions: Recast Permissions: - Requires the Send Unsent State Messages permission.
https://docs.recastsoftware.com/features/Device_Tools/Client_Actions/Send_Unsent_State_Messages/index.html
2020-11-24T03:48:34
CC-MAIN-2020-50
1606141171077.4
[array(['media/SS_NAV.png', 'Send Unsent State Messages ScreenShot'], dtype=object) ]
docs.recastsoftware.com
Installing Right Click Tools with Recast Management Server This page covers installing the Recast Management Server and Right Click Tools. You can download the latest installers for the Recast Management Server and Right Click Tools. Install If a user tries to access something that they do not have permission to, the action will fail with the error "Invalid Recast Permissions." The install will continue, and when it is finished, this screen will appear. Verification Your Right Click Tools and Recast Management Server have been set up correctly. Click Here for more documentation specific to the Recast Management Server.
https://docs.recastsoftware.com/features/Installation/management_server/index.html
2020-11-24T03:59:43
CC-MAIN-2020-50
1606141171077.4
[array(['media/Server_1_missing_net.png', '.net Core Bundle'], dtype=object) array(['media/iis_nav.png', 'IIS Configuration'], dtype=object) array(['media/cert_nav.png', 'Certificate Configuration'], dtype=object) array(['media/sql_nav.png', 'SQL Server'], dtype=object) array(['media/license_nav.png', 'License'], dtype=object) array(['media/Server_OK.png', 'Server Working Screenshot'], dtype=object) array(['media/users_nav.png', 'Adding user'], dtype=object) array(['media/RCT_First_install_screen.png', 'Right Click Tools Enterprise Server Installation'], dtype=object) array(['media/RCT_second_install_server.png', 'Right Click Tools select Enterprise Server'], dtype=object) array(['media/rctenterprisestandwithserver_nav.png', 'Right Click Tools Enterprise add RMS Address'], dtype=object) array(['media/RCT_finished.png', 'Right Click Tools Complete'], dtype=object) array(['media/verify_node.png', 'Recast Software Node With Server'], dtype=object) ]
docs.recastsoftware.com
Creating a department The departments allow you to efficiently route visitor’s requests to designated groups. This ensures that your visitor’s messages are always directed to the right agents. To create a department - Go to your dashboard and select Settings > Departments. - Click Add Department. - Enter relevant details, as shown in the screenshot below. - Click on the All agents option or select a Group to add them to the department. - Click Create Department.
https://docs.tiledesk.com/knowledge-base/creating-a-department/
2020-11-24T03:37:50
CC-MAIN-2020-50
1606141171077.4
[array(['https://i2.wp.com/docs.tiledesk.com/wp-content/uploads/2020/09/image.png?resize=1024%2C931&ssl=1', None], dtype=object) ]
docs.tiledesk.com
Setting up Leave Messages The leave command allows Zira to send leave messages to a specified channel. You can see all current leave commands by running the command with no arguments. important Zira will need the Send messages permission in the channel you want to send messages to. Setting up the leave message systemSetting up the leave message system The leave channel command is used to enable the leave message system and simultaneously set a channel for new leave announcements. z/leave channel #goodbye # by channel name Adding new leave messagesAdding new leave messages The leave add command is used to add new leave messages to the list of messages randomly sent when a new user leaves. z/leave add Goodbye $user$#$discriminator$! $guild$ has $membercount$ members now. :( Currently available placeholders for leave messages include the following: $user$ # username $discriminator$ # user discriminator $id$ # user ID $mention$ # a user mention $guild$ # guild name $membercount$ # new member count Viewing current leave messagesViewing current leave messages The leave list command lists all current messages that Zira will randomly send upon a new user leave. z/leave list The IDs listed are used for the leave remove command below. Deleting messages from the leave message poolDeleting messages from the leave message pool The leave remove command is used to remove a message from the list of messages randomly sent when a new user leaves. The ID of a leave message can be obtained from the leave list command above. z/leave remove aW8ftz52C Disabling the leave message systemDisabling the leave message system The leave disable command is used to disable the leave message system entirely. z/leave disable
https://docs.zira.gg/docs/en/leave/
2020-11-24T03:56:02
CC-MAIN-2020-50
1606141171077.4
[array(['/img/zleave.png', 'z/leave'], dtype=object) array(['/img/zleave-channel-example.png', 'z/leave channel example'], dtype=object) array(['/img/zleave-add-example.png', 'z/leave add example'], dtype=object) array(['/img/zleave-list-example.png', 'z/leave list example'], dtype=object) array(['/img/zleave-remove-example.png', 'z/leave remove example'], dtype=object) array(['/img/zleave-disable-example.png', 'z/leave disable example'], dtype=object) ]
docs.zira.gg
PacketPeer¶ Inherits: Reference < Object Inherited By: PacketPeerStream, PacketPeerUDP, WebSocketPeer, PacketPeerGDNative, NetworkedMultiplayerPeer Category: Core. Method Descriptions¶ Return the number of packets currently available in the ring-buffer. - PoolByteArray get_packet ( ) Get a raw packet. Return the error state of the last packet received (via get_packet and get_var). Get a Variant. - Error put_packet ( PoolByteArray buffer ) Send a raw packet. Send a Variant as a packet.
https://godot-es-docs.readthedocs.io/en/latest/classes/class_packetpeer.html
2020-11-24T04:32:27
CC-MAIN-2020-50
1606141171077.4
[]
godot-es-docs.readthedocs.io
How does the image caption feature work? You can enable the image caption feature with the Show image caption with the product name option in the app preferences (Zoom preferences): When the option is enabled and your image's code (the title or the alt attributes) contains a description of the image, it will be displayed at the bottom of the image: You can add alt texts to your images from your Shopify admin following the instructions at
https://docs.codeblackbelt.com/article/155-how-does-the-image-caption-feature-work
2020-11-24T04:17:31
CC-MAIN-2020-50
1606141171077.4
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/594a823404286305c68d3fa9/images/5d16013e04286305cb87d86a/file-xgUFiTpzXD.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/594a823404286305c68d3fa9/images/59c3a41a2c7d3a73488d086f/file-rFerm2QzhA.png', None], dtype=object) ]
docs.codeblackbelt.com
- in favor of conformant values. unreusable classes. Also, the kind of classes that emerge from “utility-first” tend to be design-centered (e.g. .button, .alert, .card) rather than domain-centered (e.g. are discouraged in CSS because they can affect unintended elements in the hierarchy. Also, since they are not meaningful names, they do not add meaning to the code. // Bad ul { color: #fff; } // Good .class-name { color: #fff; } Formatting Indentation should always use two spaces for each indentation level. // Bad, four spaces p { color: #f00; } // Good p { color: #f00; } Semicolons Always include semicolons after every property. When the stylesheets are minified, the semicolons will be removed automatically. // Bad .container-item { width: 100px; height: 100px } // Good .container-item { width: 100px; height: 100px; } Shorthand The shorthand form should be used for properties that support it. // Bad margin: 10px 15px 10px 15px; padding: 10px 10px 10px 10px; // Good margin: 10px 15px; padding: 10px; Zero Don’t use ID selectors in CSS. // Bad #my-element { padding: 0; } // Good .my-element { padding: 0; } Variables Before adding a new variable for a color or a size, guarantee: - There isn’t already one - There isn’t a similar one we can use instead. L won’t fix every problem, but it should fix a majority. Ignoring.
https://docs.gitlab.com/ee/development/fe_guide/style/scss.html
2020-11-24T03:45:45
CC-MAIN-2020-50
1606141171077.4
[]
docs.gitlab.com
Crate actix_service Version 1.0.6 See all actix_service's items See Service docs for information on this crate's foundational trait. Service Pipeline service - pipeline allows composing multiple services into one service. Pipeline factory Trait for types that can be converted to a Service Trait for types that can be converted to a ServiceFactory ServiceFactory An asynchronous operation from Request to a Response. Request Response Factory for creating Services. The Transform trait defines the interface of a service factory that wraps an inner service during construction. Transform Apply transform to a service. Convert Fn(Config, &mut Service1) -> Future<Service2> fn to a service factory Fn(Config, &mut Service1) -> Future<Service2> Apply transform function to a service. Service factory that produces apply_fn service. apply_fn Create ServiceFactory for function that can produce services Create ServiceFactory for function that accepts config argument and can produce services Create ServiceFactory for function that can act as a Service Convert object of type T to a service S T S Adapt external config argument to a config for provided service factory Construct new pipeline with one service in pipeline chain. Construct new pipeline factory with one service factory. Replace config with unit
https://docs.rs/actix-service/1.0.6/actix_service/
2020-11-24T04:04:50
CC-MAIN-2020-50
1606141171077.4
[]
docs.rs
How to fix the product image thumbnails functionality Full Page Zoom won't interfere with your theme thumbnail system. However, we usually find installations where the thumbnails are not working properly due to some previous template edition or a bad/incomplete uninstall of a previous app. You can always contact us for help but, if you are familiar with web technologies, you can also try to fix it by telling the app where your thumbnails are located within your HTML markup. To do so, go to the app preferences page and navigate to Advanced preferences, and then to Selector preferences: The thumbnails selector value must be a jQuery selector pointing to the thumbnail elements (either the image elements or the links which contain them). Please note that the selector must match all thumbnails. You can also use a different selector for mobile devices, if the theme markup is different on mobile.
https://docs.codeblackbelt.com/article/1279-how-to-fix-the-product-image-thumbnails-functionality
2020-11-24T04:14:54
CC-MAIN-2020-50
1606141171077.4
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/594a823404286305c68d3fa9/images/5d166b9b04286305cb87ddeb/file-erYk3G6JF0.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/594a823404286305c68d3fa9/images/5d166ba004286305cb87ddec/file-Ld7Y1GlYMJ.png', None], dtype=object) ]
docs.codeblackbelt.com
Credentials inventory Introduced in GitLab 12.6. GitLab administrators are responsible for the overall security of their instance. To assist, GitLab provides a Credentials inventory to keep track of all the credentials that can be used to access their self-managed instance. Using Credentials inventory, you can see all the personal access tokens (PAT) and SSH keys that exist in your GitLab instance. In addition, you can revoke and delete them, and see: - Who they belong to. - Their access scope. - Their usage pattern. - When they expire. Introduced in GitLab 13.2. - When they were revoked. Introduced in GitLab 13.2. To access the Credentials inventory, navigate to Admin Area > Credentials. The following is an example of the Credentials inventory page: Revoke a user’s personal access token Introduced in GitLab 13.4. If you see a Revoke button, you can revoke that user’s PAT. Whether you see a Revoke button depends on the token state, and if an expiration date has been set. For more information, see the following table: When a PAT is revoked from the credentials inventory, the instance notifies the user by email. Delete a user’s SSH key Introduced in GitLab 13.5. You can Delete a user’s SSH key by navigating to the credentials inventory’s SSH Keys tab. The instance then notifies the user.
https://docs.gitlab.com/ee/user/admin_area/credentials_inventory.html
2020-11-24T02:55:26
CC-MAIN-2020-50
1606141171077.4
[]
docs.gitlab.com
This section walks through how to manage Alfresco Share sites via the ReST API. Being able to manage sites remotely is useful as they are widely used when you want to collaborate on content in the Repository. The ReST API has a full set of calls to do most things around sites.
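As a hedged illustration of what such a call looks like, the short Python sketch below lists sites via the version 1 ReST API using the requests library. The host, port, and admin credentials are placeholder assumptions; point them at your own Alfresco instance:
import requests

# Placeholder base URL and credentials; adjust to your Alfresco installation
BASE = "http://localhost:8080/alfresco/api/-default-/public/alfresco/versions/1"

resp = requests.get(BASE + "/sites", auth=("admin", "admin"), params={"maxItems": 10})
resp.raise_for_status()

# The v1 API wraps collections in a list/entries envelope
for item in resp.json()["list"]["entries"]:
    site = item["entry"]
    print(site["id"], "-", site["title"], "(" + site["visibility"] + ")")
The same URL pattern covers creating, updating, and deleting sites with POST, PUT, and DELETE requests.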
https://docs.alfresco.com/6.2/concepts/dev-api-by-language-alf-rest-manage-sites-intro.html
2020-11-24T03:35:03
CC-MAIN-2020-50
1606141171077.4
[]
docs.alfresco.com
Trouble Shooting AWS CodeCommit plugin - Eclipse was unable to write to the secure store. Problem: when checking out or checking in an AWS CodeCommit repository, I got an error saying Writing to secure store failed, No password provided. Solution: Open up Preferences -> General -> Security -> Security Storage -> Contents -> GIT -> Delete.
https://docs.aws.amazon.com/toolkit-for-eclipse/v1/user-guide/trouble-shooting.html
2020-11-24T04:54:04
CC-MAIN-2020-50
1606141171077.4
[]
docs.aws.amazon.com
Inspector Updates NinjaRMM Inspector in Production - The NinjaRMM Inspector is in Production - Metrics and Actionable Alert rules added SentinelOne Inspector in Production - The SentinelOne Inspector is in Production - Metrics and Actionable Alert rules added SolarWinds N-central Inspector in Production - The SolarWinds N-central Inspector is in Production - Metrics and Actionable Alert rules added Feature Updates Quick Views Quick Views give users the ability to create and save filtered and sorted data tables for easy future reference. This allows the creation, both by Liongard and users, of data tables that align to specific use cases. Video isn't playing? Click here. Liongard prebuilt Quick Views for the following Inspectors: - Active Directory - Hyper-V - Internet Domain/DNS - Microsoft 365 - SonicWall - SQL Server - Ubiquiti UniFi Liongard Example Template Update For new partners, Liongard's Example Actionable Alert template will only have 8 Actionable Alert rules enabled by default. The following Actionable Alert rules will be enabled: - Active Directory | Change to Privileged Users - Active Directory | Privileged User with Stale Password - Internet Domain | Expiration - Internet Domain | Change to MX Record - Office 365 | Change to Privileged Users - Office 365 | Privileged User(s) with Stale Password - TLS/SSL | Certificate Expiration - Windows Inspector | Local Privileged User Added/Deleted Platform Updates Chat Support In Liongard, under the Support dropdown, users can now access chat support under, “Chat with Support”. Partners who purchased Liongard through Liongard will be directed to Liongard chat support. Reseller partners will be routed to their corresponding support link. Roar to Liongard Transition Liongard's capabilities, and the value we deliver to our partners, has evolved tremendously since we hit the market five years ago. We recently updated our positioning to more accurately reflect Liongard's true impact, which you'll note in our "Standardize, Secure, and Scale" messaging. As part of that effort, we're renaming our "Roar" platform to "Liongard," in order to streamline and strengthen our brand. Minor Updates and Bug Fixes - Updated the help text below the Alert Comment section, in the Actionable Alert Rule Builder, to more clearly explain how Alert Comments work. - In the past, if a user’s session expired and they opened another Liongard tab, they would receive our “Oops!” splash page with no clear message that they needed to sign in again. Now, if this happens users will be rerouted to our log-in page with the message, "Your session has expired, please log back in". - Fixed an issue where the refresh status button on the Inspector's Admin Screen was not properly refreshing the status. - Updated the, “Choose a recent timeline entry” option within the Metric Builder to include inspection time. - Updated our progress bar loading icon on the Single Environment Dashboard to load correctly. 
- Added two new Office 365 Metrics to better format in BrightGauge - Added Risky User Data to the Azure Active Directory Data Print - Updated the Discovers column for Network Discovery to display the Inspectors the Network Discovery Inspector discovers - Improved several JumpCloud Actionable Alerts to trigger correctly - Updated the Inspector configuration screens for Azure Active Directory and Microsoft 365 - Added "Contacts" data to the IT Glue Data Print - Removed duplicate data from the IT Glue Data Print - Updated the "Active Directory: Maximum Password Age" Metric to correctly show the value for the MaxPasswordAge date - Improved the "Active Directory | No Password Expiration Policy" Actionable Alert logic which may cause the alert to trigger. - Stability improvements for the AWS Inspector - Added two new Metrics for the Bitdefender Inspector* - Updated the Body of the "Office 365 | Exposure to Privileged Account(s) Due to Lack of Strong Authentication Triggering" Actionable Alert
https://docs.liongard.com/docs/release-notes-through-2020-10-29
2020-11-24T02:58:27
CC-MAIN-2020-50
1606141171077.4
[array(['https://play.vidyard.com/WjomN8mQfpFKqxt7JWoZ8w.jpg', None], dtype=object) array(['https://files.readme.io/4077455-Screen_Shot_2020-10-23_at_10.10.48_AM.png', 'Screen Shot 2020-10-23 at 10.10.48 AM.png'], dtype=object) array(['https://files.readme.io/4077455-Screen_Shot_2020-10-23_at_10.10.48_AM.png', 'Click to close...'], dtype=object) ]
docs.liongard.com
Hot spares act as standby drives in RAID 1, RAID 5, or RAID 6 volume groups. They are fully functional drives that contain no data. If a drive fails in the volume group, the controller automatically reconstructs data from the failed drive to a hot spare. If a drive fails in the storage array, the hot spare drive is automatically substituted for the failed drive without requiring a physical swap. If the hot spare drive is available when a drive fails, the controller uses redundancy data to reconstruct the data from the failed drive to the hot spare drive. A hot spare drive is not dedicated to a specific volume group. Instead, you can use a hot spare drive for any failed drive in the storage array with the same capacity or smaller capacity. A hot spare drive must be of the same media type (HDD or SSD) as the drives that it is protecting.
https://docs.netapp.com/ess-11/topic/com.netapp.doc.ssm-sam-116/GUID-06EC31E6-171F-421D-91F2-C55974643C87.html?lang=en
2020-11-24T04:04:38
CC-MAIN-2020-50
1606141171077.4
[]
docs.netapp.com
You can install the SnapCenter Plug-ins Package for Windows on multiple hosts simultaneously by using the Install-SmHostPackage PowerShell cmdlet. You must have logged in to SnapCenter as a domain user with local administrator rights on each host on which you want to install the plug-in package. The information regarding the parameters that can be used with the cmdlet and their descriptions can be obtained by running Get-Help command_name. Alternatively, you can also refer to the Cmdlet Reference Guide. SnapCenter Software 4.3 Windows Cmdlet Reference Guide You can use the -skipprecheck option when you have already installed the plug-ins manually and you do not want to validate whether the host meets the requirements for installing the plug-in.
https://docs.netapp.com/ocsc-43/topic/com.netapp.doc.ocsc-isg/GUID-DADDF666-FE13-4676-9123-BD58E5008572.html?lang=en
2020-11-24T04:00:55
CC-MAIN-2020-50
1606141171077.4
[]
docs.netapp.com
When an attempt to establish a connection to a remote service fails during establishment, this message is generated. This audit message means an outgoing or incoming connection attempt failed at the lowest level, due to communication problems — the corresponding service was unable to access the remote host, and the TCP/IP connection was not established. This message can be used to detect system problems such as configuration errors where content is being pushed to unreachable hosts, or where routing problems result in inaccessibility of hosts. The message can also be used to report on the hosts to which content was pushed. The “Connection Identifier” field allows correlation of audit messages related to actions performed during a session.
https://docs.netapp.com/sgws-110/topic/com.netapp.doc.sg-audit/GUID-9141BBF0-8924-4F74-AA28-81324397A55E.html?lang=en
2020-11-24T04:26:07
CC-MAIN-2020-50
1606141171077.4
[]
docs.netapp.com
Delete snapshots inSync Private Cloud Editions: Elite Enterprise Procedure To delete snapshot of a device - On the inSync Master Management Console menu bar, click Users. - Click the link of the user whose device snapshots you want to delete. - If you want to delete the snapshots stored in the primary storage, click Restore > Manage Snapshots On Primary. If you want to delete the snapshots stored in the secondary storage, click Restore > Manage Snapshots On Secondary. - On the Manage Snapshots window, select the device. - Select the snapshots that you want to delete. You cannot recover a deleted snapshot. - Click Delete Snapshot.
https://docs.druva.com/010_002_inSync_On-premise/inSync_On-Premise_5.8/030_Get_Started_Backup_Restore/050_Data_Backup_and_Restore/070_Back_up_and_restore_data/040_Delete_snapshots
2020-11-24T04:16:43
CC-MAIN-2020-50
1606141171077.4
[array(['https://docs.druva.com/@api/deki/files/3644/tick.png?revision=2', 'File:/tick.png'], dtype=object) array(['https://docs.druva.com/@api/deki/files/3644/tick.png?revision=2', 'File:/tick.png'], dtype=object) ]
docs.druva.com
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-manila
spec:
  accessModes: (1)
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-manila-gold (2)
OpenShift Container Platform only supports using the NFS protocol. If NFS is not available and enabled in the underlying OpenStack cloud, you cannot use the Manila CSI Driver Operator to provision storage for OpenShift Container Platform. OpenShift Container Platform on AWS, GCP, Azure, and other platforms, with the exception of the storage class reference in the PVC definition. RHOSP is deployed with appropriate Manila share infrastructure so that it can be used to dynamically provision and mount volumes in OpenShift Container Platform. To dynamically create a Manila CSI volume using the web console: In the OpenShift Container Platform.
https://docs.openshift.com/container-platform/4.9/storage/container_storage_interface/persistent-storage-csi-manila.html
2022-09-25T08:37:06
CC-MAIN-2022-40
1664030334515.14
[]
docs.openshift.com
Get credential The Get credential activity retrieves information about Generic Credentials defined in Windows Credential Manager. *Generic credential name: Credential name. *Result variable name: Result variable name of the credential. Note: Fields selected with * are required, others are optional. Generic credential name example You can use the Generic credential name as shown in the example. test2.gmail Result variable name example You can use the Result variable name as shown in the examples. credentialResult
https://docs.robusta.ai/docs/documentation-2021-12/robusta-rpa-components/windows-system/get-credential/
2022-09-25T07:26:40
CC-MAIN-2022-40
1664030334515.14
[]
docs.robusta.ai
Supporting ambient light sensors Ambient light sensors can measure current lighting conditions. You can use data from light sensors to automatically adjust screen brightness and keyboard illumination. You can also create light-aware applications that adjust user interface elements for current lighting conditions. In Windows 8, automatic brightness control with ambient light sensors (adaptive brightness) is fully supported. Windows 8 includes class driver support for both ACPI 3.0b-compliant and HID-compliant ambient light sensor implementations. This means that you do not have to write custom drivers to support ambient light sensors. These sensors can also be used by Sensor API-based client applications, because these drivers integrate with the Windows Sensor and Location platform. For more information about ambient light sensors and the adaptive brightness feature in Windows 8, see the white paper "Integrating Ambient Light Sensors with " Windows 7 on the Windows Hardware Developer Central website. For ambient light sensors that are not ACPI 3.0b-compliant or HID-compliant, you must create a sensor driver to integrate with the Sensor and Location platform. Handling light sensor properties For Windows 8, the correct type for SENSOR_DATA_TYPE_LIGHT_LEVEL_LUX is VT_R4. However, for Windows 7, the correct type was VT_UI4. As a result, device drivers need to correctly handle both types. Another point of difference between Windows 8 and Windows 7 is that the earlier ALS device drivers expected SENSOR_PROPERTY_CHANGE_SENSITIVITY to be passed as a single value rather than as a set of values in an IPortableDeviceValues object. The following pseudo code demonstrates the correct handling of possible types for SENSOR_DATA_TYPE_LIGHT_LEVEL_LUX. SetLuxChangeSensitivity(PROPVARIANT var) { if (var.vt == VT_UNKNOWN) { CComPtr<IPortableDeviceValues> spValues; PROPVARIANT entry; // // Var is a pointer to an IPortableDeviceValues // container. Cast and iterate through its entries. // spValues = static_cast<IPortableDeviceValues*>(pVar->punkVal); foreach entry in spValues { // // Note: omitting check for SENSOR_DATA_TYPE_LIGHT_LEVEL_LUX key // if (entry.vt == VT_R4) { // // VT_R4 is the expected type for // SENSOR_DATA_TYPE_LIGHT_LEVEL_LUX. // Reference entry.fltVal. // } else if (entry.vt == VT_UI4) { // // VT_UI4 is deprecated, but use it anyway. // Reference entry.ulVal. // } else { // // All other types are invalid. // Return an error accordingly. // } } } else if (var.vt == VT_UI4) { // // Top level type of VT_UI4 is deprecated for // SENSOR_PROPERTY_CHANGE_SENSITIVITY, but use it anyway. // Reference entry.ulVal. // } else { // // All other types are invalid. // Return an error accordingly. // } } Related topics Sensor Driver Development Basics Send comments about this topic to Microsoft
https://docs.microsoft.com/en-us/windows-hardware/drivers/sensors/supporting-ambient-light-sensors
2018-01-16T14:25:18
CC-MAIN-2018-05
1516084886436.25
[]
docs.microsoft.com
Defining a dynamic model factory¶ The basic principle that allows us to create dynamic classes is the built-in function type(). Instead of the normal syntax to define a class in Python: class Person(object): name = "Julia" The type() function can be used to create the same class, here is how the class above looks using the type() built-in: Person = type("Person", (object,), {'name': "Julia"}) Using type() means you can programatically determine the number and names of the attributes that make up the class. Django models¶ Django models can be essentially defined in the same manner, with the one additional requirement that you need to define an attribute called __module__. Here is a simple Django model: class Animal(models.Model): name = models.CharField(max_length=32) And here is the equivalent class built using type(): attrs = { 'name': models.CharField(max_length=32), '__module__': 'myapp.models' } Animal = type("Animal", (models.Model,), attrs) Any Django model that can be defined in the normal fashion can be made using type(). Django’s model cache¶ Django automatically caches model classes when you subclass models.Model. If you are generating a model that has a name that may already exist, you should firstly remove the existing cached class. There is no official, documented way to do this, but current versions of Django allow you to delete the cache entry directly: from django.db.models.loading import cache try: del cache.app_models[appname][modelname] except KeyError: pass Note When using Django in non-official or undocumented ways, it’s highly advisable to write unit tests to ensure that the code does what you indend it to do. This is especially useful when upgrading Django in the future, to ensure that all uses of undocumented features still work with the new version of Django. Using the model API¶ Because the names of model fields may no longer be known to the developer, it makes using Django’s model API a little more difficult. There are at least three simple approaches to this problem. Firstly, you can use Python’s ** syntax to pass a mapping object as a set of keyword arguments. This is not as elegant as the normal syntax, but does the job: kwargs = {'name': "Jenny", 'color': "Blue"} print People.objects.filter(**kwargs) A second approach is to subclass django.db.models.query.QuerySet and provide your own customisations to keep things clean. You can attach the customised QuerySet class by overloading the get_query_set method of your model manager. Beware however of making things too nonstandard, forcing other developers to learn your new API. from django.db.models.query import QuerySet from django.db import models class MyQuerySet(QuerySet): def filter(self, *args, **kwargs): kwargs.update((args[i],args[i+1]) for i in range(0, len(args), 2)) return super(MyQuerySet, self).filter(**kwargs) class MyManager(models.Manager): def get_query_set(self): return MyQuerySet(self.model) # XXX Add the manager to your dynamic model... # Warning: This project uses a customised filter method! print People.objects.filter(name="Jenny").filter('color', 'blue') A third approach is to simply provide a helper function that creates either a preprepared kwargs mapping or returns a django.db.models.Q object, which can be fed directly to a queryset as seen above. This would be like creating a new API, but is a little more explicit than subclassing QuerySet. from django.db.models import Q def my_query(*args, **kwargs): """ turns my_query(key, val, key, val, key=val) into a Q object. 
""" kwargs.update((args[i],args[i+1]) for i in range(0, len(args), 2)) return Q(**kwargs) print People.objects.filter(my_query('color', 'blue', name="Jenny")) What comes next?¶ Although this is enough to define a Django model class, if the model isn’t in existence when syncdb is run, no respective database tables will be created. The creation and migration of database tables is covered in database migration. Also relevant is the appropriately time regeneration of the model class, (especially if you want to host using more than one server) see model migration and, if you would like to edit the dynamic models in Django’s admin, admin migration.
http://dynamic-models.readthedocs.io/en/latest/topics/model.html
2018-01-16T13:17:35
CC-MAIN-2018-05
1516084886436.25
[]
dynamic-models.readthedocs.io
Enterprise Manual Installation under Linux The recommended way to install SwiftyBeaver Enterprise is via Docker. If you can't install it that way then you can also do a manual installation under Linux, which is explained on this page. Linux Requirements SwiftyBeaver Enterprise has a low memory and system footprint and does not require any additional packages to be installed. It can even be run under Alpine Linux and, of course, also works under any modern Linux distribution like Ubuntu, Fedora, Arch, etc. 1. Download Server Download the binary to a folder of your choice, make it executable and run it as a privileged user. curl -L -o SBEnterprise chmod +x SBEnterprise 2. Set Environment Variables SwiftyBeaver Enterprise requires certain environment variables to be set before the server is started. You can do that all in one line or, to make it easier to read, line by line. Please read environment variables to learn more about the details: APPS="app1|secret1|89FM6o7mBUmTDjxTBiEJCrKAxaQVXsxr,app2|secret2|q9eZ9FuaqXWaQnVeFZPQq2zVVrdfmC4b" ES_HOST="" ES_AUTH_USER="elastic" ES_AUTH_PASSWORD="changeme" 3. Run Server Now that all environment variables are set, running the server is a simple call: ./SBEnterprise The server will start in a blocking (foreground) way, which makes it easier to see any issues during startup. 4. Test Server Response From another SSH connection to the same server you can see if it works properly: # Unprotected ping connection to see if server is running curl "" # Basic Auth protected call to the server using app credentials curl "" -u 'app1:secret1' You can now add your server's public IP and port and one of the two app credential sets to your SBPlatformDestination() in the SwiftyBeaver Logging Framework inside your mobile app. Read here to learn more about that. Good to know: It is recommended to run SwiftyBeaver Enterprise as an auto-restarting daemon in the background. Please follow your Linux distribution's way of daemonizing a server application to implement it. To make your SwiftyBeaver Enterprise server production-ready please read the setup instructions for a high availability setup behind a load balancer next.
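The note above recommends running the binary as an auto-restarting background daemon. On systemd-based distributions that could look roughly like the unit file below; the unit name, install path and environment values are placeholders invented for illustration rather than part of the official SwiftyBeaver documentation, so adjust them to your own setup.

# /etc/systemd/system/sbenterprise.service (hypothetical example)
[Unit]
Description=SwiftyBeaver Enterprise logging server
After=network.target

[Service]
# Assumed install location of the downloaded SBEnterprise binary
WorkingDirectory=/opt/sbenterprise
ExecStart=/opt/sbenterprise/SBEnterprise
# Same variables as in step 2; the values here are placeholders
Environment=APPS=app1|secret1|replace-with-your-token
Environment=ES_HOST=http://localhost:9200
Environment=ES_AUTH_USER=elastic
Environment=ES_AUTH_PASSWORD=changeme
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target

After saving the file, systemctl enable --now sbenterprise would start the server at boot and restart it on failure.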
http://docs.swiftybeaver.com/article/31-enterprise-manual-installation-under-linux
2018-01-16T13:13:05
CC-MAIN-2018-05
1516084886436.25
[]
docs.swiftybeaver.com
Deploying¶ This section is geared towards helping you deploy this server properly for production use. @powellc has put together an Ansible playbook for pypicloud, which can be found here: Configuration¶ Remember when you generated a config file in getting started? Well we can do the same thing with a different flag to generate a default production config file. $ ppc-make-config -p prod.ini You should make extra certain that session.secure is true if the server is visible to the outside world. This goes hand-in-hand with ensuring that your server always uses HTTPS. If you set up access control and don’t use HTTPS, someone will just sniff your credentials. And then you’ll feel silly. WSGI Server¶ You probably don’t want to use waitress for your production server, though it will work fine for small deploys. I recommend using uWSGI with NGINX. It’s fast and mature. After creating your production config file, it will have a section for uwsgi. You can run uwsgi with $ pip install uwsgi $ uwsgi --ini-paste-logged prod.ini Now uwsgi is running, but it’s not speaking HTTP. It speaks a special uwsgi protocol that is used to communicate with nginx. Below is a sample nginx configuration that will listen on port 80 and send traffic to uwsgi. server { listen 80; server_name pypi.myserver.com; location / { include uwsgi_params; uwsgi_pass 127.0.0.1:3031; } } Note When using access control in production, you may need pip to pass up a username/password. Do that by just putting it into the url in the canonical way: pip install mypkg -i Warning If you are using pypi.fallback = cache, make sure your uWSGI settings includes enable-threads = true. The package downloader uses threads.
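To illustrate the warning above: the generated prod.ini already contains a [uwsgi] section, and the fragment below only shows the options relevant to this page (the socket address mirrors the nginx example, and enable-threads is the setting the cache fallback needs). Treat it as an illustrative excerpt, not a complete configuration.

[uwsgi]
; speak the uwsgi protocol on the address nginx forwards to
socket = 127.0.0.1:3031
; required when pypi.fallback = cache, because the package downloader uses threads
enable-threads = true
processes = 4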
http://pypicloud.readthedocs.io/en/latest/topics/deploy.html
2018-01-16T13:18:21
CC-MAIN-2018-05
1516084886436.25
[]
pypicloud.readthedocs.io
Setup for People New to Crowdkeep? Here's how to setup your employee Crowdkeep account to manage your own time Installing the iOS and Android App When working outside company facilities, taking Time-Off, on Holiday, and Sick. Written by Nazli Tropha
http://docs.crowdkeep.com/setup-for-people
2018-01-16T13:45:56
CC-MAIN-2018-05
1516084886436.25
[]
docs.crowdkeep.com
Mirror Driver INF File Use the Mirror.inf sample mirror driver INF file as a template for constructing your own mirror driver INF file. For more information, see Installing a Boot Driver and INF File Sections and Directives. Note Starting with Windows 8, mirror drivers will not install on the system. For more information, see Mirror Drivers. Send comments about this topic to Microsoft
https://docs.microsoft.com/en-us/windows-hardware/drivers/display/mirror-driver-inf-file
2018-01-16T14:25:28
CC-MAIN-2018-05
1516084886436.25
[]
docs.microsoft.com
vRealize Business for Cloud provides users greater visibility into the financial aspects of their IaaS delivery and lets them optimize and improve these operations. The architecture illustrates the main components of vRealize Business for Cloud, the server, FactsRepo inventory service, data transformation service, data collection services, and reference database. Data Collection Services Data collection services include a set of services for each private and public cloud endpoint such as vCenter Server, vCloud Director, and AWS for retrieving both inventory information (servers, virtual machines, clusters, storage devices, and associations between them) and usage (CPU and memory) statistics. The data collected from data collection services is used for cost calculations. FactsRepo Inventory Service It is an inventory service built on MongoDB to store the collected data that the vRealize Business for Cloud server uses for the cost computation. Data Transformation Service The data transformation service converts the data received from data collection services into the structures consumable by FactsRepo. The data transformation service is a single point of aggregation of data from all data collectors. vRealize Business for Cloud Server vRealize Business for Cloud server is a web application that runs on Pivotal tc Server. vRealize Business for Cloud has multiple data collection services that run periodically to collect inventory information and statistics and uses vPostgres as the persistent store. The data collected from data collection services is used for cost calculations The vPostgres stores only computed data; FactsRepo stores raw data. Reference Database This component is responsible for providing default, out-of-the-box costs for each of the supported cost drivers. Reference database is updated automatically or manually, and user can download the latest data set and import the data set into vRealize Business for Cloud. The new values affect cost calculation. Reference data that is used depends on currency you select during installation. You cannot change the currency configuration after deploying vRealize Business for Cloud. Communication between Server and Reference Database Reference database is a compressed and encrypted file, which the users can download and install manually or update automatically. You can update the most current version of reference database. For more information, see Update the Reference Database for vRealize Business for Cloud. Other Sources of Information These sources are optional, and are used only if installed and configured. The sources include vRealize Automation, vCloud Director, vRealize Operations Manager, Amazon Web Services (AWS), Microsoft Azure, and EMC Storage Resource Manager (SRM). How vRealize Business for Cloud works vRealize Business for Cloud collects data from external sources continuously and periodically updates FactsRepo. The collected data can be viewed on the dashboard or can generate the report. The data synchronization or update happens at regular interval. However, you can manually trigger the data collection process when the inventory changes occur, such as initialization of the system or addition of a private, public, or hybrid cloud account. External Interfaces Below are the interfaces/APIs published to external applications.
https://docs.vmware.com/en/vRealize-Business/7.3/com.vmware.vRBforCloud.install.doc/GUID-3A657F40-D9B9-4645-B5F3-5F6720524C71.html
2018-04-19T17:10:06
CC-MAIN-2018-17
1524125937015.7
[array(['images/GUID-579C67F5-69C9-4804-9095-5CF29315AEAB-high.png', None], dtype=object) ]
docs.vmware.com
ContentTypeProfileConfig The configuration for a field-level encryption content type-profile mapping. Contents - ContentTypeProfiles The configuration for a field-level encryption content type-profile. Type: ContentTypeProfiles object Required: No - ForwardWhenContentTypeIsUnknown. Type: Boolean Required: Yes See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_ContentTypeProfileConfig.html
2018-04-19T17:28:03
CC-MAIN-2018-17
1524125937015.7
[]
docs.aws.amazon.com
Appendix E - IAS Events in the Windows 2000 System Event Log When event logging is enabled from the Service tab in the properties of an IAS server, the events listed in Table 7 can be used to troubleshoot problems with your IAS or NAS configuration. Table 7 - IAS Events in the Windows 2000 System Event Log Event log message Meaning “Unknown user name or bad password.” “The specified user does not exist.” “The specified domain does not exist.” The user might have typed Windows 2000 Server Help. If the remote access server is a member of Some NASs automatically strip the domain name from the user name before forwarding the user name to a RADIUS server. Turn off the feature that strips the domain name from the user name. For more information, see your NAS documentation. must also enable CHAP on the domain that are not permitted by a remote access policy (for example, during an unauthorized time period, using an unauthorized port type, calling from an unauthorized on the domain controller (or in Local Users and Groups) to verify.
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-2000-server/bb742393(v=technet.10)
2018-04-19T18:27:11
CC-MAIN-2018-17
1524125937015.7
[]
docs.microsoft.com
When CS Time is installed it will only keep specific data younger than 365 days. The data files affected are: clockings, daily hours, warnings and leave. You can change the Auto-delete settings to keep the data in the above files for up to 9999 days (that's approximately 27+ years). To do so, go to the Configuration Module and open the System Options. Click on the Auto-Delete option under the Processor section and change the number of days next to each file. You can also set the time during the day during which auto-deletes must not happen. Usually this is set so that deletions happen after the previous day's hours have been processed and after normal working hours.
http://docs.tnasoftware.com/999_FAQ/Extend_period_CS_Time_keeps_data_for
2018-04-19T17:19:21
CC-MAIN-2018-17
1524125937015.7
[]
docs.tnasoftware.com
FieldLevelEncryptionProfileList List of field-level encryption profiles. Contents - Items The field-level encryption profile items. Type: Array of FieldLevelEncryptionProfileSummary objects Required: No - MaxItems The maximum number of field-level encryption profiles you want in the response body. Type: Integer Required: Yes - NextMarker If there are more elements to be listed, this element is present and contains the value that you can use for the Markerrequest parameter to continue listing your profiles where you left off. Type: String Required: No - Quantity The number of field-level encryption profiles. Type: Integer Required: Yes See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_FieldLevelEncryptionProfileList.html
2018-04-19T17:27:47
CC-MAIN-2018-17
1524125937015.7
[]
docs.aws.amazon.com
Infinispan's .xml configuration file (infinispan.xml in attachment). The same setup can be achieved programmatically. The names of the site (case sensitive) should match the name of a site as defined within JGroups' RELAY2 protocol configuration file. Besides the global configuration, each cache specifies its backup policy in the "site" element. Non-transactional caches: In the case of non-transactional caches the replication happens during each operation. Given that data is sent in parallel to backups and local caches, it is possible for the operations to succeed locally and fail remotely, or the other way around, causing inconsistencies.
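Because the XML fragments referenced above did not survive extraction, here is a rough, hedged sketch of the declarative shape such a setup can take. Every element and attribute name below is an approximation based on the general Infinispan cross-site configuration model; check the XML schema shipped with your Infinispan version for the exact names before relying on it.

<!-- illustrative only: the global transport names the local site (must match the RELAY2 site name) -->
<global>
   <site local="LON"/>
</global>

<!-- illustrative only: a cache that backs up its data synchronously to the NYC site -->
<namedCache name="users">
   <sites>
      <backups>
         <backup site="NYC" strategy="SYNC" timeout="12000"/>
      </backups>
   </sites>
</namedCache>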
https://docs.jboss.org/author/display/ISPN52/Cross+site+replication
2018-04-19T17:55:34
CC-MAIN-2018-17
1524125937015.7
[array(['/author/download/attachments/65274061/Cross+datacenter+replication-user+doc+%282%29.png?version=1&modificationDate=1348081998000', None], dtype=object) ]
docs.jboss.org
The Wall of Wins (WoW). Open and share the Wall of Wins: To activate your WoW, go to Share -> Wall of Wins from Engage. Once you click that section, a separate tab will open in your browser with the wall itself. Add a win by clicking Add Win, then add a title and your explanation of why it's a win. It's super important to write a quick description of what was done so the employees are reminded that the change came as a result of TINYpulse feedback. Click Save to post it to the Wall of Wins. Head over to the Wall of Wins to check out your new addition and don't forget to point it out to your employees! We at TINYpulse hear frequently from customers that employees don't notice all of the positive change that comes as a result of TINYpulse feedback. So be sure to reiterate and celebrate Wins whenever you can. To keep employees updated throughout the process, you can use the Wins Board to track progress of suggestions and celebrate Wins directly in the employee portal.
https://docs.tinypulse.com/hc/en-us/articles/115004756114-Celebrate-change-with-the-Wall-of-Wins
2018-04-19T17:20:43
CC-MAIN-2018-17
1524125937015.7
[array(['https://s3.amazonaws.com/uploads.intercomcdn.com/i/o/15750601/ba51687b88f645ed473db000/file-Kn2F0uzEA9.png', None], dtype=object) array(['https://s3.amazonaws.com/uploads.intercomcdn.com/i/o/15750611/4bb52dbe7af3d1fe4d918be9/file-OyP9rz1NcV.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/34028289/3f4090f46e3c1bbbf7f605a7/wall.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/41439427/1ded6dd4b162f47d4141ba9f/Screen+Shot+2017-12-04+at+1.48.26+PM.png', None], dtype=object) array(['https://d2mckvlpm046l3.cloudfront.net/53249b51e89435eaba5995a65ee27959c841e813/http%3A%2F%2Fd33v4339jhl8k0.cloudfront.net%2Fdocs%2Fassets%2F5637af3cc697910ae05f001a%2Fimages%2F58d2316bdd8c8e7f5974c9c0%2Ffile-IUbVdaCLCV.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/42188681/fb605e1a7fe70d28db667fb7/Screen+Shot+2017-12-11+at+1.07.22+PM.png', None], dtype=object) ]
docs.tinypulse.com
Syntax sp_expired_subscription_cleanup [ [ @publisher = ] 'publisher' ] Arguments - [ @publisher= ] 'publisher' Is the name of a non-SQL Server publisher. publisher is sysname, with a default value of NULL. You should not specify this parameter for a SQL Server Publisher. Return Code Values 0 (success) or 1 (failure) Remarks. Permissions Only members of the sysadmin fixed server role or db_owner fixed database role can execute sp_expired_subscription_cleanup. See Also Reference sp_mergesubscription_cleanup (Transact-SQL) sp_subscription_cleanup (Transact-SQL) System Stored Procedures (Transact-SQL) Help and Information Getting SQL Server 2005 Assistance
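For example, to run the cleanup for a hypothetical Oracle Publisher named OraclePub (the name is only an illustration), the call would be:

-- Executed at the Publisher; omit @publisher when the Publisher is a SQL Server instance.
EXEC sp_expired_subscription_cleanup @publisher = N'OraclePub';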
https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2005/ms181566(v=sql.90)
2018-04-19T18:46:12
CC-MAIN-2018-17
1524125937015.7
[]
docs.microsoft.com
All content with label as7+s. Related Labels: wildfly, realm, jboss, wcm, tutorial, 2012, eap, ssl, eap6, started, security, getting, getting_started, cluster, ls, mod_jk, examples, domain, gatein, favourite, httpd, demo, configuration, ha, installation, mod_cluster, http ( - as7, - s )
https://docs.jboss.org/author/label/as7+s
2018-04-19T17:46:01
CC-MAIN-2018-17
1524125937015.7
[]
docs.jboss.org
Container for the parameters to the GetIdentityDkimAttributes operation. Returns the current status of Easy DKIM signing for an entity. For domain name identities, this action also returns the DKIM tokens that are required for Easy DKIM signing, and whether Amazon SES has successfully verified that these tokens have been published. This action is throttled at one request per second. For more information about creating DNS records using DKIM tokens, go to the Amazon SES Developer Guide. Assembly: AWSSDK (Module: AWSSDK) Version: 1.5.60.0 (1.5.60.0) Inheritance Hierarchy
https://docs.aws.amazon.com/sdkfornet1/latest/apidocs/html/T_Amazon_SimpleEmail_Model_GetIdentityDkimAttributesRequest.htm
2018-12-10T06:49:02
CC-MAIN-2018-51
1544376823318.33
[array(['../icons/collapse_all.gif', None], dtype=object) array(['../icons/collapse_all.gif', None], dtype=object) array(['../icons/collapse_all.gif', None], dtype=object)]
docs.aws.amazon.com
Transfer CFT 3.2.2 Users Guide pkifname CFTPARM, PKIENTITY [PKIFNAME = filename] Name of the local certificate internal datafile in which the operation is to be performed. On each OS a default name is assigned to the local certificate database. The logical filename &CFTPKU can be used. Return to Command index Related Links
https://docs.axway.com/bundle/Transfer_CFT_322_UsersGuide_allOS_en_HTML5/page/Content/CFTUTIL/Parameter_index/pkifname.htm
2018-12-10T06:45:04
CC-MAIN-2018-51
1544376823318.33
[]
docs.axway.com
Now that you have created a form (a contact form in our case), the most commonly used contact-form fields are already in place. But there might be cases where you do not want exactly these fields, or you may need to add extra fields or re-order them. This section describes how to do these things. To customize a form, you first need to understand the basic concept of form structure. A form must have one or more placeholders in it; a placeholder is where the fields are placed. To add a field to your form, you have to create a placeholder first if there is not one already. You may also add a field to an existing placeholder. In the following image, you can see two placeholders (marked with red bordered boxes). Area (1) is a placeholder with no form fields, while area (2) is a placeholder with 4 form fields (Name, Email, Subject and Message). If you hover over any placeholder in the form, the placeholder is highlighted with a grey bordered box around it (like the image below), so you can see which placeholder contains which fields. When you hover over any placeholder, some buttons also appear at the top of it (like the image below). Let us discuss the functionality of each button of this panel one by one. The Add Field button lets you create a field. Clicking this button pops up the list of fields (like the image below), from where you can select the field you want to add. You can close this list panel by clicking the Close Panel button or the Cross button shown in the image above. Clicking any field button from the field list adds the corresponding field to the selected placeholder (the selected placeholder is the one you hovered the mouse over). The Add Placeholder button lets you add a blank placeholder just after the selected placeholder. The Clone Placeholder button lets you clone the selected placeholder with the form fields in it (if there are any) and append it next to it. The Copy Placeholder button also copies the placeholder and its content, but it does not paste it as the Clone Placeholder button does; instead it only copies the placeholder and its content. After copying, another button named Paste Placeholder appears in each placeholder's button panel. Click the Paste Placeholder button in the placeholder next to which you want to paste it. The Remove Placeholder button is self-explanatory: it removes the selected placeholder.
http://docs.cybercraftit.com/docs/neoforms-user-documentation/how-to-create-form/customize-form/
2018-12-10T07:54:36
CC-MAIN-2018-51
1544376823318.33
[array(['http://docs.cybercraftit.com/wp-content/uploads/2018/07/4.png', None], dtype=object) array(['http://docs.cybercraftit.com/wp-content/uploads/2018/07/placeholder-with-red-border.png', None], dtype=object) array(['http://docs.cybercraftit.com/wp-content/uploads/2018/07/placeholder-panel.png', None], dtype=object) array(['http://docs.cybercraftit.com/wp-content/uploads/2018/07/add-field-button.png', None], dtype=object) ]
docs.cybercraftit.com
Gets the handle of the most recently loaded inactive LDB

int sm_ldb_get_inactive(void);

- 0 Success: The integer handle of the most recently inactivated LDB.
- -1 Failure: No LDBs are inactive.

sm_ldb_get_inactive searches the stack of loaded LDBs and returns the integer handle of the topmost LDB that is also inactive. Use this function together with sm_ldb_get_next_inactive to iterate over all inactive LDBs in order of most to least recently loaded. For example:

int h;
for ( h = sm_ldb_get_inactive(); h != -1; h = sm_ldb_get_next_inactive(h) )
{
    /* Do stuff with h */
}

sm_ldb_get_next_inactive
http://docs.prolifics.com/panther/html/prg_html/libfu190.htm
2018-12-10T06:54:20
CC-MAIN-2018-51
1544376823318.33
[]
docs.prolifics.com
SQL Server 2012: T-SQL Code Snippets The snippets in SQL Server 2012 are essentially templates that can expedite building database statements. Saleem Hakani Imagine a series of commands you always use when creating a Trigger, Table, Stored Procedure or even a Select statement. Now imagine having all those commands established and ready to use. You can greatly reduce the amount of time and code you have to write using the new T-SQL code snippets in SQL Server 2012. T-SQL snippets let you quickly build T-SQL statements without having to remember the commands or their syntax. You can use this feature to help reduce development time and increase productivity for your developers and DBAs. Snippet templates in SQL Server 2012 are based on XML with predefined fields and values. When you use a T-SQL snippet, these fields are highlighted and you can tab through each field and change the values as required. Snippets are categorized for ease of use. You can view and select various snippets based on the category. SQL Server 2012 introduces three types of snippets: - Default Snippets (or Expansion Snippets): These are code templates for various T-SQL commands you can quickly insert into your T-SQL code when creating tables, stored procedures, triggers and so on. - Surround Snippets: These are code templates that help you implement code constructs such as Begin End, If, While and so on. - Custom Snippets: You can create your own custom snippets that will appear with the snippet menu. Create Custom Snippets Let’s look at how to create a custom snippet and add it to the snippet menu. Creating and using a snippet is a three-step process: - Create a snippet using XML - Register the snippet in SQL Server Management Studio (SSMS) - Invoke the snippet when using Query Editor By default, all T-SQL snippets are stored in the following folder and saved as .snippet files: C:\Program Files\Microsoft SQL Server\110\Tools\Binn\ManagementStudio\SQL\Snippets\1033 Step 1. Create a T-SQL Snippet File with XML Here’s a snippet you can use to write a Select statement for any table (it will also let you use a CASE statement for an equality check on a column): CASE_END.SNIPPET File <?xml version="1.0" encoding="utf-8" ?> <CodeSnippets xmlns=""> <CodeSnippet Format="1.0.0"> <Header> <Title>Case-End</Title> <Description> Insert Case...End Construct. </Description> <Author> Saleem Hakani (Microsoft Corporation) </Author> <SnippetTypes> <SnippetType>Expansion</SnippetType> </SnippetTypes> </Header> <Snippet> <Code Language="SQL"> <![CDATA[ Select <Column_Name1>, <Column_Name2>, <Column_Name3>, <Column_Name4>= CASE <Column_Name4> WHEN '<value>' THEN '<Result>' WHEN '<value>' THEN '<Result>' WHEN '<value>' THEN '<Result>' WHEN '<value>' THEN '<Result>' ELSE 'Value not found' END, <Column_Name5>, <Column_Name6> From <Table_Name> Go ]> </Code> </Snippet> </CodeSnippet> </CodeSnippets> Step 2. Register the Snippet with SSMS Once you’ve created this file, use the Code Snippets Manager to register the snippet with SSMS. You can either add a new folder based on the snippet category or import individual snippets to the My Code Snippets folder. 
To add a snippet folder: - Launch SSMS - Select Tools from the menu items and click Code Snippets Manager, which launches the Snippet Manager - Click the "Add" button - Browse to the folder containing CASE_END.Snippet file, and click the Select Folder button The next step is to import the snippet in to SSMS: - Launch SSMS - Select Tools from the menu items and click Code Snippets Manager - Click the Import button at the bottom - Browse to the folder containing CASE_END.snippet file and select CASE_End.snippet file, then click the Open button Step 3. Invoke or Insert a T-SQL Snippet from Query Editor You now have a snippet called CASE_END that you can invoke from the query editor with the shortcut key by pressing CTRL + K + X. Then select the category folder in which you’ve stored the snippet. You could also right-click on the context menu in query editor and select Insert Snippet. You can also invoke a snippet by right-clicking on the context menu in the query editor. This will present you with various Snippet Options. Using these steps, you can create T-SQL code snippets and register them with SSMS. You can also create complex snippets of various regular tasks and make your life managing SQL Server much easier. Snippet Solutions Imagine you’re a developer or a DBA responsible for the security of your servers. You may have 500 logins in SQL Server, but you don’t know the server level roles to which these 500 logins are assigned. If you were to individually check each login’s properties, it would take hours or even days. Having an automated way to quickly check all server-level logins would reduce the time to code, as well as increase code accuracy and developer and DBA productivity. Here’s a snippet that will let you quickly look at server-level logins and their server-level roles and permissions. This SecuritySPY snippet list identifies server-level logins and the roles to which they’re assigned: <?xml version="1.0" encoding="utf-8" ?> <CodeSnippets xmlns=""> <CodeSnippet Format="1.0.0"> <Header> <Title>SQL_SecuritySPY - By Saleem Hakani (Microsoft Corporation)</Title> <Description> Shortcut for checking SQL Server Server Role Permissions </Description> <Author> Saleem Hakani (Microsoft Corporation) </Author> <SnippetTypes> <SnippetType>Expansion</SnippetType> </SnippetTypes> </Header> <Snippet> <Code Language="SQL"> <![CDATA[ --Author: Saleem Hakani (Microsoft Corporation) --Website: Select 'Login Name'= Substring(upper(SUSER_SNAME(SID)),1,40), 'Login Create Date'=Convert(Varchar(24),CreateDate), 'System Admin' = Case SysAdmin When 1 then 'YES (VERIFY)' When 0 then 'NO' End, 'Security Admin' = Case SecurityAdmin When 1 then 'YES (VERIFY)' When 0 then 'NO' End, 'Server Admin' = Case ServerAdmin When 1 then 'YES (VERIFY)' When 0 then 'NO' End, 'Setup Admin' = Case SetupAdmin When 1 then 'YES (VERIFY)' When 0 then 'NO' End, 'Process Admin' = Case ProcessAdmin When 1 then 'YES (VERIFY)' When 0 then 'NO' End, 'Disk Admin' = Case DiskAdmin When 1 then 'YES (VERIFY)' When 0 then 'NO' End, 'Database Creator' = Case DBCreator When 1 then 'YES (VERIFY)' When 0 then 'NO' End from Master.Sys.SysLogins order by 3 desc Go ]> </Code> </Snippet> </CodeSnippet> </CodeSnippets> Now you have a snippet called SecuritySPY. You can invoke this with the shortcut key from the query editor, as explained previously. You also can right-click on the context menu in query editor and select Insert Snippet. Creating and using snippets can streamline your SQL Server management tasks. 
Having a handful of predefined commands at your disposal keeps you from having to do the same thing over and over again.
https://docs.microsoft.com/en-us/previous-versions/technet-magazine/jj554304(v=msdn.10)
2018-12-10T05:57:58
CC-MAIN-2018-51
1544376823318.33
[array(['images/ff404193.saleem_hakani%28en-us%2cmsdn.10%29.jpg', 'Saleem Hakani Greg Steen'], dtype=object) ]
docs.microsoft.com
LoudVoice API \ Discussions \ Delete discussion Workflow Request: the code to send to the API Send a DELETE request to the resource /loudvoice/discussions/<discussion_token>.json to immediately remove a discussion. The <discussion_token> has to be replaced by the unique token of an existing discussion. You can also delete the discussion using its reference and the following endpoint: /loudvoice/discussions/discussion.json?discussion_reference=<discussion_reference>. To prevent you from unintentionally deleting an entry by mixing up the DELETE/GET methods, you have to include the url parameter confirm_deletion=true when calling the endpoint. Result: the code returned by the API The API will either return an HTTP status code 200 if the entry was successfully deleted or an appropriate message body with further details on the error that occurred.
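A complete request could therefore look like the curl call below. The subdomain and the key pair used for authentication are placeholders; substitute your own OneAll site subdomain and API credentials, and note the confirm_deletion flag the endpoint requires.

# <discussion_token>, the subdomain and the keys are placeholders
curl -X DELETE \
  "https://<your-subdomain>.api.oneall.com/loudvoice/discussions/<discussion_token>.json?confirm_deletion=true" \
  -u "<public_key>:<private_key>"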
http://docs.oneall.com/api/resources/loudvoice/discussions/delete/
2018-06-18T03:34:34
CC-MAIN-2018-26
1529267860041.64
[]
docs.oneall.com
Publishing folders and assets¶ To display folders and assets in Brand Connect, they first need to be published by an admin or contributor with publish permission. For more information about permissions, see Managing users and groups. This documentation page covers: - Manually publishing folders and assets - Unpublish folders and assets - Publishing folders and assets upon activation - Publish folders and assets by date - Publish rules Manually publish folders and assets to Brand Connect¶ To manually publish folders and assets, perform the following steps: - Sign in to Acquia DAM, and then navigate to the folder or asset you want to publish. - Click the Publish icon. You can also click the Pencil icon and select Publish / Unpublish. Light gray Publish icons indicate that the folder or asset is not published, while a dark gray Publish icon indicates that the folder or asset is published. - Select the Brand Connect portal that you would like to publish to, and whether you would like to also publish subfolders. - Click Save. Unpublish folders and assets¶ If you no longer need an asset, you can remove it from display. - Sign in to Acquia DAM, and then navigate to the folders or assets that you want to unpublish. - Click the Publish icon . You can also click the Pencil icon , and then click Publish / Unpublish. Light gray Publish icons indicate that the folder or asset is not published, while a dark gray Publish icon indicates that the folder or asset is published. - Select the X next to the Brand Connect portal that you would like to unpublish to and whether you would like to also unpublish subfolders. - Click Save. Publishing folders and assets upon activation¶ Admins may set their folders and assets to publish to Brand Connect upon activation. - Sign in to Acquia DAM, then click Brands in the top navigation. - Click the brand you want to configure. - In the top navigation, select Settings. - Check the box to publish folders and assets upon activation. Now when you activate an asset or folder, it is published. In order for folders and assets to be published upon upload, ensure that the default status of folders and assets is Active under the System Preferences. Publishing folders and assets by date¶ This feature gives admins and contributors with publish permissions the ability to schedule a day and time to publish an asset or folder to Brand Connect. As an example, you’re launching a new product. The campaign materials and product photos are uploaded, but they cannot be launched to the broader audience until a specific date. - Sign in to Acquia DAM and navigate to the folder(s) or asset(s) you’d like to schedule. - Click the Publish icon. You can also click the Pencil icon and select Publish / Unpublish. - Select the Brand Connect portal that you’d like to publish to and whether you’d like to also publish subfolders. - The folders/assets will default to publishing immediately. Click Change. - Enter the day and time that you’d like to have the folder(s) or asset(s) published. The time zone used is based on your computer’s time zone settings. - Click Schedule. - Click Edit to update the publish day and time, or click Cancel to delete. Publish rules¶ Published folders and assets follow specific rules. - Newly uploaded, created or moved assets and folders inherit publish status from its parent folder. Exceptions can occur in Brand Connect Basic accounts when the asset publish limit has been reached. - You can manually publish an asset or nested folder residing in an unpublished folder. 
This action will display only the selected asset (or nested folder) and its parent folder(s) in Brand Connect. - Individual assets in a published folder cannot be manually unpublished. - Inactive assets and folders won’t be published in Brand Connect. If you deactivate a published folder or asset, it will be unpublished.
https://docs.acquia.com/en/stable/dam/brand-connect/publish/
2018-06-18T03:28:40
CC-MAIN-2018-26
1529267860041.64
[]
docs.acquia.com
SmartTagRecognizers.Creator Property

Dim value As Integer
value = instance.Creator

int Creator { get; }

Property Value Type: System.Int32

Remarks If the specified. See Also Reference SmartTagRecognizers Interface SmartTagRecognizers Members Microsoft.Office.Interop.Word Namespace
https://docs.microsoft.com/en-us/previous-versions/office/developer/office-2010/ms260845(v=office.14)
2018-06-18T04:54:50
CC-MAIN-2018-26
1529267860041.64
[]
docs.microsoft.com
Form design visual indicators The UI displays the following visual indicators when designing forms in custom applications. You can only edit views and sections when you are in the same application scope as the form. Editable sections display: Section headings with a solid color background. A solid line around the section. A control to set the number of columns. A Delete this section button. Grip icons beside section headings. Grip icons beside fields. Figure 1. Visual indicators of editable sections Views and sections in another application scope display as read only. Read-only sections have: Section headings with a gray background. A gray line around the section. No control to set the number of columns. No Delete this section button. No grip icons beside section headings. No grip icons beside fields. Figure 2. Visual indicators of read-only sections Default form design permissions: By default, new application data tables have the following form design permissions.
https://docs.servicenow.com/bundle/kingston-application-development/page/build/applications/concept/c_FormDesignVisualIndicators.html
2018-06-18T03:31:50
CC-MAIN-2018-26
1529267860041.64
[]
docs.servicenow.com
Key invoice data into accounts payable using a vendor invoice Note We will not be accepting edits to this topic, because it is generated from a business process in Lifecycle Services. This task guide will help you create a vendor invoice from a purchase order and view the results of matching the purchase order, receipt, and invoice (3 way matching). Create a purchase order - Go to Accounts payable > Purchase orders > All purchase orders. - Click New. - In the Vendor account field, click the drop-down button to open the lookup. - Find a vendor to select. For example, scroll down to US-104. - Select vendor US-104. - Click OK. - In the Item number field, click the drop-down button to open the lookup. - Select an inventory item. For example, select item number 1000. - Expand or collapse the Line details section. - Click the Setup tab. - You can override the matching policy to use no matching, 2-way matching, or 3-way matching. - Expand or collapse the Line details section. - On the Action Pane, click Purchase. - Click Confirm. Receive the products - On the Action Pane, click Receive. - Click Product receipt. - In the Product receipt field, enter the product receipt number. For example, enter PR123. - Click OK to post the product receipt. - Close the page. Create a vendor invoice - Go to Accounts payable > Purchase orders > Purchase orders received but not invoiced. - Select the purchase order that you created. - On the Action Pane, click Invoice. - Click Invoice. - In the Number field, enter the invoice number. - In the Invoice description field, type a value. - In the Invoice date field, enter a date. - In the Unit price field, enter 1200. - Click Add line. - In the Item number field, click the drop-down button to open the lookup. - In the list, find the installation charge item number. For example, S0001 - Select the installation charge item number. - Note that matching has not been performed since you made the changes. - Click Update match status. - On the Action Pane, click Review. - Click Matching details. - The new line with services does not need to be matched so the status stays "Not performed". - Select the product receipt for the inventory item that you received. - The line with the product receipt was matched but there is a mismatch of quantity or price so it fails. - In the Unit price field, enter a number. - Now that the unit price matches, the status is updated to Passed. If your policy allows discrepancies or if matching is only a warning, you can still post the invoice. - Close the page. - Click Post. - Close the form. - Note that the purchase order is no longer listed as received but not invoiced.
https://docs.microsoft.com/en-us/dynamics365/unified-operations/financials/accounts-payable/tasks/key-invoice-data-ap-system-vendor-invoice
2018-06-18T04:14:02
CC-MAIN-2018-26
1529267860041.64
[]
docs.microsoft.com
Use migration with Storage vMotion to relocate a virtual machine’s configuration file and virtual disks while the virtual machine is powered on. About this task You can change the virtual machine’s execution host during a migration with Storage vMotion. Prerequisites Ensure that you are familiar with the requirements for Storage vMotion. See Storage vMotion Requirements and Limitations. Required privilege: Procedure - Right-click the virtual machine and select Migrate. - To locate a virtual machine, select a datacenter, folder, cluster, resource pool, host, or vApp. - Click the Related Objects tab and click Virtual Machines. - Select Change datastore and click Next. - Select the format for the virtual machine's disks. - Select a virtual machine storage policy from the VM Storage Policy drop-down menu. Storage policies specify storage requirements for applications that run on the virtual machine. - Select the datastore location where you want to store the virtual machine files. - Review the information on the Review Selections page.
https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.vcenterhost.doc/GUID-A15EE2F6-AAF5-40DC-98B7-0DF72E166888.html
2018-06-18T03:56:04
CC-MAIN-2018-26
1529267860041.64
[]
docs.vmware.com
DeleteResourcePolicy Deletes a resource policy from this account. This revokes the access of the identities in that policy to put log events to this account. Request Syntax { "policyName": " string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. - policyName The name of the policy to be revoked. This parameter is required. Type: String Required: No - ResourceNotFoundException The specified resource does not exist. HTTP Status Code: 400 - ServiceUnavailableException The service cannot complete the request. HTTP Status Code: 500 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
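For instance, with the AWS SDK for Python (boto3) the same operation is exposed as delete_resource_policy on the CloudWatch Logs client; the policy name below is a placeholder.

import boto3

logs = boto3.client("logs")

# Revokes the access granted by the named resource policy (name is illustrative)
logs.delete_resource_policy(policyName="my-cross-account-policy")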
https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_DeleteResourcePolicy.html
2018-06-18T04:06:07
CC-MAIN-2018-26
1529267860041.64
[]
docs.aws.amazon.com
You can view the location of the virtual machine configuration and working files. This information is useful when you are configuring backup systems. Prerequisites Verify that you are connected to the vCenter Server or ESXi host on which the virtual machine runs. Verify that you have access to the virtual machine in the vSphere Client inventory list. Procedure - In the vSphere Client inventory, right-click the virtual machine and select Edit Settings. - Click the Options tab and select General Options. - Record the location of the configuration and working files and click OK to close the dialog box.
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.hostclient.doc/GUID-6C084CD4-1E1B-4FC4-8ACE-E93A8B2AC556.html
2018-06-18T04:15:38
CC-MAIN-2018-26
1529267860041.64
[]
docs.vmware.com
Regulation A+ FOR IMMEDIATE RELEASE 2015-49 Washington D.C., March 25, 2015 —. * * * FACT SHEET Regulation A+ SEC Open Meeting March 25, 2015 Highlights of the Final Rules The final rules, often referred to as Regulation A+, would implement Title IV of the JOBS Act and provide for two tiers of offerings: -. The exemption would not be available to companies that: - Are already SEC reporting companies and certain investment companies. - Have no specific business plan or purpose or have indicated their business plan is to engage in a merger or acquisition with an unidentified company. - Are seeking to offer and sell asset-backed securities or fractional undivided interests in oil, gas or other mineral rights. - Have been subject to any order of the Commission under Exchange Act Section 12(j) entered within the past five years. - Have not filed ongoing reports required by the rules during the preceding two years. - Are disqualified under the "bad actor" disqualification rules. The rules exempt securities in a Tier 2 offering from the mandatory registration requirements of Exchange Act Section 12(g) if the issuer meets all of the following conditions: - Engages services from a transfer agent registered with the Commission. - Remains subject to a Tier 2 reporting obligation. - Is current in its annual and semiannual reporting at fiscal year-end. - Does not exceed the applicable dollar thresholds. An issuer that exceeds the dollar and Section 12(g) registration thresholds would have a two-year transition period before it must register its class of securities, provided it timely files all of its ongoing reports required under Regulation A. Preemption of Blue Sky Law In light of the total package of investor protections included in amended Regulation A, the rules provide for the preemption of state securities law registration and qualification requirements for securities offered or sold to "qualified purchasers," defined to be any person to whom securities are offered or sold under a Tier 2 offering. Background.
https://506docs.com/regulation-a-2/
2018-06-18T03:21:44
CC-MAIN-2018-26
1529267860041.64
[]
506docs.com
Everything you need to know about using our products. The Nomad 883 Pro is CE certified. The Nomad power supply is CE & FCC Certified. The Shapeoko power supply is CE and FCC Certified and the motors are CE Certified.
https://docs.carbide3d.com/general-faq/are-your-machines-certified/
2019-08-17T23:05:27
CC-MAIN-2019-35
1566027313501.0
[]
docs.carbide3d.com