content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
There are many reasons why a cluster might fail or be slow in processing data. The following sections list the most common issues and suggestions for fixing them. Topics Check the following: The cluster age is less than two months. Amazon EMR preserves metadata information about completed clusters for your reference, at no charge, for two months. The console does not provide a way to delete completed clusters from the console; these are automatically removed for you after two months. You have permissions to view the cluster. If the VisibleToAllUsers property is set to false, other users in the same IAM account will not be able to view a cluster. You are viewing the correct region.
http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/emr-troubleshoot-errors.html
2015-08-28T05:07:16
CC-MAIN-2015-35
1440644060413.1
[]
docs.aws.amazon.com
JSessionStorageApc:: construct From Joomla! Documentation Revision as of 20::__construct Description Constructor. Description:JSessionStorageApc:: construct [Edit Descripton] SeeAlso:JSessionStorageApc:: construct [Edit See Also] User contributed notes <CodeExamplesForm />
https://docs.joomla.org/index.php?title=API17:JSessionStorageApc::_construct&direction=next&oldid=57638
2015-08-28T05:34:15
CC-MAIN-2015-35
1440644060413.1
[]
docs.joomla.org
Difference between revisions of "Introduction to Joomla! templates" From Joomla! Documentation Redirect page Revision as of 03:52, 30 April 2013 (view source)Wilsonge (Talk | contribs) (Wilsonge moved page Introduction to Joomla! templates to J1.5:Getting Started with Templates: Move to 1.5 namespace with same name as 2.5 and 3.1 series) Latest revision as of 08:11, 5 June 2013 (view source) Wilsonge (Talk | contribs) (Redirect to getting started with templates main page rather than 1.5 specific page) Line 1: Line 1: −#REDIRECT [[J1.5:Getting Started with Templates]]+#REDIRECT [[Getting_Started_with_Templates]] Latest revision as of 08:11, 5 June 2013 Getting Started with Templates Retrieved from ‘’
https://docs.joomla.org/index.php?title=Introduction_to_Joomla!_templates&diff=99916&oldid=86328
2015-08-28T05:21:20
CC-MAIN-2015-35
1440644060413.1
[]
docs.joomla.org
Revision history of "JHtmlSliders::endSliders::end/11.1 to API17:JHtmlSliders::end without leaving a redirect (Robot: Moved page)
https://docs.joomla.org/index.php?title=JHtmlSliders::end/11.1&action=history
2015-08-28T05:36:06
CC-MAIN-2015-35
1440644060413.1
[]
docs.joomla.org
Information for "GHOP students 2007-2008/Justo de Rivera" Basic information Display titleGHOP students 2007-2008/Justo de Rivera Default sort keyGHOP students 2007-2008/Justo de Rivera Page length (in bytes)766 Page ID1007CirTap (Talk | contribs) Date of page creation11:29, 20 March 2008 Latest editorBysukro (Talk | contribs) Date of latest edit07:48, 16 March 2013 Total number of edits3 Total number of distinct authors3 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Page properties Transcluded template (1)Template used on this page: Template:- (view source) Pages transcluded on (2)Templates used on this page: GHOP students 2007-2008 (view source) User talk:Justo.derivera (view source) Retrieved from ‘’
https://docs.joomla.org/index.php?title=GHOP_students_2007-2008/Justo_de_Rivera&action=info
2015-08-28T05:48:32
CC-MAIN-2015-35
1440644060413.1
[]
docs.joomla.org
Information for "Components Messaging Read" Basic information Display titleHelp25:Components Messaging Read Default sort keyComponents Messaging Read Page length (in bytes)1,640 Page ID230:18, 25 December 2011 Latest editorWilsonge (Talk | contribs) Date of latest edit08:51, 16 March 2013 Total number of edits11 Total number of distinct authors3 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Page properties Transcluded templates (7)Templates used on this page: Template:Cathelp (view source) Template:Extension DPL (view source) Template:Help screen navbox (view source) Template:Help screens 2.5 navbox (view source) Template:Rarr (view source) Chunk25:Help screen toolbar icon Cancel (view source) Chunk25:Help screen toolbar icon Help (view source) Retrieved from ‘’
https://docs.joomla.org/index.php?title=Help25:Components_Messaging_Read&action=info
2015-08-28T06:20:33
CC-MAIN-2015-35
1440644060413.1
[]
docs.joomla.org
Help Center Local Navigation. Next topic: Delete a member from a group Previous topic: Display the group barcode on your device Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/smartphone_users/deliverables/18720/Invite_a_member_to_be_BlackBerry_Messenger_contact_827857_11.jsp
2014-03-07T14:12:37
CC-MAIN-2014-10
1393999643993
[]
docs.blackberry.com
. 06:31JDOC:Translating Links/ja (16 changes; hist; +303) [Richell×16] 06:11Category:Documentation Translation/ja (2 changes; hist; +134) [Richell×2] 06:11(cur; prev; +32) Richell 06:08Template style/ja (2 changes; hist; +68) [Richell×2] 06:07Template/ja (6 changes; hist; +153) [Richell×6] 06:03Split menus/ja (2 changes; hist; +51) [Richell×2] N 06:03User:Thewallpaperman (diff; hist; +3,468) Thewallpaperman 06:01Module/ja (10 changes; hist; +1,077) [Richell×10] 06:01(cur; prev; +32) Richell 05:56(cur; prev; +102) Richell 05:52(cur; prev; +223) Richell 05:46(cur; prev; +148) Richell 05:42(cur; prev; +156) Richell 05:39(cur; prev; +105) Richell 05:35(cur; prev; +121) Richell 05:21Chunk:Anchor/ja (5 changes; hist; +275) [Richell×5] 05:20(cur; prev; +80) Richell 05:18(cur; prev; +96) Richell 05:18(User creation log) User account Thewallpaperman (Talk | contribs) was created 05:12Glossary/ja (diff; hist; +31) Richell 05:11Extension/ja (2 changes; hist; +24) [Richell×2] 05:11J3.2:Developing a MVC Component/Adding a menu type to the site part (diff; hist; -31) Josh sos
http://docs.joomla.org/index.php?title=Special:RecentChanges&days=30&from=
2014-03-07T14:10:30
CC-MAIN-2014-10
1393999643993
[]
docs.joomla.org
Roots 5.0.0 Unfortunately the WordPress Roots theme doesn’t use WordPress conventions and causes Javascript conflicts and other related issues due to this. It doesn’t take 3rd party plugins into consideration. Theme: Roots Version: 5.0.0 Summary: Affects all our plugins. Problems 1. Does not enqueue scripts 2. Uses dollar ($) object instead of […]Read more
http://docs.tribulant.com/tag/roots
2014-03-07T14:08:45
CC-MAIN-2014-10
1393999643993
[]
docs.tribulant.com
You can add VMware vCenter Servers as data source in vRealize Network Insight. Multiple VMware vCenter Servers can be added to vRealize Network Insight to start monitoring data. Note: To bring vRealize Network Insight data into VMware vCenter, use the vCenter Plug-in for vRealize Network Insight. To know about how to install and use the plug-in, see vCenter Plugin for vRealize Network Insight. Prerequisites - The predefined roles in the VMware vCenter server must have the following privileges assigned at root level that need to be propagated to the children roles: - System.Anonymous - System.Read - System.View - Global.Settings - Following VMware vCenter Server privileges are required to configure and use IPFIX: - Distributed switch: Modify and Port configuration operation - dvPort group: Modify and Policy operationNote: IPFIX is supported on the following VMware ESXi versions: - 5.5 Update 2 (Build 2068190) and later - 6.0 Update 1b (Build 3380124) and later - VMware VDS 5.5 and later - To identify the VM to VM path, you must install VMware tools on all the VMs in the data center . Procedure - Go to . - Click Add Source. - Under VMware Managers, select VMware vCenter. - Provide the following details: -: - Select Enable Netflow (IPFIX) on this vCenter to enable IPFIX.For more information on IPFIX, see the Enabling IPFIX Configuration on VDS and DVPG topic.Note: - You cannot enable IPFIX on DVPGs that have NSX-T Edges connected. - If you enable IPFIX in both VMware vCenter and VMware NSX Manager, vRealize Network Insight automatically detects and removes flow redundancies by disabling IPFIX on few of the DVPGs for the associated vCenter. - Add advanced data collection sources to your VMware vCenter Server system. - (Optional) In the Nickname text box, enter a nickname. - (Optional) In the Notes text box, add a note if necessary. - Click Submit.
https://docs.vmware.com/en/VMware-vRealize-Network-Insight/6.5.1/com.vmware.vrni.using.doc/GUID-B9F6B6B4-5426-4752-B852-B307E49E86D1.html
2022-09-25T02:41:53
CC-MAIN-2022-40
1664030334332.96
[]
docs.vmware.com
policy route route-map <map-name> rule <rule-num> set community <community> Modifies a BGP community only if it matches a prefix-list. - map-name - The name of a defined route map. - list-num - The number of a defined community list. - rule-num - The number of a defined community list rule. - aa:nn - Specifies the community in 4-octet, AS-value format. - local-AS - Advertises communities in local AS only (NO_EXPORT_SUBCONFED). - no-advertise - Does not advertise this route to any peer (NO_ADVERTISE). - no-export - Does not advertise outside of this AS of confederation boundary (NO_EXPORT). - internet - Specifies the 0 symbolic Internet community. - none - Specifies no communities. Configuration mode policy { route { route-map map-name { rule rule-num { action { deny permit match { ip { address { prefix-list prefix-num { set { community AA:NN local-AS no-advertise no-export internet none } } } } } } } } } } Use the set form of this command to to modify the BGP community attribute in a route.
https://docs.vyatta.com/en/supported-platforms/vrouter/configuration-vrouter/ip-routing/routing-policies/routing-policy-commands/policy-route-route-map-map-name-rule-rule-num-set-community-community
2022-09-25T01:16:18
CC-MAIN-2022-40
1664030334332.96
[]
docs.vyatta.com
Applies To: Windows Server 2016 You can use this topic to learn about Cloud Service Provider (CSP) deployment of RAS Gateway, including RAS Gateway pools, Route Reflectors, and deploying multiple gateways for individual tenants. The following sections provide brief overviews of some of the RAS Gateway new features so that you can understand how to use these features in the design of your gateway deployment. In addition, an example deployment is provided, including information about the process of adding new tenants, route synchronization and data plane routing, gateway and Route Reflector failover, and more. This topic contains the following sections. Using RAS Gateway New Features to Design Your Deployment - Adding New Tenants and Customer Address (CA) Space EBGP Peering Route Synchronization and Data Plane Routing How Network Controller Responds to RAS Gateway and Route Reflector Failover Advantages of Using New RAS Gateway Features Using RAS Gateway New Features to Design Your Deployment RAS Gateway includes multiple new features that change and improve the way in which you deploy your gateway infrastructure in your datacenter. BGP Route Reflector The Border Gateway Protocol (BGP) Route Reflector capability is now included with RAS Gateway, and provides an alternative to BGP full mesh topology that is normally required for route synchronization between routers. With full mesh synchronization, all BGP routers must connect with all other routers in the routing topology. When you use Route Reflector, however, the Route Reflector is the only router that connects with all of the other routers, called BGP Route Reflector clients, thereby simplifying route synchronization and reducing network traffic. The Route Reflector learns all routes, calculates best routes, and redistributes the best routes to its BGP clients. For more information, see What's New in RAS Gateway. Gateway Pools In Windows Server 2016, you can create many gateway pools of different types. Gateway pools contain many instances of RAS Gateway, and route network traffic between physical and virtual networks. For more information, see What's New in RAS Gateway and RAS Gateway High Availability. Gateway Pool Scalability You can easily scale a gateway pool up or down by adding or removing gateway VMs in the pool. Removal or addition of gateways does not disrupt the services that are provided by a pool. You can also add and remove entire pools of gateways. For more information, see What's New in RAS Gateway and RAS Gateway High Availability. M+N Gateway Pool Redundancy Every gateway pool is M+N redundant. This means that an 'M' number of active gateway VMs are backed up by an 'N' number of standby gateway VMs. M+N redundancy provides you with more flexibility in determining the level of reliability that you require when you deploy RAS Gateway. For more information, see What's New in RAS Gateway and RAS Gateway High Availability. Example Deployment The following illustration provides an example with eBGP peering over site-to-site VPN connections configured between two tenants, Contoso and Woodgrove, and the Fabrikam CSP datacenter. In this example, Contoso requires additional gateway bandwidth, leading to the gateway infrastructure design decision to terminate the Contoso Los Angeles site on GW3 instead of GW2. Because of this, Contoso VPN connections from different sites terminate in the CSP datacenter on two different gateways. Both of these gateways, GW2 and GW3, were the first RAS Gateways configured by Network Controller when the CSP added the Contoso and Woodgrove tenants to their infrastructure. Because of this, these two gateways are configured as Route Reflectors for these corresponding customers (or tenants). GW2 is the Contoso Route Reflector, and GW3 is the Woodgrove Route Reflector - in addition to being the CSP RAS Gateway termination point for the VPN connection with the Contoso Los Angeles HQ site. Note One RAS Gateway can route virtual and physical network traffic for up to one hundred different tenants, depending on the bandwidth requirements of each tenant. As Route Reflectors, GW2 sends Contoso CA Space routes to Network Controller, and GW3 sends Woodgrove CA Space routes to Network Controller. Network Controller pushes Hyper-V Network Virtualization policies to the Contoso and Woodgrove virtual networks, as well as RAS policies to the RAS Gateways and load balancing policies to the Multiplexers (MUXes) that are configured as a Software Load Balancing pool. Adding New Tenants and Customer Address (CA) Space eBGP Peering When you sign a new customer and add the customer as a new tenant in your datacenter, you can use the following process, much of which is automatically performed by Network Controller and RAS Gateway eBGP routers. Provision a new virtual network and workloads according to your tenant's requirements. If required, configure remote connectivity between the remote tenant Enterprise site and their virtual network at your datacenter. When you deploy a site-to-site VPN connection for the tenant, Network Controller automatically selects an available RAS Gateway VM from the available gateway pool and configures the connection. While configuring the RAS Gateway VM for the new tenant, Network Controller also configures the RAS Gateway as a BGP Router and designates it as the Route Reflector for the tenant. This is true even in circumstances where the RAS Gateway serves as a gateway, or as a gateway and Route Reflector, for other tenants. Depending on whether CA space routing is configured to use statically configured networks or dynamic BGP routing, Network Controller configures the corresponding static routes, BGP neighbors, or both on the RAS Gateway VM and Route Reflector. Note - After Network Controller has configured a RAS Gateway and Route Reflector for the tenant, whenever the same tenant requires a new site-to-site VPN connection, Network Controller checks for the available capacity on this RAS Gateway VM. If the original gateway can service the required capacity, the new network connection is also configured on the same RAS Gateway VM. If the RAS Gateway VM cannot handle additional capacity, Network Controller selects a new available RAS Gateway VM and configures the new connection on it. This new RAS Gateway VM associated with the tenant becomes the Route Reflector client of the original tenant RAS Gateway Route Reflector. Because RAS Gateway pools are behind Software Load Balancers (SLBs), the tenants' site-to-site VPN addresses each use a single public IP address, called a virtual IP address (VIP), which is translated by the SLBs into a datacenter-internal IP address, called a dynamic IP address (DIP), for a RAS Gateway that routes traffic for the Enterprise tenant. This public-to-private IP address mapping by SLB ensures that the site-to-site VPN tunnels are correctly established between the Enterprise sites and the CSP RAS Gateways and Route Reflectors. For more information about SLB, VIPs, and DIPs, see Software Load Balancing (SLB) for SDN. After the site-to-site VPN tunnel between the Enterprise site and the CSP datacenter RAS Gateway is established for the new tenant, the static routes that are associated with the tunnels are automatically provisioned on both the Enterprise and CSP sides of the tunnel. With CA space BGP routing, the eBGP peering between the Enterprise sites and the CSP RAS Gateway Route Reflector is also established. Route Synchronization and Data Plane Routing After eBGP peering is established between Enterprise sites and the CSP RAS Gateway Route Reflector, the Route Reflector learns all of the Enterprise routes by using dynamic BGP routing. The Route Reflector synchronizes these routes between all of the Route Reflector clients so that they are all configured with the same set of routes. Route Reflector also updates these consolidated routes, using route synchronization, to Network Controller. Network Controller then translates the routes into the Hyper-V Network Virtualization policies and configures the Fabric Network to ensure that End-to-End Data Path routing is provisioned. This process makes the tenant virtual network accessible from the tenant Enterprise sites. For Data Plane routing, the packets that reach the RAS Gateway VMs are directly routed to the tenant's virtual network, because the required routes are now available with all of the participating RAS Gateway VMs. Similarly, with the Hyper-V Network Virtualization policies in place, the tenant virtual network routes packets directly to the RAS Gateway VMs (without requiring to know about the Route Reflector) and then to the Enterprise sites over the site-to-site VPN tunnels. In addition. return traffic from the tenant virtual network to the remote tenant Enterprise site bypasses the SLBs, a process called Direct Server Return (DSR). How Network Controller Responds to RAS Gateway and Route Reflector Failover Following are two possible failover scenarios - one for RAS Gateway Route Reflector clients and one for RAS Gateway Route Reflectors - including information about how Network Controller handles failover for VMs in either configuration. VM Failure of a RAS Gateway BGP Route Reflector Client Network Controller takes the following actions when a RAS Gateway Route Reflector client fails. Note When a RAS Gateway is not a Route Reflector for a tenant's BGP infrastructure, it is a Route Reflector client in the tenant's BGP infrastructure. Network Controller selects an available standby RAS Gateway VM and provisions the new RAS Gateway VM with the configuration of the failed RAS Gateway VM. Network Controller updates the corresponding SLB configuration to ensure that the site-to-site VPN tunnels from tenant sites to the failed RAS Gateway are correctly established with the new RAS Gateway. Network Controller configures the BGP Route Reflector client on the new gateway. Network Controller configures the new RAS Gateway BGP Route Reflector client as active. The RAS Gateway immediately starts peering with the tenant's Route Reflector to share routing information and to enable eBGP peering for the corresponding Enterprise site. VM Failure for a RAS Gateway BGP Route Reflector Network Controller takes the following actions when a RAS Gateway BGP Route Reflector fails. Network Controller selects an available standby RAS Gateway VM and provisions the new RAS Gateway VM with the configuration of the failed RAS Gateway VM. Network Controller configures the Route Reflector on the new RAS Gateway VM, and assigns the new VM the same IP address that was used by the failed VM, thereby providing route integrity despite the VM failure. Network Controller updates the corresponding SLB configuration to ensure that the site-to-site VPN tunnels from tenant sites to the failed RAS Gateway are correctly established with the new RAS Gateway. Network Controller configures the new RAS Gateway BGP Route Reflector VM as active. The Route Reflector immediately becomes active. The site-to-site VPN tunnel to the Enterprise is established, and the Route Reflector uses eBGP peering and exchanges routes with the Enterprise site routers. After BGP route selection, the RAS Gateway BGP Route Reflector updates tenant Route Reflector clients in the datacenter, and synchronizes routes with Network Controller, making the End-to-End Data Path available for tenant traffic. Advantages of Using New RAS Gateway Features Following are a few of the advantages of using these new RAS Gateway features when designing your RAS Gateway deployment. RAS Gateway scalability Because you can add as many RAS Gateway VMs as you need to RAS Gateway pools, you can easily scale your RAS Gateway deployment to optimize performance and capacity. When you add VMs to a pool, you can configure these RAS Gateways with site-to-site VPN connections of any kind (IKEv2, L3, GRE), eliminating capacity bottlenecks with no down time. Simplified Enterprise Site Gateway Management When your tenant has multiple Enterprise sites, the tenant can configure all sites with one remote site-to-site VPN IP address and a single remote neighbor IP address - your CSP datacenter RAS Gateway BGP Route Reflector VIP for that tenant. This simplifies gateway management for your tenants. Fast Remediation of Gateway Failure To ensure a fast failover response, you can configure the BGP Keepalive parameter time between edge routes and the control router to a short time interval, such as less than or equal to ten seconds. With this short keep alive interval, if a RAS Gateway BGP edge router fails, the failure is quickly detected and Network Controller follows the steps provided in previous sections. This advantage might reduce the need for a separate failure detection protocol, such as Bidirectional Forwarding Detection (BFD) protocol.
https://docs.microsoft.com/en-us/windows-server/networking/sdn/technologies/network-function-virtualization/ras-gateway-deployment-architecture
2017-05-22T19:01:40
CC-MAIN-2017-22
1495463605485.49
[array(['../../../media/ras-gateway-deployment-architecture/ras_gateway_architecture.png', 'eBGP peering over site-to-site VPN'], dtype=object) ]
docs.microsoft.com
$ edit scc restricted allowHostDirVolumePlugin: false allowHostNetwork: false allowHostPorts: false allowPrivilegedContainer: false allowedCapabilities: null apiVersion: v1 groups: - system:authenticated kind: SecurityContextConstraints metadata: creationTimestamp: 2015-09-08T07:37:54Z name: restricted (1) resourceVersion: "58" selfxref: /api/v1/securitycontextconstraints/restricted uid: 849d9228-55fc-11e5-976b-080027c5bfa9 runAsUser: type: MustRunAsRange seLinuxContext: type: MustRunAs To create a new SCC, first define the SCC in a JSON or YAML file: kind: SecurityContextConstraints apiVersion: v1 metadata: name: scc-admin allowPrivilegedContainer: true runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny users: - my-admin-user groups: - my-admin-group Although this example definition was written by hand, another way is to modify the definition obtained from examining a particular SCC. Then, run oc create passing the file to create it: $ oc create -f scc_admin.yaml securitycontextconstraints/scc-admin $ oc get scc NAME PRIV CAPS HOSTDIR SELINUX RUNASUSER privileged true [] true RunAsAny RunAsAny restricted false [] false MustRunAs MustRunAsRange scc-admin true [] false RunAsAny RunAsAny If you would like to reset your security context constraints to the default settings for any reason you may delete the existing security context constraints and restart your master. The default security context constraints will only be recreated if no security context constraints exist in the system.c edit scc <name> Add the user or group to the users or groups field of the SCC. For example, to allow the e2e-user access to the privileged SCC, add their user: $ oc edit scc privileged allowHostDirVolumePlugin: true allowPrivilegedContainer: true apiVersion: v1 groups: - system:cluster-admins - system:nodes kind: SecurityContextConstraints metadata: creationTimestamp: 2015-06-15T20:44:53Z name: privileged resourceVersion: "58" selfxref: /api/v1/securitycontextconstraints/privileged uid: 602a0838-139f-11e5-8aa4-080027c5bfa9 runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny users: - system:serviceaccount:openshift-infra:build-controller - e2e-user (1)c edit scc restricted Change the runAsUser.Type strategy to RunAsAny. or edit the privileged SCC: $ oc edit scc <name>.
https://docs.openshift.com/enterprise/3.0/admin_guide/manage_scc.html
2017-05-22T17:14:27
CC-MAIN-2017-22
1495463605485.49
[]
docs.openshift.com
Updating BLT¶ Updating a composer-managed version¶ If you are already using BLT via Composer, you can update to the latest version of BLT using composer. To update to the latest version of BLT that is compatible with your existing dependencies, run the following commands: composer update acquia/blt --with-dependencies Sometimes, the first command will fail to update to the latest version of BLT. This is typically because some other dependency prevents it. If this happens, run: composer require acquia/blt:^[latest-version] --no-update && composer update Where [latest-version] is the latest version of BLT. E.g., 8.7.0. This will cause Composer to update all of your dependencies (in accordance with your version constraints) and permit the latest version of BLT to be installed. 1. Check the release information to see if there are special update instructions for the new version. 1. Review and commit changes to your project files. 1. Rarely, you may need to refresh your local environment via blt local:setup. This will drop your local database and re-install Drupal. Modifying update behavior¶ By default BLT will modify a handful of files in your project to conform to the upstream template. If you'd like to prevent this, set extra.blt.update to false in composer.json: "extra": { "blt": { "update": false } } Please not that if you choose to do this, it is your responsibility to track upstream changes. This is very likely to cause issues when you upgrade BLT to a new version. Updating from a non-Composer-managed (very old) version¶ If you are using an older version of BLT that was not installed using Composer, you may update to the Composer-managed version by running the following commands: Remove any dependencies that may conflict with upstream acquia/blt. You may add these back later after the upgrade, if necessary. composer remove drush/drush drupal/console phing/phing phpunit/phpunit squizlabs/php_codesniffer symfony/yaml drupal/coder symfony/console --no-interaction --no-update composer remove drush/drush drupal/console phing/phing phpunit/phpunit squizlabs/php_codesniffer symfony/yaml drupal/coder symfony/console --no-interaction --no-update --dev composer config minimum-stability dev (conditional) If you are using Lightning, verify that your version constraint allows it to be updated to the latest stable version: composer require drupal/lightning:~8 --no-update Require acquia/blt version 8.3.0 as a dependency: composer require acquia/blt:8.3.0 --no-update Update all dependencies: composer update Execute update script: ./vendor/acquia/blt/scripts/blt/convert-to-composer.sh Upgrade to the latest version of BLT: composer require acquia/blt:^8.6.15 --no-update composer update If using Travis CI, re-initialize .travis.yml and re-apply customizations: rm .travis.yml && blt ci:travis:init Cleanup deprecated files rm -rf .git/hooks && mkdir .git/hooks blt cleanup If using Drupal VM, re-create VM: blt vm:nuke blt vm Review and commit changes to your project files. For customized files like .travis.yml or docroot/sites/default/settings.php it is recommended that you use git add -p to select which specific line changes you'd like to stage and commit.
http://blt.readthedocs.io/en/8.x/readme/updating-blt/
2017-05-22T17:27:15
CC-MAIN-2017-22
1495463605485.49
[]
blt.readthedocs.io
Components Weblinks Links Contents - 1 Overview - 2 How to Access - 3 Description - 4 Screenshot - 5 Column Headers - 6 List Filters - 7 Toolbar - 8 Web Links / Categories Links - 9 Options - 10 Displaying Links on your Site - 11 Letting Visitors Submit Links - 12 Quick Tips - 13 Related Information Overview The Weblinks Manager allows you to manage links to other web sites and organize them into categories. How to Access Select Components → Web Links. - web links displayed to just those which match your filter parameters. Status, Category, Access, Language, Tag or Max Levels - - Select Status -. Select a status from the drop-down list box. - Trashed. - Unpublished. - Published. - Archived. - All. - - Select Category -. Select a category from the drop-down list. There is often a default category called "Uncategorised". - - Select Access -. Select a viewing access level from the drop-down list. See Access Levels for more information about access levels. - - Select Language -. Select a content language from the drop-down list. See Content Languages - - Select Tag -. Select a tag. See Tags Manager for more information about tags. - - Select Max Levels -. Select the maximum number of levels of the category hierarchy to show.. - Help. Opens this help screen. - Options. Opens the Options window where settings such as default parameters can be edited. Web Link Tab The first tab, shown below, contains display options for Web Links in front-end pages that are displayed by Web Links menu.
https://docs.joomla.org/Help37:Components_Weblinks_Links
2017-05-22T17:32:46
CC-MAIN-2017-22
1495463605485.49
[array(['/images/1/14/Help35-Column-Filter-Web-Links-Title-Ascending-DisplayNum-en.png', 'Help35-Column-Filter-Web-Links-Title-Ascending-DisplayNum-en.png'], dtype=object) array(['/images/b/b4/Help30-colheader-filter-field-en.png', 'Help30-colheader-filter-field-en.png'], dtype=object)]
docs.joomla.org
The ngRepeat directive instantiates a template once per item from a collection. Each template instance gets its own scope, where the given loop variable is set to the current collection item, and $index is set to the item index or key. Special properties are exposed on the local scope of each template instance, including: ngInit. This may be useful when, for instance, nesting ngRepeats. It is possible to get ngRepeat to iterate over the properties of an object using the following syntax: <div ng- ... </div> However, there are a few limitations compared to array iteration: The JavaScript specification does not define the order of keys returned for an object, so AngularJS relies on the order returned by the browser when running for key in myObj. Browsers generally follow the strategy of providing keys in the order in which they were defined, although there are exceptions when keys are deleted and reinstated. See the MDN page on delete for more info. ngRepeat will silently ignore object keys starting with $, because it's a prefix used by AngularJS for public ( $) and private ( $$) properties. The built-in filters orderBy and filter do not work with objects, and will throw an error if used with one. If you are hitting any of these limitations, the recommended workaround is to convert your object into an array that is sorted into the order that you prefer before providing it to ngRepeat. You could do this with a filter such as toArrayFilter or implement a $watch on the object yourself. ngRepeat uses $watchCollection to detect changes in the collection. When a change happens, ngRepeat then makes the corresponding changes to the DOM: To minimize creation of DOM elements, ngRepeat uses a function to "keep track" of all items in the collection and their corresponding DOM elements. For example, if an item is added to the collection, ngRepeat will know that all other items already have DOM elements, and will not re-render them. All different types of tracking functions, their syntax, and their support for duplicate items in collections can be found in the ngRepeat expression description. item in items track by item.id. Should you reload your data later, ngRepeatwill not have to rebuild the DOM elements for items it has already rendered, even if the JavaScript objects in the collection have been substituted for new ones. For large collections, this significantly improves rendering performance. When DOM elements are re-used, ngRepeat updates the scope for the element, which will automatically update any active bindings on the template. However, other functionality will not be updated, because the element is not re-created: The above affects all kinds of element re-use due to tracking, but may be especially visible when tracking by $index due to the way ngRepeat re-uses elements. The following example shows the effects of different actions with tracking: To repeat a series of elements instead of just one parent element, ngRepeat (as well as other ng directives) supports extending the range of the repeater by defining explicit start and end points by using ng-repeat-start and ng-repeat-end respectively. The ng-repeat-start directive works the same as ng-repeat, but will repeat all the HTML code (including the tag it's defined on) up to and including the ending HTML tag where ng-repeat-end is placed. The example below makes use of this feature: <header ng- Header {{ item }} </header> <div class="body"> Body {{ item }} </div> <footer ng-repeat-end> Footer {{ item }} </footer> And with an input of ['A','B'] for the items variable in the example above, the output will evaluate to: <header> Header A </header> <div class="body"> Body A </div> <footer> Footer A </footer> <header> Header B </header> <div class="body"> Body B </div> <footer> Footer B </footer> The custom start and end points for ngRepeat also support all other HTML directive syntax flavors provided in AngularJS (such as data-ng-repeat-start, x-ng-repeat-start and ng:repeat-start). <ANY ng- ... </ANY> See the example below for defining CSS animations with ngRepeat.Click here to learn more about the steps involved in the animation. This example uses ngRepeat to display a list of people. A filter is used to restrict the displayed results by name or by age. New (entering) and removed (leaving) items are animated. © 2010–2018 Google, Inc. Licensed under the Creative Commons Attribution License 4.0.
https://docs.w3cub.com/angularjs~1.7/api/ng/directive/ngrepeat/
2020-05-25T06:08:15
CC-MAIN-2020-24
1590347387219.0
[]
docs.w3cub.com
MEMORY STATS The MEMORY STATS command returns an Array reply about the memory usage of the server. The information about memory usage is provided as metrics and their respective values. The following metrics are reported: peak.allocated: Peak memory consumed by Redis in bytes (see INFO's used_memory) total.allocated: Total number of bytes allocated by Redis using its allocator (see INFO's used_memory) startup.allocated: Initial amount of memory consumed by Redis at startup in bytes (see INFO's used_memory_startup) replication.backlog: Size in bytes of the replication backlog (see INFO's repl_backlog_size) clients.slaves: The total size in bytes of all replicas overheads (output and query buffers, connection contexts) clients.normal: The total size in bytes of all clients overheads (output and query buffers, connection contexts) aof.buffer: The summed size in bytes of the current and rewrite AOF buffers (see INFO's aof_buffer_lengthand aof_rewrite_buffer_length, respectively) dbXXX: For each of the server's databases, the overheads of the main and expiry dictionaries ( overhead.hashtable.mainand overhead.hashtable.expires, respectively) are reported in bytes overhead.total: The sum of all overheads, i.e. startup.allocated, replication.backlog, clients.slaves, clients.normal, aof.bufferand those of the internal data structures that are used in managing the Redis keyspace (see INFO's used_memory_overhead) keys.count: The total number of keys stored across all databases in the server keys.bytes-per-key: The ratio between net memory usage ( total.allocatedminus startup.allocated) and keys.count dataset.bytes: The size in bytes of the dataset, i.e. overhead.totalsubtracted from total.allocated(see INFO's used_memory_dataset) dataset.percentage: The percentage of dataset.bytesout of the net memory usage peak.percentage: The percentage of peak.allocatedout of total.allocated fragmentation: See INFO's mem_fragmentation_ratio Array reply: nested list of memory usage metrics and their values. © 2009–2018 Salvatore Sanfilippo Licensed under the Creative Commons Attribution-ShareAlike License 4.0.
https://docs.w3cub.com/redis/memory-stats/
2020-05-25T04:07:19
CC-MAIN-2020-24
1590347387219.0
[]
docs.w3cub.com
Map traits to link customer records Contents Learn how to see more complete customer profiles in Live Now by mapping multiple records for the same customer. About traits mapping Traits are properties, such as "email" and "gender" for a customer. Altocloud gathers customer traits every time a customer visits a website that you track with the Altocloud tracking snippet. In some cases, you may have multiple customer records for the same person. For example, if a customer visits your website multiple times and uses a different browser each time. Because Altocloud creates a separate record for each instance, the separate customer records may contain only a subset of all of the traits information that is actually available for the customer. You can link these separate customer records by mapping the traits information they contain. After you do this, you'll be able to see the complete customer information in Live Now. After customer records are linked, the traits mapper updates all of the records when new trait information becomes available. Existing or duplicate traits are overwritten with the most current trait information. View mapped traits in the user interface After you map traits, they appear here: - PureCloud > Admin menu > Live Now > Customer summary (admin view) - Agent user interface > Journey gadget > Customer summary for agents Map traits globally To start mapping traits, define a global traits mapper when you deploy the Altocloud tracking snippet on your website. Specifically, when you call init to initialize the Journey JavaScript SDK, identify which attributes you want to treat as traits. See the following code example. For more information, see Methods that track events and Mappable traits. Whenever Altocloud gathers values for these attributes, they are automatically mapped as traits. You can also map traits based on specific events. Example <script> '); ac('init', 'c232166f-0136-4557-8dce-c88339d17a4e', { region: 'use1', globalTraitsMapper: [ { "fieldName": "emailAdddress", "traitName": "email" }, { "fieldName": "sex", "traitName": "gender" } ] }); ac('pageview'); </script> Map traits for a specific event In addition to mapping traits globally, you can map specific traits locally for specific events. For more information, see Methods that track events and Mappable traits. The complete set of map traits for a customer is the union of globally mapped traits and locally mapped traits. For example, suppose you map the email address field via the global traits mapper, but on one page, you ask for the customer's Facebook ID. In this case, both the email address and the Facebook ID are mapped to the customer and both appear in the customer's Live Now profile. If the same data is captured in two places, the most recent trait mapped appears in Live Now. Previous values for mapped traits are not preserved. Examples of mapped traits The following examples show how attributes that are mapped traits. Specifically: - The attributes, "email" and "emailAdddress" are mapped to the trait "email." - The attributes, "gender" and "sex" are mapped to the trait "gender." Methods that track events Traits mapping can occur whenever there is a tracked event on your website. Specifically, events are tracked when you use the following methods:
https://all.docs.genesys.com/ATC/Current/SDK/Traits_mapper
2020-05-25T06:25:08
CC-MAIN-2020-24
1590347387219.0
[]
all.docs.genesys.com
You may receive this kind of error when querying or mapping fields to Dynamics CRM objects: Retrieve can only return columns that are valid for read. Column : isprivate. Entity : account" If you receive this error, the field is likely not valid for reading purposes. You can check to see if that field is valid by navigating through Microsoft's Entity Documentation and checking if your object's field has the following field flag: "IsValidForRead": False.
https://docs.cloud-elements.com/home/db4af7a
2020-05-25T04:24:24
CC-MAIN-2020-24
1590347387219.0
[]
docs.cloud-elements.com
“Roles” Rule Use this rule to maintain the assignment of license types to user roles. User roles enable users to perform certain transactions in SAP. FlexNet Manager for SAP Applications enables you to assign a license type to a role. Usage Scenario: If you have a small number of well-managed roles that accurately represent the employees’ responsibilities, mapping roles to license types can be a simple way of determining appropriate license types (assuming that all users make use of all responsibilities to which they have been assigned).. - Roles—Specify the role or roles that are currently assigned to users, which should be used as a basis for license recommendation. You can use the wildcards * (to replace multiple characters) and ? (to replace one single character). For example, you can enter the role SAP_BC_*. As a result, all roles starting with ‘SAP_BC_’ are taken into account. Separate multiple roles with a comma or semicolon. -.
https://docs.flexera.com/fnms2019r1/EN/WebHelp/concepts/SAP-RolesRule.html
2020-05-25T04:58:51
CC-MAIN-2020-24
1590347387219.0
[]
docs.flexera.com
Tutorial: Configure the geographic traffic routing method using Traffic Manager The Geographic traffic routing method allows you to direct traffic to specific endpoints based on the geographic location where the requests originate. This tutorial shows you how to create a Traffic Manager profile with this routing method and configure the endpoints to receive traffic from specific geographies. Create a Traffic Manager Profile - From a browser, sign in to the Azure portal. If you don’t already have an account, you can sign up for a free one-month trial. - Click Create a resource > Networking > Traffic Manager profile > Create. - In the Create Traffic Manager profile: - Provide a name for your profile. This name needs to be unique within the trafficmanager.net zone. To access your Traffic Manager profile, you use the DNS name <profilename>.trafficmanager.net. - Select the Geographic routing method. - Select the subscription you want to create this profile under. - Use an existing resource group or create a new resource group to place this profile under. If you choose to create a new resource group, use the Resource Group location dropdown to specify the location of the resource group. This setting refers to the location of the resource group, and has no impact on the Traffic Manager profile that's deployed globally. - After you click Create, your Traffic Manager profile is created and deployed globally. Add endpoints - Search for the Traffic Manager profile name you created in the portal’s search bar and click on the result when it is shown. - Navigate to Settings -> Endpoints in Traffic Manager. - Click Add to show the Add Endpoint. - Click Add and in the Add endpoint that is displayed, complete as follows: - Select Type depending upon the type of endpoint you are adding. For geographic routing profiles used in production, we strongly recommend using nested endpoint types containing a child profile with more than one endpoint. For more details, see FAQs about geographic traffic routing methods. - Provide a Name by which you want to recognize this endpoint. - Certain fields on this page depend on the type of endpoint you are adding: - If you are adding an Azure endpoint, select the Target resource type and the Target based on the resource you want to direct traffic to - If you are adding an External endpoint, provide the Fully-qualified domain name (FQDN) for your endpoint. - If you are adding a Nested endpoint, select the Target resource that corresponds to the child profile you want to use and specify the Minimum child endpoints count. - In the Geo-mapping section, use the drop down to add the regions from where you want traffic to be sent to this endpoint. You must add at least one region, and you can have multiple regions mapped. - Repeat this for all endpoints you want to add under this profile Use the Traffic Manager profile - In the portal’s search bar, search for the Traffic Manager profile name that you created in the preceding section and click on the traffic manager profile in the results that the displayed. - Click Overview. - The Traffic Manager profile displays the DNS name of your newly created Traffic Manager profile. This can be used by any clients (for example, by navigating to it using a web browser) to get routed to the right endpoint as determined by the routing type. In the case of geographic routing, Traffic Manager looks at the source IP of the incoming request and determines the region from which it is originating. If that region is mapped to an endpoint, traffic is routed to there. If this region is not mapped to an endpoint, then Traffic Manager returns a NODATA query response. Next steps - Learn more about Geographic traffic routing method. - Learn how to test Traffic Manager settings.
https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-configure-geographic-routing-method
2020-05-25T05:54:47
CC-MAIN-2020-24
1590347387219.0
[array(['media/traffic-manager-geographic-routing-method/create-traffic-manager-profile.png', 'Create a Traffic Manager profile'], dtype=object) array(['media/traffic-manager-geographic-routing-method/add-traffic-manager-endpoint.png', 'Add a Traffic Manager endpoint'], dtype=object) ]
docs.microsoft.com
You can customize reports by configuring the settings in the .properties file (see Configuration Overview). This section describes examples of how your reports can be configured. See Report Settings for a complete list of settings that allow you to customize your reports to your needs. Specifying Repot Output Location You can configure the location of reports with the report.location property. For example: report.location=[path/to/location] Alternatively, you can specify the output directory for reports with the -report switch. For example: cpptestcli -report /home/reports/html Specifying Report Format By default, an HTML report is generated. You can generate a PDF report or a report with a custom extension to the specified directory by setting the report.format property. For example: report.format=pdf Generating a .csv Report - Ensure that the project was already analyzed with C/C++test and thatthecpptest.bdffileexists (see above). Create an empty configuration file (csv.properties) and add the following line: cpptest.report.csv.enabled=true Run code analysis and specify the configuration file with the -settingsswitch: cpptestcli -config "builtin://Recommended Rules" -compiler gcc_3_4 -settings csv.properties -input cpptest.bdf C/C++test will perform the following tasks: - Run the analysis as described above - Report results to the output console - Create an additional report.csvresult file
https://docs.parasoft.com/pages/viewpage.action?pageId=38633776
2020-05-25T05:59:32
CC-MAIN-2020-24
1590347387219.0
[]
docs.parasoft.com
Common Cluster Pitfalls These are some of the common problems that people have when using the cluster. We hope that these will not be a problem for you as well. Asking for multiple cores but forgetting to specify one node. -n 4 -N 1 is very different from -n 4
https://docs.rc.fas.harvard.edu/kb/common-pitfalls/
2020-05-25T05:34:22
CC-MAIN-2020-24
1590347387219.0
[]
docs.rc.fas.harvard.edu
WSO2 Carbon is the base platform on which all WSO2 Java products are developed. Built on OSGi, WSO2 Carbon encapsulates all major SOA functionality. It supports a variety of transports which make Carbon-based products capable of receiving and sending messages over a multitude of transport and application-level protocols. This functionality is implemented mainly in the Carbon core, which combines a set of transport-specific components to load, enable, manage and persist transport related functionality and configurations. All transports currently supported by WSO2 Carbon are directly or indirectly based on the Apache Axis2 transports framework. This framework provides two main interfaces for each transport implementation. - org.apache.axis2.transport.TransportListener - Implementations of this interface should specify how incoming messages are received and processed before handing them over to the Axis2 engine for further processing. - org.apache.axis2.transport.TransportSender - Implementations of this interface should specify how a message can be sent out from the Axis2 engine. Each transport implementation generally contains a transport receiver/listener and a transport sender, since they use the interfaces above. The Axis2 transport framework enables the user to configure, enable and manage transport listeners and senders independent to each other, without having to restart the server. For example, one may enable only the JMS transport sender without having to enable JMS transport listener. The transport management capability of WSO2 Carbon is provided by the following feature in the WSO2 feature repository: Name: WSO2 Carbon - Transport Management Feature Identifier: org.wso2.carbon.t ransport.mgt.feature.group If transport management capability is not included in your product by default, you can add it by installing the above feature using the instructions given in section Feature Management .
https://docs.wso2.com/display/AS530/Introduction+to+Transports
2020-05-25T05:48:35
CC-MAIN-2020-24
1590347387219.0
[]
docs.wso2.com
booleans.step message "George Boole was an English mathematician who specialized in logic, especially logic rules involving true and false. The Boolean datatype is named in his honor. In code, as in life, we base a lot of decisions on whether something is true or false. *\"If it is raining, then I will bring an umbrella; otherwise I will wear sunglasses.\"* In the conditionals section we'll make decisions. First we need to look at true and false." goals do goal "Meet True and False" goal "Compare numbers and strings" goal "Evaluate 'and', 'or', and 'not' logic" goal "Understand methods ending with question marks (predicates)" end step do message 'Here are some expressions that return `true` or `false`:' irb <<-IRB 15 < 5 15 > 5 15 >= 5 10 == 12 IRB end step do message 'Notice we use a double equals sign to check if things are equal. It\'s a common mistake to use a single equals sign.' irb <<-IRB a = 'apple' b = 'banana' a == b puts a + b a = b puts a + b IRB message "Surprise!" end step do message "For 'not equals', try these:" irb <<-IRB a = 'apple' b = 'banana' a != b IRB message "The exclamation point means **the opposite of**" irb <<-IRB !true !false !(a == b) IRB message "In `!(a == b)`, Ruby first evaluated `a == b`, then gave the opposite." message "It also means **not true** . In conditionals, we'll see things like if not sunny puts \"Bring an umbrella!\" We can also say if sunny == false puts \"Bring an umbrella!\" but \"if not sunny\" is a little more natural sounding. It's also a little safer - that double equals is easy to mistype as a single equals." end step do message "We can check more than one condition with `and` and `or` . `&&` and `||` (two pipes) is another notation for `and` and `or`." message "We do something like this when we Google for 'microsoft and cambridge and not seattle'" message "Let's type some code into IRB. First, let's define variables:" irb <<-IRB yes = true no = false IRB message <<-CONTENT Now experiment. Boolean rule 1: AND means everything must be true. For example, `true` combined with `true` is `true`: CONTENT irb <<-IRB yes and yes yes && yes IRB message "`true` combined with `false` fails the test because `and` means everything must be true:" irb <<-IRB yes and no no and yes no and no IRB message "Boolean rule 2: `or` says at least one must be true:" irb <<-IRB yes or no yes || no yes or yes IRB end step do message 'By convention, methods in Ruby that return booleans end with a question mark.' irb <<-IRB 'sandwich'.end_with?('h') 'sandwich'.end_with?('z') [1,2,3].include?(2) [1,2,3].include?(9) 'is my string'.empty? ''.empty? 'is this nil'.nil? nil.nil? IRB end explanation do message "In code we ask a lot of questions. Boolean logic gives us tools to express the questions." end further_reading do message "Some languages offer wiggle room about what evaluates to true or false. Ruby has very little. See [What's Truthy and Falsey in Ruby?]()" message "[What's Truthy and Falsey in Ruby?]() has a more detailed walkthrough of booleans." message "Ruby documentation for [true]() and [false]]()" end next_step "conditionals"
https://docs.railsbridgeboston.org/ruby/booleans/src
2020-05-25T05:28:49
CC-MAIN-2020-24
1590347387219.0
[]
docs.railsbridgeboston.org
Glossary Item Box Telerik OpenAccess Domain Model types generated by the OpenAccess Create Model Wizard support binary serialization. When you serialize an object to a binary stream, all related objects currently loaded into the OpenAccessContext will also be serialized. The examples in this topic are based on the Northwind domain model. To run the code in this example, you must have already added the Northwind domain model to your project. You must also add using statements for the following namespaces: In this example, a SerializeToBinaryStream method queries for the Customer object for the specified CustomerID value, and returns a binary MemoryStream. The MemoryStream contains an object graph of the Customer object and its related CustomerDemographic and Order objects.
https://docs.telerik.com/help/openaccess-classic/openaccess-tasks-working-with-objects-serialize-deserialize.html
2020-05-25T06:06:03
CC-MAIN-2020-24
1590347387219.0
[]
docs.telerik.com
Steve Ballmer in Australia! Steve Ballmer is coming to visit us for our internal company meeting in early November and hopefully I can get a photo with him (watch this space) But more importantly he is very keen to lay down our vision for where things are going with cloud computing and the client with local developers and this is your chance to see him in action. Come along to liberation day and see Steve in action. Registrations are going to be very limited and you can register here.
https://docs.microsoft.com/en-us/archive/blogs/aaronsaikovski/steve-ballmer-in-australia
2020-05-25T05:47:55
CC-MAIN-2020-24
1590347387219.0
[]
docs.microsoft.com
Summary: Basics We've seen built in data types, input and output, making decisions with conditionals as well as some looping. We've also identified two interesting data structures: the array and the hash. Challenge yourself to create programs for the following situations. - Write a program that verifies whether someone can vote based on their supplied age. - Write a program that plays back the message a user supplied. - Write a program that adds up five user-supplied numbers. - Make a hash for the people at your table, where the key is their name and the value is their favorite color. - Make an array of the months in the year. When programs get big, they get disorganized and hard to read. Next we'll look at how to keep things tidy with functions and classes. Next Step: Go on to Overview: Organizing
https://docs.railsbridgeboston.org/ruby/summary%3A_basics?back=loops
2020-05-25T03:51:15
CC-MAIN-2020-24
1590347387219.0
[]
docs.railsbridgeboston.org
TOPICS× User experience enhancements in Assets A. Lazy loading. Card view improvements: - Tap/click the Layout icon from the toolbar, and then choose the View Settings option. - From the View Settings dialog, select the desired thumbnail size, and then tap/click Update . - Review the thumbnails that are displayed in the chosen size. The tile in the Card view now displays additional information, such as publication status. List view improvements<< Column view improvements In addition to Card and List views, you can now navigate to the details page of an asset from the Column view. Select an asset from the Column view, and then tap/click More Details under the asset snapshot. Tree view<< From the content hierarchy, navigate to the desired asset.
https://docs.adobe.com/content/help/en/experience-manager-64/assets/ux-improvements.html
2020-05-25T06:07:32
CC-MAIN-2020-24
1590347387219.0
[array(['/content/dam/help/experience-manager-64.en/help/assets/assets/publish_status.png', None], dtype=object) array(['/content/dam/help/experience-manager-64.en/help/assets/assets/list_view.png', None], dtype=object) array(['/content/dam/help/experience-manager-64.en/help/assets/assets/view_settings_dialoglistview.png', None], dtype=object) array(['/content/dam/help/experience-manager-64.en/help/assets/assets/more_details.png', None], dtype=object) array(['/content/dam/help/experience-manager-64.en/help/assets/assets/content_tree.png', None], dtype=object) array(['/content/dam/help/experience-manager-64.en/help/assets/assets/navigate_contenttree.png', None], dtype=object) ]
docs.adobe.com
InstallShield 2012 Spring If you have an Internet connection, you can use the Check for Updates feature in InstallShield to obtain the latest InstallShield prerequisites, merge modules, and objects; service packs; patches; and other updates for the version of InstallShield that you are using. To check for updates: On the Tools menu, click Check for Updates. InstallShield launches FlexNet Connect, which checks for updates. When an update is available, you can do the following: See Also Including Redistributables in Your Installation Standalone Build
https://docs.flexera.com/installshield19helplib/helplibrary/CheckForISUpdates.htm
2020-05-25T04:03:19
CC-MAIN-2020-24
1590347387219.0
[]
docs.flexera.com
If you plan to make any modifications to Vantage, you should always do so with a child theme (or a theme specific plugin). Never modify the code in the theme folder otherwise your changes will be erased on the next theme update. To create even the simplest child theme, you should have a basic understanding of PHP, HTML, and CSS. It would be helpful to know a little about child themes for WordPress. You should also know how to add files to your hosting site. Create a Child Directory On your localhost Vantage site (/wp-content/themes/), create a new folder named vantage-child. If you’re working remotely, you’ll need to sftp or ssh to the server first and create it there. Create a Stylesheet File Inside the new child theme folder, create a file called, style.css. It’s where we’ll place any style changes you want to make. The stylesheet must begin with the following code so copy and paste it. Steps - Replace the “Author” and “Author URL” with the details relevant to you - The “Template” entry refers to the directory name of the Vantage parent theme Create a Functions File Inside the same child theme folder, create a file called, functions.php. Copy and paste in the code snippet below. This will load your child theme style.css file that we created in the previous step. @import url("../vantage/style.css");in style.css. This is no longer best practice hence using enqueue in functions.php (it’s much faster and more efficient). Activate Child Theme - Log in to your site’s admin panel - Go to “Appearance” => “Themes” - Click the “Activate” button Visit your website once you’ve activated it. Guess what? Nothing looks different at all! That’s because you’ve just setup the skeleton child theme. From here, you can add page templates, different styles, override default functions, and more. Add a New Font To get you started, let’s add a custom style to override the default Vantage font. Open your style.css and paste in this code below the header comments. Here’s what it should look like afterwards. Visit your website and then refresh your browser window. For Mac, hold Shift + Reload button. For Windows, hold down Ctrl + Reload button. This is to make sure you get a fresh copy of your style.css otherwise you won’t see your changes. It’s only a subtle change so you might not even notice it. Again, this is just a simple example of how to change something via css. Unregister Parent Stylesheet In some cases, you’ll want to completely start from scratch and not inherit any of the default Vantage styles. In order to do that, you’ll need to `dequeue` the Vantage style.css. Add the following two lines in your functions file within the existing first function you already added. Here’s what it should look like afterwards. Now if you reload your website, you’ll see all the styles gone (except for your font change). You’ve basically got a clean canvas to work from but that means there’s a whole lotta work for you to do! Again, this is just an extreme example. Most people leave the stylesheet and just override certain elements. includes/enqueue-scripts.php Add a Screenshot If you’d rather create your own after your child theme is completed, you’ll need to know the following: - The screenshot should be named screenshot.png - The image size should be 1200×900 (to support HD displays) - Place it in the root of your child theme directory For more details on screenshot.png, visit the WordPress Codex Theme Development Page. Like this tutorial? Subscribe and get the latest tutorials delivered straight to your inbox or feed reader.
https://docs.appthemes.com/tutorials/how-to-create-a-vantage-child-theme/
2019-08-17T17:15:43
CC-MAIN-2019-35
1566027313436.2
[]
docs.appthemes.com
Aggregation Aggregation is a data analysis operation which combines the data of similar observations. Example The table below shows data for three observations. The table below has been created from the table above by aggregating by gender. It can be thought of as either a summary of the table above, or, a new data set with a new definition of the observation (i.e., gender rather than person). In this example, the mathematical function that has been used to aggregate the numeric data is the mean. However, other functions are appropriate (e.g., maximum, sum). Related R functions aggregate
https://docs.displayr.com/wiki/Aggregation
2019-08-17T17:41:31
CC-MAIN-2019-35
1566027313436.2
[]
docs.displayr.com
List of Projects Fedora Build Lead: Ryan Lewis Other Members: none Honeypot Project Other Members: Keegan Lowenstein, Jeremy Bongio, Jeff Wincek, Matt Howansky, Jeff Ward LDAP Server Creation Lead: Zach Shepherd Other Members: Jim Owens, Matt McCarrell Mirror Lead: Chris Peterman Other Members: Zach Shepherd Nagios Server Lead: Matt McCarrell)
http://docs.cslabs.clarkson.edu/mediawiki/index.php?title=List_of_Projects&oldid=714
2019-08-17T17:14:08
CC-MAIN-2019-35
1566027313436.2
[]
docs.cslabs.clarkson.edu
[ aws . codecommit ] Renames a repository. The repository name must be unique across the calling AWS account. In addition, repository names are limited to 100 alphanumeric, dash, and underscore characters, and cannot include certain characters. The suffix ".git" is prohibited. For a full description of the limits on repository names, see Limits in the AWS CodeCommit User Guide. See also: AWS API Documentation See 'aws help' for descriptions of global parameters. update-repository-name --old-name <value> --new-name <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>] --old-name (string) The existing name of the repository. --new-name (string) The new name for the repository. - name of a repository This example changes the name of an AWS CodeCommit repository. This command produces output only if there are errors. Changing the name of the AWS CodeCommit repository will change the SSH and HTTPS URLs that users need to connect to the repository. Users will not be able to connect to this repository until they update their connection settings. Also, because the repository's ARN will change, changing the repository name will invalidate any IAM user policies that rely on this repository's ARN. Command: aws codecommit update-repository-name --old-name MyDemoRepo --new-name MyRenamedDemoRepo Output: None.
https://docs.aws.amazon.com/cli/latest/reference/codecommit/update-repository-name.html
2019-08-17T17:56:54
CC-MAIN-2019-35
1566027313436.2
[]
docs.aws.amazon.com
▲ Top Introduction OneSpan Sign’s Personal Certificate Client (PCC) enables users to sign with a digital certificate that resides on a Smart Card or hardware token. The following sections contain important PCC information for administrators: - Installing the PCC - Prerequisites for PCC Use - Communicating via WebSocket - Authenticating Servers - Security Measures for Customers NOTE: Certificate Signing is subject to the following restrictions: - Signing with a certificate is available only for Microsoft Windows, but it works with all supported browsers (Edge, Firefox, Chrome). - It does not work from a tablet or mobile device, including Microsoft Surface Pro. - It supports only Click-to-Sign signing. Instructions on how to sign with a certificate can be found here.
https://docs.esignlive.com/content/k_certificate_signing_admin_guide/introduction.htm
2019-08-17T17:41:50
CC-MAIN-2019-35
1566027313436.2
[]
docs.esignlive.com
An Act to amend 16.307 (2), 16.307 (2m) and 20.505 (7) (h); and to create 16.307 (1m) and 20.505 (7) (fn) of the statutes; Relating to: housing navigator grants and making an appropriation. (FE) Amendment Histories Bill Text (PDF: ) LC Bill Hearing Materials Wisconsin Ethics Commission information 2019 Assembly Bill 121 - S - Utilities and Housing
http://docs-preview.legis.wisconsin.gov/2019/proposals/sb120
2019-08-17T16:52:10
CC-MAIN-2019-35
1566027313436.2
[]
docs-preview.legis.wisconsin.gov
Invoke the Connectivity Wizard. Press thebutton. Spotfire Data Streams in order to deploy and run the generated application archive. However, it will prove valuable to review the topics in the Concepts in Brief portion of the Concepts Guide. In either case, the LiveView server runs at port 10080 by default on the Docker image's URL or on the StreamBase Runtime node's LiveView URL.
http://docs.streambase.com/latest/topic/com.streambase.tsds.ide.help/data/html/conceptgd/connwizard.html
2019-08-17T18:46:38
CC-MAIN-2019-35
1566027313436.2
[]
docs.streambase.com
Developer Guide¶ The technical handbook for developers building websites with Zotonic. It guides you through all aspects of the framework. - Introduction - Getting Started - Docker - Sites - Controllers - Dispatch rules - Resources - Templates - Media - Forms and validation - Search - Translation - Wires - Access control - Modules - Notifications - Browser/server interaction - Command-line shell - Logging - Testing sites - Deployment - Contributing to Zotonic - Release Notes - Upgrade notes - Applications and extensions
http://docs.zotonic.com/en/latest/developer-guide/
2019-08-17T18:49:27
CC-MAIN-2019-35
1566027313436.2
[]
docs.zotonic.com
Star Codes and Features Conference Controls. For more information about the conference controls & additional star codes available, please reference this document below: Note: All call parks MUST be in the 700-729 range to work with the system default settings. If you require a call park orbit with a different extension number, please send a request to the Control Tower team by an email to [email protected]. If you wish to use custom call park extensions, SkySwitch will charge $75 per domain for custom programming.
https://docs.skyswitch.com/en/articles/518-star-codes-and-features
2019-08-17T17:03:46
CC-MAIN-2019-35
1566027313436.2
[]
docs.skyswitch.com
How to convert VMware Eyeglass appliance and Migrate to Microsoft Azure References to powershell and Microsoft tools is provided as reference only and not included in product support. Add-AzureRmVhd -ResourceGroupName ResourceGroup -Destination `-LocalFilePath "C:\Users\Public\Documents\Virtual hard disks\myVHD.vhd"
https://docs.supernaeyeglass.com/articles/eyeglass-administration-guides-publication-1/howtoconvertvmwareeyeglassapplianceandmigrate
2019-08-17T17:14:02
CC-MAIN-2019-35
1566027313436.2
[]
docs.supernaeyeglass.com
Use Contrast The way you interact with Contrast depends on your particular situation, the tools and integrations you use, user settings.) With Editor permissions you can instrument an application and start viewing results in Contrast. You can also interact with the basic components of Contrast (all visible in the header high-risk libraries. View a searchable list of vulnerabilities discovered. You can view this list for each application bugtrackers, build tools, application servers, Security Incident Event Management (SIEM), notifications and chat. Perform software composition analysis (SCA) on your application to show you the dependencies between open-source libraries. Although most of the configuration for these features requires system, organization or RulesAdmin permissions, an Editor can: Instrument an application Send notifications
https://docs.contrastsecurity.com/en/use-contrast.html
2022-09-24T20:40:55
CC-MAIN-2022-40
1664030333455.97
[]
docs.contrastsecurity.com
OpenPGP Key Generation Using GPA# (Nitrokey Start - macOS) The following instructions explain the generation of OpenPGP keys directly on the Nitrokey with help of the GNU Privacy Assistant (GPA). You won’t be able to create a backup of these keys. Thus, if you lose the Nitrokey or it breaks you can not decrypt mails or use these keys anymore. Please see here for a comparison of the different methods to generate OpenPGP keys. You need to have GnuPG and GPA installed on your system. The newest version for Windows can be found here (make sure to check “GPA” during the installation!). Users of Linux systems please install GnuPG and GPA with help of the package manager (e.g. using sudo apt install gnupg gpa on Ubuntu). Key Generation# At first, open the GNU Privacy Assistant (GPA). You may are asked to generate a key, you can skip this step for now by clicking “Do it later”. In the main window, please click on “Card” or “Card Manager”. Another windows opens. Please go to “Card” -> “Generate key” to start the key generation process. Now you can put in your name and the email address you want to use for the key that will be generated next. You may choose an expiration date for your key, but you don’t have to. Please do not use the backup checkbox. This “backup” does only save the encryption key. In case of a loss of the device, you will not be able to restore the whole key set. So on the one hand it is no full backup (use these instructions instead, if you need one) and on the other hand you risk that someone else can get in possession of your encryption key. The advantage of generating keys on-device is to make sure that keys are stored securely. Therefore, we recommend to skip this half-backup. You will be asked for the admin PIN (default: 12345678) and the user PIN (default: 123456). When the key generation is finished, you can see the fingerprints of the keys on the bottom of the window. You may fill up the fields shown above, which are saved on your Nitrokey as well. Now you can close the window and go back to the main window. Your key will be visible in the key manager after refreshing. Every application which makes use of GnuPG will work with your Nitrokey as well, because GnuPG is fully aware of the fact, that the keys are stored on your Nitrokey.. Right-click on your key entry in the key manager and click “Export Keys…” to export the public key to a file and/or “Send Keys…” to upload the key to a keyserver. You can carry the keyfile with you or send it to anyone who you like. This file is not secret at all. If you want to use the Nitrokey on another system, you first import this public key via clicking on “Keys” -> “Importing Keys…” and choosing the file. If you do not want to carry a public keyfile with you, you can upload it to keyserver. If you are using another machine you can just import it by using “Server” -> “Retrieve Keys…” and entering your name or key id. Another possibility is to change the URL setting on your card. Open the card manager again and fill in the URL where the key is situated (e.g. on the keyserver or on your webpage etc.). From now on you can import the key on another system by right-clicking on the URL and click on “Fetch Key”.
https://docs.nitrokey.com/start/mac/openpgp-keygen-gpa.html
2022-09-24T18:42:21
CC-MAIN-2022-40
1664030333455.97
[array(['../../_images/117.png', 'img1'], dtype=object) array(['../../_images/213.png', 'img2'], dtype=object) array(['../../_images/311.png', 'img3'], dtype=object) array(['../../_images/411.png', 'img4'], dtype=object) array(['../../_images/57.png', 'img5'], dtype=object) array(['../../_images/67.png', 'img6'], dtype=object) array(['../../_images/75.png', 'img7'], dtype=object)]
docs.nitrokey.com
# Getting Started Follow these 3 simple steps to get started with SocketXP quickly. # Step #1: SocketXP agent installation Download and Install a simple SocketXP agent to run on the localhost server(or IoT device) where your web application or SSH server also runs. SocketXP agent is a CLI utility with which you could configure and create tunnels to your localhost web application or SSH server or any local server. You'll find the download and installation instructions for your OS and CPU platform here: # Step #2: Login to your SocketXP account Execute the following command to authenticate the SocketXP agent installed in your device with the SocketXP Cloud Gateway, using the auth-token provided to you in the SocketXP Portal (opens new window). $ socketxp login <your-authtoken-goes-here> Visit the SocketXP Portal (opens new window) to get your auth token. Don’t have an account ? Sign up for free to receive your auth-token. Note: If you prefer to name your device using a unique identifier, you could use additional arguments to the login command as shown below. $ socketxp login <your-auth-token-goes-here> --iot-device-name "sensor12345" --iot-device-group "temp-sensor" Security Note: The login command uses the authtoken to generate a unique per-device private key file at /var/lib/socketxp/device.key. The authtoken is never stored in the device as part of the /etc/socketxp/config.json file or anywhere in the device. Your customer or vendor or your team will not be able to know your authtoken unless you explicitly share the token or your SocketXP account with them. Device key cannot be used to access the REST APIs. # Step #3: Create secure tunnels Once you have authenticated the SocketXP client with the SocketXP Cloud Gateway, you could begin creating secure tunnels to your IoT device's SSH server, VNC server or any localhost web application. # Usecase #1: IoT Remote SSH Access Over the Internet: For example, to enable remote SSH access to your Raspberry Pi or IoT devices in your office or home network, execute the below command. $ socketxp connect tcp://localhost:22 Connected to SocketXP Cloud Gateway. Access the TCP service securely using the SocketXP agent in IoT Slave Mode. Note: SocketXP automatically assigns a unique ID for your device. Now you could remote SSH into your Pi or IoT device over the internet from the SocketXP Web Portal (opens new window) devices page. Click the terminal icon displayed next to your device to get into its SSH shell. # Usecase #2: Public URL for your IoT Web Service: For example, to remotely access a web service running in your localhost network (say, port 8080) over the internet, execute the below command. The command creates a secure HTTP tunnel to your localhost web service. $ socketxp connect Connected. Public URL -> After you have successfully created the HTTP tunnel, use the public URL provided by SocketXP to access your localhost web service from anywhere in the world. # Single-Touch Installation Command: The 3 step instruction explained above to setup SocketXP agent on your IoT device is a tedious process, if you have thousands of devices to install, configure and manage. With this mind, SocketXP IoT Solution also provides a single-touch installation command for installing and configuring SocketXP IoT Agent on large number IoT or RPi devices. You can find the detailed instruction on how to use the single-touch-installation command here.
https://docs.socketxp.com/guide/getting-started.html
2022-09-24T20:18:02
CC-MAIN-2022-40
1664030333455.97
[]
docs.socketxp.com
index of the selected button. Make a grid of buttons. using UnityEngine; using System.Collections; public class ExampleClass : MonoBehaviour { public int selGridInt = 0; public string[] selStrings = new string[] {"Grid 1", "Grid 2", "Grid 3", "Grid 4"}; void OnGUI() { // use 2 elements in the horizontal direction selGridInt = GUI.SelectionGrid(new Rect(25, 25, 100, 30), selGridInt, selStrings, 2); } }
https://docs.unity3d.com/ScriptReference/GUI.SelectionGrid.html
2022-09-24T20:18:14
CC-MAIN-2022-40
1664030333455.97
[]
docs.unity3d.com
State synchronization refers to the synchronization of values such as integers, floating point numbers, strings and boolean values belonging to scripts on your networked GameObjects. State synchronization is done from the Server to remote clients. The local client does not have data serialized to it. It does not need it, because it shares the Scene with the server. However, SyncVar hooks are called on local clients. Data is not synchronized in the opposite direction - from remote clients to the server. To do this, you need to use Commands. SyncVars are variables of scripts that inherit from NetworkBehaviour, which are synchronized from the server to clients. When a GameObject is spawned, or a new player joins a game in progress, they are sent the latest state of all SyncVars on networked objects that are visible to them. Use the [SyncVar] custom attribute to specify which variables in your script you want to synchronize, like this: class Player : NetworkBehaviour { [SyncVar] int health; public void TakeDamage(int amount) { if (!isServer) return; health -= amount; } } The state of SyncVars is applied to GameObjects on clients before OnStartClient() is called, so the state of the object is always up-to-date inside OnStartClient(). SyncVars can be basic types such as integers, strings and floats. They can also be Unity types such as Vector3 and user-defined structs, but updates for struct SyncVars are sent as monolithic updates, not incremental changes if fields within a struct change. You can have up to 32 SyncVars on a single NetworkBehaviour script, including SyncLists (see next section, below). The server automatically sends SyncVar updates when the value of a SyncVar changes, so you do not need to track when they change or send information about the changes yourself. While SyncVars contain values, SyncLists contain lists of values. SyncList contents are included in initial state updates along with SyncVar states. Since SyncList is a class which synchronises its own contents, SyncLists do not require the SyncVar attribute. The following types of SyncList are available for basic types: SyncListString SyncListFloat SyncListInt SyncListUInt SyncListBool There is also SyncListStruct, which you can use to synchronize lists of your own struct types. When using SyncListStruct, the struct type that you choose to use can contain members of basic types, arrays, and common Unity types. They cannot contain complex classes or generic containers, and only public variables in these structs are serialized. SyncLists have a SyncListChanged delegate named Callback that allows clients to be notified when the contents of the list change. This delegate is called with the type of operation that occurred, and the index of the item that the operation was for. public class MyScript : NetworkBehaviour { public struct Buf { public int id; public string name; public float timer; }; public class TestBufs : SyncListStruct<Buf> {} TestBufs m_bufs = new TestBufs(); void BufChanged(Operation op, int itemIndex) { Debug.Log("buf changed:" + op); } void Start() { m_bufs.Callback = BufChanged; } }
https://docs.unity3d.com/ru/2021.1/Manual/UNetStateSync.html
2022-09-24T21:00:38
CC-MAIN-2022-40
1664030333455.97
[]
docs.unity3d.com
Firewall Friendly IPs Know more about the allowed IPs and the ports to open for cloud connectivity. Know more about the allowed IPs and the ports to open for cloud connectivity. webMethods.io Integration connects with most third-party services easily and instantly. However, in some cases you may need to connect to your servers from specific IP addresses and access resources that lie behind a protective firewall. This can be achieved in webMethods.io Integration. We provide a set of static IP addresses that you need to allow in your firewall. This allows webMethods.io Integration to make connections to your servers (in order to SSH or to access services like MySQL) and run the integrations successfully. Software AG Cloud products are available in several geographical regions, operated by different infrastructure providers. Currently, webMethods.io Integration is available on Amazon Web Services (AWS) and Microsoft Azure. Based on the infrastructure provider and the associated region selected by you at the time of creating your tenant, you need to allow relevant IPs to establish connectivity. Once you add the allowed IP addresses available on the Software AG Cloud Regions website, you should be able to connect to your resources from webMethods.io Integration. This section helps you to identify the Software AG Cloud Region based on your tenant URL. Once you identify your cloud region, go to the Software AG Cloud Regions website and click the Show IP option for information on the allowed IP addresses. The following table describes the IPs to be allowed and the ports to open for cloud connectivity. Locate the region your tenant belongs to and allow the relevant IP addresses. To have the Mysql/MSSQL/FTP connectivity working for webMethods.io Integration US1 Oregon AWS based tenants, allow the following IPs: To have the Mysql/MSSQL/FTP connectivity working for webMethods.io Integration EU2 Frankfurt AWS based tenants, allow the following IPs: Once you add these addresses to your firewall, you should be able to connect to your resources from webMethods.io Integration easily. If not, contact Software AG Global Support and the Software AG Cloud Operations teams with the required details.
https://docs.webmethods.io/integration/data_access_and_security/firewall_friendly_ips/
2022-09-24T20:30:00
CC-MAIN-2022-40
1664030333455.97
[]
docs.webmethods.io
This tutorial shows how to write a simple Display plugin for RViz. RViz does not currently have a way to display sensor_msgs/Imu messages directly. The code in this tutorial implements a subclass of rviz::Display to do so. the new ImuDisplay output looks like, showing a sequence of sensor_msgs/Imu messages from the test script: The code for ImuDisplay is in these files: src/imu_display.h, src/imu_display.cpp, src/imu_visual.h, and src/imu_visual.cpp. The full text of imu_display.h is here: src/imu_display.h Here we declare our new subclass of rviz::Display. Every display which can be listed in the “Displays” panel is a subclass of rviz::Display. ImuDisplay will show a 3D arrow showing the direction and magnitude of the IMU acceleration vector. The base of the arrow will be at the frame listed in the header of the Imu message, and the direction of the arrow will be relative to the orientation of that frame. It will also optionally show a history of recent acceleration vectors, which will be stored in a circular buffer. The ImuDisplay class itself just implements the circular buffer, editable parameters, and Display subclass machinery. The visuals themselves are represented by a separate class, ImuVisual. The idiom for the visuals is that when the objects exist, they appear in the scene, and when they are deleted, they disappear. class ImuDisplay: public rviz::MessageFilterDisplay<sensor_msgs::Imu> { Q_OBJECT public: Constructor. pluginlib::ClassLoader creates instances by calling the default constructor, so make sure you have one. ImuDisplay(); virtual ~ImuDisplay(); Overrides of protected virtual functions from Display. As much as possible, when Displays are not enabled, they should not be subscribed to incoming data and should not show anything in the 3D view. These functions are where these connections are made and broken. protected: virtual void onInitialize(); A helper to clear this display back to the initial state. virtual void reset(); These Qt slots get connected to signals indicating changes in the user-editable properties. private Q_SLOTS: void updateColorAndAlpha(); void updateHistoryLength(); Function to handle an incoming ROS message. private: void processMessage( const sensor_msgs::Imu::ConstPtr& msg ); Storage for the list of visuals. It is a circular buffer where data gets popped from the front (oldest) and pushed to the back (newest) boost::circular_buffer<boost::shared_ptr<ImuVisual> > visuals_; User-editable property variables. rviz::ColorProperty* color_property_; rviz::FloatProperty* alpha_property_; rviz::IntProperty* history_length_property_; }; The full text of imu_display.cpp is here: src/imu_display.cpp The constructor must have no arguments, so we can’t give the constructor the parameters it needs to fully initialize. ImuDisplay::ImuDisplay() { color_property_ = new rviz::ColorProperty( "Color", QColor( 204, 51, 204 ), "Color to draw the acceleration arrows.", this, SLOT( updateColorAndAlpha() )); alpha_property_ = new rviz::FloatProperty( "Alpha", 1.0, "0 is fully transparent, 1.0 is fully opaque.", this, SLOT( updateColorAndAlpha() )); history_length_property_ = new rviz::IntProperty( "History Length", 1, "Number of prior measurements to display.", this, SLOT( updateHistoryLength() )); history_length_property_->setMin( 1 ); history_length_property_->setMax( 100000 ); } After the top-level rviz::Display::initialize() does its own setup, it calls the subclass’s onInitialize() function. This is where we instantiate all the workings of the class. We make sure to also call our immediate super-class’s onInitialize() function, since it does important stuff setting up the message filter. Note that “MFDClass” is a typedef of MessageFilterDisplay<message type>, to save typing that long templated class name every time you need to refer to the superclass. void ImuDisplay::onInitialize() { MFDClass::onInitialize(); updateHistoryLength(); } ImuDisplay::~ImuDisplay() { } Clear the visuals by deleting their objects. void ImuDisplay::reset() { MFDClass::reset(); visuals_.clear(); } Set the current color and alpha values for each visual. void ImuDisplay::updateColorAndAlpha() { float alpha = alpha_property_->getFloat(); Ogre::ColourValue color = color_property_->getOgreColor(); for( size_t i = 0; i < visuals_.size(); i++ ) { visuals_[ i ]->setColor( color.r, color.g, color.b, alpha ); } } Set the number of past visuals to show. void ImuDisplay::updateHistoryLength() { visuals_.rset_capacity(history_length_property_->getInt()); } This is our callback to handle an incoming message. void ImuDisplay::processMessage( const sensor_msgs::Imu::ConstPtr& msg ) { Here we call the rviz::FrameManager to get the transform from the fixed frame to the frame in the header of this Imu message. If it fails, we can’t do anything else so we return. Ogre::Quaternion orientation; Ogre::Vector3 position; if( !context_->getFrameManager()->getTransform( msg->header.frame_id, msg->header.stamp, position, orientation )) { ROS_DEBUG( "Error transforming from frame '%s' to frame '%s'", msg->header.frame_id.c_str(), qPrintable( fixed_frame_ )); return; } We are keeping a circular buffer of visual pointers. This gets the next one, or creates and stores it if the buffer is not full boost::shared_ptr<ImuVisual> visual; if( visuals_.full() ) { visual = visuals_.front(); } else { visual.reset(new ImuVisual( context_->getSceneManager(), scene_node_ )); } Now set or update the contents of the chosen visual. visual->setMessage( msg ); visual->setFramePosition( position ); visual->setFrameOrientation( orientation ); float alpha = alpha_property_->getFloat(); Ogre::ColourValue color = color_property_->getOgreColor(); visual->setColor( color.r, color.g, color.b, alpha ); And send it to the end of the circular buffer visuals_.push_back(visual); } } // end namespace rviz_plugin_tutorials Tell pluginlib about this class. It is important to do this in global scope, outside our package’s namespace. #include <pluginlib/class_list_macros.h> PLUGINLIB_EXPORT_CLASS(rviz_plugin_tutorials::ImuDisplay,rviz::Display ) The full text of imu_visual.h is here: src/imu_visual.h Declare the visual class for this display. Each instance of ImuVisual represents the visualization of a single sensor_msgs::Imu message. Currently it just shows an arrow with the direction and magnitude of the acceleration vector, but could easily be expanded to include more of the message data. class ImuVisual { public: Constructor. Creates the visual stuff and puts it into the scene, but in an unconfigured state. ImuVisual( Ogre::SceneManager* scene_manager, Ogre::SceneNode* parent_node ); Destructor. Removes the visual stuff from the scene. virtual ~ImuVisual(); Configure the visual to show the data in the message. void setMessage( const sensor_msgs::Imu::ConstPtr& msg ); Set the pose of the coordinate frame the message refers to. These could be done inside setMessage(), but that would require calls to FrameManager and error handling inside setMessage(), which doesn’t seem as clean. This way ImuVisual is only responsible for visualization. void setFramePosition( const Ogre::Vector3& position ); void setFrameOrientation( const Ogre::Quaternion& orientation ); Set the color and alpha of the visual, which are user-editable parameters and therefore don’t come from the Imu message. void setColor( float r, float g, float b, float a ); private: The object implementing the actual arrow shape boost::shared_ptr<rviz::Arrow> acceleration_arrow_; A SceneNode whose pose is set to match the coordinate frame of the Imu message header. Ogre::SceneNode* frame_node_; The SceneManager, kept here only so the destructor can ask it to destroy the frame_node_. Ogre::SceneManager* scene_manager_; }; The full text of imu_visual.cpp is here: src/imu_visual.cpp Ogre::SceneNode s form a tree, with each node storing the transform (position and orientation) of itself relative to its parent. Ogre does the math of combining those transforms when it is time to render. Here we create a node to store the pose of the Imu’s header frame relative to the RViz fixed frame. frame_node_ = parent_node->createChildSceneNode(); We create the arrow object within the frame node so that we can set its position and direction relative to its header frame. acceleration_arrow_.reset(new rviz::Arrow( scene_manager_, frame_node_ )); } ImuVisual::~ImuVisual() { Destroy the frame node since we don’t need it anymore. scene_manager_->destroySceneNode( frame_node_ ); } void ImuVisual::setMessage( const sensor_msgs::Imu::ConstPtr& msg ) { const geometry_msgs::Vector3& a = msg->linear_acceleration; Convert the geometry_msgs::Vector3 to an Ogre::Vector3. Ogre::Vector3 acc( a.x, a.y, a.z ); Find the magnitude of the acceleration vector. float length = acc.length(); Scale the arrow’s thickness in each dimension along with its length. Ogre::Vector3 scale( length, length, length ); acceleration_arrow_->setScale( scale ); Set the orientation of the arrow to match the direction of the acceleration vector. acceleration_arrow_->setDirection( acc ); } Position and orientation are passed through to the SceneNode. void ImuVisual::setFramePosition( const Ogre::Vector3& position ) { frame_node_->setPosition( position ); } void ImuVisual::setFrameOrientation( const Ogre::Quaternion& orientation ) { frame_node_->setOrientation( orientation ); } Color is passed through to the Arrow object. void ImuVisual::setColor( float r, float g, float b, float a ) { acceleration_arrow_->setColor( r, g, b, a ); } an ImuDisplay by clicking the “Add” button at the bottom of the “Displays” panel (or by typing Control-N), then scrolling down through the available displays until you see “Imu” under your plugin package name (here it is “rviz_plugin_tutorials”). If “Imu” is not in your list of Display Types, look through RViz’s console output for error messages relating to plugin loading. Some common problems are: Once you’ve added the Imu display to RViz, you just need to set the topic name of the display to a source of sensor_msgs/Imu messages. If you don’t happen to have an IMU or other source of sensor_msgs/Imu messages, you can test the plugin with a Python script like this: scripts/send_test_msgs.py. The script publishes on the “/test_imu” topic, so enter that. The script publishes both Imu messages and a moving TF frame (“/base_link” relative to “/map”), so make sure your “Fixed Frame” is set to “/map”. Finally, adjust the “History Length” parameter of the Imu display to 10 and you should see something like the picture at the top of this page. Note: If you use this to visualize messages from an actual IMU, the arrows are going to be huge compared to most robots: (Note the PR2 robot at the base of the purple arrow.) This is because the Imu acceleration units are meters per second squared, and gravity is 9.8 m/s^2, and we haven’t applied any scaling or gravity compensation to the acceleration vectors. This ImuDisplay is not yet a terribly useful Display class. Extensions to make it more useful might be: To add a gravity compensation option, you might take steps like these: Since ImuVisual takes complete Imu messages as input, adding visualizations of more of the Imu data only needs modifications to ImuVisual. Imu data displays might look like: As all this might be visually cluttered, it may make sense to include boolean options to enable or disable some of them.
http://docs.ros.org/en/indigo/api/rviz_plugin_tutorials/html/display_plugin_tutorial.html
2022-09-24T20:42:38
CC-MAIN-2022-40
1664030333455.97
[]
docs.ros.org
SessionsSessions Altis includes support for PHP sessions using Redis as the storage backend. In order to activate PHP sessions support you need to have Redis activated as well, eg: { "extra": { "altis": { "modules": { "cloud": { "redis": true } } } } } Note: The cookie name is altis_session, rather than the default PHPSESSID in order to bypass CloudFront page caching.
https://docs.altis-dxp.com/v12/cloud/sessions/
2022-09-24T20:42:21
CC-MAIN-2022-40
1664030333455.97
[]
docs.altis-dxp.com
Brush¶ Painting needs paint brushes and Blender provides a Brush panel within the Toolbar when in Weight Paint Mode. - Brush In the Data-Block menu you find predefined Brush presets. And you can create your own custom presets as needed. - Radius The radius defines the area of influence of the brush. - Strength This is the amount of paint to be applied per brush stroke. - Use Falloff When enabled, use Strength falloff for the brush. Brush Strength decays with the distance from the center of the brush. - Weight The weight (visualized as a color) to be used by the brush.
https://docs.blender.org/manual/en/2.93/grease_pencil/modes/weight_paint/tool_settings/brush.html
2022-09-24T19:20:07
CC-MAIN-2022-40
1664030333455.97
[]
docs.blender.org
#Node RPC You can view node RPC methods here #Default RPC endpoint #RPC call example #Address nonce Address nonce is a transaction counter in each Idena address. This prevents replay attacks where a transaction sending eg. 20 coins from A to B can be replayed by B over and over to continually drain A's balance. The nonce keeps track of how many transactions the sender has sent during the current epoch. Address nonce starts from 0 each epoch. Nonce is the transaction counter only of the sending address. It doesn't include transactions received by the address. You can get the current account nonce by calling dna_getBalance method. Example: Response: #Epoch You can get the current epoch using dna_epoch method. Example: Response: #Transaction nonce and epoch When sending transaction the current epoch number and subsequent nonce value should be specified for the sender address ( address nonce+1). Example: . You can specify the maximum fee limit for the transaction maxFee. #Dust All addresses with balances less than dust are cleaned every time a new epoch starts. Dust coins are burnt to prevent spam and minimize the size of the blockchain state. You can calculate the dust size using the following formula: #The smallest unit of iDNA The smallest unit of iDNA is 1e-18 iDNA ( 0.000000000000000001 iDNA). #Raw transactions You can build and sign raw transaction offline. See js examples. Actual protobuf model of transactions see here.
https://docs.idena.io/docs/developer/node/node-rpc
2022-09-24T18:43:00
CC-MAIN-2022-40
1664030333455.97
[]
docs.idena.io
Create a Hosted AWS Connection Prerequisites You will be asked for your AWS account number. From the AWS Management Console, click your user name and then select My Account. Your account ID is at the top of the page under Account Settings. Create a connection Log in to the PacketFabric portal. From the dashboard, click Hosted Cloud under Create New Service:NOTE: Read Only users do not see this action. If you need to create a connection and have Read Only permissions, contact your account administrator. Select AWS. Complete the following fields: Select Source Port Select the source port. The source port is the PacketFabric access port directly connected to your network. If there is nothing to select, provision a new port. Rather than creating a connection from AWS to a PacketFabric access port that you own, you can build a virtual circuit from your cloud to a third-party network via the PacketFabric marketplace. Click Switch to Source Marketplace Service: Use the drop-down menus to select a marketplace member and location, and then complete the remaining fields as described below. Choose AWS OnRamp and Capacity - Select AWS OnRamp - The physical on-ramp location you are using. This cannot be changed after it is provisioned. - Select Capacity - The bandwidth you want for your connection. - NOTE: Transit Gateway virtual interfaces are only supported on the following connection speeds: 1G, 2G, 5G, and 10G. - Select Availability Zone - Select a zone. - The zone refers to the physical interconnect diversity between PacketFabric and AWS (e.g. different routers). - Allocating connections within different zones supports redundancy. Configure Your Connection - Amazon Account ID - Enter your Amazon account ID. - This allows PacketFabric to send API requests to Amazon on your behalf. - Source VLAN ID - This. - NOTE: You cannot specify the VLAN ID facing AWS; it is automatically configured on your behalf. However, this does not affect your ability to use the AWS hosted connection. - NOTE: This field is not available if provisioning a marketplace-to-cloud connection. - Connection Description - Enter a description for the connection. - This description appears in the Name column when viewing your connections in the AWS portal: Product Confirmation Select the appropriate billing account to associate with this service. Review your information. When everything is correct, click Place Order. Accept the connection From the AWS Management Console, click the Services menu and select Networking & Content Delivery > Direct Connect. Click Connections. Locate and select the connection you created in the PacketFabric portal. Click Accept in the upper right. Click Confirm. Create an AWS virtual interface You will need to create a virtual interface (VIF) to associate with this connection. For more information, see the following AWS documentation: Amazon - Creating a Virtual Interface Next steps (marketplace-to-cloud users) Your connection remains disconnected until the other party accepts your request. Billing does not begin until the other party accepts and your circuit is provisioned. You can cancel the request or view status under Network > Connection Requests. Click Sent Requests. For more information, see Connection Requests.
https://docs.packetfabric.com/cloud/aws/hosted/create/
2022-09-24T19:27:02
CC-MAIN-2022-40
1664030333455.97
[array(['../../../../cloud/cloud_images/marketplace_source.png', 'Marketplace Source screenshot of the marketplace source option'], dtype=object) array(['../../../../eco/a/images/third_party_sent_request.png', 'alt_text'], dtype=object) ]
docs.packetfabric.com
Hardware requirements Reconmap is optimised to run efficiently even on low specs devices. The minimum and recommended hardware requirements to run the whole software are: Considering the minimum requirements above you could install Reconmap on an cheap 5USD droplet in Digital Ocean, altough better specs would bring you and your team increased performance.
https://docs.reconmap.com/admin-manual/hardware-requirements.html
2022-09-24T19:57:08
CC-MAIN-2022-40
1664030333455.97
[]
docs.reconmap.com
Once you create the configuration file, you are ready to initialize the database root directory, using the voltdb init command. You issue this command on each node of the cluster, specifying the configuration file and the location for the root directory. For example: On the command line, you specify two arguments: When you initialize the root directory, VoltDB: Creates the root directory (voltdbroot) as a subfolder of the specified parent directory Saves the configuration options in the new root directory Note that you only need to initialize the root directory once. Once the root directory is initialized, you can start and stop the database as needed. VoltDB uses the root directory to manage the current configuration options and backups of the data — if those features are selected — in command logs and snapshots. If the root directory already exists or has been initialized before, you cannot re-initialize the directory unless you include the --force argument. This is to protect you against accidentally deleting data from a previous database session.
https://docs.voltdb.com/v7docs/AdminGuide/OpsInit.php
2022-09-24T19:59:26
CC-MAIN-2022-40
1664030333455.97
[]
docs.voltdb.com
Input parameters This article describes the way Officient expects data input via the API There are 3 main categories of parameters for each endpoint in the Officient API: path, query string and request body. The API Reference includes a list of all available parameters for each possible request, but these sections offer an overview of the 3 main categories. Path parameters In an API URL, we include resource names and unique identifiers to help you figure out how to structure your requests. Resource names are immutable, but resource identifiers are required, so you need to replace them with real values from your Officient account. Let’s look at an example:{person_id}/detail In that URL, there is 1 path parameters that you need to replace with real values from your Officient account: person_id. When you replace those values with actual data, your final URL should look something like this: Query string parameters We use query string parameters for pagination in the Officient API. The format for query string parameters is the full resource URL followed by a question mark, and the optional parameters: Paginate your API requests to limit response results and make them easier to work with. Page defaults to 0, so if you use page=1, you’ll get the second page in the dataset. Request body parameters For PATCH, PUT, and POST requests, you may need to include a request body in JSON format. The API Reference shows you all the available request parameters for each endpoint, including required fields. The following example shows you the JSON data required to create a new person in Officient. { "name": "John Malkovich", "email": "[email protected]" } The Officient API only supports JSON. So instead of XML, HTTP POST parameters, or other serialization formats, most POST and PATCH requests require a valid JSON object for the body. Updated almost 4 years ago
https://apidocs.officient.io/docs/input-parameters
2022-09-24T19:16:41
CC-MAIN-2022-40
1664030333455.97
[]
apidocs.officient.io
generates an "X-Ray trace" and provides many useful pieces of debugging information. OverviewOverview The overview table provides quick information about the request, including requested URL, response time, response code, and external API calls to the database and remote servers. FlamegraphsFlamegraphs The flamegraph tab provides you with a full performance profile of the request, helping you to understand how the request was processed and executed. The flamegraph is a graph showing the request time on the X axis, and the call stack / depth on the Y axis. Each entry in the call-stack forms a "segment", and you can hover over any segment to view more details or click it to zoom to just that segment. (Click on the top segment to zoom back out.) Flamegraphs are taken from sampled profiles of the PHP process, at 5 millisecond intervals. Tall segments indicate a deep call-stack, but do not necessarily indicate a problem, while wide segments indicate slower operations. When investigating performance problems, start with the wide segments to understand what is taking a long time to run, and use the deeper segments to drill down and understand the operations taking place. Note that as the resolution of the whole flamegraph is 5ms, all times are rounded. Fast functions may still appear in the flamegraph if they happened to be running when the sample was taken, and for this reason, flamegraphs are more valuable for slower requests. (A 5ms interval is used as the sampling rate to ensure negligible performance overhead when collecting the profiles.) Request DataRequest Data The request tab displays data sent to and served from the server, including the client IP, request and response headers, and any other statistics generated by the request. This can assist in debugging unexpected behaviour by replicating the conditions, or finding unexpected header values. Some information is redacted from the request data, for privacy reasons, including user passwords. You can redact additional information by filtering aws_xray.redact_metadata_keys or aws_xray.redact_metadata: // Redact keys from the metadata: add_filter( 'aws_xray.redact_metadata_keys', function ( $keys ) { $keys['$_POST'][] = 'rcp_user_pass'; return $keys; } ); // Or, manually alter or remove metadata: add_filter( 'aws_xray.redact_metadata', function ( $metadata ) { foreach ( $metadata['response']['headers'] as &$header ) { if ( strpos( $header, 'secret_val' ) !== -1 ) { unset( $header ); } } return $metadata; } ); TimelineTimeline The timeline shows all database queries and remote requests chronologically, along with their duration. Click any entry to view more details, including the duration and start time (time since the request started). For database queries, the full query and the database server name (e.g. primary or read replica) are displayed. For remote requests, the HTTP response code is displayed. Only requests made through the WordPress HTTP API or the AWS SDK will be shown. Use the buttons in the top right to filter by type, or the search field to find specific queries or requests. Note: The response time indicated is the time from the start of the request until the request is sent to the user and finished. When sending an early response (e.g. via fastcgi_finish_request()), queries and requests may occur after the request is finished; this will not count towards page load times experienced by your end users. ErrorsErrors Any PHP errors will be recorded for each X-Ray request and displayed in the Errors tab. Any errors triggered via trigger_error() will also be displayed here.
https://docs.altis-dxp.com/v12/cloud/dashboard/x-ray/
2022-09-24T20:21:06
CC-MAIN-2022-40
1664030333455.97
[array(['/cloud/assets/xray-summary.png', 'Example XRay Summary'], dtype=object) array(['/cloud/assets/xray-overview.png', None], dtype=object) array(['/cloud/assets/xray-flamegraph.png', 'Example Flamegrapth'], dtype=object) array(['/cloud/assets/xray-request.png', 'Example request data'], dtype=object) array(['/cloud/assets/xray-timeline.png', 'Example Timeline'], dtype=object) array(['/cloud/assets/xray-errors.png', 'Example Remote Requests'], dtype=object) ]
docs.altis-dxp.com
Before deploying application that use an application tier or an application cluster, you must define a tier map to associate the application with an environment. Defining an application tier map In the Application Editor for the application that you want to deploy: Click Environment. Example: Select Amazon ECS Production. Map the application tier to the environment tier: In the application tier, click in the corresponding environment tiers column and select an environment tier. Select one from the list or enter the search criteria in the Search field. Click OK when the application tier is mapped to an environment tier.
https://docs.cloudbees.com/docs/cloudbees-cd/10.1/deploy-automation/environment-tiers
2022-09-24T19:45:26
CC-MAIN-2022-40
1664030333455.97
[]
docs.cloudbees.com
Tracking Access for Confluence Tracking Access for Confluence Track content access anonymously via "hit counting" Search Installation Guide This guide will get you up and running in no time. → Macro Reference A complete lexicon of all the macros provided with Tracking Access. → Knowledge Base Need some pointers? Check our FAQs and support documentation. → User Guide A repository of guides for Tracking Access. → Tracking Data A supplier for use with the Reporting app. →
https://docs.servicerocket.com/tracking-access
2022-09-24T19:08:48
CC-MAIN-2022-40
1664030333455.97
[]
docs.servicerocket.com
component renders a mesh. It works with a Mesh FilterA mesh component that takes a mesh from your assets and passes it to the Mesh Renderer for rendering on the screen. More info See in Glossary component on the same GameObjectThe fundamental object in Unity scenes, which can represent characters, props, scenery, cameras, waypoints, and more. A GameObject’s functionality is defined by the Components attached to it. More info See in Glossary; the Mesh Renderer renders the mesh that the Mesh Filter references. To render a deformable mesh, use a Skinned Mesh Renderer instead. In C# code, the MeshRenderer class represents a Mesh Renderer component. The MeshRenderer class inherits much of its functionality from the Renderer class. As such, this component has a lot in common with other components that inherit from Renderer, such, and Trail RendererA visual effect that lets you to make trails behind GameObjects in the Scene as they move. More info See in Glossary. A: MaterialsAn asset that defines how a surface should be rendered. More info See in Glossary B: Lighting C: Lightmapping D: Probes E: Additional Settings The Materials section lists all the materials that this component uses. Note: If there are more materials than there are sub-meshes, Unity renders the last sub-mesh with each of the remaining materials, one on top of the next. If the materials are not fully opaque, you can layer different materials and create interesting visual effects. However, fully opaque materials overwrite previous layers, so any additional opaque materials that Unity applies to the last sub-mesh negatively affect performance and produce no benefit. The Lighting section contains properties that relate to lighting. The Lightmapping section contains properties relating to baked and real-time lightmapsA pre-rendered texture that contains the effects of light sources on static objects in the scene. Lightmaps are overlaid on top of scene geometry to create the effect of lighting. More info See in Glossary. This section is visible only if only if Receive Global IlluminationA group of techniques that model both direct and indirect lighting to provide realistic lighting results. See in Glossary is set to Lightmaps. When you’ve baked your lighting data (menu: Window > Rendering > Lighting > Generate Lighting ), this section also shows the baked lightmaps and real-time lightmaps in the current scene that this Renderer uses. The Probes section contains properties relating to Light Probe and Reflection ProbesA rendering component that captures a spherical view of its surroundings in all directions, rather like a camera. The captured image is then stored as a Cubemap that can be used by objects with reflective materials. More info See in Glossary. The Additional Settings section contains additional properties.
https://docs.unity3d.com/Manual/class-MeshRenderer.html
2022-09-24T19:49:54
CC-MAIN-2022-40
1664030333455.97
[]
docs.unity3d.com
The configuration file describes the physical configuration of a VoltDB database cluster at runtime, including the number of sites per hosts and K-safety value, among other things. This appendix describes the syntax for each component within the configuration file. The configuration file is a fully-conformant XML file. If you are unfamiliar with XML, see Section E.1, “Understanding XML Syntax” for a brief explanation of XML syntax. The configuration file is a fully-conformant XML file. XML files consist of a series of nested elements identified by beginning and ending "tags". The beginning tag is the element name enclosed in angle brackets and the ending tag is the same except that the element name is preceded by a slash. For example: <deployment> <cluster> </cluster> </deployment> Elements can be nested. In the preceding example cluster is a child of the element deployment. Elements can also have attributes that are specified within the starting tag by the attribute name, an equals sign, and its value enclosed in single or double quotes. In the following example the hostcount and sitesperhost attributes of the cluster element are assigned values of "2" and "4", respectively. <deployment> <cluster hostcount="2" sitesperhost="4"> </cluster> </deployment> Finally, as a shorthand, elements that do not contain any children can be entered without an ending tag by adding the slash to the end of the initial tag. In the following example, the cluster and heartbeat tags use this form of shorthand: <deployment> <cluster hostcount="2" sitesperhost="4"/> <heartbeat timeout="10"/> </deployment> For complete information about the XML standard and XML syntax, see the official XML site at.
https://docs.voltdb.com/v7docs/UsingVoltDB/AppxConfigFile.php
2022-09-24T18:56:34
CC-MAIN-2022-40
1664030333455.97
[]
docs.voltdb.com
Creating your own custom close trigger for your Divi Attention popup/overlay elements is easy. Simply apply the class ‘close-divi-attention‘ to any element in your popup/overlay to automatically make it a close trigger. Check out the video below for a demo on how to create your own close trigger elements in Divi Attention.
https://docs.wpcreatorsclub.com/custom-close-triggers-for-overlays/
2022-09-24T20:30:16
CC-MAIN-2022-40
1664030333455.97
[]
docs.wpcreatorsclub.com
VMware vCloud Director - 1 minute to read - - DarkLight VMware vCloud Director is a cloud service-delivery platform to operate and manage cloud-service businesses. The username has to be in a specific format as described here: so , if this is a global user it would be something like admin@system but if its an org user it can be something like [email protected]@acme-org-id The VMware vCloud Director adapter connection requires the following parameters: vCloud Director Domain – Your VMware vCloud Director domain. User Name and Password - Provide the user name and password for a read-only user. You must append either "@system" to the user name or "@orgname". This depends on the role you have set for the user. System administrator users must use "@system" because admin user is not bound to a special organization. org admin or org user must append the organization name. For example: - superadmin@system - orgadmin@acme - [email protected]@system (email format)<<
https://docs.axonius.com/docs/vmware-vcloud-director
2019-11-12T01:24:43
CC-MAIN-2019-47
1573496664469.42
[array(['https://cdn.document360.io/95e0796d-2537-45b0-b972-fc0c142c6893/Images/Documentation/image%28815%29.png', 'image.png'], dtype=object) ]
docs.axonius.com
List subscriptions¶ Subscriptions API v2 GET*customerId*/subscriptions Authentication:API keysOrganization access tokensApp access tokens Retrieve all subscriptions of a customer. Parameters¶ Replace customerId in the endpoint URL by the customer’s ID, for example cst_8wmqcHMN4U. Access token parameters¶ If you are using organization access tokens or are creating an OAuth app, the only mandatory extra query string parameter is the profileId parameter. With it, you can specify for which profile you want to retrieve subscriptions. Organizations can have multiple profiles for each of their websites. See Profiles API for more information.
https://docs.mollie.com/reference/v2/subscriptions-api/list-subscriptions
2019-11-12T02:18:43
CC-MAIN-2019-47
1573496664469.42
[]
docs.mollie.com
Managing concurrency in asynchronous query execution The DSE drivers support sending multiple concurrent requests on a single connection to improve overall query performance. The DSE drivers support sending multiple concurrent requests on a single connection to improve overall query performance. This is also known as request pipelining. These requests are processed by the server concurrently and responses are sent back to the client driver without strict ordering, allowing improved overall performance when a single operation is slow but the rest of the operations can be processed without any delay. For example, a query that involves consolidating data from multiple partitions will be much slower than a query that only retrieves data from a single partition. DSE deployments should be planned and provisioned to support the maximum number of parallel requests required for the desired latency of an application. For a given deployment, introducing more load to the system above a minimum threshold will increase overall latency. On the client side, the driver limits the amount of in-flights requests (or simultaneous requests that haven't completed yet) to between 1024 and 2048 per connection by default, depending on the driver language. Above that limit, the driver will immediately throw an exception indicating that the connections to the cluster are busy. You may reach this limit as a result of handling incoming load to your application. If your application is hitting the limit of in-flight requests add additional capacity to your DSE cluster. Limiting simultaneous requests in your application code When submitting several requests in parallel, the requests are queued at one of three levels: on the driver side, on the network stack, or on the server side. Excessive queueing on any of these levels affects the total time it takes each operation to complete. Adjust the concurrency level, or number of simultaneous requests, to reduce the amount of queuing and get high throughput and low latency. - the server cluster size - the number of instances of the application accessing the database - the complexity of the queries When implementing an application, launch a fixed number of asynchronous operations using the concurrency level as the maximum. As each operation completes, add a new one. This ensures your application's asynchronous operations will not exceed the concurrency level. The following code examples show how to launch asynchronous operations in a loop and controlling the concurrency level. Using specialized tools to avoid problems in custom applications Unbounded concurrency issues often arise when performing bulk operations in custom code. Avoid them by using the appropriate tool for the task. If you are importing data from other sources use dsbulk. If you are performing transformations from external data sources use Apache Spark.
https://docs.datastax.com/en/devapp/doc/devapp/driverManagingConcurrency.html
2019-11-12T01:59:52
CC-MAIN-2019-47
1573496664469.42
[]
docs.datastax.com
This Generation Task lets you insert Dimension text. Each property will set the text to the alignment in the name of the property. For example the Dimension Left Text will put the text on the left side of the dimension. You can choose whether to use only one Dimension Text alignment property or multiple but the Task must have a Dimension Name to locate which dimension to insert the text next to. To be able to set text for a Dimension below the Leader Line you must make sure you have the Document Option in SOLIDWORKS set to Solid Leader Align Text. To do this Navigate to Document Properties in Options and expand Dimensions. On each of these dimension types make sure it is set to Solid Leader Align Text. When this task is added the properties are static by default. See How To: Change A Static Property To A Dynamic Property to enable rules to be built on these properties.
https://docs.driveworkspro.com/Topic/GTSetDimensionText
2019-11-12T02:05:19
CC-MAIN-2019-47
1573496664469.42
[]
docs.driveworkspro.com
Configuring Portfolios and Applications Portfolios and Applications are available as part of the Enterprise branches Once your application is populated with projects, you can create application branches by choosing long-lived branches from the application's component projects. This option is available in the Application's Administration > Edit Definition interface, or from the global administration interface..
https://docs.sonarqube.org/7.3/ConfiguringPortfoliosandApplications.html
2019-11-12T01:50:36
CC-MAIN-2019-47
1573496664469.42
[]
docs.sonarqube.org
Agent With Tideflow's Agent, you to run commands in you computers as part of your processes. This means, even if you are running Tideflow in a cloud server, you can run commands as part of your workflow tasks in your office or home computers. Tideflow's agent can take data from processes tasks connected to it, and will report its results to the next tasks. Note it does requires to have NodeJS installed in the machine intended to execute the agent Read tideflow-agent's cli README to learn more about the cli tool and how to use it. The cli is also available on npmjs.com. @tideflowio/tideflow-agent
https://docs.tideflow.io/docs/services-agent
2019-11-12T01:22:18
CC-MAIN-2019-47
1573496664469.42
[]
docs.tideflow.io
All content with label article+as5+async+cache+gridfs+gui_demo+infinispan+jpa+listener+repeatable_read+transaction+userguide+whitepaper. Related Labels: podcast, expiration, datagrid, coherence, interceptor, server, replication, recovery, transactionmanager, dist, release, query, deadlock, intro, archetype, pojo_cache, lock_striping, jbossas, nexus, guide, schema, s3, amazon, grid, jcache, test, api, xsd, ehcache, maven, documentation, youtube, write_behind, 缓存, ec2, hibernate, interface, custom_interceptor, clustering, setup, eviction, fine_grained, concurrency, out_of_memory, jboss_cache, index, events, configuration, hash_function, batch, buddy_replication, loader, xa, pojo, write_through, cloud, mvcc, notification, tutorial, presentation, jbosscache3x, read_committed, xml, distribution, meeting, cachestore, data_grid, resteasy, hibernate_search, cluster, development, websocket, interactive, xaresource, build, searchable, demo, scala, installation, ispn, client, non-blocking, migration, filesystem, tx, user_guide, eventing, client_server, infinispan_user_guide, standalone, snapshot, webdav, hotrod, docs, batching, consistent_hash, store, jta, faq, spring, 2lcache, jsr-107, jgroups, lucene, locking, rest, hot_rod more » ( - article, - as5, - async, - cache, - gridfs, - gui_demo, - infinispan, - jpa, - listener, - repeatable_read, - transaction, - userguide, - whitepaper ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/article+as5+async+cache+gridfs+gui_demo+infinispan+jpa+listener+repeatable_read+transaction+userguide+whitepaper
2019-11-12T02:30:50
CC-MAIN-2019-47
1573496664469.42
[]
docs.jboss.org
All content with label batch+client+consistent_hash+distribution+gridfs+infinispan+migration+query+standalone+testng+xaresource. Related Labels: expiration, publish, datagrid, coherence, interceptor, server, rehash, replication, recovery, transactionmanager, dist, release, partitioning, deadlock, lock_striping, jbossas, nexus, guide, schema, listener, state_transfer, cache, amazon, s3, grid, memcached, test, jcache, api, xsd, ehcache, wildfly, maven, documentation, write_behind, ec2, 缓存, hibernate, aws, getting_started, custom_interceptor, clustering, setup, eviction, out_of_memory, concurrency, jboss_cache, import, index, events, configuration, hash_function, buddy_replication, loader, colocation, write_through, cloud, jsr352, remoting, mvcc, notification, tutorial, murmurhash2, xml, jbosscache3x, cachestore, data_grid, cacheloader, resteasy, hibernate_search, cluster, br, development, transaction, async, interactive, build, hinting, searchable, demo, installation, scala, command-line, mod_cluster, jberet, non-blocking, rebalance, filesystem, jpa, tx, gui_demo, eventing, shell, client_server, murmurhash, infinispan_user_guide, webdav, hotrod, snapshot, repeatable_read, docs, batching, store, jta, faq, 2lcache, jsr-107, lucene, jgroups, locking, rest, hot_rod more » ( - batch, - client, - consistent_hash, - distribution, - gridfs, - infinispan, - migration, - query, - standalone, - testng, - xaresource ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/batch+client+consistent_hash+distribution+gridfs+infinispan+migration+query+standalone+testng+xaresource
2019-11-12T02:05:12
CC-MAIN-2019-47
1573496664469.42
[]
docs.jboss.org
5. Building C and C++ Extensions on Windows¶ This. 5.1. A Cookbook Approach¶ (Legacy version). If you find you really need to do things manually, it may be instructive to study the project file for the winsound standard library module. 5.2. Differences Between Unix and Windows¶. 5.3. Using DLLs in Practice¶.
https://docs.python.org/3.8/extending/windows.html
2019-11-12T01:25:08
CC-MAIN-2019-47
1573496664469.42
[]
docs.python.org
PanGesture super: UIPanGestureRecognizer (on iOS) PanGesture is a concrete gesture recognizer that looks for panning (dragging) gestures. The user must be pressing one or more fingers on a view while they pan it. Clients implementing the action method for this gesture recognizer can ask it for the current translation and velocity of the gesture. A panning gesture is continuous. It begins (GestureRecognizerState.Began) when the minimum number of fingers allowed (minimumNumberOfTouches) has moved enough to be considered a pan. It changes (GestureRecognizerState.Changed) when a finger moves while at least the minimum number of fingers are pressed down. It ends (GestureRecognizerState.Ended) when maximumNumberOfTouches: Int The maximum number of fingers that can be touching the view for this gesture to be recognized. var minimumNumberOfTouches: Int The minimum number of fingers that can be touching the view for this gesture to be recognized. The default value is 1. var objectName: String The name of the object. Methods func translationInView(view: Object): Point The translation of the pan gesture in the coordinate system of the specified view. The x and y values report the total translation over time. func setTranslationInView(translation: Point, view: Object) Sets the translation value in the coordinate system of the specified view. Changing the translation value resets the velocity of the pan. func velocityInView(view: Object): Point The velocity of the pan gesture, expressed in points per second, in the coordinate system of the specified view. The velocity is broken into horizontal and vertical components.
https://docs.creolabs.com/classes/PanGesture.html
2019-11-12T01:59:43
CC-MAIN-2019-47
1573496664469.42
[]
docs.creolabs.com
Bondysecurity.allow_anonymous_user = off Notice that for every option not provided by your configuration, Bondy will define a default value (also specified in the following sections). Within the bondy.conf file you can use the following variables which Bondy will substitute before running. The following is an example of how to use variable substitution. broker_bridge.config_file = $(platform_etc_dir)/broker_bridge_config.json Notice these mechanism cannot be used to do OS environment variables substitution. However, Bondy provides a tool for OS variable substitution that is automatically used by the Bondy Docker image start script. To understand how to use OS environment variables substitution in Docker read this section, otherwise take a look at how the start.sh script uses it in the official docker images. Some features and/or subsystems in Bondy allow providing an additional JSON configuration file e.g. the Security subsystem. In those cases, we need to let Bondy know where to find those specific files. This is done in the bondy.conf under the desired section e.g. the following configuration file adds the location for the security_conf.json file. nodename = [email protected]_cookie = bondysecurity.allow_anonymous_user = offsecurity.config_file = /bondy/etc/security_conf.json In addition to the bondy.conf file , you can place a vm.args configuration file in the same path in which you find bondy.conf to configure Bondy's Erlang VM. Notice that providing your own vm.args works differently than providing a bondy.conf file. While your bondy.conf options are merged with the defaults, thus overriding the defaults for the keys you provide but leaving intact the others, the vm.args options are a full replacement of the dynamically generated vm.args by Bondy. So only use this option if you really know what you are doing. If you really need to do this, we suggest using Bondy's generated vm.args as a base for your customisations.
https://docs.getbondy.io/configuring/configuration-reference
2019-11-12T01:17:54
CC-MAIN-2019-47
1573496664469.42
[]
docs.getbondy.io
sp_pdw_remove_network_credentials (SQL Data Warehouse) SQL Server Azure SQL Database Azure Synapse Analytics (SQL DW) Parallel Data Warehouse This removes network credentials stored in SQL Data Warehouse to access a network file share. For example, use this stored procedure to remove permission for SQL Data Warehouse to perform backup and restore operations on a server that resides within your own network. Transact-SQL Syntax Conventions (Transact-SQL) Syntax -- Syntax for Azure SQL Data Warehouse and Parallel Data Warehouse sp_pdw_remove_network_credentials 'target_server_name' Arguments 'target_server_name' Specifies the target server host name or IP address. Credentials to access this server will be removed from SQL Data Warehouse. This does not change or remove any permissions on the actual target server which is managed by your own team. target_server_name is defined as nvarchar(337). Return Code Values 0 (success) or 1 (failure) Permissions Requires ALTER SERVER STATE permission. Error Handling An error occurs if removing credentials does not succeed on the Control node and all Compute nodes. General Remarks This stored procedure removes network credentials from the NetworkService account for SQL Data Warehouse. The NetworkService account runs each instance of SMP SQL Server on the Control node and the Compute nodes. For example, when a backup operation runs, the Control node and each Compute node will use the NetworkService account credentials to access the target server. Metadata To list all credentials and to verify the credentials have been removed, use sys.dm_pdw_network_credentials (Transact-SQL). To add credentials, use sp_pdw_add_network_credentials (SQL Data Warehouse). Examples: Azure Synapse Analytics (SQL DW) and Parallel Data Warehouse A. Remove credentials for performing a database backup The following example removes user name and password credentials for accessing the target server which has an IP address of 10.192.147.63. EXEC sp_pdw_remove_network_credentials '10.192.147.63'; Feedback
https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-pdw-remove-network-credentials-sql-data-warehouse?view=aps-pdw-2016-au7&viewFallbackFrom=sql-server-ver15
2019-11-12T01:19:35
CC-MAIN-2019-47
1573496664469.42
[]
docs.microsoft.com
Configuration method failed to access ODM As part of the Blueworx Voice Response startup procedure, several hardware devices are configured. To configure a device, the configuration method for that device is used. The configuration method needs to access ODM to find parameters with which the device should be configured. The odm_initialize
http://docs.blueworx.com/BVR/InfoCenter/V6.1/help/topic/com.ibm.wvraix.probdet.doc/dtxprobdet450.html
2019-08-17T14:33:27
CC-MAIN-2019-35
1566027313428.28
[]
docs.blueworx.com
Service methods are made available to more finely control how CPS integrates with your software. Typical usage is for creating your own activation forms rather than using the pre-built ones available in the CPS dll. They can also be used to control logic flow more finely. All service methods can be instantiated as follows. C# var cpsService = new CpsService(); VB.NET Dim cpsService = New CpsService And all return a CpsResult object Properties - bool Result - string Description - string LicenseId (v.1.5) - string LicenseHash (v.1.5)
https://docs.copyprotectsoftware.com/docs/net-integration/service-methods/
2019-08-17T15:32:19
CC-MAIN-2019-35
1566027313428.28
[]
docs.copyprotectsoftware.com
An overview of Mozilla’s Data Pipeline This post describes the architecture of Mozilla’s data pipeline, which is used to collect Telemetry data from our users and logs from various services. One of the cool perks of working at Mozilla is that most of what we do is out in the open and because of that I can do more than just show you some diagram with arrows of our architecture; I can point you to the code, script & configuration that underlies it! To make the examples concrete, the following description is centered around the collection of Telemetry data. The same tool-chain is used to collect, store and analyze data coming from disparate sources though, such as service logs. Firefox There are different APIs and formats to collect data in Firefox, all suiting different use cases: - histograms – for recording multiple data points; - scalars – for recording single values; - timings – for measuring how long operations take; - events – for recording time-stamped events. These are commonly referred to as probes. Each probe must declare the collection policy it conforms to: either in preferences. A session begins when Firefox starts up and ends when it shuts down. As a session could be long-running and last weeks, it gets sliced into smaller logical units called subsessions. Each subsession generates a batch of data containing the current state of all probes collected so far, i.e. a main ping, which is sent to our servers. The main ping is just one of the many ping types we support. Developers can create their own ping types if needed. Pings are submitted via an API that performs a HTTP POST request to our edge servers. If a ping fails to successfully submit (e.g. because of missing internet connection), Firefox will store the ping on disk and retry to send it until the maximum ping age is exceeded. Kafka HTTP submissions coming in from the wild hit a load balancer and then an NGINX module. The module accepts data via a HTTP request which it wraps in a Hindsight protobuf message and forwards to two places: a Kafka cluster and a short-lived S3 bucket (landfill) which acts as a fail-safe in case there is a processing error and/or data loss within the rest of the pipeline. The deployment scripts and configuration files of NGINX and Kafka live in a private repository. The data from Kafka is read from the Complex Event Processors (CEP) and the Data Warehouse Loader (DWL), both of which use Hindsight. Hindsight Hindsight, an open source stream processing software system developed by Mozilla as Heka’s successor, is useful for a wide variety of different tasks, such as: - converting data from one format to another; - shipping data from one location to another; - performing real time analysis, graphing, and anomaly detection. Hindsight’s core is a lightweight data processing kernel written in C that controls a set of Lua plugins executed inside a sandbox. The CEP are custom plugins that are created, configured and deployed from an UI which produce real-time plots like the number of pings matching a certain criteria. Mozilla employees can access the UI and create/deploy their own custom plugin in real-time without interfering with other plugins running. The DWL is composed of a set of plugins that transform, convert & finally shovel pings into S3 for long term storage. In the specific case of Telemetry data, an input plugin reads pings from Kafka, pre-processes them and sends batches to S3, our data lake, for long term storage. The data is compressed and partitioned by a set of dimensions, like date and application. The data has traditionally been serialized to Protobuf sequence files which contain some nasty “free-form” JSON fields. Hindsight gained recently the ability to dump data directly in Parquet form though. The deployment scripts and configuration files of the CEP & DWL live in a private repository. Spark Once the data reaches our data lake on S3 it can be processed with Spark.. We have a Github repository telemetry-batch-view that showcases this. A dedicated Spark job feeds daily aggregates to a Postgres database which powers a HTTP service to easily retrieve faceted roll-ups. The service is mainly used by TMO, a dashboard that visualizes distributions and time-series, and cerberus, an anomaly detection tool that detects and alerts developers of changes in the distributions. Originally the sole purpose of the Telemetry pipeline was to feed data into this dashboard but in time its scope and flexibility grew to support more general use-cases. Presto & re:dash We maintain a couple of Presto clusters and a centralized Hive metastore to query Parquet data with SQL. The Hive metastore provides an universal view of our Parquet dataset to both Spark and Presto clusters. Presto, and other databases, are behind a re:dash service (STMO) which provides a convenient & powerful interface to query SQL engines and build dashboards that can be shared within the company. Mozilla maintains its own fork of re:dash to iterate quickly on new features, but as good open source citizen we push our changes upstream. Is that it? No, not really. For example, the DWL pushes some of the Telemetry data to Redshift and Elasticsearch but those tools satisfy more niche needs. The pipeline ingests logs from services as well and there are many specialized dashboards out there I haven’t mentioned. We also use Zeppelin as a means to create interactive data analysis notebooks that supports Spark, SQL, Scala and more. There is a vast ecosystem of tools for processing data at scale, each with their pros & cons. The pipeline grew organically and we added new tools as new use-cases came up that we couldn’t solve with our existing stack. There are still scars left from that growth though which require some effort to get rid of, like ingesting data from schema-less format.
https://docs.telemetry.mozilla.org/concepts/data_pipeline.html
2018-01-16T15:04:55
CC-MAIN-2018-05
1516084886437.0
[array(['../assets/pipeline_flowchart.jpeg', 'Pipeline Flowchart'], dtype=object) array(['../assets/CEP_custom_plugin.jpeg', 'CEP – a custom plugin in action CEP Custom Plugin'], dtype=object) array(['../assets/ATMO_example.jpeg', 'ATMO – monitoring clusters ATMO'], dtype=object) array(['../assets/TMO_example.jpeg', 'TMO – timeseries TMO'], dtype=object) array(['../assets/STMO_example.jpeg', 'STMO – who doesn’t love SQL? STMO'], dtype=object) ]
docs.telemetry.mozilla.org
Magento Commerce, 1.14.x Shopping Cart Thumbnails The thumbnail images in the shopping cart give customers a quick overview of each item. However, for products with multiple options, the standard product image may not match the actual item purchased. If the customer purchased a pair of red shoes, ideally, the thumbnail in the shopping cart should show the product in the same color. Thumbnails for grouped and configurable products can display either the image from the “parent” or associated “child” product in the current store view. To configure shopping cart thumbnails: - Product Thumbnail Itself - Parent Product Thumbnail - Product Thumbnail Itself - Parent Product Thumbnail
http://docs.magento.com/m1/ee/user_guide/catalog/product-image-shopping-cart-thumbnails.html
2018-01-16T15:10:02
CC-MAIN-2018-05
1516084886437.0
[]
docs.magento.com
Changing a Data Set In two situations, changes to a data set might cause concern. One is if you deliberately edit the data set. The other is if your data source has changed so much that it affects the analyses based on it. Important Analyses that are in production usage should be protected so they continue to function correctly. We recommend the following when you're dealing with data changes: Carefully document your data sources and data sets, and the visuals that rely upon them. Documentation should include screenshots, fields used, placement in field wells, filters, sorts, calculations, colors, formatting, and so on. Record everything that you need to recreate the visual. When you edit a data set, try not to make changes that might break existing visuals. For example, don't remove columns that are being used in a visual. If you must remove a column, create a calculated column in its place. The replacement column should have the same name and data type as the original. If your data source or data set changes in your source database, adapt your visual to accommodate the change, as described previously. Or you can try to adapt the source database. For example, you might create a view of the source table (document). Then if the table changes, you can adjust the view to include or exclude columns (attributes), change data types, fill null values, and so on. Or, in another circumstance, if your data set is based on a slow SQL query, you might create a table to hold the results of the query. If you can't sufficiently adapt the source of the data, recreate the visuals based on your documentation of the analysis. If you no longer have access to a data source, your analyses based on that source are empty. The visuals you created still exist, but they can't display until they have some data to show. This result can happen if permissions are changed by your administrator. If you remove the data set a visual is based on, you might need to recreate it from your documentation. You can edit the visual and select a new data set to use with it. If you need to consistently use a new file to replace an older one, store your data in a location that is consistently available. For example, you might store your .csv file in S3 and create an S3 data set to use for your visuals. For more information on access files stored in S3, see Creating a Data Set Using Amazon S3 Files. Alternatively, you can import the data into a table, and base your visual on a query. This way, the data structures don't change, even if the data contained in them changes.
https://docs.aws.amazon.com/quicksight/latest/user/change-a-data-set.html
2018-01-16T15:42:01
CC-MAIN-2018-05
1516084886437.0
[]
docs.aws.amazon.com
Creating Shipping Labels You can easily create shipping labels for new and existing orders from the Admin, click the Add Package button. - To delete a package, click the Delete Package button. If you use a package type other than the default, or require a signature, the cost of shipping might differ from what you have charged the customer. Any difference in the cost of shipping is not reflected in your store. - If you need to cancel an order, click the Cancel button. A shipping label will not be created, and the Create Shipping Label checkbox is cleared. - If the label is successfully created, the shipment is submitted, the tracking number appears in the form, and the label is ready to print. - If the carrier cannot create the label due to the problems with connection, or for any other reason, the shipment is not processed. . Shipping labels are generated in PDF format, and can be printed from the Admin. Each label includes the order number and package number. Because an individual shipment order for each package is created, multiple shipping labels might be received for a single shipment. - Select Sales > Orders. Find the order in the list, and click to open the record. In the Order View panel on the left, select Shipments. Then, click to open the shipment record. - Select Sales > Shipments. Find the order in the list, and click to open the record. The Print Shipping Label button appears only after the carrier has generated labels for the shipment. If the button is missing, click the Create Shipping Label button. The button will appear after Magento receives the label from the carrier. - Select Sales > Orders. - Select Sales > Shipments. A complete set of shipping labels is printed for each shipment that is related to the selected orders.
http://docs.magento.com/m1/ee/user_guide/shipping/shipping-labels-create.html
2018-01-16T15:24:07
CC-MAIN-2018-05
1516084886437.0
[array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.magento.com
kubeletArguments: pods-per-core: - "10" In addition to pod traffic, the most-used data-path in an OpenShift Origin infrastructure is between the OpenShift Origin master hosts and etcd. The OpenShift Origin API server (part of the master binary) consults etcd for node status, network configuration, secrets, and more. Optimize this traffic path by: Co-locating master hosts and etcd servers. Ensuring an uncongested, low latency LAN communication link between master hosts. The OpenShift Origin Origin Origin cluster. The recommended sizing accounts for OpenShift Origin and Docker coordination for container status updates. This coordination puts CPU pressure on the master and docker processes, which can include writing a large amount of log data. etcd is a distributed key-value store that OpenShift Origin. Upgrades from previous versions of OpenShift Origin. In order to provide customers time to prepare for migrating the etcd schema from v2 to v3 (and associated downtime and verification), OpenShift Origin 3.6 does not enforce this upgrade. However, based on extensive test results Red Hat strongly recommends migrating existing OpenShift Origin clusters to etcd 3.x storage mode v3. This is particularly relevant in larger clusters, or in scenarios where SSD storage is not available. In addition to changing the storage mode for new installs to v3, OpenShift Origin 3.6 also begins enforcing quorum reads for all OpenShift Origin. Please see the etcd 3.1 announcement for more information on performance improvements. It is important to note that OpenShift Origin uses etcd for storing additional information beyond what Kubernetes itself requires. For example, OpenShift Origin stores information about images, builds, and other components in etcd, as is required by features that OpenShift Origin adds on top of Kubernetes. Ultimately, this means that guidance around performance and sizing for etcd hosts will differ from Kubernetes and other recommendations in salient ways. Red Hat tests etcd scalability and performance with the OpenShift Origin use-case and parameters in mind to generate the most accurate recommendations. Performance improvements were quantified using a 300-node OpenShift Origin:=pt to the end of the GRUB_CMDLINX_LINUX line, and nova-scheduler Origin Origin system, the findings delivered by TuneD will be the union of throughput-performance (the default for RHEL) and atomic-openshift-guest. TuneD will determine if you are running OpenShift Origin.
https://docs.openshift.org/3.6/scaling_performance/host_practices.html
2018-01-16T15:42:16
CC-MAIN-2018-05
1516084886437.0
[]
docs.openshift.org
Delta E Equations¶ Delta E equations are used to put a number on the visual difference between two LabColor instances. While different lighting conditions, substrates, and physical condition can all introduce unexpected variables, these equations are a good rough starting point for comparing colors. Each of the following Delta E functions has different characteristics. Some may be more suitable for certain applications than others. While it’s outside the scope of this module’s documentation to go into much detail, we link to relevant material when possible. Example¶ from colormath.color_objects import LabColor from colormath.color_diff import delta_e_cie1976 # Reference color. color1 = LabColor(lab_l=0.9, lab_a=16.3, lab_b=-2.22) # Color to be compared to the reference. color2 = LabColor(lab_l=0.7, lab_a=14.2, lab_b=-1.80) # This is your delta E value as a float. delta_e = delta_e_cie1976(color1, color2)
http://python-colormath.readthedocs.io/en/latest/delta_e.html
2018-01-16T15:07:32
CC-MAIN-2018-05
1516084886437.0
[]
python-colormath.readthedocs.io
Entity Framework¶ The BrightstarDB Entity Framework is the main way of working with BrightstarDB instances. For those of you wanting to work with the underlying RDF directly please see the section on RDF Client API. BrightstarDB allows developers to define a data model using .NET interface definitions. BrightstarDB tools introspect these definitions to create concrete classes that can be used to create, and update persistent data. If you haven’t read the Getting Started section then we recommend that you do. The sample provided there covers most of what is required for creating most data models. The following sections in the developer guide provide more in-depth explanation of how things work along with more complex examples. Basics¶ The BrightstarDB Entity Framework tooling is very simple to use. This guide shows how to get going, the rest of this section provides more in-depth information. The process of using the Entity Framework is to: - Include the BrightstarDB Entity Context item into a project. - Define the interfaces for the data objects that should be persistent. - Run the custom tool on the Entity Context text template file. - Use the generated context to create, query or get and modify objects. Creating a Context¶ Include the BrightstarDB Entity Context The Brightstar Entity Context is a text template that when run introspects the other code elements in the project and generates a number of classes and a context in a single file that can be found under the context file in Visual Studio. The simplest way to get the latest version of this file is to add the BrightstarDB NuGet package to your project: You can also install this package from the NuGet console with the command: Install-Package BrightstarDB Alternatively if you have used the BrightstarDB Windows Installer that installer will provide two additional ways to access this text template. Firstly, if the machine you installed onto has Visual Stuio Professional (or above) then the text template will be installed as a Visual Studio C# item template which makes it possible to simply select “Add Item...” and then choose “BrightstarDB Entity Context” from the list of C# items. Secondly, the installer will also place a copy of the text template in [INSTALLDIR]\\SDK\\EntityFramework. The default name of the entity context template file is MyEntityContext.tt - this will generate a code file named MyEntityContext.cs and the context class will be named MyEntityContext. By renaming the text template file you will change both the name of the generated C# source file and the name of the entity context class. You can also move this text template into a subfolder to change the namespace that the class is generated in. Define Interfaces Interfaces are used to define a data model contract. Only interfaces marked with the Entity attribute will be processed by the text template. The following interfaces define a model that captures the idea of people working for an company. [Entity] public interface IPerson { string Name { get; set; } DateTime DateOfBirth { get; set; } string CV { get; set; } ICompany Employer { get; set; } } [Entity] public interface ICompany { string Name { get; set; } [InverseProperty("Employer")] ICollection<IPerson> Employees { get; } } Note If you have installed with the Windows Installer, you will have the option to add the Visual Studio integration into Visual Studio Professional and above. This integration adds a simple C# item template for an entity definition which makes it possible to simply select “Add Item...” on your project and then choose “BrightstarDB Entity Definition” from the list of C# items. Run the MyEntityContext.tt Custom Tool To ensure that the generated classes are up to date right click on the .tt file in the solution explorer and select Run Custom Tool. This will ensure that the all the annotated interfaces are turned into concrete classes. Note The custom tool is not run automatically on every rebuild so after changing an interface remember to run it. Using a Context¶ A context can be thought of as a connection to a BrightstarDB instance. It provides access to the collections of domain objects defined by the interfaces. It also tracks all changes to objects and is responsible for executing queries and committing transactions. A context can be opened with a connection string. If the store named does not exist it will be created. See the connection strings section for more information on allowed configurations. The following code opens a new context connecting to an embedded store: var dataContext = new MyEntityContext("Type=embedded;StoresDirectory=c:\\brightstardb;StoreName=test"); The context exposes a collection for each entity type defined. For the types we defined above the following collections are exposed on a context: var people = dataContext.Persons; var companies = dataContext.Companies; Each of these collections are in fact IQueryable and as such support LINQ queries over the model. To get an entity by a given property the following can be used: var brightstardb = dataContext.Companies.Where( c => c.Name.Equals("BrightstarDB")).FirstOrDefault(); Once an entity has been retrieved it can be modified or related entities can be fetched: // fetching employees var employeesOfBrightstarDB = brightstardb.Employees; // update the company brightstardb.Name = "BrightstarDB"; New entities can be created either via the main collection; by using the new keyword and attaching the object to the context; or by passing the context into the constructor: // creating a new entity via the context collection var bob = dataContext.Persons.Create(); bob.Name = "bob"; // or created using new and attached to the context var bob = new Person() { Name = "Bob" }; dataContext.Persons.Add(bob); // or created using new and passing the context into the constructor var bob = new Person(dataContext) { Name = "Bob" }; // Add multiple items from any IEnumerable<T> with AddRange var newPeople = new Person[] { new Person() { Name = "Alice" }, new Person() { Name = "Carol" }, new Person() { Name = "Dave"} } dataContext.Persons.AddRange(newPeople); In addition to the Add and AddRange methods on each entity set, there are also Add and AddRange methods on the context. These methods introspect the objects being added to determine which of the entity interfaces they implement and then add them to the appropriate collections: var newItems = new object[] { new Person() { Name = "Edith" }, new Company() { Name = "BigCorp" }, new Product() { Name = "BrightstarDB" } } dataContext.AddRange(newItems); Note If you pass an item to the Add or AddRange methods on the context object that does not implement one of the supported entity interfaces, the Add method will raise an InvalidOperationException and the AddRange method will raise an AggregateException containing one InvalidOperationException inner exception for each item that could not be added. In the case of AddRange, all items are processed, even if one item cannot be added. Remember that at this stage, no changes are committed to the server, you still can choose whether or not to call SaveChanges to persist the items that were successfully added. Once a new object has been created it can be used in relationships with other objects. The following adds a new person to the collection of employees. The same relationship could also have been created by setting the Employer property on the person: // Adding a new relationship between entities var bob = dataContext.Persons.Create(); bob.Name = "bob"; brightstardb.Employees.Add(bob); // The relationship can also be defined from the 'other side'. var bob = dataContext.Persons.Create(); bob.Name = "bob"; bob.Employer = brightstardb; // You can also create relationships to previously constructed // or retrieved objects in the constructor var brightstardb = new Company(dataContext) { Name = "BrightstarDB" }; var bob = new Person(dataContext) { Name = "Bob; Employer = brightstardb }; Saving the changes that have occurred is easily done by calling a method on the context: dataContext.SaveChanges(); Example LINQ Queries¶ LINQ provides you with a flexible query language with the added advantage of Intellisense type-checking. In this section we show a few LINQ query patterns that are commonly used with the BrightstarDB entity framework. All of the examples assume that the context variable is a BrightstarDB Entity Framework context. To retrieve several entities by their IDs¶ var people = context.Persons.Where( x=>new []{"bob", "sue", "rita"}.Contains(x.Id)); Sorting results¶ var byAge = context.Persons.OrderBy(x=>x.Age); var byAgeDescending = context.Persons.OrderByDescending(x=>x.Age); Return complex values as anonymous objects¶ var stockInfo = from x in context.Companies select new {x.Name, x.TickerSymbol, x.Price}; Annotations¶ The BrightstarDB entity framework relies on a few annotation types in order to accurately express a data model. This section describes the different annotations and how they should be used. The only required attribute annotation is Entity. All other attributes give different levels of control over how the object model is mapped to RDF. TypeIdentifierPrefix Attribute¶ BrightstarDB makes use of URIs to identify class types and property types. These URI values can be added on each property but to improve clarity and avoid mistakes it is possible to configure a base URI that is then used by all attributes. It is also possible to define models that do not have this attribute set. The type identifier prefix can be set in the AssemblyInfo.cs file. The example below shows how to set this configuration property: [assembly: TypeIdentifierPrefix("")] Entity Attribute¶ The Entity attribute is used to indicate that the annotated interface should be included in the generated model. Optionally, a full URI or a URI postfix can be supplied that defines the identity of the class. The following examples show how to use the attribute. The example with just the value ‘Person’ uses a default prefix if one is not specified as described above: // example 1. [Entity] public interface IPerson { ... } // example 2. [Entity("Person")] public interface IPerson { ... } // example 3. [Entity("")] public interface IPerson { ... } Example 3. above can be used to map .NET models onto existing RDF vocabularies. This allows the model to create data in a given vocabulary but it also allows models to be mapped onto existing RDF data. Identity Property¶ The Identity property can be used to get and set the underlying identity of an Entity. The following example shows how this is defined: // example 1. [Entity("Person")] public interface IPerson { string Id { get; } } No annotation is required. It is also acceptable for the property to be called ID, {Type}Id or {Type}ID where {Type} is the name of the type. E.g: PersonId or PersonID. Identifier Attribute¶ Id property values are URIs, but in some cases it is necessary to work with simpler string values such as GUIDs or numeric values. To do this the Id property can be decorated with the identifier attribute. The identifier attribute requires a string property that is the identifier prefix - this can be specified either as a URI string or as {prefix}:{rest of URI} where {prefix} is a namespace prefix defined by the Namespace Declaration Attribute (see below): // example 1. [Entity("Person")] public interface IPerson { [Identifier("")] string Id { get; } } // example 2. [Entity] public interface ISkill { [Identifier("ex:skills#")] string Id {get;} } // NOTE: For the above to work there must be an assembly attribute declared like this: [assembly:NamespaceDeclaration("ex", "")] The Identifier attribute has additional arguments that enable you to specify a (composite) key for the type. For more information please refer to the section Key Properties and Composite Keys. From BrightstarDB release 1.9 it is possible to specify an empty string as the identifier prefix. When this is done, the value assigned to the Id property MUST be a absolute URI as it is used unaltered in the generated RDF triples. This gives your application complete control over the URIs used in the RDF data, but it also requires that your application manages the generation of those URIs: [Entity] public interface ICompany { [Identifier("")] string Id {get;} } Note When using an empty string identifier prefix like this, the Create() method on the context collection will automatically generate a URI with the prefix. To avoid this, you should instead create the entity directly using the constructor and add it to the context. There are several ways in which this can be done: // This will get a BrightstarDB genid URI var co1 = context.Companies.Create(); // Create an entity with the URI var co2 = new Company { Id = "" }; // ...then add it to the context context.Companies.Add(co2); // Create and add in a single line var co3 = new Company(context) { Id = "" }; // Alternate single-line approach context.Companies.Add( new Company { Id = "" } ); Property Inclusion¶ Any .NET property with a getter or setter is automatically included in the generated type, no attribute annotation is required for this: // example 1. [Entity("Person")] public interface IPerson { string Id { get; } string Name { get; set; } } Property Exclusion¶ If you want BrightstarDB to ignore a property you can simply decorate it with an [Ignore] attribute: [Entity("Person")] public interface IPerson { string Id {get; } string Name { get; set; } [Ignore] int Salary {get;} } Note Properties that are ignored in this way are not implemented in the partial class that BrightstarDB generates, so you will need to ensure that they are implemented in a partial class that you create. Note The [Ignore] attribute is not supported or required on methods defined in the interface as BrightstarDB does not implement interface methods - you are always required to provide method implementations in your own partial class. Inverse Property Attribute¶ When two types reference each other via different properties that in fact reflect different sides of the same association then it is necessary to declare this explicitly. This can be done with the InverseProperty attribute. This attribute requires the name of the .NET property on the referencing type to be specified: [Entity("Person")] public interface IPerson { string Id { get; } ICompany Employer { get; set; } } [Entity("Company")] public interface ICompany { string Id { get; } [InverseProperty("Employer")] ICollection<IPerson> Employees { get; set; } } The above example shows that the inverse of Employees is Employer. This means that if the Employer property on P1 is set to C1 then getting C1.Employees will return a collection containing P1. Namespace Declaration Attribute¶ When using URIs in annotations it is cleaner if the complete URI doesn’t need to be entered every time. To support this the NamespaceDeclaration assembly attribute can be used, many times if needed, to define namespace prefix mappings. The mapping takes a short string and the URI prefix to be used. The attribute can be used to specify the prefixes required (typically assembly attributes are added to the AssemblyInfo.cs code file in the Properties folder of the project): [assembly: NamespaceDeclaration("foaf", "")] Then these prefixes can be used in property or type annotation using the CURIE syntax of {prefix}:{rest of URI}: [Entity("foaf:Person")] public interface IPerson { ... } Namespace declarations defined in this way can also be retrieved programatically. The class BrightstarDB.EntityFramework.NamespaceDeclarations provides methods for retrieving these declarations in a variety of formats: // You can just iterate them as instances of // BrightstarDB.EntityFramework.NamespaceDeclarationAttribute foreach(var nsDecl in NamespaceDeclarations.ForAssembly( Assembly.GetExecutingAssembly())) { // prefix is in nsDecl.Prefix // Namespace URI is in nsDecl.Reference } // Or you can retrieve them as a dictionary: var dict = NamespaceDeclarations.ForAssembly( Assembly.GetExecutingAssembly()); foafUri = dict["foaf"]; // You can omit the Assembly parameter if you are calling from the // assembly containing the delcarations. // You can get the declarations formatted for use in SPARQL... // e.g. PREFIX foaf: <> sparqlPrefixes = NamespaceDeclarations.ForAssembly().AsSparql(); // ...or for use in Turtle (or TRiG) // e.g. @prefix foaf: <> . turtlePrefixes = NamespaceDeclarations.ForAssembly().AsTurtle(); Property Type Attribute¶ While no decoration is required to include a property in a generated class, if the property is to be mapped onto an existing RDF vocabulary then the PropertyType attribute can be used to do this. The PropertyType attribute requires a string property that is either an absolute or relative URI. If it is a relative URI then it is appended to the URI defined by the TypeIdentifierPrefix attribute or the default base type URI. Again, prefixes defined by a NamespaceDeclaration attribute can also be used: // Example 1. Explicit type declaration [PropertyType("")] string Name { get; set; } // Example 2. Prefixed type declaration. // The prefix must be declared with a NamespaceDeclaration attribute [PropertyType("foaf:name")] string Name { get; set; } // Example 3. Where "name" is appended to the default namespace // or the one specified by the TypeIdentifierPrefix in AssemblyInfo.cs. [PropertyType("name")] string Name { get; set; } Inverse Property Type Attribute¶ Allows inverse properties to be mapped to a given RDF predicate type rather than a .NET property name. This is most useful when mapping existing RDF schemas to support the case where the .NET data-binding only requires the inverse of the RDF property: // Example 1. The following states that the collection of employees // is found by traversing the "" // predicate from instances of Person. [InversePropertyType("")] ICollection<IPerson> Employees { get; set; } Additional Custom Attributes¶ Any custom attributes added to the entity interface that are not in the BrightstarDB.EntityFramework namespace will be automatically copied through into the generated class. This allows you to easily make use of custom attributes for validation, property annotation and other purposes. As an example, the following interface code: [Entity("")] public interface IFoafPerson : IFoafAgent { [Identifier("")] string Id { get; } [PropertyType("")] [DisplayName("Also Known As")] string Nickname { get; set; } [PropertyType("")] [Required] [CustomValidation(typeof(MyCustomValidator), "ValidateName", ErrorMessage="Custom error message")] string Name { get; set; } } would result in this generated class code: public partial class FoafPerson : BrightstarEntityObject, IFoafPerson { public FoafPerson(BrightstarEntityContext context, IDataObject dataObject) : base(context, dataObject) { } public FoafPerson() : base() { } public System.String Id { get {return GetIdentity(); } set { SetIdentity(value); } } #region Implementation of BrightstarDB.Tests.EntityFramework.IFoafPerson [System.ComponentModel.DisplayNameAttribute("Also Known As")] public System.String Nickname { get { return GetRelatedProperty<System.String>("Nickname"); } set { SetRelatedProperty("Nickname", value); } } [System.ComponentModel.DataAnnotations.RequiredAttribute] [System.ComponentModel.DataAnnotations.CustomValidationAttribute(typeof(MyCustomValidator), "ValidateName", ErrorMessage="Custom error message")] public System.String Name { get { return GetRelatedProperty<System.String>("Name"); } set { SetRelatedProperty("Name", value); } } #endregion } It is also possible to add custom attributes to the generated entity class itself. Any custom attributes that are allowed on both classes and interfaces can be added to the entity interface and will be automatically copied through to the generated class in the same was as custom attributes on properties. However, if you need to use a custom attribute that is allowed on a class but not on an interface, then you must use the BrightstarDB.EntityFramework.ClassAttribute attribute. This custom attribute can be added to the entity interface and allows you to specify a different custom attribute that should be added to the generated entity class. When using this custom attribute you should ensure that you either import the namespace that contains the other custom attribute or reference the other custom attribute using its fully-qualified type name to ensure that the generated class code compiles successfully. For example, the following interface code: [Entity("")] [ClassAttribute("[System.ComponentModel.DisplayName(\\"Person\\")]")] public interface IFoafPerson : IFoafAgent { // ... interface definition here } would result in this generated class code: [System.ComponentModel.DisplayName("Person")] public partial class FoafPerson : BrightstarEntityObject, IFoafPerson { // ... generated class code here } Note that the DisplayName custom attribute is referenced using its fully-qualified type name ( System.ComponentModel.DisplayName), as the generated context code will not include a using System.ComponentModel; namespace import. Alternatively, this interface code would also generate class code that compiles correctly: // import the System.ComponentModel namespace // this will be copied into the context class code using System.ComponentModel; [Entity("")] [ClassAttribute("[DisplayName(\\"Person\\")]")] public interface IFoafPerson : IFoafAgent { // ... interface definition here } Patterns¶ This section describes how to model common patterns using BrightstarDB Entity Framework. It covers how to define one-to-one, one-to-many, many-to-many and reflexive relationships. Examples of these relationship patterns can be found in the Tweetbox sample. One-to-One¶ Entities can have one-to-one relationships with other entities. An example of this would be the link between a user and a the authorization to another social networking site. The one-to-one relationship would be described in the interfaces as follows: [Entity] public interface IUser { ... ISocialNetworkAccount SocialNetworkAccount { get; set; } ... } [Entity] public interface ISocialNetworkAccount { ... [InverseProperty("SocialNetworkAccount")] IUser TwitterAccount { get; set; } ... } One-to-Many¶ A User entity can be modeled to have a one-to-many relationship with a set of Tweet entities, by marking the properties in each interface as follows: [Entity] public interface ITweet { ... IUser Author { get; set; } ... } [Entity] public interface IUser { ... [InverseProperty("Author")] ICollection<ITweet> Tweets { get; set; } ... } Many-to-Many¶ The Tweet entity can be modeled to have a set of zero or more Hash Tags. As any Hash Tag entity could be used in more than one Tweet, this uses a many-to-many relationship pattern: [Entity] public interface ITweet { ... ICollection<IHashTag> HashTags { get; set; } ... } [Entity] public interface IHashTag { ... [InverseProperty("HashTags")] ICollection<ITweet> Tweets { get; set; } ... } Behaviour¶ The classes generated by the BrightstarDB Entity Framework deal with data and data persistence. However, most applications require these classes to have behaviour. All generated classes are generated as .NET partial classes. This means that another file can contain additional method definitions. The following example shows how to add additional methods to a generated class. Assume we have the following interface definition: [Entity] public interface IPerson { string Id { get; } string FirstName { get; set; } string LastName { get; set; } } To add custom behaviour the new method signature should first be added to the interface. The example below shows the same interface but with an added method signature to get a user’s full name: [Entity] public interface IPerson { string Id { get; } string FirstName { get; set; } string LastName { get; set; } // new method signature string GetFullName(); } After running the custom tool on the EntityContext.tt file there is a new class called Person. To add additional methods add a new .cs file to the project and add the following class declaration: public partial class Person { public string GetFullName() { return FirstName + " " + LastName; } } The new partial class implements the additional method declaration and has access to all the data properties in the generated class. Key Properties and Composite Keys¶ The Identity Property provides a simple means of accessing the key value of an entity, this key value is concatenated with the base URI string for the entity type to generate the full URI identifier of the RDF resource that is created for the entity. In many applications the exact key used is immaterial, and the default strategy of generating a GUID-based key works well. However in some cases it is desirable to have more control over the key assigned to an entity. For this purpose we provide a number of additional arguments on the Identifier attribute. These arguments allow you to specify that the key for an entity type is generated from one or more of its properties Specifying Key Properties¶ The KeyProperties argument accepts an array of strings that name the properties of the entity that should be combined to create a key value for the entity. The value of the named properties will be concatenated in the order that they are named in the KeyProperties array, with a slash (‘/’) between values: // An entity with a key generated from one of its properties. [Entity] public interface IBook { [Identifier("", KeyProperties=new [] {"Isbn"}] public string Id { get; } public string Isbn {get;set;} } // An entity with a composite key [Entity] public interface IWidget { [Identifier("", KeyProperties=new [] {"Manufacturer", "ProductCode"}] public string Id { get; } public string Manufacturer {get;set;} public string ProductCode {get;set;} } // In use... var book = context.Books.Create(); book.Isbn = "1234567890"; // book URI identifier will be var widget = context.Widgets.Create(); widget.Manufacturer = "Acme"; widget.ProductCode = "Grommet" // widget identifier will be Key Separator¶ The KeySeparator argument of the Identifier attribute allows you to change the string used to concatenate multiple values into a single key: // An entity with a composite key [Entity] public interface IWidget { [Identifier("", KeyProperties=new [] {"Manufacturer", "ProductCode"}, KeySeparator="_"] public string Id { get; } public string Manufacturer {get;set;} public string ProductCode {get;set;} } var widget = context.Widgets.Create(); widget.Manufacturer = "Acme"; widget.ProductCode = "Grommet" // widget identifier will be Key Converter¶ The values of the key properties are converted to a string by a class that implements the BrightstarDB.EntityFramework.IKeyConverter interface. The default implementation implements the following rules: - Integer and decimal values are converted using the InvariantCulture (to eliminate culture-specific separators) - Properties whose value is another entity will yield the key of that entity. That is the part of the URI identifier that follows the base URI string. - Properties whose value is NULL are ignored. - If all key properties are NULL, a NULL key will be generated, which will result in a BrightstarDB.EntityFramework.EntityKeyRequiredExceptionbeing raised. - The converted string value is URI-escaped using the .NET method Uri.EscapeUriString(string). - Multiple non-null values are concatenated using the separator specified by the KeySeparator property. You can create your own key conversion rules by implementing the IKeyConverter interface and specifying the implementation type in the KeyConverterType argument of the Identifier attribute. Hierarchical Key Pattern¶ Using the default key conversion rules it is possible to construct hierarchical identifier schemes: [Entity] public interface IHierarchicalKeyEntity { [Identifier(BaseAddress = "", KeyProperties = new[]{"Parent", "Code"})] string Id { get; } IHierarchicalKeyEntity Parent { get; set; } string Code { get; set; } } // Example: var parent = context.HierarchicalKeyEntities.Create(); parent.Code = "parent"; // URI will be var child = context.HierarchicalKeyEntities.Create(); child.Parent = parent; child.Code = "child"; // URI will be Note Although this example uses the same type of entity for both parent and child object, it is equally valid to use different types of entity for parent and child. Key Constraints¶ When using the Entity Framework with the BrightstarDB back-end, entities with key properties are treated as having a “class-unique key constraint”. This means that it is not allowed to create an RDF resource with the same URI identifier and the same RDF type. This form of constraint means that it is possible for one resource to have multiple types, but it still ensures that for any given type all of its identifiers are unique. The constraint is checked as part of the update transaction and if it fails a BrightstarDB.EntityFramework.UniqueConstraintViolationException will be raised. The constraint is also checked when creating new entities, but in this case the check is only against the entities currently loaded into the context - this allows your code to “fail fast” if a uniqueness violation occurs in the collection of entities loaded in the context. Warning Key constraints are not checked when using the Entity Framework with a DotNetRDF or generic SPARQL back-end, as the SPARQL UPDATE protocol does not allow for such transaction pre-conditions to be checked. Note Key constraints are not validated if you use the AddOrUpdate method to add an item to the context. In this case, an existing item with the same key will simply be overwritten by the item being added. Changing Identifiers¶ With release 1.7 of BrightstarDB, it is now possible to alter the URI identifier of an entity. Currently this is only supported on entities that have generated keys and is achieved by modifying any of the properties that contribute to the key. A change of identifier is handled by the Entity Framework as a deletion of all triples where the old identifier is the subject or object of the triple, followed by the creation of a new set of triples equivalent to the deleted set but with the old identifier replaced by the new identifier. Because the triples where the identifier is used as the object are updated, all “links” in the data set will be properly maintained when an identifier is modified in this way. Warning When using another entity ID as part of the composite key for an entity please be aware that currently the entity framework code does not automatically change the identifiers of all dependencies when a dependent ID property is changed. This is done to avoid a large amount of overhead in checking for ID dependencies in the data store when changes are saved. The supported use case is that the dependency ID (e.g. the ID of the parent entity) is not modified once it is used to construct other identifiers. Optimistic Locking¶ The Entity Framework provides the option to enable optimistic locking when working with the store. Optimistic locking uses a well-known version number property (the property predicate URI is) to track the version number of an entity, when making an update to an entity the version number is used to determine if another client has concurrently updated the entity. If this is detected, it results in an exception of the type BrightstarDB.Client.TransactionPreconditionsFailedException being raised. Enabling Optimistic Locking¶ Optimistic locking can be enabled either through the connection string (giving the user control over whether or not optimistic locking is enabled) or through code (giving the control to the programmer). To enable optimistic locking in a connection string, simply add “optimisticLocking=true” to the connection string. For example: type=rest;endpoint=;storeName=myStore;optimisticLocking=true To enable optimistic locking from code, use the optional optimisticLocking parameter on the constructor of the context class: var myContext = new MyEntityContext(connectionString, true); Note The programmatic setting always overrides the setting in the connection string - this gives the programmer final control over whether optimistic locking is used. The programmer can also prevent optimistic locking from being used by passing false as the value of the optimisticLocking parameter of the constructor of the context class. Handling Optimistic Locking Errors¶ Optimistic locking errors only occur when the SaveChanges() method is called on the context class. The error is notified by raising an exception of the type BrightstarDB.Client.TransactionPreconditionsFailedException. When this exception is caught by your code, you have two basic options to choose from. You can apply each of these options separately to each object modified by your update. - Attempt the save again but first update the local context object with data from the server. This will save all the changes you have made EXCEPT for those that were detected on the server. This is the “store wins” scenario. - Attempt the save again, but first update only the version numbers of the local context object with data from the server. This will keep all the changes you have made, overwriting any concurrent changes that happened on the server. This is the “client wins” scenario. To attempt the save again, you must first call the Refresh() method on the context object. This method takes two paramters - the first parameter specifies the mode for the refresh, this can either be RefreshMode.ClientWins or RefreshMode.StoreWins depending on the scenario to be applied. The second parameter is the entity or collection of entities to which the refresh is to be applied. You apply different refresh strategies to different entities within the same update if you wish. Once the conflicted entities are refreshed, you can then make a call to the SaveChanges() method of the context once more. The code sample below shows this in outline: try { myContext.SaveChanges(); } catch(TransactionPreconditionsFailedException) { // Refresh the conflicted object(s) myContext.Refresh(RefreshMode.StoreWins, conflictedEntity); // Attempt the save again myContext.SaveChanges(); } Note On stores with a high degree of concurrent updates it is possible that the second call to SaveChanges() could also result in an optimistic locking error because objects have been further modified since the initial optimistic locking failure was reported. Production code for highly concurrent environments should be written to handle this possibility. LINQ Restrictions¶ Supported LINQ Operators¶ The LINQ query processor in BrightstarDB has some restrictions, but supports the most commonly used core set of LINQ query methods. The following table lists the supported query methods. Unless otherwise noted the indexed variant of LINQ query methods are not supported. Supported Class Methods and Properties¶ In general, the translation of LINQ to SPARQL cannot translate methods on .NET datatypes into functionally equivalent SPARQL. However we have implemented translation of a few commonly used String, Math and DateTime methods as listed in the following table. The return values of these methods and properties can only be used in the filtering of queries and cannot be used to modify the return value. For example you can test that foo.Name.ToLower().Equals("somestring"), but you cannot return the value foo.Name.ToLower(). The static method Regex.IsMatch() is supported when used to filter on a string property in a LINQ query. For example: context.Persons.Where( p => Regex.IsMatch(p.Name, "^a.*e$", RegexOptions.IgnoreCase)); However, please note that the regular expression options that can be used is limited to a combination of IgnoreCase, Multiline, Singleline and IgnorePatternWhitespace. Casting Entities¶ One of the nicest features of RDF is its flexibility - an RDF resource can be of multiple types and can support multiple (possibly conflicting) properties according to different schemas. It allows you to record different aspects of the same thing all at a single point in the data store. In OO programming however, we tend to prefer to separate out different representations of the same thing into different classes and to use those classes to encapsulate a specific model. So there is a tension between the freedom in RDF to record anything about any resource and the need in traditional OO programming to have a set of types and properties defined at compile time. In BrightstarDB the way we handle is is to allow you to convert an entity from one type to any other entity type at runtime. This feature is provided by the Become<T>() method on the entity object. Calling Become<T>() on an entity has two effects: - It adds one or more RDF type statements to the resource so that it is now recorded as being an instance of the RDF class that the entity type Tis mapped to. When Tinherits from a base entity type both the RDF type for Tand the RDF type for the base type is added. - It returns an instance of Twhich is bound to the same underlying DataObject as the entity you call Become<T>()on. This feature gives you the ability to convert and extend resources at runtime with almost no overhead. You should note that Become<T>() does nothing to ensure that the resource conforms to the constraints that the type T might imply, so your code should be written to robustly handle missing properties. Once you call SaveChanges() on the context, the new type statements (and any new properties you created) are committed to the store. You will now find the object can be accessed through the context entity set for T. There is also an Unbecome<T>() method. This method can be used to remove RDF type statements from an entity so that it no longer appears in the collection of entities of type T on the context. Note that this does not remove the RDF type statements for super-types of T, but you can explicitly do this by making further calls to Unbecome<T>() with the appropriate super-types. OData¶ The Open Data Protocol (OData) is an open web protocol for querying data. An OData provider can be added to BrightstarDB Entity Framework projects to allow OData consumers to query the underlying data in the store. Note Identifier Attributes must exist on any BrightstarDB entity interfaces in order to be processed by an OData consumer For more details on how to add a BrightstarDB OData service to your projects, read Adding Linked Data Support in the MVC Nerd Dinner samples chapter OData Restrictions¶ The OData v2 protocol implemented by BrightstarDB does not support properties that contain a collection of literal values. This means that BrightstarDB entity properties that are of type ICollection<literal type> are not supported. Any properties of this type will not be readable via the OData service. An OData provider connected to the BrightstarDB Entity Framework as a few restrictions on how it can be queried. Expand - Second degree expansions are not currently supported. e.g. Department('5598556a-671a-44f0-b176-502da62b3b2f')?$expand=Persons/Skills Filtering - The arithmetic filter Modis not supported - The string filter functions int indexof(string p0, string p1), string trim(string p0)and trim(string p0, string p1)are not supported. - The type filter functions bool IsOf(type p0)and bool IsOf(expression p0, type p1)are not supported. Format Microsoft WCF Data Services do not currently support the $format query option. To return OData results formatted in JSON, the accept headers can be set in the web request sent to the OData service. SavingChanges Event¶ The generated EntityFramework context class exposes an event, SavingChanges. This event is raised during the processing of the SaveChanges() method before any data is committed back to the Brightstar store. The event sender is the context class itself and in the event handler you can use the TrackedObjects property of the context class to iterate through all entities that the context class has retrieved from the BrightstarDB store. Entities expose an IsModified property which can be used to determine if the entity has been newly created or locally modified. The sample code below uses this to update a Created and LastModified timestamp on any entity that implements the ITrackable interface.: private static void UpdateTrackables(object sender, EventArgs e) { // This method is invoked by the context. // The sender object is the context itself var context = sender as MyEntityContext; // Iterate through just the tracked objects that // implement the ITrackable interface foreach(var t in context.TrackedObjects .Where(x=>x is ITrackable && x.IsModified) .Cast<ITrackable>()) { // If the Created property is not yet set, it will have // DateTime.MinValue as its defaulft value. We can use // this fact to determine if the Created property needs setting. if (t.Created == DateTime.MinValue) t.Created = DateTime.Now; // The LastModified property should always be updated t.LastModified = DateTime.Now; } } Note The source code for this example can be found in [INSTALLDIR]\Samples\EntityFramework\EntityFrameworkSamples.sln INotifyPropertyChanged and INotifyCollectionChanged Support¶ The classes generated by the Entity Framework provide support for tracking local changes. All generated entity classes implement the System.ComponentModel.INotifyPropertyChanged interface and fire a notification event any time a property with a single value is modified. All collections exposed by the generated classes implement the System.Collections.Specialized.INotifyCollectionChanged interface and fire a notification when an item is added to or removed from the collection or when the collection is reset. There are a few points to note about using these features with the Entity Framework: Firstly, although the generated classes implement the INotifyPropertyChanged interface, your code will typically use the interfaces. To attach a handler to the PropertyChanged event, you need an instance of INotifyPropertyChanged in your code. There are two ways to achieve this - either by casting or by adding INotifyPropertyChanged to your entity interface. If casting you will need to write code like this: // Get an entity to listen to var person = _context.Persons.Where(x=>x.Name.Equals("Fred")) .FirstOrDefault(); // Attach the NotifyPropertyChanged event handler (person as INotifyPropertyChanged).PropertyChanged += HandlePropertyChanged; Alternatively it can be easier to simply add the INotifyPropertyChanged interface to your entity interface like this: [Entity] public interface IPerson : INotifyPropertyChanged { // Property definitions go here } This enables you to then write code without the cast: // Get an entity to listen to var person = _context.Persons.Where(x=>x.Name.Equals("Fred")) .FirstOrDefault(); // Attach the NotifyPropertyChanged event handler person.PropertyChanged += HandlePropertyChanged; When tracking changes to collections you should also be aware that the dynamically loaded nature of these collections means that sometimes it is not possible for the change tracking code to provide you with the object that was removed from a collection. This will typically happen when you have a collection one one entity that is the inverse of a collection or property on another entity. Updating the collection at one end will fire the CollectionChanged event on the inverse collection, but if the inverse collection is not yet loaded, the event will be raised as a NotifyCollectionChangedAction.Reset type event, rather than a NotifyCollectionChangedAction.Remove event. This is done to avoid the overhead of retrieving the removed object from the data store just for the purpose of raising the notification event. Finally, please note that event handlers are attached only to the local entity objects, the handlers are not persisted when the context changes are saved and are not available to any new context’s you create - these handlers are intended only for tracking changes made locally to properties in the context before a SaveChanges() is invoked. The properties are also useful for data binding in applications where you want the user interface to update as the properties are modified. Graph Targeting¶ The Entity Framwork supports updating a specific named graph in the BrightstarDB store. The graph to be updated is specified when creating the context object using the following optional parameters in the context constructor: - updateGraph: The identifier of the graph that new statements will be added to. Defaults to the BrightstarDB default graph () - defaultDataSet: The identifier of the graphs that statements will be retrieved from. Defaults to all graphs in the store. - versionGraph: The identifier of the graph that contains version information for optimistic locking. Defaults to the same graph as updateGraph. Please refer to the section Default Data Set for more information about the default data set and its relationship to the defaultDataSet, updateGraph, and versionGraph parameters. To create a context that reads properties from the default graph and adds properties to a specific graph (e.g. for recording the results of inferences), use the following: // Set storeName, prefixes and inferredGraphUri here var context = new MyEntityContext( connectionString, enableOptimisticLocking, "", new string[] { Constants.DefaultGraphUri }, seperate from the graphs that store the rest of the data (and define a constant for that graph URI). LINQ and Graph Targeting¶ For LINQ queries to work, the triple that assigns the entity type must be in one of the graphs in the default data set or in the graph to be updated. This makes the Entity Framework a bit more difficult to use across multiple graphs. When writing an application that will regularly deal with different named graphs you may want to consider using the Data Object Layer API and SPARQL or the low-level RDF API for update operations. Roslyn Code Generation¶ From version 1.11, BrightstarDB now includes support for generating an entity context class using the .NET Roslyn compiler library. The Roslyn code generator has a number of benefits over the TextTemplate code generator: - It can generate both C# and VB code. - It allows you to use the nameof operator in InverseProperty attributes:[InverseProperty(nameof(IParentEntity.Children))] - It supports generating the code either through a T4 template or from the command-line, which makes it possible to generate code without using Visual Studio. - It will support code generation in Xamarin Studio / MonoDevelop Note The Roslyn code generation features are dependent upon .NET 4.5 and in VisualStudio require VS2015 CTP5 release or later. Console-based Code Generation¶ The console-based code generator can be added to your solution by installing the NuGet package BrightstarDB.CodeGeneration.Console. You can do this in the NuGet Package Manager Console with the following command: Install-Package BrightstarDB.CodeGeneration.Console Installing this package adds a solution-level tool to your package structure. You can then run this tool with the following command: BrightstarDB.CodeGeneration.Console [/EntityContext:ContextClassName] [/Language:VB|CS] [/InternalEntityClasses] ``path/to/MySolution.sln`` ``My.Context.Namespace`` ``Output.cs`` This will scan the code in the specified solution and generate a new BrightstarDB entity context class in the namespace provided, writing the generated code to the specified output file. By default, the name of the entity context class is EntityContext, but this can be changed by providing a value for the optional /EntityContext parameter (short name /CN). The language used in the output file will be based on the file extension, but you can override this with the optional /Langauge parameter. To generate entity classes with internal visibility for public interfaces, you can add the optional /InternalEntityClasses flag (short name /IE) to the command-line (see Generated Class Visibility for more information about this feature). T4 Template-based Generation¶ We also provide a T4 template which acts as shim to invoke the code generator. This can be more convenient when working in a development environment such as Visual Studio or Xamarin Studio. To use the T4 template, you should install the NuGet package BrightstarDB.CodeGeneration.T4: Install-Package BrightstarDB.CodeGeneration.T4 This will add a file named EntityContext.tt to your project. You can move this file around in the project and it will automatically use the appropriate namespace for the generated context class. You can also rename this file to change the name of the generated context class. Generated Class Visibility¶ By default the Entity Framework code generators will generate entity classes that implement each entity interface with the same visibility as the interface. This means that by default a public interface will be implemented by a public generated class; whereas an internal interface will be implemented by an internal generated class. In some cases it is desirable to restrict the visibility of the generated entity classes, having a public entity interface and an internal implementation of that interface. This is now supported through a flag that can be either passed to the Roslyn console-based code generator or set by editing the T4 text template used for code generation. If you are using a T4 template to generate the entity context and entity classes, you can set this flag by finding the following code in the template: var internalEntityClasses = false; and change it to: var internalEntityClasses = true; This code is the same in both the standard and the Roslyn-based T4 templates.
http://brightstardb.readthedocs.io/en/latest/Entity_Framework/
2018-01-16T15:11:01
CC-MAIN-2018-05
1516084886437.0
[array(['../_images/getting-started-add-nuget-package.png', '../_images/getting-started-add-nuget-package.png'], dtype=object)]
brightstardb.readthedocs.io
UDN Search public documentation: Steam SteamDocument Summary: How to use and implement the Steam Integration in UnrealEngine3. Document Changelog: Created by John Scott, updated for SteamPipe by Josh Markiewicz. - Steam - Using the Steam client - Using SteamPipe for Content Delivery - Deprecated Using the Steam Content Tool - Deprecated Running the Steam Content Server - Using Steam in Game - SteamPipe content delivery - Make 'steam_dev.cfg' in the same directory as the client and add @LocalContentServer "content server ip" - ContentTool deprecated - SteamPipe for Content DeliverySteamPipe is Valve's newer and more efficient means of delivering content to the Steam Client. It consists of two parts; the "Content Builder", an executable SteamCmd.exe and all the config scripts necessary to describe agame and its content, and the "Content Server", a web service that can be run locally to server the game images. Before getting started, make sure to talk to your Steam Technical Account Manager to get an AppId and set of DepotIds for your game. All the documentation for SteamPipe can be found at Valve's partner website - Follow the directions under "Steam Build Account" to create a build account that will make the game - Follow the directions "Initial Setup for New SteamPipe Apps" to configure the metadata related to the game Steam SDK directory structure and the \sdk\tools\steampipe\ directoryThe SteamPipe tool resides under the \ContentBuilder\ directory structure - \Builder\ directory contains steamcmd.exe the main tool used to create builds - \Scripts\ contains the build scripts for the game depots (this can be changed, Epic stores the script (.vdf) files in the \MyGame\Build\Steam directory for organizational purposes) - \Content\ contains all the game files to build into depots (this can be changed, Epic uses a root \SteamContent\ directory on the build machine) - \Output\ contains the build logs, depot cache and intermediate files (this can be changed, Epic uses a root \SteamContent\ directory on the build machine) Creating some local directory structureCreate a \SteamContent\ directory off of the build machine root (although it can technically be anywhere). This folder should contain two additional folders - \ContentBuilder\ - Make a directory called 'output' where the game image cache will reside - Contains *.csd and *.csm files which are the actual caches, they are ok to delete from time to time but will make the next build slower - \ContentServer\ - This is part of the directory structure of the Mongoose webserver, contains the actual build image - Make a directory called 'htdocs' where the actual game images and the files related to serving content - This is the root directory displayed if you go to the webserver via a browser Run SteamCmd.exeThe first time SteamCmd.exe is run, it will download and populate its directory with all the files it needs, much like the Steam client NOTE: If the tool errors and complains about proper login, the build account is protected by Steam Guard. Check the email address specified for the build account for a Steam Guard code, and run steamcmd.exe "set_steam_guard_code emailedcodehere", before trying again. NOTE: If running steamcmd.exe results in the following error: "SteamUpdater: Error: Steam needs to be online to update. Please confirm your network connection and try again." Resolution: Go to Internet Options->Connections->Lan Settings and check Automatically detect settings. (This is on their FAQ but hard to find and bit me personally) Creating Depot Build Config FilesThere are sample scripts in the sdk \ContentBuilder\scripts folder, but it looks something like this: "DepotBuildConfig" { "DepotID" "<yourdepotid>" // include all files recursively "FileMapping" { "LocalPath" "*" "DepotPath" "." "recursive" "1" } // but exclude all symbol files // "FileExclusion" "*.pdb" } Create App Build FileThe app build file connects all the depot build configs into one game image and looks something like this: "appbuild" { "appid" "<yourappid>" "desc" "<descriptive build name>" // description for this build "buildoutput" "D:\SteamContent\ContentBuilder\output\MyGame\" // build output folder for .log, .csm & .csd files, relative to location of this file "contentroot" "D:\Builds\UE3\" // root content folder, relative to location of this file "setlive" "local" // branch to set live after successful build, none if empty "preview" "0" // to enable preview builds "nobaseline" "0" // build without using baseline manifest "local" "D:\SteamContent\ContentServer\htdocs" // set to file path of local content server "depotsskipped" "0" // if not partial build, fail if not all depots are included "depots" { "<firstdepotid>" "depot_build_<depotid>.vdf" } }There are three build types support by SteamPipe: - "Preview" builds only output logs and file manifests and are used for rapid iteration when setting up the build - "Local" Builds for the Local Content Server (LCS "the Mongoose server") for faster downloads and local storage of the game image - "SteamPipe" Builds and uploads to Valve servers for content delivery Running the App Build File ScriptThe example below would run the above app build script to create the game image with one depot defined builder\steamcmd.exe +login account password +run_app_build ..\scripts\app_build.vdf +quit The first time you may want to run with "preview" set to "1" to ensure the manifest is correct, after that has been verified set it back to "0" NOTE: If the build errors and complains about proper login, the build account is protected by Steam Guard. Check the email address specified for the build account for a Steam Guard code, and run steamcmd.exe "set_steam_guard_code emailedcodehere", before trying again. Managing the BuildAfter a successful run of the app build script, the build can be managed via Valve's admin website (under App Admin, Technical Data, then Builds tabs) Follow the directions in the SteamPipe documentation to setup the metadata on the backend further Steam Local Content Server (Mongoose)The Local Content Server (LCS) is a webserver hosted by the build machine to deliver content to the Steam Client, documented on the Steam partner website here. The mongoose web server can be found in the \ContentBuilder\ directory of the sdk directory structure - Configure the webserver to know where the output files will be hosted via the mongoose.conf file - Set the document_root variable to the directory that was created locally on the build machine C:\SteamContent\ContentServer\htdocs\ for example - This will match the "local" value specified in the app build file - Launch the webserver mongoose-3.1.exe and there will appear an "m" icon in the system tray - The executable can be run independently or as a service. To run it as a service, right click the system tray icon once the server has launched and select "Install Service" - Connect to 127.0.0.1 and verify the server is up and running, click on the depot directory reference in the webpage to verify it is accessing the right location - Once it is running as a service, you can run the executable again to get the system tray icon to appear so that you can "Uninstall Service" Deprecated Using the Steam Content ToolThere are two methods, using the GUI and using Steam Script. Deprecated Deprecated. Deprecated Running the Steam Content ServerThere are two methods to running the content server; from a command prompt or as a service. Deprecated Deprecated Deprecated Notes - The content server generally works very well and seamlessly handles a lot of the typical network nightmares. - I have found two notable exceptions - The content server does not detect new versions, to register a new version you need to restart the service. - The depot location when running as a service cannot be set from the commandline.: [ClientContent] DepotRootPath = "\LocalContentServer"
https://docs.unrealengine.com/udk/Three/Steam.html
2018-01-16T14:59:51
CC-MAIN-2018-05
1516084886437.0
[]
docs.unrealengine.com
StatefulSets are a beta feature in 1.5. This feature replaces the PetSets feature from 1.4. Users of PetSets are referred to the 1.5 Upgrade Guide for further information on how to upgrade existing PetSets to StatefulSets. A StatefulSet is a Controller that provides a unique identity to its Pods. It provides guarantees about the ordering of deployment and scaling. StatefulSets are valuable for applications that require one or more of the following. In the above, stable is synonymous with persistence across Pod (re)schedulings. If an application doesn’t require any stable identifiers or ordered deployment, deletion, or scaling, you should deploy your application with a controller that provides a set of stateless replicas. Controllers such as Deployment or ReplicaSet may be better suited to your stateless needs. --runtime-configoption passed to the apiserver. storage class, or pre-provisioned by an admin. The example below demonstrates the components of a StatefulSet. --- StatefulSet Pods have a unique identity that is comprised of an ordinal, a stable network identity, and stable storage. The identity sticks to the Pod, regardless of which node it’s (re)scheduled on. For a StatefulSet with N replicas, each Pod in the StatefulSet will be assigned an integer ordinal, in the range [0,N), that is unique over the Set. Each Pod in a StatefulSet derives its hostname from the name of the StatefulSet and the ordinal of the Pod. The pattern for the constructed hostname is $(statefulset name)-$(ordinal). The example above will create three Pods named web-0,web-1,web-2. A StatefulSet can use a Headless Service to control the domain of its Pods. The domain managed by this Service takes the form: $(service name).$(namespace).svc.cluster.local, where “cluster.local” is the cluster domain. As each Pod is created, it gets a matching DNS subdomain, taking the form: $(podname).$(governing service domain), where the governing service is defined by the serviceName field on the StatefulSet. Here are some examples of choices for Cluster Domain, Service name, StatefulSet name, and how that affects the DNS names for the StatefulSet’s Pods. Note that Cluster Domain will be set to cluster.local unless otherwise configured. Kubernetes creates one PersistentVolume for each VolumeClaimTemplate. In the nginx example above, each Pod will receive a single PersistentVolume with a storage class of anything and 1 Gib of provisioned storage. When a Pod is (re)scheduled onto a node, its volumeMounts mount the PersistentVolumes associated with its PersistentVolume Claims. Note that, the PersistentVolumes associated with the Pods’ PersistentVolume Claims are not deleted when the Pods, or StatefulSet are deleted. This must be done manually. The StatefulSet should not specify a pod.Spec.TerminationGracePeriodSeconds of 0. This practice is unsafe and strongly discouraged. For further explanation, please refer to force deleting StatefulSet Pods. When the nginx example above is created, three Pods will be deployed in the order web-0, web-1, web-2. web-1 will not be deployed before web-0 is Running and Ready, and web-2 will not be deployed until web-1 is Running and Ready. If web-0 should fail, after web-1 is Running and Ready, but before web-2 is launched, web-2 will not be launched until web-0 is successfully relaunched and becomes Running and Ready. If a user were to scale the deployed example by patching the StatefulSet such that replicas=1, web-2 would be terminated first. web-1 would not be terminated until web-2 is fully shutdown and deleted. If web-0 were to fail after web-2 has been terminated and is completely shutdown, but prior to web-1’s termination, web-1 would not be terminated until web-0 is Running and Ready.
https://v1-5.docs.kubernetes.io/docs/concepts/workloads/controllers/statefulset/
2018-01-16T15:10:09
CC-MAIN-2018-05
1516084886437.0
[]
v1-5.docs.kubernetes.io
Base Template Base Parser. Get User Control Type(String) Template Base Parser. Get User Control Type(String) Template Base Parser. Get User Control Type(String) Template Method Parser. Get User Control Type(String) Definition Compiles and returns the type of the UserControl object that is specified by the virtual path. This API supports the product infrastructure and is not intended to be used directly from your code. protected public: Type ^ GetUserControlType(System::String ^ virtualPath); protected internal Type GetUserControlType (string virtualPath); member this.GetUserControlType : string -> Type Protected Friend Function GetUserControlType (virtualPath As String) As Type Parameters The virtual path of the UserControl. Returns Exceptions The UserControl specified by virtualPath is marked as no compile. -or- The parser does not permit a virtual reference to the UserControl. Remarks.
https://docs.microsoft.com/en-us/dotnet/api/system.web.ui.basetemplateparser.getusercontroltype?view=netframework-4.8
2019-10-14T08:26:43
CC-MAIN-2019-43
1570986649841.6
[]
docs.microsoft.com
Retrieving the Effective Endpoint Protection Settings Applies To: Forefront Endpoint Protection This task applies to the following feature: - The FEP Security Management Pack To retrieve endpoint settings by using the FEP Security Management Pack In the Operations Manager console, navigate to the Monitoring view, and then expand the Monitoring tree. In the Monitoring tree, under Forefront Endpoint Protection, click Endpoints with FEP. In the Endpoints with FEP pane, click the name of the endpoint from which you want to retrieve settings. Note In order to search for an endpoint by name, enter the name (FQDN) of the endpoint in the Look for text box, and then click Find Now. In the Actions pane, expand Protected Server Tasks, and then click Retrieve Endpoint Settings. In the Run Task dialog box, verify that the target is the endpoint that you want to retrieve settings from and that the check box next to the target name is selected, and then click Run.
https://docs.microsoft.com/en-us/previous-versions/tn-archive/gg398038%28v%3Dtechnet.10%29
2019-10-14T08:53:10
CC-MAIN-2019-43
1570986649841.6
[]
docs.microsoft.com
All content with label as5+buddy_replication+concurrency+datagrid+gridfs+infinispan+loader+lock_striping+read_committed+store+xaresource. Related Labels: expiration, publish, coherence, interceptor, server, replication, recovery, transactionmanager, dist, release, partitioning, query, deadlock, jbossas, nexus, guide, schema, listener, cache, amazon, s3, grid, jcache, api, xsd, ehcache, maven, documentation, write_behind, ec2, 缓存, hibernate, aws, interface, custom_interceptor, clustering, setup, eviction, out_of_memory, jboss_cache, import, index, events, batch, configuration, hash_function, xa, write_through, cloud, mvcc, tutorial, notification, xml, jbosscache3x, distribution, cachestore, data_grid, cacheloader, resteasy, hibernate_search, cluster, br, development, websocket, async, transaction, interactive, build, searchable, demo, installation, scala, client, non-blocking, filesystem, jpa, tx, article, gui_demo, eventing, client_server, testng, infinispan_user_guide, standalone, hotrod, webdav, repeatable_read, snapshot, docs, consistent_hash, batching, whitepaper, jta, faq, 2lcache, jsr-107, lucene, jgroups, locking, rest, hot_rod more » ( - as5, - buddy_replication, - concurrency, - datagrid, - gridfs, - infinispan, - loader, - lock_striping, - read_committed, - store, - xaresource ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/as5+buddy_replication+concurrency+datagrid+gridfs+infinispan+loader+lock_striping+read_committed+store+xaresource
2019-10-14T09:03:56
CC-MAIN-2019-43
1570986649841.6
[]
docs.jboss.org
Working with XPaths How can we control which content is used in our search?We made the Site Search 360 crawlers as intelligent as possible when it comes to analyzing your website and picking the title, image, and content from the right place. Nonetheless, sometimes it is still necessary to tune the crawlers by pointing them directly to the desired content's position. This is done via XPath expressions placed in the Site Search 360 control panel. In the following video, we will show you how to configure your search engine to present the content your visitors are searching for. - First we will search for a Google Chrome extension called "XPath Helper", which allows us to easily define XPaths right from your own site. - Navigate to one of your website's content pages. Press the XPath Helper icon in the top right corner of your browser to open the black overlay which will present your content highlighted in yellow. - You may fine tune, e.g. "Include Content XPath(s)" or "Image XPath(s)". - Press the "Test" button and enter your webpage URL to test the XPath query. If everything is fine you will see the extracted content, headline, or image URL below. - You may define XPath expressions for - Include Content XPath(s): Only content found by these XPaths will be indexed. Leave empty if everything should be indexed. - Title XPath(s): The XPath pointing to the main title of the page. Default is - Image XPath(s): The XPath(s) pointing to the main image. Leave empty if you trust our crawler to find it. For example, - Default Image XPath: The XPath pointing to the default image to be used when no other image is found. For example, - Exclude Content XPath(s): One XPath per line. Content found by these XPaths will not be indexed. Leave empty if everything should be indexed. - Search Snippet XPath: The XPath pointing to the content that you want to be shown in the search results. Note that you have to change the Search Snippet setting under "Search Settings". - After you set all your XPaths don't forget to save the new settings. - For the new settings to take effect, you have to re-index your entire site under the "Index Control" section in the SS360 control panel.
https://docs.sitesearch360.com/working-with-xpath
2019-10-14T08:09:24
CC-MAIN-2019-43
1570986649841.6
[]
docs.sitesearch360.com
6.2.0 - 2018/01/23¶ Important changes¶ Improved search¶ By setting plugin.tx_news.settings.search.splitSearchWord = 1 the search is performed by searching by each search word, not only by the full search phrase. This feature has been sponsored by. Thanks a lot! Avoid duplicate page titles in pagination¶ A pagination information (“Page 3 of 9”) is added to the page title to avoid duplicate page titles. All Changes¶ This is a list of all changes in this release: 2018-01-23 [TASK] Check mod.web_list.allowedNewTables/deniedNewTables in admin module (#248) (Commit 9efa4d37 by Marc Bastian Heinrichs) 2018-01-23 [TASK] Respect overwriteDemand parameters on menu creation (#439) (Commit 9cb38263 by Oliver Baran) 2018-01-23 [TASK] Allow limit and offset for tags (#497) (Commit 0e2e2fa9 by Steffen Kamper) 2018-01-23 [FEATURE] Disable Localization in backend module via TSConfig (#420) (#421) (Commit 5e0541f2 by ayacoo) 2018-01-23 Apply fixes from StyleCI (#537) (Commit 8a3122a0 by Georg Ringer) 2018-01-23 [BUGFIX] Check tablenames in mm queries (Commit 1cb623ca by Georg Ringer) 2018-01-23 [BUGFIX] Set cHash also in workspace mode (Commit f5290765 by Georg Ringer) 2018-01-23 [BUGFIX] Fix failing test (Commit bb12a525 by Georg Ringer) 2018-01-22 [DOC] Add linkhandler info (Commit 02de4016 by Georg Ringer) 2018-01-22 [TASK] Add tstamp & crdate fields to TCA for fluid (Commit f9c1222a by Georg Ringer) 2018-01-22 [TASK] Make 2 VH compatible with 9 (Commit 2edb20d3 by Georg Ringer) 2018-01-22 [TASK] Replace ... with … (Commit e349354a by Georg Ringer) 2018-01-22 [TASK] Make PaginateVh 9 compatible (Commit 8b53482f by Georg Ringer) 2018-01-22 [DOC] Improve module vhs example (Commit 345e54a8 by Georg Ringer) 2018-01-22 [DOC] Improve doc example for ical rendering (Commit edc31bf7 by Georg Ringer) 2018-01-22 Validated output for List.xml (#457) (Commit ab01a824 by Dandy Umlauft) 2018-01-22 [BUGFIX] Add missing counter increment #524 (Commit e75bab78 by Georg Ringer) 2018-01-19 [DOC] Add snippet for cHash in preview links (Commit de0eec81 by Georg Ringer) 2018-01-19 [BUGFIX] Change invalid HTML of time tag (Commit 9b795285 by Georg Ringer) 2018-01-19 [TASK] Change license in composer.json (Commit 19ab9ad1 by Georg Ringer) 2018-01-19 [BUGFIX] Fix spelling (#519) (Commit ce0bcf39 by Michael Stucki) 2018-01-17 Feature/php56 (#517) (Commit 436e2c5d by Georg Ringer) 2018-01-16 [FEATURE] Split search words (Commit 47629cc1 by Georg Ringer) 2018-01-16 [TASK] Revert travis changes (Commit 557b6924 by Georg Ringer) 2018-01-16 [FEATURE] Avoid duplicate title for pagination (Commit 7d9b3dc0 by Georg Ringer) 2018-01-16 [TASK] Respect type in sitemap (Commit 277966b8 by Georg Ringer) 2018-01-16 [TASK] Raise testing-framework version (Commit 0f9cc40b by Georg Ringer) 2018-01-16 [DOC] Improve documentation snippet (Commit a4d11b02 by Georg Ringer) 2018-01-16 Apply fixes from StyleCI (#511) (Commit b684e4c7 by Georg Ringer) 2018-01-16 [TASK] Migrate SitemapGenerator todoctrine (Commit 699ff734 by Georg Ringer) 2018-01-16 [TASK] Remove PHP 5.5 from travis tests (Commit 0e41c332 by Georg Ringer) 2018-01-16 [TASK] Remove phpcs fixes in favor of styleci (Commit 808ace42 by Georg Ringer) 2018-01-16 [BUGFIX] Fix failing test in NewsRepositoryTest (Commit 3beba818 by Georg Ringer) 2018-01-16 [BUGFIX] Remove deprecated usage of buildQueryParametersPostProcess (Commit a2a585a4 by Georg Ringer) 2018-01-16 [BUGFIX] Use proper navigationComponentId for 9+ (Commit 81d6b2f1 by Georg Ringer) 2017-12-17 [TASK] Use $GLOBALS['SIM_EXEC_TIME'] for building queries (Commit b5c87806 by Georg Ringer) 2017-12-17 [TASK] Remove unused palettes (Commit fa7f38ac by Georg Ringer) 2017-12-17 Apply fixes from StyleCI (#491) (Commit 858e3abd by Georg Ringer) 2017-12-17 [TASK] Document related variants in template (Commit 9c638a2f by Georg Ringer) 2017-12-17 [BUGFIX] Make option list.paginate.prevNextHeaderTags work (#481) (Commit 9dcbff50 by Christian Futterlieb) 2017-12-17 Change datatype for tag in newsDemand to string (#483) (Commit cc9f0793 by Torben Hansen) 2017-12-17 fix search by percent or underscore (#486) (Commit a4cde7d4 by Esteban Marin) 2017-12-05 [TASK] Hide shariff namespace (Commit 93e1b60f by Georg Ringer) 2017-11-21 Fixed Typo (#456) (Commit cb98c935 by Fritz the Cat) 2017-11-21 [TASK] Let it work with 9 (Commit 9c920f19 by Georg Ringer) 2017-11-21 [BUGFIX] Fix category usage in Sitemap generator (Commit cac5cd8f by Georg Ringer) 2017-11-21 [TASK] Use sorting column in ItemsProcFunc (Commit d9a1aa96 by Georg Ringer) 2017-11-06 Fix PHP Notice in ext_localconf.php (#458) (Commit a8e68933 by Tymoteusz Motylewski) 2017-10-26 Apply fixes from StyleCI (#453) (Commit a9350093 by Georg Ringer) 2017-10-26 [FEATURE] Get translated content element id list (Commit 574d93e8 by Georg Ringer) 2017-10-26 Apply fixes from StyleCI (#452) (Commit 9f3e4c0d by Georg Ringer) 2017-10-26 [TASK] Migrate AdministrationController to Doctrine (Commit ecc3af5c by Georg Ringer) 2017-10-26 [TASK] Remove not needed test (Commit ef33512a by Georg Ringer) 2017-10-26 [TASK] Use FAL API in AbstractImportService (Commit 0e32944e by Georg Ringer) 2017-10-26 [TASK] Migrate ItemsProcFunc to Doctrine (Commit b71bf638 by Georg Ringer) 2017-10-26 [TASK] Remove unused methods (Commit 266154aa by Georg Ringer) 2017-10-26 [TASK] Remove unused getter of DatabaseConnection (Commit 13fa94da by Georg Ringer) 2017-10-26 [!!!][TASK] Remove ObjectViewHelper (Commit 2061386e by Georg Ringer) 2017-10-26 Remove incorrect closing bracket (#446) (Commit e5a13dba by Boris Schauer) 2017-10-01 Update Index.rst (#429) (Commit f908c159 by Stefan Isak) 2017-09-27 [BUGFIX] Re-enable settings.detailPid in selectedList's flexform (#436) (Commit ab55ea71 by Rémy DANIEL) 2017-09-22 [BUGFIX] #357 Add tt_content ctype labels to pagelayoutview to get rid of error message (#432) (Commit bb050b02 by Kevin Purrmann) 2017-09-18 Update Readme.md (Commit 2fd3f1eb by Georg Ringer) 2017-09-13 Moved XML-NameSpace-declaration from div- or span-tags into separate html-tags to achieve valid HTML5 output (#415) (Commit 226951a5 by Sebastian Wolfertz) 2017-09-13 [TASK] Changing PhpDoc type for tsstamp (#418) (Commit 3eca0e57 by Thomas Deuling) 2017-09-12 Some minor fixes (#423) (Commit 5f81c827 by Cedric Ziel) 2017-09-11 [TASK] Remove unnecessary else branch (#422) (Commit 906cd336 by Cedric Ziel) This list has been created by using git log 6.1.1..HEAD –abbrev-commit –pretty=’%ad %s (Commit %h by %an)’ –date=short.
https://docs.typo3.org/p/georgringer/news/master/en-us/Misc/Changelog/6-2-0.html
2019-10-14T09:39:11
CC-MAIN-2019-43
1570986649841.6
[]
docs.typo3.org
Celebrate World Water Day Share the journeys of our inspiring award winners. You won’t want to miss this year’s award winners, two of which are premieres, in Canada and Ontario: Water 2 | USA Alice's Garden | USA The Weight of Water | USA Emcee: Dr. Stephen Scharper Associate Professor, University of Toronto Water 2 Morgan Maassen USA | 2017 | 5 min Award Winner | Best Short Film Weight of Water Michael Brown USA | 2018 | 80 min Ontario Premiere Award Winner | Best Feature Film THE WEIGHT OF WATER is an exhilarating and inspiring film that follows blind adventurer, Erik Weihenmayer, as he kayaks the Colorado River through the Grand Canyon. Erik, however, is no stranger to hard-earned success, having been the first blind person to reach the summit of Mount Everest in 2001. This time around, filmmaker Michael Brown captures Erik’s increasingly difficult fight through turbulent and dangerous rapids, while taking audiences on an emotional journey through struggle, despair, determination, and achievement. SWAG Please support Ecologos initiatives by purchasing Water Docs T-Shirts ($25) and Water Bottles ($20) available at all Water Docs Film Festival screenings.
https://water-docs.squarespace.com/waterdocs2019/the-weight-of-water
2019-10-14T08:58:57
CC-MAIN-2019-43
1570986649841.6
[array(['https://images.squarespace-cdn.com/content/v1/57d6c9f5d2b85728ed469f50/1549788705877-CARSYSLSQ4AOOWIXVX3E/ke17ZwdGBToddI8pDm48kFmfxoboNKufWj-55Bgmc-J7gQa3H78H3Y0txjaiv_0fDoOvxcdMmMKkDsyUqMSsMWxHk725yiiHCCLfrh8O1z4YTzHvnKhyp6Da-NYroOW3ZGjoBKy3azqku80C789l0iXS6XmVv7bUJ418E8Yoc1hjuviiiZmrL38w1ymUdqq4JaGeFUxjM-HeS7Oc-SSFcg/Stephen+Scharper+recent+UofT+photo.jpg', 'Stephen Scharper recent UofT photo.jpg'], dtype=object) array(['https://images.squarespace-cdn.com/content/v1/57d6c9f5d2b85728ed469f50/1549941441213-QV3MKSH4GN9ZI5G7ULA7/ke17ZwdGBToddI8pDm48kFjVbw7Xb5C4uyqZXL2-5TkUqsxRUqqbr1mOJYKfIPR7LoDQ9mXPOjoJoqy81S2I8N_N4V1vUb5AoIIIbLZhVYxCRW4BPu10St3TBAUQYVKce6ctYZGPQ_JMsv6wm6kzqn4jMlS-3Rlzs89xtJ-lL0oJY_YlVGflAp9HXg117DC7/Selling+T-Shirts+by+a+wonderful+volunteer-Ben+Marans.jpg', 'Selling T-Shirts by a wonderful volunteer-Ben Marans.jpg'], dtype=object) array(['https://images.squarespace-cdn.com/content/v1/57d6c9f5d2b85728ed469f50/1549943401485-5FN54V6JR4EXAC32ME6X/ke17ZwdGBToddI8pDm48kPXf0mLkO8bNtbYnjxg5jJBZw-zPPgdn4jUwVcJE1ZvWQUxwkmyExglNqGp0IvTJZUJFbgE-7XRK3dMEBRBhUpyTf0qhkhKyqZKqksipo58QkHMaM3yID5U_5-echKUA2vlq45je1ka43qO4z1WxZfM/Water+Docs+Reusable+Water+Bottles-downsized+to+10%25.jpg', 'Water Docs Reusable Water Bottles-downsized to 10%.jpg'], dtype=object) ]
water-docs.squarespace.com
Marathon-LB provides load balancing tool for Marathon-orchestrated applications. Marathon-LB leverages the core features of the HAProxy program. For DC/OS clusters, Marathon-LB reads the Marathon task information and dynamically generates the required HAProxy configuration details. To gather this task information, you must specify the location of one or more Marathon instances. Marathon-LB can then use the service configuration details stored globally in templates or defined in app definition labels to route traffic to the appropriate nodes and service ports. Using Marathon-LB templates and app labelsUsing Marathon-LB templates and app labels Marathon-LB provides templates and application labels that enable you to use default or set custom HAProxy configuration parameters. Configuration parameters provide details such as the algorithm you want to use for how workload is distributed. For example, you can set a configuration parameter to distribute processing for app access requests by using a “round-robin” model or to the server with the fewest current connections. The configuration templates can be set either globally for all apps, or overridden on a service-port or per-app basis. Overriding template and app label valuesOverriding template and app label values You can override the values set in Marathon-LB global templates by: - Creating an environment variable in the Marathon-LB container. - Placing configuration files in the templatesdirectory where the path is relative to the location from which the Marathon-LB script runs. - Specifying labels in the app definition file. Overriding settings using environment variablesOverriding settings using environment variables One way you can override global settings is by modifying the definition for a default template setting. For example, you might modify the HAPROXY_HTTPS_FRONTEND_HEAD template to specify the following content: frontend new_frontend_label bind *:443 ssl crt /etc/ssl/cert.pem mode http You could then add this setting as an environment variable for the Marathon-LB configuration by specifying the following: “HAPROXY_HTTPS_FRONTEND_HEAD”: “\nfrontend new_frontend_label\n bind *:443 ssl {sslCerts}\n mode http” Overriding settings using files in the templates directoryOverriding settings using files in the templates directory Alternatively, you could place a file called HAPROXY_HTTPS_FRONTEND_HEAD in the templates directory through the use of an artifact URI. At periodic intervals, Marathon-LB checks the templates directory for new or changed configuration settings. You can add your own custom templates to the Docker image directly, or provide them in the templates directory that Marathon-LB reads at startup. Overriding settings using app labelsOverriding settings using app labels Most of the Marathon-LB template settings can be overridden using app labels. By using app labels, you can override template settings per service port. App labels are specified in the Marathon app definition. For example, the following app definition excerpt uses app labels to specify the external load balancing group for an application with a virtual host named service.mesosphere.com: { "id": "http-service", "labels": { "HAPROXY_GROUP":"external", "HAPROXY_0_VHOST":"service.mesosphere.com" } } The following example illustrates settings for a service called http-service that requires http-keep-alive to" } } Specifying strings in app labelsSpecifying strings in app labels In specifying labels for load balancing, keep in mind that strings are interpreted as literal HAProxy configuration parameters, with substitutions respected. The HAProxy configuration file settings are validated before reloading the HAProxy program after you make changes. Because the configuration is checked before reloading, problems with HAProxy labels can prevent the HAProxy service from restarting with the updated configuration. Specifying an index identifier in app labelsSpecifying an index identifier in app labels Settings that you can specify per service port include the port index identifier {n} in the label name, where {n} corresponds to the service port index, beginning at zero (0). Setting global default optionsSetting global default options As a shortcut for adding global default options without overriding the global template, you can specify a comma-separated list of options using the HAPROXY_GLOBAL_DEFAULT_OPTIONS environment variable. The default value for the HAPROXY_GLOBAL_DEFAULT_OPTIONS environment variable is: Redispatch,http-server-close,dontlognull To add the httplog option and keep the existing defaults, you could specify: HAPROXY_GLOBAL_DEFAULT_OPTIONS=redispatch,http-server-close,dontlognull,httplog. The setting takes effect the next time Marathon-LB checks for configuration changes. The setting does not take effect if the HAPROXY_HEAD template has been overridden. Creating a sample global templateCreating a sample global template Templates and app definition labels enable you to set custom HAProxy configuration parameters. Templates can be set either globally for all apps, or defined on a per-app basis using labels. The following steps summarize how to create a sample global template, add it as an archive file to the templates directory, and restart load balancing to use the new global template. To create a custom global template: On your local computer, create a file called HAPROXY_HEADin a directory called templatesusing commands similar to the following: mkdir -p templates cat > templates/HAPROXY_HEAD Open the HAPROXY_HEADfile and add content similar to the following: global log /dev/log local0 log /dev/log local1 notice spread-checks 5 max-spread-checks 15000 maxconn 4096 tune.ssl.default-dh-param 2048-bind-options no-sslv3 no-tlsv10 no-tls-tickets ssl-default-server-server-options no-sslv3 no-tlsv10 no-tls-tickets stats socket /var/run/haproxy/socket expose-fd listeners server-state-file global server-state-base /var/state/haproxy/ lua-load /marathon-lb/getpids.lua lua-load /marathon-lb/getconfig.lua lua-load /marathon-lb/getmaps.lua lua-load /marathon-lb/signalmlb.lua defaults load-server-state-from-file global log global retries 3 backlog 10000 maxconn 3000 timeout connect 5s timeout client 20s timeout server 40s timeout tunnel 3600s timeout http-keep-alive 1s timeout http-request 15s timeout queue 30s timeout tarpit 60s option dontlognull option http-server-close option redispatch listen stats bind 0.0.0.0:9090 balance mode http stats enable monitor-uri /_haproxy_health_check acl getpid path /_haproxy_getpids http-request use-service lua.getpids if getpid acl getvhostmap path /_haproxy_getvhostmap http-request use-service lua.getvhostmap if getvhostmap acl getappmap path /_haproxy_getappmap http-request use-service lua.getappmap if getappmap acl getconfig path /_haproxy_getconfig http-request use-service lua.getconfig if getconfig acl signalmlbhup path /_mlb_signal/hup http-request use-service lua.signalmlbhup if signalmlbhup acl signalmlbusr1 path /_mlb_signal/usr1 http-request use-service lua.signalmlbusr1 if signalmlbusr1 In this example, the maxconn, timeout client, and timeout serverproperty values have changed from the default. Create a compressed archive of the HAPROXY_HEADfile using a taror zipcommand. For example, type the following to add the HAPROXY_HEADfile. mkdir -p templates cat > templates/HAPROXY_HEAD <<EOL tar czf templates.tgz templates/ Make the templates.tgzfile available by uploading the file to an HTTP server. For example, you can use FTP or another file transfer program to copy the file to a static web server URL such as Amazon S3. You can download the sample template file using this URI: Add the Marathon-LB template configuration to the Marathon-LB service definition by including the path to the template file, templatesdirectory, or URI in a custom JSON file. For example, you might create a new file called marathon-lb-template-options.jsonwith the following lines: { "marathon-lb": { "template-url":"" } } Restart Marathon-LB with the new configuration settings: dcos package install marathon-lb --options=marathon-lb-template-options.json --yes Your customized Marathon-LB instance now runs using the new template. Creating a sample per-app templateCreating a sample per-app template To create a template for an individual app, modify the application definition. In the example below, the default template for the external NGINX application definition ( nginx-external.json) has been modified to disable the HTTP keep-alive setting. While this is not a common scenario, there may be cases where you need to override certain default values on a per-application basis. Copy the following lines into the nginx-external.jsonapp definition file: { "id": "nginx-external", "container": { "type": "DOCKER", "portMappings": [ { "hostPort": 0, "containerPort": 80, "servicePort": 10000 } ], "docker": { "image": "nginx:1.7.7", "forcePullImage":true } }, "instances": 1, "cpus": 0.1, "mem": 65, "network": "BRIDGE", " } } Deploy the external NGINX app on DC/OS using the following command: dcos marathon app add nginx-external.json Other options you might want to specify using customized app definition labels include: - enabling the sticky session option - redirecting to HTTPS - specifying a virtual host For example: "labels":{ "HAPROXY_0_STICKY":"true", "HAPROXY_0_BACKEND_STICKY_OPTIONS": " cookie JSESSIONID prefix nocache " "HAPROXY_0_REDIRECT_TO_HTTPS":"true", "HAPROXY_0_VHOST":"nginx.mesosphere.com" } For more information about specifying a virtual host, see Resolving virtual hosts. For information about other configuration templates and app labels, see Marathon-LB reference. Working with SSL certificatesWorking with SSL certificates Marathon-LB supports secure socket layer (SSL) encryption and certificates. You can provide the path to your SSL certificate as a command line argument or in the frontend section of the load balancer configuration file using the --ssl-certs option. For example, if you are running the script directly, you might provide a command line similar to the following: ./marathon_lb.py --marathon --group external --ssl-certs /etc/ssl/site1.co,/etc/ssl/site2.co --health-check --strict-mode Options for specifying the SSL certificateOptions for specifying the SSL certificate To use SSL certificates, you can: - Use the default certificate path and file name specified in the HAProxyconfiguration file. In this case, you would either save the certificate as /etc/ssl/cert.pemusing the default certificate path or edit the configuration file to specify the correct path. - Provide the certificate path using the --ssl-certscommand line option and have the HAProxyconfiguration file use that path. - Provide the full SSL certificate text in the HAPROXY_SSL_CERTenvironment variable. The environment variable contents are then written to the /etc/ssl/cert.pemfile and used if you don’t specify any additional certificate paths. If you don’t specify the SSL certificate when you run Marathon-LB ( marathon_lb.py) on the command line, by using the Docker run script, or from the Docker image, HAProxy automatically creates a self-signed certificate in the default /etc/ssl/cert.pem location and the configuration file then uses the self-signed certificate. Specifying multiple SSL certificatesSpecifying multiple SSL certificates You can specify multiple SSL certificates per frontend. You can include the additional SSL certificates by passing a list of paths with the --ssl-certs command line option. You can also add multiple SSL certificates by specifying the HAPROXY_SSL_CERT environment variable in your application definition. If you do not specify at least one SSL certificate, Marathon-LB generates a self-signed certificate at startup. If you are using multiple SSL certificates, you can select the SSL certificate per app service port by specifying the HAPROXY_{n}_SSL_CERT app label that corresponds to the file path for the SSL certificates you want to use.. Applying sample configuration settingsApplying sample configuration settings The following examples illustrate some common load balancer operational behavior and corresponding configuration settings. For simplicity, the examples only provide relevant segments of JSON configuration settings rather than complete JSON application defintions. Adding HTTP headers to the health checkAdding HTTP headers to the health check The following example adds the Host header to the health check executed by HAProxy: { "id":"app", "labels": { "HAPROXY_GROUP": "external", "HAPROXY_0_BACKEND_HTTP_HEALTHCHECK_OPTIONS": " option httpchk GET {healthCheckPath} HTTP/1.1\\r\\nHost:\\ www\n timeout check {healthCheckTimeoutSeconds}s\n" } } Setting timeout for long-lived socket connectionsSetting timeout for long-lived socket connections If you’re trying to run a TCP service that uses long-lived sockets through HAProxy, such as a MySQL instance, you should set longer timeouts for the backend. The following example sets the client and server timeout to 30 minutes for the specified backend. { "id":"app", "labels":{ "HAPROXY_GROUP":"external", "HAPROXY_0_BACKEND_HEAD":"backend {backend}\n balance {balance}\n mode {mode}\n timeout server 30m\n timeout client 30m\n" } } Terminating SSL requests at an Elastic Load BalancerTerminating SSL requests at an Elastic Load Balancer In some cases, you might want to allow an Elastic Load Balancer (ELB) to terminate a secure socket connection for you, but want Marathon-LB to continue to redirect non-HTTPS requests. In this scenario, the Elastic Load Balancer uses HTTP headers to communicate that the request it received came over a secure channel and has been decrypted. Specifically, the X-Forwarded-Proto header is set to https, indicating that the request was decrypted by the Elastic Load Balancer. If HAProxy isn’t configured to look for the X-Forwarded-Proto header, the request is processed as if it is unencrypted and is redirected using the standard redirection rules. The following configuration setting illustrates how to have Marathon-LB generate a backend rule that looks for the X-Forwarded-Proto header or a regular TLS connection and redirects the request if neither are specified. "labels": { "HAPROXY_0_BACKEND_HTTP_OPTIONS": " acl is_proxy_https hdr(X-Forwarded-Proto) https\n redirect scheme https unless { ssl_fc } or is_proxy_https\n" } Disabling service port bindingDisabling service port binding If you do not want Marathon-LB to listen on service ports, the following example illustrates how you can disable the frontend definitions: { "labels": { "HAPROXY_GROUP": "external", "HAPROXY_0_FRONTEND_HEAD": "", "HAPROXY_0_FRONTEND_BACKEND_GLUE": "" } } Resolving virtual hostsResolving virtual hosts To create a virtual host or hosts the HAPROXY_{n}_VHOST label needs to be set on the given application. Applications that have a virtual host set are exposed on ports 80 and 443, in addition to their service port. You can specify multiple virtual hosts with the HAPROXY_{n}_VHOST template using a comma as a delimiter between host names. All applications are also exposed on port 9091, using the X-Marathon-App-Id HTTP header. For more information, see HAPROXY_HTTP_FRONTEND_APPID_HEAD in the templates section. You can access the HAProxy statistics using the haproxy_sta ts endpoint, and you can retrieve the current HAProxy configuration settings from the haproxy_getconfig endpoint. If you want all subdomains for a given domain to resolve to a particular backend (for example, HTTP and HTTPS), use the following labels. Note that there is a period (.) required before the {hostname} in the HAPROXY_0_HTTPS_FRONTEND_ACL label. Note that you should disable virtual host mapping by removing the --haproxy-map argument, if you have not previously removed it. { "labels": { "HAPROXY_0_BACKEND_WEIGHT": "-1", "HAPROXY_GROUP": "external", "HAPROXY_0_HTTP_FRONTEND_ACL": " acl host_{cleanedUpHostname} hdr_end(host) -i {hostname}\n use_backend {backend} if host_{cleanedUpHostname}\n", "HAPROXY_0_HTTPS_FRONTEND_ACL": " use_backend {backend} if {{ ssl_fc_sni -m end .{hostname} }}\n", "HAPROXY_0_VHOST": "example.com" } } Enabling HAProxy loggingEnabling HAProxy logging HAProxy uses socket-based logging. It is configured by default to log information to the /dev/log directory. To begin logging HAProxy messages, you must first mount the /dev/log volume in the container, then enable logging for any backends or frontends for which you want to log information. After you enable logging, you can examine the log file results with the journalctl facility. Mount the volume into your /marathon-lbapp: { "id": "/marathon-lb", "container": { "type": "DOCKER", "volumes": [ { "containerPath": "/dev/log", "hostPath": "/dev/log", "mode": "RW" } ], "docker": { "image": "mesosphere/marathon-lb:latest", "network": "HOST", "privileged": true, "parameters": [], "forcePullImage": true } } } Set option httplogon one backend to enable logging. In this example, the backend is my_crappy_website: { "id": "/my-crappy-website", "cmd": null, "cpus": 0.5, "mem": 64, "disk": 0, "instances": 2, "container": { "type": "DOCKER", "volumes": [], "docker": { "image": "brndnmtthws/my-crappy-website", "network": "BRIDGE", "portMappings": [ { "containerPort": 80, "hostPort": 0, "servicePort": 10012, "protocol": "tcp", "labels": {} } ], "privileged": false, "parameters": [], "forcePullImage": true } }, "healthChecks": [ { "path": "/", "protocol": "HTTP", "portIndex": 0, "gracePeriodSeconds": 10, "intervalSeconds": 15, "timeoutSeconds": 2, "maxConsecutiveFailures": 3, "ignoreHttp1xx": false } ], "labels": { "HAPROXY_0_USE_HSTS": "true", "HAPROXY_0_REDIRECT_TO_HTTPS": "true", "HAPROXY_GROUP": "external", "HAPROXY_0_BACKEND_HTTP_OPTIONS": " option httplog\n option forwardfor\n http-request set-header X-Forwarded-Port %[dst_port]\n http-request add-header X-Forwarded-Proto https if { ssl_fc }\n", "HAPROXY_0_VHOST": "diddyinc.com," }, "portDefinitions": [ { "port": 10012, "protocol": "tcp", "labels": {} } ] } Enabling the httplogoption only affects the backend for the service port. To enable logging for ports 80 and 443, you must modify the global HAProxy template. Open a secure shell (SSH) on any public agent node. View the logs using journalctl: journalctl -f -l SYSLOG_IDENTIFIER=haproxy Adding a custom HAProxy error responseAdding a custom HAProxy error response You can specify a custom HAProxy error response by overriding the default errorfile directive in a template or an app definition label. For example, you could customize the template to return a redirect to a different backend if no backends are available. To illustrate using a custom error response: Open the application definition file for the application. Add a template URI to your Marathon-LB app definition like this: { "id":"/marathon-lb", "fetch":[""] } This example returns a custom 503 page by updating the templates/500.http file within the templates-custom-500-response.tar.gz archive file. Alternatively, you could return a redirect to a URI by updating the templates-custom-500-response.tar.gz archive file like this: HTTP/1.1 302 Found Location: Using HAProxy maps for backend lookupUsing HAProxy maps for backend lookup You can use HAProxy maps to speed up virtual hosts to backend lookup requests. This configuration setting is very useful for large installations where the traditional virtual-host-to-backend rules comparison takes considerable time because each rule is evaluated sequentially. HAProxy map creates a hash-based lookup table so that it is faster than the traditional rules-based approach. You can add HAProxy maps for Marathon-LB by using the --haproxy-map flag. For example: ./marathon_lb.py --marathon --group external --haproxy-map This command creates a lookup dictionary for the host header (both HTTP and HTTPS) and X-Marathon-App-Id header. For path-based routing and authentication, Marathon-LB continues to use the backend rules comparison. Using internal and external groups for load balancingUsing internal and external groups for load balancing You should consider using a dedicated load balancer in front of Marathon-LB to simplify upgrades and changes. Common choices for a dedicated load balancer to work with Marathon-LB include an Elastic Load Balancer (on AWS) or a hardware load balancer for on-premise installations. Use separate Marathon-LB groups (specified with the –group option) for internal and external load balancing. On DC/OS, the default group is external. The basic configuration setting for an internal load balancer would be: { "marathon-lb": { "name": "marathon-lb-internal", "haproxy-group": "internal", "bind-http-https": false, "role": "" } } Specifying reserved ports for load-balanced applicationsSpecifying reserved ports for load-balanced applications You should use service ports within the reserved range (which is 10000 to 10100 by default). Using the reserved port identifiers: - prevents port conflicts - ensures that reloads don’t result in connection errors In general, you should define service ports and avoid using the HAPROXY_{n}_PORT label. For HTTP services, you should consider setting the virtual host and, optionally, a path to access services on ports 80 and 443. Alternatively, you can access the service on port 9091 using the X-Marathon-App-Id header. For example, if you want to configure access to an app with the ID tweeter: Open a terminal then run the following command to switch to a master node. dcos node ssh --master-proxy --leader From the master node, run the following command: curl -vH “X-Marathon-App-Id: /tweeter” marathon-lb.marathon.mesos:9091/ Review the connection result. $ curl -vH "X-Marathon-App-Id: /tweeter" marathon-lb.marathon.mesos:9091/ * Trying 10.0.5.190... * TCP_NODELAY set * Connected to marathon-lb.marathon.mesos (10.0.5.190) port 9091 (#0) > GET / HTTP/1.1 > Host: marathon-lb.marathon.mesos:9091 > User-Agent: curl/7.50.3 > Accept: */* > X-Marathon-App-Id: /tweeter > * HTTP 1.0, assume close after body < HTTP/1.0 503 Service Unavailable < Cache-Control: no-cache < Connection: close < Content-Type: text/html < <html><body><h1>503 Service Unavailable</h1> No server is available to handle this request. </body></html> * Curl_http_done: called premature == 0 * Closing connection 0 Assigning ports for IP-per-task appsAssigning ports for IP-per-task apps Marathon-LB supports load balancing for applications that are assigned an IP address and port on a per-task basis. If each task is assigned its own unique IP address, access to the task is routed directly through the application’s service discovery port. If the service ports are not defined, Marathon-LB automatically assigns port values from a configurable range. You can configure the range for port assignment values using the --min-serv-port-ip-per-task and --max-serv-port-ip-per-task options. You should note that the port assignment is not guaranteed if you change the set of deployed apps. For example, if you deploy a new app with a per-task IP address, the port assignments might change.
http://docs-staging.mesosphere.com/mesosphere/dcos/services/marathon-lb/1.14/mlb-configuration/
2019-10-14T08:26:31
CC-MAIN-2019-43
1570986649841.6
[]
docs-staging.mesosphere.com
How to Create a New SQL-Based Report in SQL Reporting Services Applies To: System Center Configuration Manager 2007 R2, System Center Configuration Manager 2007 R3 Use the Create Report Wizard in Configuration Manager 2007 R2 SQL Reporting Services to create a new SQL-based report. Note The information in this topic applies only to Configuration Manager 2007 R2 and Configuration Manager 2007 R3. To create a new SQL-based report In the Configuration Manager console, navigate to System Center Configuration Manager / Site Database / Computer Management / Reporting / Reporting Services. Right-click the report server name, and select Create Report. Note If you right-click a report folder and click Create Report, the report will be created in that report folder.. Note A dataset contains the SQL query that you want to use in the report. A report can use many datasets, but it must use at least one.. Note You might need to refresh the display in the Configuration Manager console before the new report is visible. See Also Tasks How to Create a New Model-Based Report in SQL Reporting Services How to Modify an Existing SQL Reporting Services Report Concepts Administrator Checklist for SQL Reporting Services Other Resources SQL Reporting Services in Configuration Manager 2007 R2 and Later For additional information, see Configuration Manager 2007 Information and Support. To contact the documentation team, email [email protected].
https://docs.microsoft.com/en-us/previous-versions/system-center/configuration-manager-2007/cc678412%28v%3Dtechnet.10%29
2019-10-14T09:09:40
CC-MAIN-2019-43
1570986649841.6
[]
docs.microsoft.com
WaitForMultipleObjects (Windows CE 5.0) This function returns a value when either any one of the specified objects is in the signaled state, or the time-out interval elapses. DWORD WaitForMultipleObjects(DWORDnCount, CONST HANDLE* lpHandles, BOOLfWaitAll, DWORDdwMilliseconds ); Parameters -. Return Values If the function succeeds, the return value indicates the event that caused the function to return. This value can be one of the following. WAIT_FAILED indicates failure. To get extended error information, call GetLastError. Remarks. Requirements OS Versions: Windows CE 1.01 and later. Header: Winbase.h. Link Library: Nk.lib. See Also MsgWaitForMultipleObjects | MsgWaitForMultipleObjectsEx | WaitForSingleObject | CreateEvent | CreateFile | CreateMutex | CreateProcess | CreateThread | PulseEvent | ResetEvent | SetEvent Send Feedback on this topic to the authors
https://docs.microsoft.com/en-us/previous-versions/windows/embedded/aa450987%28v%3Dmsdn.10%29
2019-10-14T08:18:31
CC-MAIN-2019-43
1570986649841.6
[]
docs.microsoft.com
Additional Revenue A single patient retained or won back for a pharmacy can be worth thousands of dollars in revenue for the year, however, time is of the essence. How can pharmacies increase their customer touch-points in creative ways in order to increase patient loyalty given their time constraints? Save Time Automated campaigns is how! It takes less than five minutes to record a message using your own voice. It takes less than 30 seconds to activate a campaign. And that's it! No need to schedule calls periodically -- the calls will automatically go out to the patients that qualify at any point in the future. Here's How The Happy Birthday campaign will automatically schedule calls to your patients on their birthday, using a message that you record in your own voice. Patients absolutely love this personal touch and thoughtfulness on their special day, especially elderly patients who may not have very many people wishing them a happy birthday. The First Fill campaign will automatically schedule calls to brand new patients after they fill for the very first time, using a message that you record in your own voice. Give them a warm "welcome and thank you", which will ensure they keep coming back for more! The Slipping Away campaign will automatically schedule calls to patients that have filled a maintenance medication within 5 months, but haven't filled anything within 4 months, using a message that you record in your own voice. That 30 day window is the interval of time when a patient that is on the verge of slipping away... a nice call in the voice of their pharmacist will prevent them from being lost forever! What's Next? If you need some inspiration for your recorded messages, check out these scripts for outreach here!
https://docs.amplicare.com/en/articles/1944622-increase-customer-loyalty-and-retention
2019-10-14T09:09:22
CC-MAIN-2019-43
1570986649841.6
[array(['https://downloads.intercomcdn.com/i/o/88485031/9f1415e6d177b792a57b5589/Screen+Shot+2018-11-30+at+2.14.32+PM.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/88485581/fc51a2f3d9f7c1819a7266bf/Screen+Shot+2018-11-30+at+2.17.52+PM.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/88485567/b73d9fbb3de600257148faaa/Screen+Shot+2018-11-30+at+2.17.35+PM.png', None], dtype=object) ]
docs.amplicare.com
Opening Night You’ve probably never seen water like this. Don’t miss the stunning North American premiere of the feature film, plus an Ontario and a Canadian premiere: Spring on the Strand | UK Veer | Ontario Premiere | NORWAY Elemental | THE NETHERLANDS Aquarela | RUSSIA Confluence | USA Emcee: Dr. Stephen Scharper Associate Professor, University of Toronto Spring on the Strand A D Cooper UK | 2016 | 2 min SPRING ON THE STRAND is a poetic non-narrative film which chronicles the emergence of the spring season on the River Thames, in West London, over the span of two months in 2016. It is a gorgeous experimental short that captures expansive skies and mesmerizing sunsets on nothing but an iPad, that is complemented by an original score befitting of the scenery. Veer Mariama Slattoy & Sveinung Gjessing NORWAY | 2017 | 7 min Ontario Premiere VEER's filmmaker and dancer, Mariama Slåttøy, is intimate and expressive in both her movement and her capturing of that movement. In this powerful short about ruling and being ruled, viewers are consumed by compelling imagery, stunning cinematography, and a hypnotic score. VEER is an experiment in movement that will have you immersed. Elemental Armand Dijcks NETHERLANDS | 2018 | 3.5 min In this experimental short, filmmaker Armand Dijcks transforms the award-winning still photography of Ray Collins into cinemagraphs, to create movement and showcase the beautiful interplay of water and light. ELEMENTAL is a beautiful and mesmerizing piece of filmmaking that will leave audiences in pure wonder. Aquarela Victor Kossakovsky RUSSIA | 2018 | 89 min North American Premiere Renowned Russian filmmaker, Victor Kossakovsky, showcases the magnificent power of water, in its many forms, and the fragility of humans living at the behest of that power. The film spans the globe, from the freezing and thawing of Lake Baikal in Russia which swallows cars whole, to the enormous icebergs crashing into the ocean off Greenland, to the rushing surges of water which flood Miami, Florida during Hurricane Irma. This film is a visual masterpiece that pushes viewers through awe, heartbreak, despair and joy. AQUARELA is a true recognition of the respect that water demands. Q & A discussion to follow the screenings. Join us at the Opening Night Reception … After the screening and our Q&A discussion, join us for our Opening Night Reception at CSI Annex, 720 Bathurst Street (a short walk south on the west side of Bathurst Street) from 9:00 pm onward. Your Opening Night ticket is your entrance key. Enjoy connecting with filmmakers and fellow Water Docs film enthusiasts over catered nibblies by Hey Lady! CATERING. And for a respite from the maddening crowds, we invite you to enjoy the following film which will be playing in June's Room, at the back of the event space. Confluence Amy Marquis & Dana Romanoff USA | 2018 | 55 min Canadian Premiere In 2016, an indie folk band, The Infamous Flapjack Affair, travelled the length of the Colorado River Basin, stopping along the way to write music, play gigs, work with local artists, and ultimately, hear stories by those who live in the Basin and are challenged by changes to the natural river system, such as from damming and climate change. CONFLUENCE is a testament to the power of music to bring people together, to spark conversation and to unite us over shared interests and concerns. SWAG Please support Ecologos initiatives by purchasing Water Docs T-Shirts ($25) and Water Bottles ($20) available at all Water Docs Film Festival screenings.
https://water-docs.squarespace.com/waterdocs2019/2019/2/8/the-magnificence-of-water-opening-night
2019-10-14T08:58:37
CC-MAIN-2019-43
1570986649841.6
[array(['https://images.squarespace-cdn.com/content/v1/57d6c9f5d2b85728ed469f50/1549786608811-GT84BGRLNZVT97XOEOCH/ke17ZwdGBToddI8pDm48kFmfxoboNKufWj-55Bgmc-J7gQa3H78H3Y0txjaiv_0fDoOvxcdMmMKkDsyUqMSsMWxHk725yiiHCCLfrh8O1z4YTzHvnKhyp6Da-NYroOW3ZGjoBKy3azqku80C789l0iXS6XmVv7bUJ418E8Yoc1hjuviiiZmrL38w1ymUdqq4JaGeFUxjM-HeS7Oc-SSFcg/Stephen+Scharper+recent+UofT+photo.jpg', 'Stephen Scharper recent UofT photo.jpg'], dtype=object) array(['https://images.squarespace-cdn.com/content/v1/57d6c9f5d2b85728ed469f50/1549786717514-B2JWFGCG767GQSRI7KOM/ke17ZwdGBToddI8pDm48kP14P92WdBrGIzBXCLq-TU4UqsxRUqqbr1mOJYKfIPR7LoDQ9mXPOjoJoqy81S2I8N_N4V1vUb5AoIIIbLZhVYxCRW4BPu10St3TBAUQYVKcY_SKr6PHxI-NFLCFFhZRL_o5XLpgb_eKRFfWIXO41A0ghACtGIUeSd715JH_c52L/Spring+on+the+Strand+Poster+with+WD+laurels.jpg', 'Spring on the Strand Poster with WD laurels.jpg'], dtype=object) array(['https://images.squarespace-cdn.com/content/v1/57d6c9f5d2b85728ed469f50/1549786844558-Y1Q01CY8ZT9UYKO3E4CA/ke17ZwdGBToddI8pDm48kKfRgq5ylbjT8T2J6CoeybpZw-zPPgdn4jUwVcJE1ZvWQUxwkmyExglNqGp0IvTJZUJFbgE-7XRK3dMEBRBhUpxnxi4XBT68jenMrJXRIPtL9nGUW8Kim0Ofq8BIoKpb2euBhBBYRHJiaR1f6QBzSl8/Veer+Poster+with+WD+laurels.jpg', 'Veer Poster with WD laurels.jpg'], dtype=object) array(['https://images.squarespace-cdn.com/content/v1/57d6c9f5d2b85728ed469f50/1549787074287-KS5APPUCPOAKD7EYM12C/ke17ZwdGBToddI8pDm48kPxIRhzwIYYJgDn0xb-82i17gQa3H78H3Y0txjaiv_0fDoOvxcdMmMKkDsyUqMSsMWxHk725yiiHCCLfrh8O1z5QPOohDIaIeljMHgDF5CVlOqpeNLcJ80NK65_fV7S1UZvQIRG7D4wlEB5x2JY-GPrqyhjGICRUsD9MI1nQmGzebMhguNV3SqPBNwLux-7ahA/Aquarela+Poster+with+WD+laurels.jpg', 'Aquarela Poster with WD laurels.jpg'], dtype=object) array(['https://images.squarespace-cdn.com/content/v1/57d6c9f5d2b85728ed469f50/1549941840834-SEIO6S9V8KU2YC9FX42A/ke17ZwdGBToddI8pDm48kFjVbw7Xb5C4uyqZXL2-5TkUqsxRUqqbr1mOJYKfIPR7LoDQ9mXPOjoJoqy81S2I8N_N4V1vUb5AoIIIbLZhVYxCRW4BPu10St3TBAUQYVKce6ctYZGPQ_JMsv6wm6kzqn4jMlS-3Rlzs89xtJ-lL0oJY_YlVGflAp9HXg117DC7/Selling+T-Shirts+by+a+wonderful+volunteer-Ben+Marans.jpg', 'Selling T-Shirts by a wonderful volunteer-Ben Marans.jpg'], dtype=object) array(['https://images.squarespace-cdn.com/content/v1/57d6c9f5d2b85728ed469f50/1549941809104-BBDYR3MUC0TMTRZXHVHE/ke17ZwdGBToddI8pDm48kPXf0mLkO8bNtbYnjxg5jJBZw-zPPgdn4jUwVcJE1ZvWQUxwkmyExglNqGp0IvTJZUJFbgE-7XRK3dMEBRBhUpyTf0qhkhKyqZKqksipo58QkHMaM3yID5U_5-echKUA2vlq45je1ka43qO4z1WxZfM/Water+Docs+Reusable+Water+Bottles-downsized+to+10%25.jpg', 'Water Docs Reusable Water Bottles-downsized to 10%.jpg'], dtype=object) ]
water-docs.squarespace.com
Starting the HostManager… The process ID of the HostManager is XXXXX The HostManager failed to start after 60s. Please check for any error messages. DTBE Failed to start Websphere Voice Response node. sequence = xxx csec = xx error_id = 21004 Even though all of the configured Blueworx Voice Response Java nodes (defined in default.cff) are local, Java RMI still uses the TCP/IP stack for communication and hostname resolution. This problem can occur if the system is set up to use DNS for hostname resolution and either the DNS server is not responding, or the DNS resolves to the wrong hostname or IP address. If the system is configured to use DNS, check the file /etc/resolv.conf for the list of DNS servers. If any of these DNS servers, especially the first one on the list, are not responding, change the TCP/IP setup to use a working DNS server.
http://docs.blueworx.com/BVR/InfoCenter/V6.1/help/topic/com.ibm.wvraix.probdet.doc/error21004.html
2019-10-14T07:46:03
CC-MAIN-2019-43
1570986649841.6
[]
docs.blueworx.com
Dr. Barbara Withers Скачать 1.49 Mb. Название Dr. Barbara Withers страница 1/37 Дата конвертации 03.02.2013 Размер 1.49 Mb. Тип Документы 1 2 3 4 5 6 7 8 9 ... 37 PRODUCTION MANAGEMENT STUDYGUIDE FOR: MDTP 825 ET *MDTP825ET* FACULTY OF ECONOMIC & BUSINESS SCIENCE Study guide compiled by: Dr CC Wessels Page layout by Marietjie Verster, Graphics Services Printing arrangements and distribution by Department Logistics (Distribution Centre) Printed by The Platinum Press (018) 294 8879 / (016) 981 9401 North-West University, Potchefstroom Campus All rights reserved. No part of this book, may be reproduced in any form or by any means without written permission from the publisher. It includes the making of photocopies of the whole or parts of this book. INTERNATIONAL MODERATOR Dr. Barbara Withers Personal Experiences Dr. Barbara Withers is an Associate Professor of Operations Management at the University of San Diego. Dr. Withers has over 15 years of private and public sector work experience, including 8 years in project management at the Prudhoe Bay oilfield in Alaska and 3 years as the Regional Economist for the Municipality of Anchorage. Her research has appeared in publications such as European Management Journal, International Journal of Manufacturing Technology Management, International Journal of Production Research, IEEE Transactions on Engineering Management, International Journal of Production Economics, International Journal of Quality and Reliability Management, Journal of Business Logistics, Industrial Marketing Management, and the European Journal of Marketing. Dr. Withers has presented her research at numerous national and international conferences. In additional, Dr. Withers has taught USD courses in Italy and Brazil. She is the recipient of the national SHINGO PRIZE for Excellence in Manufacturing Research and has received a USD University Professorship Award. Educational Background Dr. Withers received both her Ph.D. and M.B.A. in Management Science from the University of Colorado in Boulder, Colorado. Her bachelors degree, a B.S. in Experimental Psychology, was earned at Eckerd College in St. Petersburg, Florida. She grew up in New Orleans, Louisiana. Hobbies & Interests Archaeology is Dr. Withers' avocation. She has participated in summer-long digs in Sardinia, Italy and Tel Dor, Israel. She is also an active participate in the local chapter of the Explorers Club and with an educational branch of the San Diego Zoo. Module Contents Orientation xi Presentation format xii Lecturer xii AIM OF THE MODULE xii PRESCRIBED TEXTBOOK xiii ADDITIONAL LITERATURE / SOURCES xiii STUDY PLAN xiv Suggested study strategies xv STUDY ICONS xviii Study schedule xxii Action Verbs xxiii 1.Nature and Context of Operations Management 1 1.1Why Study Operations Management? 5 1.1.1Introduction 6 1.1.2Production Management Process 6 1.1.3Operations strategy 9 1.1.4Management of Services 17 1.1.5Historical Developments of Operations Management 18 1.1.6Conclusion 20 1.2Operations Strategy and Competitiveness 25 1.2.1Introduction 26 1.2.2Operations Strategy Issues 27 1.2.3Strategy 28 1.2.4Strategic and tactical Decisions of Production operations Management 29 1.2.5Developing a Production Operations Management Strategy 33 1.2.6Productions Operations Management Strategy Considerations 34 1.2.7A Framework for Operations Strategy in Manufacturing 36 1.2.8Operations Strategy in Services 37 1.2.9International issues in Production Operation Management 37 1.2.10Attacking through operations 39 1.2.11Productivity Measurements 39 1.2.12Summary 40 Solutions: Study Unit 1 43 2.Process analysis, Product Design and Process Selection 54 2.1PROCESS ANALYSIS 58 2.1.1Introduction 59 2.1.2Process Analysis 59 2.1.3Process Flowcharting 59 2.1.4Types of processes 60 2.1.5Measuring Process Performance 61 2.1.6Process throughput time reduction 61 2.1.7Summary 62 2.2Product Design and Process Selection – Manufacturing 63 2.2.1Product Selection 65 2.2.2Identifying New Product Opportunities 65 2.2.3Product Life Cycles 66 2.2.4Life cycle and strategy 69 2.2.5Product Design 71 2.2.6Product Development 74 2.2.7Linking Design and Manufacturing 78 2.2.8Concurrent Engineering 79 2.2.9Preparing for Production 80 2.2.10Defining the Product 83 2.2.11Process Selection 85 2.3Product Design and Process Selection – Services 87 2.3.1The Nature of Services 88 2.3.2The Design of Service Organisations 89 2.3.3Service Blueprinting 90 2.3.4Service Fail Safing Using Poka-Yokes 91 2.3.5Three Contrasting Service Designs 92 2.3.6Service Guarantee Design Drivers 94 2.3.7Waiting Line Management 95 Solutions: Study Unit 2 96 3.Identifying Customer Needs 107 3.1Total Quality Management 112 3.1.1Introduction 113 3.1.2Defining Quality 114 3.1.3Total Quality Management 116 3.1.4Benchmarking 117 3.1.5Quality through Just-in-Time 118 3.1.6Tools for TQM 119 3.1.7Quality Specifications & Quality Gurus 127 3.1.8Costs of Quality 130 3.1.9Continuous Improvement 131 3.1.10International Quality Standards & Awards 132 3.1.11Total Quality Management in Services 135 3.1.12Conclusion 136 3.2Statistical Quality Control 140 3.2.1Introduction 141 3.2.2The Importance of Statistical Quality Control 141 3.2.3Acceptance Sampling 142 3.2.4Process Control Procedures 143 3.2.5Variation around Us – Genichi Taguchi 145 3.2.6Summary 146 3.3Forecasting 148 3.3.1Introduction 149 3.3.2Forecasting Time Horizons 150 3.3.3Product Life-Cycle 151 3.3.4Type of Forecasts 151 3.3.5Components of Demand 152 3.3.6Qualitative Technique in Forecasting 153 3.3.7Quantitative Methods 154 3.3.8Summary 162 Solutions Study Unit 3 166 4.Strategic Decisions to meet customer needs 183 4.1Strategic Capacity management 187 4.1.1Introduction 188 4.1.2Nature of Capacity Relative to Operations Management 189 4.1.3Important Capacity Planning Concepts 189 4.1.4Capacity Planning 191 4.1.5Decision Trees 193 4.1.6Planning Service Capacity 194 4.1.7Summary 196 4.2Facility Location 199 4.2.1Introduction 200 4.2.2Issues in Facility Location 201 4.2.3The Objective of Location Strategy 201 4.2.4Plant Location Methods 202 4.2.5Locating Service Facilities 205 4.2.6Summary 207 4.3Facility Layout 209 4.3.1Introduction 210 4.3.2Types of Layout 211 4.3.3Process Orientated Layout 212 4.3.4Product Orientated Layout 213 4.3.5Group Technology (Cellular Layout) 215 4.3.6Fixed-Position Layout 216 4.3.7Retail Service Layout 217 4.3.8Summary 218 4.4Job Design, Work Measurement and Learning Curves 222 4.4.1Human Resource Strategy 223 4.4.2Objective of Human Resource Strategy 223 4.4.3Behavioural Considerations in Job Design 224 4.4.4Work Methods 225 4.4.5Financial Incentive Plans 226 4.4.6Summary 228 Solutions: Study Unit 4 232 5.Tactical Decisions in Meeting Customer Needs 255 5.1Operations Scheduling 258 5.1.1Introduction 259 5.1.2Scheduling and Control in a Job-shop 260 5.1.3Priority Rules and Techniques 261 5.1.4Shop-Floor Control 262 5.1.5Personnel Scheduling in Services 263 5.1.6Summary 264 5.2Just-in-time Production Systems 267 5.2.1Introduction 268 5.2.2The Japanese & American Approach to JIT 269 5.2.3Kanban Production System 271 5.2.4JIT Implementation Requirements 273 5.2.5JIT in Services 274 5.2.6Summary 275 5.3Synchronous Manufacturing 279 5.3.1Introduction 280 5.3.2The Goal of the Firm & Performance Measurements 281 5.3.3Capacity and Bottleneck Issues 282 5.3.4Comparing Synchronous Manufacturing with JIT and MRP 283 5.3.5Relationship with Other Functional Areas 284 5.3.6Summary 285 Solutions: Study Unit 5 288 6.Value Chain Management 305 6.1Aggregate Planning 309 6.1.1Introduction 310 6.1.2Aggregate Production Planning 310 6.1.3Aggregate Planning Techniques 312 6.1.4Conclusion 313 6.2Inventory Systems management 317 6.2.1Introduction 318 6.2.2Purpose of Inventory 318 6.2.3Inventory Costs 319 6.2.4Inventory Systems 320 6.2.5ABC Inventory Planning 321 6.2.6Conclusion 323 6.3Material Requirements Planning 326 6.3.1Introduction 327 6.3.2Dependent Inventory Model Requirements 327 6.3.3Master Production Schedule 328 6.3.4MRP Systems 329 6.3.5Manufacturing Resources Planning (MRP II) 331 6.3.6Embedding JIT into MRP 332 6.3.7Conclusion 333 6.4Supply Chain Strategy 336 6.4.1Introduction 337 6.4.2Supply Chain Strategy 337 6.4.3Measuring Supply Chain Performance 338 6.4.4Supply Chain Design Strategy 338 6.4.5Outsourcing 340 6.4.6Value Density 341 6.4.7Global Sourcing 341 6.4.8Mass Customisation 342 6.4.9Conclusion 343 Solutions: Study Unit 6 345 7.Improving the System 364 7.1Project Management 367 7.1.1Introduction 368 7.1.2Project Planning 368 7.1.3Project Control 370 7.1.4Structuring Projects 371 7.1.5Network-planning schedules 371 7.1.6Time-cost Models 373 7.1.7Cautions on Critical Path Analysis 374 7.1.8Conclusion 375 7.2Operations consulting and reengineering 378 7.2.1Introduction 379 7.2.2The nature of the management consulting industry 379 7.2.3The operations consulting process and tools 380 7.2.4The Nature of Business Process Reengineering 381 7.2.5The Principles of Reengineering 382 7.2.6Guidelines for Implementation 383 7.2.7Summary 384 Solutions: Study Unit 7 387 QUESTION 1 400 Question 2 Read the following case study and answer the question. [40] 400 Orientation Welcome to the MBA Phase 3 Module of Management of Operations and Services. This Module is designed to give students a managerial perspective of Operations and Service Management. In the past the production function was operated as an entity and it was separated from the other functions within the organisation. With the tremendous changes in the environment, it became more and more important that the production function should be managed as part of the whole system. It is also extremely important that you should know what production and service operations entail, and how they fit in with the rest of the organisation. Most of you will become or already are managers in organisations or small businesses. It is important that you should study the different aspects of Production/Operations Management and know how to use this information to create an environment that would produce products or services that would satisfy the client at the end of the day. This Module focuses on helping you to obtain a basic literacy of Operations and Service Management. The aim of this Module is to create a basic understanding of the values and issues of Operations and Service systems and to familiarise you with the theoretical background to the management of these services. It is not designed to make you a fully fledged Production or Operations Manager, although the theory and practice discussed in this Module could be of tremendous help to any person interested in the field. The Module is also designed to aid a Production Manager in managing operations more efficiently. Emphasis is placed on the management aspects of the Module and not so much on the mathematical and statistical operations that are required to maximise certain production functions. The aim of the Module is to make sure that you will be able to make a difference in your organisation by applying the techniques and practices of world renowned companies in the service and operations field. I trust that you will find this Module interesting and informative, and that it will also serve to satisfy your personal development goals. Assignments are printed at the end of this study guide.. Presentation format The presentation format of the Module is designed to accommodate effective distance learning and to optimise your learning experience. The presentation format offers you different media options for participation, that range from a conventional hard copy study guide to electronic media information sources. Whatever options you choose, the facilities to your disposal will provide you with all the information and support your need to become a successful student. The study guide will be your learning foundation. It will guide you through the learning process, give you the necessary background information, provide you with a framework to apply the theoretical principals of your organisation and highlight important information. It will also be a tool to help you integrate the use of whatever media option you choose. A Module is not only designed for you to give you a theoretical understanding of production and service systems in an organisation, but also give you the knowledge and tools to apply the knowledge in your work environment. You will be familiarised with a practical application of theoretical knowledge by means of case studies and real business problems. Your responsibility as a manager with relation to operations and services will be highlighted in the study guide. Lecturer Mr Henry Lotz Tel: (018 299 1635) Cell: 0824669713 AIM OF THE MODULE Production/Operations management has to do with the effective management of converting inputs into outputs within an organisation. Knowledge about this field will enable the student to make knowledgeable decisions relating to the design and operation of manufacturing and service processes. This module’s aim is to expose the students to a managerial view of such aspects as productivity, profitability, quality and scheduling. The module will also strive to give the student direction in managing these aspects. The module's aim is to provide theoretical and practical cases for both the manufacturing and service environments. The focus on products and services will enable the student to find this subject useful, regardless of the organisation the student finds him or herself in. After you have completed the module, you should be able to: schedule services and products in an organisation; make tactical decisions relating to operational aspects; audit and implement quality guidelines and processes; make strategic decisions relating to process and capacity planning; schedule inventory by making use of Just-in-Time and MRP concepts. PRESCRIBED TEXTBOOK CHASE,R.B., AQUILANO, N.J. & JACOBS, F.R.. 2004. Production and Operations Management for Competitive Advantage . 10 th edition. Boston: Irwin/McGraw-Hill. ADDITIONAL LITERATURE / SOURCES Dilworth, J.B. 2000 . Operations Management: Providing value in goods and services. 3 ed edition. New York: Dryden Press Hanna, M.D. & Newman, W.R. 2001. Integrated Operations Management: Adding value for customers. New Jersey: Prentice Hall. Goldratt, E. M. 1982. The Goal. New York: North River Press. Goldratt, E. M. 1990. The Haystack Syndrome. New York: North River Press. Goldratt, E. M. & FOX, R.E. 1986. The Race. New York: North River Press. Goldratt, E. M. 1990. Theory of Constraints. New York: North River Press. Knod, E.M. & Schonberger, R.J. 2001. Operations Management: Meeting customers’ demands. New York: McGraw-Hill/Irwin Martinich, J.S. 1997. Production and Operations Managemen: An applied approach. New York: John Wiley & Sons, Inc. SCHONBERGER, R.J. & KNOD, E.M. 1994. Operations management: Continuous improvement . Homewood: McGraw-Hill/Irwin. Slack, N. Chambers, S & Johnston, R. 2001. Operations Management . 3 d edition. Essex: Prentice Hall. Heizer,J & Render B. 2004. Operations Management . New Jersey Prentice Hall STUDY PLAN 1. Production and operational management strategy 2. Product Design & Process Selection – Manufacturing and Services 3. Total Quality Management 4. Forecasting techniques 5. Strategic capacity planning / Facility Location / Facility Layout 6. Job design and work measurement 7. Operations scheduling, Just-in-Time production and Theory of Constraints 8. Aggregate planning, Inventory Management and Material Requirement Planning 9. Project management and business process reengineering Suggested study strategies The main aim of this Module is to enable you to use the theoretical knowledge that you obtained in the Module in a practical sense. This means that the knowledge that will be applied to your organisation would be beneficial to either the bottom or top line of the company as these aspects are implemented. The Study Unit will go through a specific process in order to impart the knowledge to you the student. First of all the contents will be discussed in the study guide and important aspects will be highlighted in the chapter. Through this exercise you should keep the objectives that was set at the beginning of the Module in mind to make sure that the optimum use is made of this study guide unit. After each section you will be asked to evaluate yourself by means of evaluation exercises. There are different levels of evaluation that you will find in each Study Unit to make sure that you understand the concepts and know how to apply it in a practical sense. One level of evaluation is discussion and review questions. It is required that the students think through the question and give an appropriate answer. The answer might vary from student to student as organisations differ. These questions can be found throughout each respective Study Unit, or at the end of each chapter in the prescribed textbook Chase, Aquilano & Jacobs. Another level of evaluation is the practical application of the theoretical knowledge that was imparted in the Module. This is also the most important of the evaluation techniques. This is incidentally the final objective of the Module. The correct application of theoretical knowledge in practical, real life scenarios. A case study will be given either within the Study Unit, or at the end of it. The application of theoretical knowledge, common sense and ingenuity will be the measures that will be required in order to answer these case studies successfully. As no single correct answer exists, you should focus on an in depth analysis of each case study. Your recommendations for future action should then be based on the correct analysis of the case study, and the knowledge that you have obtained from the Module. The analysis of the case study is critical, as the theoretical background will provide the student with enough information to make a successful evaluation. Your evaluation at the end of the Module and in real life will centre around the evaluation of real world problems. The study guide will facilitate the learning process, but the textbook will be your main source of information. Where applicable, I will enrich the learning process with additional information. The study guide that accompanies the textbook will indicate the focus and importance of the various aspects. Let the Study Guide guide you through the learning process. Study the Study Units with the student objectives as your guidelines. Focus on the real world cases and problems. It is recommended that you do some of the assignments at an organisation in your city or town, that utilises the aspects that I mentioned in the Module. Take note that in marking the assignments and exam, the focus is on the evaluation on your line of argument and your insight into the specific topic. Glossary It is recommended that you refer to the glossary for Management of Operations and Services that is given at the end of each Study Unit on a regular basis. You can also discuss the terms with other MBA students or colleagues in the workplace. Groupwork It is recommended that you make an appointment with a Production or Operations Manager at work and discuss the various aspects that is important in his work. It would greatly enrich your knowledge of the topic. Suggested additional readings If you are not already doing so, I would recommend that you subscribe to and read some informative magazines on the subject. Magazines such as Productivity, and business magazines such as the Harvard Business Review, and Sloan Management Review, will help you greatly in obtaining the relevant information to the newest developments in the field. LIBRARY WEB SITE. The following web site could be used to find articles related to OPERATIONS MANAGEMENT. Recommended international databases are Business Source Premier and Emerald Library. Tips: Identify keywords to be searcehd on the chosen database Phrase searching can be done eg. operational management Keywords can be combined with AND (all of the terms in the article) eg.. operational management AND engineering Synonyms are combined with OR (any of the terms) eg. personnel OR staff OR employee* OR worker* Truncation symbol is usually * (asterisk), unless otherwise specified in database help screen Field searching can be done eg. AB Abstract or Author – Supplied Abstract STUDY ICONS Test your current knowledge and insight. Make sure that you are able to answer the questions on this study material before continuing Read the prescribed material. Individual exercise. Outcomes Important information Assignment. You should complete this assignment as a computer printout and submit it on the date indicated Prepare yourself (by making notes) for answering questions on this issue in a group discussion or exam. Preparation for contact session. Group work/exercise Answers/solutions. This is used after a self-evaluation exercise, where you receive information about possible answers to the activity you just completed. List of concepts. Summary of main learning points. Additional literature. Although not compulsory, it will enrich your knowledge and insight to read this. Attend the contact session. Practical example Study hints. Introductory statements Underline the main concepts. General overview. Make a summary. Study the following section carefully Translate. Rewrite this statement in your own words and explain its meaning. If you are not able to do this, or if you are not sure that you understand it correctly, make a note and come back to it once you've worked through this Study Section. If it is still unclear, you should ask for an explanation. Revison Video Case study Listen to the audio cassette and complete the accompanying exercise You need approximately X hours to complete this Study Unit successfully Mail to specify address Send in by e-mail Study schedule WEEK DATE SUBJECT PREPARATION Week 1 Production and operational management strategy / International perspective Study Unit 1 Week 2 Product Design & Process Selection – Manufacturing and Services Study Unit 2 GROUP DISCUSSION Discussion of work done so far Week 3 Total Quality Management / Statistical Quality Management Forecasting techniques Study Unit 3 Week 4 Strategic capacity planning / Facility Location / Facility Layout Study Unit 4 (Finish assignment) Group discussion 7 Sept Discussion of work done so far Individual assignment: Hand in date Week 5 Operations scheduling, Just-in-Time production and Theory of Constraints Study Unit 5 TAKE A BREATHER Week 6 Aggregate planning, Inventory Management and Material Requirement Planning Study Unit 6 GROUP DISCUSSION Discussion of work done so far Week 7 Project management and business process reengineering Study Unit 7 Week 8 Summary of the course Group discussion 19 Oct Discussion of the work done during the year and preparation for the exam Group assignment: Hand in date Prepare for EXAM Exam 14 Nov 09:00-13:00 Action Verbs These action verbs are included, in order to provide clarity of what is expected of you as a student. Please study them and make sure that you understand the meaning of each. Analyse Identify parts or elements of a concept and describe them one by one. EXAMPLE: Analyse a typical lesson structure and describe each aspect in detail. Compare Point out the similarities (things that are the same) and the differences between objects, ideas or points of view. The word “contrast” can also be used. When you compare two or more objects, you should do so systematically - completing one aspect at a time. It is always better to do this in your own words. EXAMPLE: Compare philosophical and empirical knowledge. Compare the views of Piaget and Ausubel about the nature of learning. Criticise This means that you should indicate whether you agree or disagree about a certain statement or view. You should then describe what you agree/disagree about and give reasons for your view. EXAMPLE: Write critical comments about the progressive liberal view of education. Define Give the precise meaning of something, very often definitions have to be learnt word for word. EXAMPLE: Define the concept curriculum. Demonstrate Include and discuss examples. You have to prove that you understand how a process works or how a concept is applied in real-life situations. EXAMPLE: Give a written demonstration of the application of the procedural moments of a lesson. Describe Say exactly what something is like; give an account of the characteristics or nature of something; explain how something works. No opinion or argument is needed. EXAMPLE: Describe the characteristics of philosophical thought. Discuss Comment on something in your own words. Often requires debating two viewpoints or two different possibilities. EXAMPLE: Discuss the differences between objectives and goals. Distinguish Point out the differences between objects, different ideas, or points of view. Usually requires you to use your own words. EXAMPLE: Distinguish between a positivistic and a hermeneutic view of science. Essay An extensive description of a topic is required. EXAMPLE: Write an essay about the value of Psychological Education for the teacher. Example A practical illustration of a concept is required. EXAMPLE: See our examples after every definition of a taskword. Explain Clarify or give reasons for something, usually in your own words. You must prove that you understand the content. It may be useful to use examples or illustrations. EXAMPLE: Briefly explain the following research methods: (a) The experiment (b) Correlational studies Identify Give the essential characteristics or aspects of a phenomenon e.g. a good research design. EXAMPLE: Identify the characteristics in a text about the research process which is indicative a good research design. Illustrate Draw a diagram or sketch that represent a phenomenon or idea. EXAMPLE: Explain the life cycle of a butterfly. Write a short essay and illustrate this model. List Simply provide a list of names, facts or items asked for. A particular category or order may be specified. EXAMPLE: List ten psycho-social problems associated with alcohol abuse in high school pupils. Motivate You should give an explanation of the reasons for your statements or views. You should try to convince the reader of your view. EXAMPLE: Write an essay about your own philosophical education. Motivate your views Name or mention Briefly describe without giving details. EXAMPLE: Name three research methods in Nursing Name the two major schools of thought (paradigms) on education. Outline Emphasize the major features, structures or general principles of a topic, omitting minor details. Slightly more detail than in the case of naming, listing or stating of information is required. EXAMPLE: Outline the major features of a lesson structure. State Supply the required information without discussing it. EXAMPLE: State three functions of a computer. Summarise Give a structured overview of the key (most important) aspects of a topic; must always be done in your own words. EXAMPLE: Give a summary of the core characteristics of the conservative-normative oriented school of thought on education. Formulate To set forth systematically. 1 2 3 4 5 6 7 8 9 ... 37 Добавить в свой блог или на сайт Похожие: Dove family. There is a tradition that it was formerly an Inn, the last stage at which coaches stopped on the way from Gloucester to Bristol; this is, however, also said of the house on the west end of the stream, Hambrook, now held by Mr. Withers Barbara Mary Iris Landale 12 University of California, Santa Barbara C allan e Barbara Pease, 2003 Barbara Grogan-Barone Thompson 207D New agritourism : hosting community & tourists on your farm / Barbara Berst Adams. Auburn Stephen D. Smith, William J. Tays, Michael J. Dixon, and M. Barbara Bulman-Fleming Joe Conti and John Foran Department of Sociology, uc santa Barbara January 2006 I dedicate this dictionary to my parents George and Marion Greenwald and my friends Orville and Evelyn Brynelson. I especially thank James Steckel, Barbara I dedicate this collection to my friends Orville and Evelyn Brynelson and my parents George and Marion Greenwald. I especially thank James Steckel, Barbara
http://lib.convdocs.org/docs/index-123688.html
2019-10-14T07:51:22
CC-MAIN-2019-43
1570986649841.6
[]
lib.convdocs.org
- eBay Integration - Amazon, Rakuten, Play Integrations eBay Integration Added: “Sell on Another eBay Site” option "Sell on Another eBay Site" option which allows to set up for sale on other eBay Site eBay items which you are currently selling. One of the most important features of “Sell on Another eBay Site” functionality is Translation Service. With Translation service it is possible to translate item title, subtitle, description and associate eBay source category with destination one. Item specifics of the source eBay category are associated with items specifics of the destination eBay category. Calculated Value in the Synchronization Policy Ability to set QTY rules in the Synchronization Policy basing on Calculated Value of the Price, Quantity and Format Policy. Global Shipping Program functionality for UK eBay site The Global Shipping Program simplifies selling an item to an international buyer. It is now available for UK eBay site. Ability to reduce the length of titles of child products In case the length of titles of child products is too long, it will be automatically reduced by M2E Pro. Amazon, Rakuten, Play Integrations Added: Calculated Value in the Synchronization Template Ability to set QTY rules in the Synchronization Template basing on Calculated Value of the Selling Format Template.
https://docs.m2epro.com/display/ReleaseNotes/M2E+Pro+version+-+6.2.0
2019-10-14T08:48:10
CC-MAIN-2019-43
1570986649841.6
[]
docs.m2epro.com
ANSYS HFSS Batch Tutorials This tutorial shows how to run an ANSYS HFSS simulation model in batch mode on Rescale's ScaleX platform. Once you are comfortable with the Rescale platform, you can tailor the workflow to suit your needs. For basics on Rescale batch, please refer to our guide. This tutorial is based on a cutout section of a PCI Express Gen 3 printed circuit board. The study is a single frequency sweep to calculate the S-parameters of the section edges. > ansys-hfss-electronics-example.aedtz into your account. You can also obtain the results of the job by clicking on Get Job Results. Import Job Setup Get Job Results Creating Input FileCreating Input File For running an ANSYS HFSS Job, we need to create a .aedtz archive file from the ANSYS Electronics desktop. You can create this file on your local ANSYS Electronics desktop. Rescale also offers the Remote Desktop option, where you can launch an ANSYS HFSS desktop to use the software as it HFSS HFSS Desktop. Under Choose Configuration, select the basic Windows configuration. Expand the drop-down menu for 1. Add Software. Search for or select from the menu: ANSYS HFSS Desktop. A Select License - ANSYS HFSS Desktop pop-up menu should now appear. Enter your appropriate ANSYS license information and click Ok to continue. You can now also select which version of HFSS you want to use. We will choose 2. 19.0 for this tutorial. Expand the drop-down menu for Add Jobs. Again, you can search for or select from the menu the job you cloned: ANSYS HFSS Tutorial. Click 3. HFSS file, you can upload the file to Rescale cloud and download it on the HFSS desktop you launched. Please refer to the File Transfer section for Uploading to Rescale Cloud Files and Downloading from Rescale Cloud Files. After you have made the necessary changes to your HFSS case tickbox the rescale platfrom and click on + New Job button on the top left corner. For more information on launching a basic job, please refer to the tutorial here. Input FilesInput Files First, you need to include the input files needed for the batch run. For the ANSYS HFSS HFSS software and click on it. The Analysis Option window opens up where you can select the software version and edit the command to run a ANSYS HFSS batch job. In the command line, Replace the placeholder with your HFSS .aedtzfile. NOTE : If you are using HFSS with coretypes such as Mercury,Sunstone,Ferrite etc., additional settings to the command template need to be added. Please refer to the HFSS FAQs for additional command lines. You can also specify design options in HFSS. Please refer to the HFSS FAQs for more information about design options. If you don't wish to specify any design option, simply delete the < design-options > placeholder. Additionally, you can specify to distribute the tasks manually or automatically. Please refer to the HFSS FAQs for more information on manual vs automatic task distribution. >. Additional Guide: Generally, HFSS workflows are very memory bound, so it is recommended to use high memory coretypes such as, Zinc or Melanite for your workflows. If you discover that during frequency sweeps the memory usage is low, then switch to a lower memory coretype such as Onyx or Emerald.. To view the results, open ANSYS Electronics desktop and open the .aedtfile inside the run1 folder. The results will automatically be loaded into ANSYS Electronics desktop. You can proceed to post process on the software.
https://docs.rescale.com/articles/ansys-hfss-batch-tutorial/
2019-10-14T08:22:26
CC-MAIN-2019-43
1570986649841.6
[array(['https://d33wubrfki0l68.cloudfront.net/ced67215fb4cb68aefe52e2edc6f3de25a689258/85011/images/en/ansys-resources/ansys-batch-example/ansys-hfss-example-model.139b7f35.png', 'ANSYS HFSS PCIe model'], dtype=object) array(['https://d33wubrfki0l68.cloudfront.net/0ffc4ed4050680c4746738db5950a457d2e9a4e1/00e0c/images/en/ansys-resources/ansys-batch-tutorial/hfss-desktop.7a1b68c0.png', 'ANSYS HFSS desktop'], dtype=object) array(['https://d33wubrfki0l68.cloudfront.net/f14ae504b294867e2edfd6723f14d1c151ac4d31/90a1a/images/en/ansys-resources/ansys-batch-tutorial/hfss-inputfile.f689f936.png', 'ANSYS HFSS Input File'], dtype=object) array(['https://d33wubrfki0l68.cloudfront.net/3156f5d4480978ef1612d0915ccd4d897efe8c53/36bad/images/en/ansys-resources/ansys-batch-tutorial/hfss-cloudfile.36443f3c.png', 'ANSYS HFSS Cloud File'], dtype=object) array(['https://d33wubrfki0l68.cloudfront.net/542a99e649e04d36283b7a6eb3c633a7c90404b4/c80fe/images/en/ansys-resources/ansys-batch-tutorial/hfss-jobstart.fa6ab509.png', 'ANSYS HFSS Job Start'], dtype=object) array(['https://d33wubrfki0l68.cloudfront.net/59a204a3e366d256554844fe92d46ab300812f48/d93ce/images/en/ansys-resources/ansys-batch-tutorial/hfss-software.d0431a89.png', 'ANSYS HFSS Software Settings'], dtype=object) array(['https://d33wubrfki0l68.cloudfront.net/7695c9518ca1898db55a4ce2f0a76d2139523c60/51730/images/en/ansys-resources/ansys-batch-tutorial/hfss-command.ec81f334.png', 'ANSYS HFSS Commands'], dtype=object) array(['https://d33wubrfki0l68.cloudfront.net/f62d2db6e705be61b22018df8093bb04f3c9b46e/5fc11/images/en/ansys-resources/ansys-batch-tutorial/hfss-monitoring.0c21f7d8.png', 'ANSYS HFSS Monitoring Job'], dtype=object) array(['https://d33wubrfki0l68.cloudfront.net/2278b8d9fdf2d4db85280a84ab2d56deb792d180/57c4c/images/en/ansys-resources/ansys-batch-tutorial/hfss-resultfile.d94822a3.png', 'ANSYS HFSS Result File'], dtype=object) ]
docs.rescale.com
Visual Basic for Applications Reference Property not found (Error 422) See Also Specifics Not all objects support the same set of properties. This error has the following cause and solution: This object doesn't support the specified property. Check the spelling of the property name. Also, you may be trying to access something like a "text" property when the object actually supports a "caption" or some similarly named property. Check the object's documentation. For additional information, select the item in question and press F1.
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-basic-6/aa264549(v=vs.60)?redirectedfrom=MSDN
2019-12-05T20:33:54
CC-MAIN-2019-51
1575540482038.36
[]
docs.microsoft.com
Remote Installation Services overview for GPMC Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2 Remote Installation Services Remote Installation Services (RIS) is an optional component of the Windows Server 2003 operating system. It works with other Windows technologies to enable administrators or users to remotely install a copy of the Windows Server 2003, Windows XP Professional, or Windows 2000 operating systems. Administrators use the Remote Installation Services extension of Group Policy only to specify which options are presented to users by the Client Installation Wizard. Client computers that install an operating system through RIS must be equipped with network cards that either support Pre-boot Execution Environment (PXE), or are supported by the RIS remote boot floppy disk. The client computer starts (boots), connects to the network, and accesses the RIS server to install the operating system. The RIS server queries Active Directory to determine remote installation options defined for the user via policy setting. Based on that result, RIS determines which screens to send to the pre-boot RIS client code for display to the user. The RIS settings that control what the user sees are visible in the Group Policy Object Editor console tree under the Remote Installation Services node. Where? - Group Policy object name/User Configuration/Windows Settings/Remote Installation Services You can customize these settings using Group Policy Object Editor: - Automatic Setup This option supports the predefinition of the computer name and a location within Active Directory for the client computer accounts. - Custom Setup This option supports user definition of a unique name for a computer and specification of where the computer account is created within Active Directory. - Restart Setup This option restarts an operating system installation attempt if it fails prior to completion. - Tools This option supports user access to tools from the Client Installation Wizard. For each of the Client Installation Wizard options, the following choices are available: - Enable Users to whom this policy applies are offered the specific option. - Not Configured The policy settings of the parent container apply to the specific option. For example, if you choose Not Configured and the administrator for the entire domain has set Group Policy specific to Remote Installation Services, the policy that is set on the domain is applied to all users affected by that policy. - Disabled Users affected by this policy cannot access this installation option. For more information on Remote Installation Services and the options offered by the Client Installation Wizard, see the Remote Installation Services Help, which is accessible in Windows Server 2003 family, either from Group Policy Object Editor or the %systemroot%\Windows\Help folder (RISconcepts.chm). Notes Windows XP Professional does not show RIS settings in the Group Policy Object Editor (GPOE) unless the Admin Tools Pack is installed. If you are running Group Policy Management Console on a computer running Windows XP Professional, Remote Installation Services Help is not available by default. To access the Help, install Windows Help from the Windows Server 2003 installation CD onto a computer running Windows XP Professional. For more information about RIS, see Remote Operating System Installation() on the Microsoft Web site. See Also Concepts Group Policy Object Editor Extensions Security settings overview for GPMC
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2003/cc759553%28v%3Dws.10%29
2019-12-05T21:04:30
CC-MAIN-2019-51
1575540482038.36
[]
docs.microsoft.com