content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
THIS TOPIC APPLIES TO:
SQL Server (starting with 2016)
Azure SQL Database
Azure SQL Data Warehouse
Parallel Data Warehouse
This topic describes breaking changes in the SQL Server 2017 Database Engine and earlier versions of SQL Server. These changes might break applications, scripts, or functionalities that are based on earlier versions of SQL Server. You might encounter these issues when you upgrade.
Breaking Changes in SQL Server 2016
The sample_ms column of sys.dm_io_virtual_file_stats has expanded from an int to a bigint data type.
The TimeStamp column of sys.fn_virtualfilestats has expanded from an int to a bigint data type.
Using the MD2, MD4, MD5, SHA, or SHA1 hash algorithms (not recommended) requires setting the database compatibility level to earlier than 130.
Under database compatibility level 130, implicit conversions from datetime to datetime2 data types show improved accuracy by accounting for the fractional milliseconds, resulting in different converted values. Use explicit casting to datetime2 datatype whenever a mixed comparison scenario between datetime and datetime2 datatypes exists.
Previous Versions
Breaking Changes to Database Engine Features in SQL Server 2014
Breaking Changes to Database Engine Features in SQL Server 2012
Breaking Changes to Database Engine Features in SQL Server 2008
See Also
Deprecated Database Engine Features in SQL Server 2016
Discontinued Database Engine Functionality in SQL Server 2016
SQL Server Database Engine Backward Compatibility
ALTER DATABASE Compatibility Level (Transact-SQL) | https://docs.microsoft.com/en-us/sql/database-engine/breaking-changes-to-database-engine-features-in-sql-server-2016 | 2017-10-17T00:39:50 | CC-MAIN-2017-43 | 1508187820487.5 | [] | docs.microsoft.com |
icationSubnetGroupResponse type exposes the following members
Creates a replication subnet group given a list of the subnet IDs in a VPC.
var response = client.CreateReplicationSubnetGroup(new CreateReplicationSubnetGroupRequest { ReplicationSubnetGroupDescription = "US West subnet group", ReplicationSubnetGroupIdentifier = "us-west-2ab-vpc-215ds366", SubnetIds = new List
{ "subnet-e145356n", "subnet-58f79200" }, Tags = new List { new Tag { Key = "Acount", Value = "145235" } } }); ReplicationSubnetGroup replicationSubnetGroup = response.ReplicationSubnetGroup;
| http://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/DMS/TDMSCreateReplicationSubnetGroupResponse.html | 2017-10-17T00:35:29 | CC-MAIN-2017-43 | 1508187820487.5 | [] | docs.aws.amazon.com |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Container for the parameters to the RebuildEnvironment operation. Deletes and recreates all of the AWS resources (for example: the Auto Scaling group, load balancer, etc.) for a specified environment and forces a restart.
Namespace: Amazon.ElasticBeanstalk.Model
Assembly: AWSSDK.ElasticBeanstalk.dll
Version: 3.x.y.z
The RebuildEnvironmentRequest type exposes the following members
The following operation terminates and recreates the resources in an environment named my-env:
var response = client.RebuildEnvironment(new RebuildEnvironmentRequest { EnvironmentName = "my | http://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/EB/TEBRebuildEnvironmentRequest.html | 2017-10-17T00:35:34 | CC-MAIN-2017-43 | 1508187820487.5 | [] | docs.aws.amazon.com |
3.4 Expression Wrapper: #%expression
Produces the same result as expr. Using #%expression forces the parsing of a form as an expression.
The #%expression form is helpful in recursive definition contexts where expanding a subsequent definition can provide compile-time information for the current expression. For example, consider a define-sym-case macro that simply records some symbols at compile-time in a given identifier.
and then a variant of case that checks to make sure the symbols used in the expression match those given in the earlier definition:
If the definition follows the use like this, then the define-sym-case macro does not have a chance to bind id and the sym-case macro signals an error:
But if the sym-case is wrapped in an #%expression, then the expander does not need to expand it to know it is an expression and it moves on to the define-sym-case expression.
Of course, a macro like sym-case should not require its clients to add #%expression; instead it should check the basic shape of its arguments and then expand to #%expression wrapped around a helper macro that calls syntax-local-value and finishes the expansion. | http://docs.racket-lang.org/reference/__expression.html | 2015-04-18T07:14:30 | CC-MAIN-2015-18 | 1429246633972.52 | [] | docs.racket-lang.org |
Message-ID: <688920575.865.1429341532258.JavaMail.haus-conf@codehaus02.managed.contegix.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_864_641334524.1429341532257" ------=_Part_864_641334524.1429341532257 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
Java developers benefit from using Groovy, but so can you who
don't already know Java. If you want to access the power of =
the Java Virtual Machine and Development Kit libraries when programming, bu=
t don't want to learn the Java Language, you can use Groovy instead=
. Or maybe you do want to learn Java, but do so the easy way: you =
can learn Groovy first. You'll be productive sooner, and c=
an go on to learn more about Java at your own pace.
Much of the documentation on this website at Codehaus is for those who a= lready know Java. These pages are for you who don't, so you can lea= rn enough of the Groovy basics to easily use the other documentati= on on this website. They introduce Groovy's core classes and syntax= together. All code examples have been tested using Groovy 1.0 or = later inside a script. It's aimed at you who have already programme= d before, just not in Java, maybe in PHP, Perl, or Visual Basic. D= o note that although this documentation is correct and detailed, it's still= a little raw because it's still being written.=20
Getting Started - enoug= h background to dive into the tutorials that follow=20
1. Numeric Processing
Integer Math - choose from many types of integers
Decimal Math - for high= -precision decimal math=20
Floating Point Math - for= high-speed decimal math=20
Dates and Times - enabling= complex date manipulations=20
2. Collections
Lists and Sets - group various items into a collection
Arrays - fixed-size array= s for faster collections=20
Maps - assign collected val= ues to keys=20
3. Text Processing
Characters - access the full power of Unicode
Strings - easily handle = strings of characters=20
String Pattern Matching= - find patterns within strings=20
4. Input and Output
Files - manipulate the file system easily
Streams, Readers, and Writer= s - access data as a flow of information=20
5. Control Structures
Blocks, Closures, and Functions - compose programs fr= om many building blocks
Expandos, Classes, and Categ= ories - encapsulate program complexity=20
Program Control - variou= s ways to structure program logic=20
6. Data Typing
Static Typing and Interfaces - put compile-time restrictions in= programs
Inheritance - use cl= asses and methods for many purposes=20
Exceptions - handle e= xception and error conditions simply=20
7. Meta-Programming
Interceptors - intercept method calls
MetaClasses - add an= d modify behavior of objects=20
Class Reflection - ex= amine and manipulate objects dynamically - IN PROGRESS=20
Other Topics Coming
Packages
= Multi-Threading
Networking
= Internationalization
Annotations<= br /> Enums
Builders<= br /> Class Loading
P= ermissions
To continue learning Groovy, you can now go on to:
&nb= sp; Java, the engine behind Groovy's power and performance
&nb= sp; Swing, the gr= aphical interface for Java, made easy with Groovy's own = SwingBuilder
Eclipse, the free IDE w= ith a Groovy plugin to make managing your code easy=
useful Groovy modules, such= as Gant, which extend the Groovy system
&n= bsp; Grails, bringing the power of Groovy to website development a= nd deployment | http://docs.codehaus.org/exportword?pageId=72887 | 2015-04-18T07:18:52 | CC-MAIN-2015-18 | 1429246633972.52 | [] | docs.codehaus.org |
Groovy has been designed to be very lightweight and easy to embed into any Java application system.
You can use
Evaluate scripts or expressions using the shell
You can evaluate any expression or script in Groovy using the
The GroovyShell allows you to pass in and out variables via the
Dynamically loading and running Groovy code inside Java
You can use
| http://docs.codehaus.org/pages/viewpage.action?pageId=51076 | 2015-04-18T07:33:20 | CC-MAIN-2015-18 | 1429246633972.52 | [array(['/s/en_GB/5510/701ab0bfc8a95d65a5559a923f8ed8badd272d36.15/_/images/icons/emoticons/smile.png',
'(smile)'], dtype=object) ] | docs.codehaus.org |
JavaScript must be enabled in order to use this site. Please enable JavaScript in your browser and refresh the page. cqlsh commands The cqlsh commands. In this topic: cqlsh Start the CQL interactive terminal. CAPTURE Captures command output and appends it to a file. CONSISTENCY Shows the current consistency level, or given a level, sets it. COPY Imports and exports CSV (comma-separated values) data to and from Cassandra. DESCRIBE Provides information about the connected Cassandra cluster, or about the data objects stored in the cluster. EXPAND Formats the output of a query vertically. EXIT Terminates cqlsh. PAGING Enables or disables query paging. SHOW Shows the Cassandra version, host, or tracing information for the current cqlsh client session. SOURCE Executes a file containing CQL statements. TRACING Enables or disables request tracing. Attention: Be sure this document version matches your product version | http://docs.datastax.com/en/cql/3.1/cql/cql_reference/cqlshCommandsTOC.html | 2015-04-18T07:40:53 | CC-MAIN-2015-18 | 1429246633972.52 | [] | docs.datastax.com |
Table of Contents
DAZ Studio 4.x
- QuickStart Guide PDF
- User Guide PDF
Save yourself some time and effort by starting the rigging process using the Transfer Utility. This tool will transfer rigging information, such as bones, weight maps, groups, etc. from a triax rigged figure (i.e. Genesis), to a static object (i.e. clothing, hair, etc) or another rigged figure1). The Transfer Utility is a great tool for content creators to begin the rigging process when it's important to have a model follow a base figure's pose and/or shape.
In order for one figure to follow another, there are some similarities in the bone hierarchy that must exist, such as, bone naming, center and end point positions, joint orientation, and joint axis rotation orders.
TIP: Be sure to choose the correct Scale Preset in the OBJ Import Options dialog. The correct scale to use in order to avoid precision errors caused by scaling is DAZ Studio (1 unit = 1 cm) - the same scale used in the Basics: Modeling for Genesis (PUBLIC) overview.
NOTE: Use of Projection Templates is not a requirement, but they can be quite helpful at times, so it is recommend that you do use them. If you do not see a match for the type of content you are creating, leave this option set to “None.”
NOTE: When sending an object that has SubDivision applied through the Transfer Utility, the tool changes the Resolution Level back to “Base” so it looks like the SubDivision is not applied. To correct this, simply select the item, and in the Parameters (WIP) Pane and change the resolution back to “High Resolution”.
That's it! Your previously static object is now able to follow the intended figure while that figure is being posed and/or morphed. To test, simply apply Pose and Shaping Presets to the target figure and see how your object follows it.
The Transfer Utility works very well most of the time. There will, however, be cases where adjustments to the rigging and/or morphs will be needed. We will cover those topics in another article.
We have seen this tool save artists days or weeks of time creating rigging for various items. We hope this will help you save time and provide you with great results in your content creation endeavors.
The next logical step is to save your item in a native format. See the basics_saving_a_figure article for more information on how to accomplish that task.
The videos below are great examples of the Transfer Utility in action. They do not follow the exact steps listed above, but they will give you a great overview of what is possible. | http://docs.daz3d.com/doku.php/public/software/dazstudio/4/userguide/creating_content/rigging/tutorials/basics_initial_rig_with_transfer_utility/start | 2015-04-18T07:14:48 | CC-MAIN-2015-18 | 1429246633972.52 | [] | docs.daz3d.com |
animation
animation
This article is Ready to Use.
W3C Working Draft
Summary
Shorthand property to define a CSS animation, setting all parameters at once.
Overview table
Syntax
animation: single-animation [, single-animation]*
Values
- single-animation [, single-animation]*
- A list of values for each of the individual animation properties. The animation name and duration are required; all other values are optional. Multiple animations can be assigned as a comma-separated list.
<single-animation-name>
- Value of the animation-name property.
<single-animation-duration>
- Value of the animation-duration property.
<single-animation-timing-function>
- Value of the animation-timing-function property.
<single-animation-delay>
- Value of the animation-delay property.
<single-animation-iteration-count>
- Value of the animation-iteration-count property.
<single-animation-direction>
- Value of the animation-direction property.
<single-animation-fill-mode>
- Value of the animation-fill-mode property.
Note: The first
<time> value is assigned to the animation-duration. The second
<time> value is assigned to the animation-delay.
Compatibility
Desktop
Mobile
Examples
View live exampleSee animation-play-state for an example that uses the animation shorthand property.
CSS
nav.expanded > div.selected { animation: pulse 1s infinite; }
Usage
The
animation shorthand property combines all animation properties except animation-play-state in a single declaration. The name and duration of the animation are required, but all other values are optional. When two
<time> values are supplied, the first is assigned to the duration, and the second to the delay.
Values for a single animation are separated by spaces. Multiple animations can be assigned as a comma-separated list.
Notes
Before the advent of CSS3, most animations were performed by using Javascript to move HTML DOM elements. This was not optimal, as the browser would not know anything about the DOM element it was moving until it executed the Javascript which moved it, making hardware accelerating animations difficult for vendors. So, CSS3's animation module was born.
This module allows browser vendors to better support animations with hardware acceleration, especially important on CPU constrained devices such as mobile devices. Because the browser controls the inbetween state, or tween as it is more commonly known, between two animation states, it can fully hardware accelerate the resultant animation. This leads to lower CPU usage, smoother graphics and less battery intensive web pages on mobile devices.
Animations use keyframes to specify points of animation and timing to state when those keyframes should appear. Those keyframes exist in a separate @keyframes section in the CSS. The browser automatically handles the "tween" between each keyframe property. Animation is a shorthand property that defines all the properties of an animation in a single declaration. Animation applies to all elements. See the keyframes section linked above for a list of properties that can be animated.
Also, see this CSS animations tutorial.
Related specifications
See also
Other articles
- Making things move with CSS3 animations
- @keyframes
- animation-delay
- animation-direction
- animation-duration
- animation-fill-mode
- animation-iteration-count
- animation-name
- animation-play-state
- animation-timing-function
Attribution
This article contains content originally from external sources.
Portions of this content come from the Microsoft Developer Network: Windows Internet Explorer API reference Article | https://docs.webplatform.org/wiki/css/properties/animation/animation | 2015-04-18T07:15:24 | CC-MAIN-2015-18 | 1429246633972.52 | [] | docs.webplatform.org |
Function Libraries
Overview
Function libraries are dedicated midPoint objects that contain set of reusable functions. The functions can be used in other mappings and expressions in midPoint. Function libraries can be used to group frequently-used parts of the code, therefore simplifying midPoint configuration and maintenance.
Configuration
Please see Function Libraries Configuration page for configuration details. | https://docs.evolveum.com/midpoint/reference/expressions/function-libraries/ | 2022-01-29T04:12:46 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.evolveum.com |
Auto Numbers
The Auto number document control is meant to be and easy to use controls with displays an alpha numeric set of characters which auto increment every time a form is filled out from a document.
Auto numbers can be easily added to any document simply by dragging it's control over to the form builder.
Note: Auto number does not have a 'flow control' version.
After this is placed in the document you can edit the settings and save. It will now be visible or not to your crews out in the field.
Generating Auto Numbers
Auto-numbers are actually numbers. They are generated from a globally unique sequence of numbers by the server.
To support offline unique numbers, your tablet or mobile phone will download them in batches of 1000 and keep the available for use offline. If it didn’t do this, we couldn’t have unique numbers and still support an offline capability. So all clients will have their own set of numbers to work with.
If a table/phone is running low on numbers, it will download a new batch of numbers from the server to use offline. You can leave the office for a week and go to a remote location and still generate unique numbers using this method.
The numbers are numeric, but in order to make them easy to read over the phone to customers, we converted them numbers and upper case letters like airplane reservation numbers. These reservation style numbers are still sequential (not random), but we cycle through 0-9 and then A-Z to represent bigger numbers in easy to read text.
The server always just pulls the next new number from the sequence, so if you are only creating jobs on the Web App while online, the numbers will generally be in order.
In the future we will be adding a feature to turn this off so that they stay as numbers. | https://docs.fieldsquared.com/knowledge-base/auto-numbers/ | 2022-01-29T04:02:08 | CC-MAIN-2022-05 | 1642320299927.25 | [array(['https://docs.fieldsquared.com/wp-content/uploads/2016/03/Auto-Number.png',
'Auto Number'], dtype=object)
array(['https://docs.fieldsquared.com/wp-content/uploads/2016/04/Auto-number-WA.png',
'Auto number WA'], dtype=object)
array(['https://docs.fieldsquared.com/wp-content/uploads/2016/04/Auto-Number-iPad-600x400.png',
'Auto Number iPad'], dtype=object) ] | docs.fieldsquared.com |
Account configuration
You can access this area by clicking on the user icon at the top right corner of the screen.
Sections in this area will allow you to configure account settings like profile, API keys, billing, and users.
1. Profile
Before using Snipcart in Live mode, you'll need to define your business information here.
The information entered here will be displayed on the order invoices sent to customers.
1.1 Import settings from Test mode
Under Account → Profile, in Live mode, you can import settings from Test mode.
Once you're ready to go Live, you'd toggle the Live mode using the button in the header of the dashboard and copy your Live API key on your website. Then, you'd have to go through all the settings and enter them again.
To save time, you can use the
Import settings from Test mode button to import each setting defined in Test mode to your Live mode. Be careful, it may override your actual Live settings..
2. Users
This section lets you add users to your Snipcart account. It also allows you to change transfer the ownership of your Snipcart account to an existing user.
To add users to your account, hit the Create new user button.
To change your own password, hit the Change password button.
By default, the Owner is the email address that created your Snipcart account in the first place.
The Owner receives Snipcart billing emails (fees paid to use the service) and can delete other users in the account. To change the owner, click on the key next to an existing user.
3. API keys
This section is used to manage both your public and secret API keys.
Public API key
Whether you're in Test or Live mode, this is where you'll find your API key. It's already included in the code snippet you need to inject on your website to integrate Snipcart. Please see the Live/Test environments section for important details.
Secret API keys
Secret API keys are used to access all of your Snipcart account data. They should never be accessible nor visible to anyone without proper authorization. Do not use them for your website's public key.
These secret keys give access to our REST API. When you want to allow an application to use your Snipcart data, create a secret API key for this specific third party. You will be able to revoke this access at any time afterward.
4. Billing payments
This section lets you manage billing details for your Snipcart account.
It displays the pricing plan you're on, your next billing date, and the upcoming payment amount.
This section also allows you to enter required payment information (credit card) details for your Snipcart account. Before connecting a payment gateway to your store, you'll have to enter this information.
If your credit card is expired, or you wish to use a new one, hit the Update credit card button.
A history of your previous Snipcart payments is also available here.
5. Developer logs
Whether you’re integrating Snipcart or running into post-development issues, the logs in this section will help you debug your application. Our team might also use them to help with support requests. | https://docs.snipcart.com/v3/dashboard/account-configuration | 2022-01-29T04:51:31 | CC-MAIN-2022-05 | 1642320299927.25 | [array(['https://cdn.sanity.io/images/zac9i1uu/production/6bdf5b192ce7fdcd3b6b76d5203b568577120982-1281x864.png?w=1000&h=1000&fit=max',
'Filling business information in Snipcart dashboard'], dtype=object)
array(['https://snipcartweb-10f3.kxcdn.com/media/9758/import-settings.png',
'snipcart-docs-dashboard-import-settings'], dtype=object)
array(['https://cdn.sanity.io/images/zac9i1uu/production/5669c349e567936ff087f5f2cc588bd83de3bb5c-1299x706.png?w=1000&h=1000&fit=max',
'Managing account users in Snipcart dashboard'], dtype=object)
array(['https://cdn.sanity.io/images/zac9i1uu/production/dbb655d17487697979d081765e0a58f77e8ef473-1305x757.png?w=1000&h=1000&fit=max',
'Changing account ownership in Snipcart dashboard'], dtype=object)
array(['https://cdn.sanity.io/images/zac9i1uu/production/829713ed4ab2f8ac70b4d9949bcf582947c5c573-1317x876.png?w=1000&h=1000&fit=max',
'Public and secret API keys in Snipcart dashboard'], dtype=object)
array(['https://cdn.sanity.io/images/zac9i1uu/production/fb98a34aa8018d4db202577e8c3dc1479450de81-1450x1220.png?w=1000&h=1000&fit=max',
'Billing and subscription screen in Snipcart dashboard'],
dtype=object)
array(['https://cdn.sanity.io/images/zac9i1uu/production/799284fcc5d816daee86f87b5064a472f0e0800a-1294x866.png?w=1000&h=1000&fit=max',
'Developer logs inside Snipcart dashboard'], dtype=object) ] | docs.snipcart.com |
Funnelback patch 14.2.3.29
Released: 2016-04-19
Applies to: v14.2.3
Internal reference: SUPPORT-2137
Table of Contents
Description
Fixes an issue where after a Curator Rule has been set, the rule will not be shown in the Curator Rulesets list.
The Curator Rule is now shown immediately after it has been created.
Affected files
web/admin/curator/js/curator-list.js: Fixes the Curator Rulesets list to show a ruleset upon creation. | https://docs.squiz.net/funnelback/docs/latest/release-notes/patches/14.2/14.2.3.29.html | 2022-01-29T05:15:26 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.squiz.net |
Funnelback patch 15.8.0.26
Released: 2018-02-08
Applies to: v15.8.0
Internal reference: APD-3242
Table of Contents
Description
Adds time-based reloading of type-caching objects (XStream and Jackson serialisers) to avoid leaking metaspace memory when groovy classes are serialised and reloaded over time.
By default, reloading occurs every 10 minutes, and can be configured in modernui.properties. | https://docs.squiz.net/funnelback/docs/latest/release-notes/patches/15.8/15.8.0.26.html | 2022-01-29T04:59:52 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.squiz.net |
Managed entities
Overview
Managed entities are entities for which Waldur's database is considered an authoritative source of information. By means of REST API a user defines the desired state of the entities. Waldur's jobs are then executed to make the backend (OpenStack, JIRA, etc) reflect the desired state as close as possible.
Since making changes to a backend can take a long time, they are done in background tasks.
Here's a proper way to deal with managed entities:
- within the scope of REST API request:
- introduce the change (create, delete or edit an entity) to the Waldur's database;
- schedule a background job passing instance id as a parameter;
return a positive HTTP response to the caller.
within the scope of background job:
fetch the entity being changed by its instance id;
- make sure that it is in a proper state (e.g. not being updated by another background job);
- transactionally update the its state to reflect that it is being updated;
- perform necessary calls to backend to synchronize changes from Waldur's database to that backend;
- transactionally update its state to reflect that it not being updated anymore.
Using the above flow makes it possible for user to get immediate feedback from an initial REST API call and then query state changes of the entity.
Managed entities operations flow
View receives request for entity change.
If request contains any data - view passes request to serializer for validation.
View extracts operations specific information from validated data and saves entity via serializer.
View starts executor with saved instance and operation specific information as input.
Executor handles entity states checks and transition.
Executor schedules celery tasks to perform asynchronous operations.
View returns response.
Tasks asynchronously call backend methods to perform required operation.
Callback tasks changes instance state after backend method execution.
Simplified schema of operations flow
View ---> Serializer ---> View ---> Executor ---> Tasks ---> Backend | https://docs.waldur.com/developer-guide/managed-entities/ | 2022-01-29T05:08:17 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.waldur.com |
Alarm Notifier settings define where and how alarm notifications are sent. For example, you can create notifications for specific teams or team members based on the alarm types that matter most.
How to Create a New Alarm Notification
To create a new Alarm notification click Alarms in the sidebar menu and then click the + icon on the Alarm Notifications card.
Provide an Alarm Notifier name, email address, and Tag and click Next. A notification is sent to the provided email address when the alarm is triggered.
Complete the Filter settings and click Finish.
The new Alarm Notification is listed.
| https://release-2-11-0.docs.nirmata.io/day2operations/alarms/alarm_notifier/ | 2022-01-29T04:29:18 | CC-MAIN-2022-05 | 1642320299927.25 | [array(['/images/alarm-notifier-1.png', 'image'], dtype=object)
array(['/images/alarm-notifier-2.png', 'image'], dtype=object)
array(['/images/alarm-notifier-3.png', 'image'], dtype=object)
array(['/images/alarm-notifier-4.png', 'image'], dtype=object)] | release-2-11-0.docs.nirmata.io |
Select roles to migrate
Migrate all or selected roles from the source to the destination Control Room. When you select roles, other related data, for example, licenses, users, credentials, historical activity, and schedules are also migrated.
This tab is shown only if you select Roles and associated data in the Settings tab.
Prerequisites
Before selecting the roles, review the following considerations:
- Bots and files are migrated based on users having at least one folder permission specifically, Upload, Download, or Delete.
- Migrate all system roles before migrating bots and schedules to ensure that the folder permissions are assigned properly to system roles.
- The system defined roles from source 10.x Control Room are mapped automatically to the corresponding destination Control Room.
- Similarly, user permissions from source 10.x Control Room are mapped to the destination Control Room.
- Roles that have any of the Upload, Download, or Delete permissions, are given Run/Schedule permission by default on migration.
- User-defined roles with the same name have _1 as a suffix to the name.
- Schedules from the source Control Room that are already in the destination Control Room are migrated with the same name.
- For the next migration run, the Available roles list shows all roles, regardless of whether they are migrated.
Procedure
- In the Available roles list, click the check box next to Role Name to select all roles.Alternatively, select each role from the list.Note: The Available roles show all roles (both system and user-defined) that exist in the 10.x Control Room database.
- Add roles to the Selected list.
- Click Next.
To review and verify data, see the Verify data and migrate task. | https://docs.automationanywhere.com/fr-FR/bundle/enterprise-v11.3/page/enterprise/topics/control-room/administration/migration/migration-wizard-settings-roles.html | 2022-01-29T04:10:01 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.automationanywhere.com |
When a Droplet is deleted, you can recover it depending on how well protected the data was prior to the deletion.
If you had taken a snapshot of your Droplet prior to deletion, then the snapshot should still be available in your account. You can create a new Droplet based on the snapshot image in the DigitalOcean Control Panel. Snapshots contain the data your Droplet contained at the time the snapshot was taken.
If you had signed up for the automated weekly backup service, you may still have backups available. Because the weekly backup service is an add-on service, it is also scheduled for deletion when the Droplet is deleted, but unlike the Droplet, which is deleted immediately, the backups are deleted sometime within 12 to 24 hours after the Droplet is deleted. Please open a ticket with our support team as soon as possible so that we can determine if you have backups available.
If you were not using either of those services, then we do not have a backup of your Droplet’s data available on our system. In this scenario, you would need rely on any manual backups you have created and stored on your local computer. | https://docs.digitalocean.com/support/how-do-i-recover-a-deleted-droplet/ | 2022-01-29T04:39:12 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.digitalocean.com |
DigitalOcean load balancers provisioned by Kubernetes are managed by the Cloud Controller Manager (CCM) running on the control plane. Manual modifications to the load balancer through the cloud panel are overwritten by the CCM. This occurs during the CCM’s reconciliation process. This process runs to ensure that the load balancer is reflecting the state defined by the Kubernetes
LoadBalancer service object.
To make changes to your load balancer configuration that persist, modify the Kubernetes service object that provisioned the load balancer. You can do this using the Kubernetes service annotations.
Below is an example on how add an annotation to the
my-service object, which changes the default protocol to HTTPS:
kubectl annotate svc my-service service.beta.kubernetes.io/do-loadbalancer-protocol: "https"
For information on how to apply annotations, you can use the command:
kubectl annotate --help
We provide the DigitalOcean CCM service annotations on our public GitHub repository. You can find additional documentation on the service annotations here. | https://docs.digitalocean.com/support/why-do-my-doks-load-balancer-settings-keep-reverting/ | 2022-01-29T05:09:03 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.digitalocean.com |
Method
GtkTextIterforward_char
Description [src]
Moves
iter forward by one character offset.
Note that images embedded in the buffer occupy 1 character slot, so
this function may actually move onto an image instead of a character,
if you have images in your buffer. If
iter is the end iterator or
one character before it,
iter will now point at the end iterator,
and this function returns
FALSE for convenience when writing loops. | https://docs.gtk.org/gtk4/method.TextIter.forward_char.html | 2022-01-29T04:32:12 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.gtk.org |
Overview”
Upload a Custom Icon for your Extra You can use your own icon (a PNG or JPEG image file) for your Extra on the cleaning booking form: Go to Settings -> Services -> Extras Click the edit icon for the Extra you want to add your icon to Click the Upload New Icon button Choose an image file (a PNG or JPEG image) that you want to use for your icon. Make sure it fits the size requirements noted on […]
Change the Display Order of My Extras Go to Settings -> Extras Drag and drop an Extra to a new position in the list Your Extras will be displayed in your Booking Form identically to how they appear in this list.
Extras – Overview Extras are additional items customers can select, that are outside the scope of the service you provide. For Extras to appear on your booking, you need to make sure your Extra is linked to each service that you want it available for. Some common examples of Extras are: Deep Clean Move In/Out Inside the Windows Inside the Fridge Create an Extra Go to Settings > Services > Extras Select Add New Enter your Values In the Services […] | https://docs.launch27.com/mtheme_knowledgebase-section/extras/ | 2022-01-29T03:33:21 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.launch27.com |
# Why Teamscale?
Teamscale is your platform to ensure high-quality software. It is based on a revolutionary approach to static code analysis: Teamscale is a platform not only to collect some metrics, but to combine them with essential data from your development, to aggregate them and to provide solid decision criteria for the fundamental questions during your software development. Teamscale addresses your code quality, but also your test quality, your architecture and your usage scenarios. This means Teamscale supports you throughout the entire life-cycle of your software development.
# Why Teamscale is different
Teamscale's core is the incremental analysis engine. It is directly connected to the version control system and, hence, analyzes each commit incrementally. This enables Teamscale to provide rapid feedback and reveal the root causes on commit-based for emerging problems or deteriorating trends.
With its analysis engine, Teamscale provides its own set of code quality analyses, such as clone detection, structural metrics, comment analyses, and much more.
In addition, the analysis engine integrates various kinds of data from your software development - from test coverage over usage data to external static code analysis tools that you might wish to use specifically. It can also connect to your bug and issue tracking system and can, thus, provide links between change requests, commits and changes to the system's quality status. Integrating all these data, Teamscale helps to manage your code, test and architecture quality as well as monitor your feature usages. In other words, Teamscale becomes the core platform for software intelligence.
The results of the analysis engine are kept in a NoSQL store and made accessible with a REST Service API. The web client, for example, uses this API for the user interface in the browser. In addition, Teamscale also provides several IDE clients, namely plugins for Eclipse, IntelliJ, NetBeans, and Visual Studio.
# Teamscale's Architecture
Any further questions?
In case your question is not answered in the documentation, please don't hesitate to contact the Teamscale support (opens new window). | https://docs.teamscale.com/ | 2022-01-29T04:12:40 | CC-MAIN-2022-05 | 1642320299927.25 | [array(['/assets/img/teamscale_architecture.92e3ab79.png',
"Teamscale's Architecture"], dtype=object) ] | docs.teamscale.com |
Remote File Viewer (Windows CE 5.0)
Remote File Viewer is similar to the hierarchical file and folder viewing tool in Microsoft® Windows® OS called Windows Explorer.
On a development workstation, the Remote File Viewer tool displays a hierarchical view of the file system on a target device.
With the tool, you can manage the file system on the target device. You can also export a file to or import a file from the target device.
You can use Remote File Viewer from the Visual Studio development environment, or from Platform Builder. To display file system information in Remote File Viewer from Platform Builder, you must first connect to the target device with Platform Manager. For more information about connecting to a target device with Platform Manager, see Application Connectivity.
See Also
Send Feedback on this topic to the authors | https://docs.microsoft.com/en-us/previous-versions/windows/embedded/ms894566(v=msdn.10) | 2018-04-19T11:57:06 | CC-MAIN-2018-17 | 1524125936914.5 | [] | docs.microsoft.com |
Bug Check 0xD8: DRIVER_USED_EXCESSIVE_PTES
The DRIVER_USED_EXCESSIVE_PTES bug check has a value of 0x000000D8. This indicates that there are no more system page table entries (PTE) remaining.
Important This topic is for programmers. If you are a customer who has received a blue screen error code while using your computer, see Troubleshoot blue screen errors.
DRIVER_USED_EXCESSIVE_PTES Parameters
If the driver responsible for the error can be identified, its name is printed on the blue screen and stored in memory at the location (PUNICODE_STRING) KiBugCheckDriver.
Cause
This is usually caused by a driver not cleaning up its memory use properly. Parameter 1 shows the driver which has consumed the most PTEs. The call stack will reveal which driver actually caused the bug check.
Resolution
Both drivers may need to be fixed. The total number of system PTEs may also need to be increased. | https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/bug-check-0xd8--driver-used-excessive-ptes | 2018-11-13T01:27:15 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docs.microsoft.com |
Used to extend the Application Model's Action node with properties specific to the Validation Module.
Namespace: DevExpress.ExpressApp.Validation
Assembly: DevExpress.ExpressApp.Validation.v18.2.dll
public interface IModelActionValidationContexts
Public Interface IModelActionValidationContexts
This interface is a part of the Application Model infrastructure and is not intended to be implemented by your classes. The IModelActionValidationContexts is used by the ActionValidationController, to extend the IModelAction interface. To learn more, refer to the Application Model Structure topic. | https://docs.devexpress.com/eXpressAppFramework/DevExpress.ExpressApp.Validation.IModelActionValidationContexts | 2018-11-13T00:58:32 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docs.devexpress.com |
Introduction to Task Management
Note
Task Management is currently available in private beta. If you're interested in trying it out, please contact us.
Task management offers a way for localization managers to distribute and plan work to collaborators. Using tasks a localization manager can communicate better the work that needs to be done by specific due dates. Translators assigned with tasks can self-manage and prioritize work that needs to be done.
To view task management click on Tasks from the top menu and then select the Team tasks tab. This is the starting point for all your work on tasks.
To read more about the specific interface see, Managing existing tasks if you are a manager or Preview tasks assigned to me if you are a translator.
What is a task?
A task is defined as a set of strings from a project that need translation:
- In a specific target language
- by a specific due date
- assigned to a single translator
- in one status from the available: Open, In-progress, Delivered, or Closed
The definition outlined above offers a very flexible base in order to organize and manage translation work to be done.
You can read more about the task definition and examples in the Appendix.
What is a Task scheduler/packager?
Task scheduler/packager is an internal Transifex program that automatically creates and assigns tasks to translators following user defined settings.
The program always takes into account work already assigned to translators in a way to optimize for an ‘equal’ distribution of work. If a translator has more work to do (more untranslated words to work on) in the time the scheduler runs, fewer tasks will be assigned to her.
Read more about how task scheduler/packager works in the Appendix.
Appendix
More task definitions and usage examples
As stated above a task is a set of strings from a project that needs translation:
- in a specific language
- by a specific due date
- assigned to a single translator
- being in one of possible statuses: Open, In-progress, Delivered, or Closed
Having these specifications for a task mean that a task can contain strings from any resource of a project. Also, task's due date actually refers to translating strings contained in that task and not whole resources or project.
Moreover, the restriction of having a single translator assigned to a task, means that at any point only one contributor will be working on that task's strings and that person is responsible for that amount of work. As you will see below, this is important for monitoring the completion progress of a task.
Following are some examples of task states and a usage explanation:
- Task just created. Newly created tasks are in Open status until the assigned translator (assignee) accepts the task in order to start working on it. Until the assignee accepts the task, it remains in Open status.
- Task accepted by the assignee. When an assignee accepts to work on the task it transitions to In-progress status, which signifies that the assigned translator has started work.
- Task is delivered. When an assignee has completed work on a task there is an option to deliver the task. When the task enters the Delivered status it means that the assignee wants to inform the localization task manager that the work is complete. A delivered task cannot only remain in Delivered status.
- Assignee rejects a task. If the assigned translator cannot work on a task or sees that she cannot commit on completing a task that is In-progress, there is an option to Reject the task. Upon rejection the task enters Open status and the assignee is removed from the task.
- Task's due date is in the past. Localization manager would like to know of tasks that are not yet in Delivered status and their due date has past. These tasks require manual management and coordination in order to be completed.
- Task is not relevant any more. A localization manager has the option to close an open task in cases the work described in this task is not relevant any more. To signify that a task is not relevant, there is an option available to turn the task status into Closed. Tasks in that state are only visible to localization managers and no action can be performed on them.
Any changes in task status are logged and are accessible from a dedicated History page for each task.
Task scheduler/packager
The task scheduler/packager performs two different actions based on the settings:
- Identifies and groups untranslated content into a task(packager), into logical string parts based on the settings the user sets. The strings are split automatically by proximity based on resources.
- Assigning created tasks to translators (scheduler), where the algorithm takes into account the task work that each translator has to do and tries to assign new tasks so that the work load is as equally distributed as possible. | https://docs.transifex.com/managing-localization-work/introduction-to-task-management | 2018-11-13T01:40:05 | CC-MAIN-2018-47 | 1542039741176.4 | [array(['https://docs.transifex.com/uploads/TaskManagement-list.png',
'TaskManagement-list.png#asset:4985'], dtype=object) ] | docs.transifex.com |
Changelog
1.1.2¶
Thursday Oct 4 2018
- Adjust imports to work with Python 3.4 (#194)
- Adjust tests to work with older Ubuntu 14.04 (trusty) packages
- Update CI for charm-tools snap confinement change.
1.1.0¶
Friday Sep 28 2018
- Flag and handler trace logging (#191)
- Add non-destructive version of data_changed (#188)
1.0.0¶
Wednesday Aug 8 2018
- Preliminary support for operating system series upgrades (#183)
- Hotfix for Python 3.4 incompatibility (#181)
- Hotfix adding missed backwards compatibility alias (#176)
- Documentation updates, including merging in core layer docs (#186)
- Acknowledgment by version number that this is mature software (and has been for quite some time).
0.6.3¶
Tuesday Apr 24 2018
- Export endpoint_from_name as well (#174)
- Rename Endpoint.joined to Endpoint.is_joined (#168)
- Only pass one copy of self to Endpoint method handlers (#172)
- Make Endpoint.from_flag return None for unset flags (#173)
- Fix hard-coded version in docs config (#167)
- Fix documentation of unit_name and application_name on RelatedUnit (#165)
- Fix setdefault on Endpoint data collections (#163)
0.6.2¶
Friday Feb 23 2018
- Hotfix for issue #161 (#162)
- Add diagram showing endpoint workflow and all_departed_units example to docs (#157)
- Fix doc builds on RTD (#156)
0.6.1¶
- Separate departed units from joined in Endpoint (#153)
- Add deprecated placeholder for RelationBase.from_state (#148)
0.6.0¶
- Endpoint base for easier interface layers (#123)
- Public API is now only documented via the top level charms.reactive namespace. The internal organization of the library is not part of the public API.
- Added layer-basic docs (#144)
- Fix test error from juju-wait snap (#143)
- More doc fixes (#140)
- Update help output in charms.reactive.sh (#136)
- Multiple docs fixes (#134)
- Fix import in triggers.rst (#133)
- Update README (#132)
- Fixed test, order doesn’t matter (#131)
- Added FAQ section to docs (#129)
- Deprecations:
- relation_from_name (renamed to endpoint_from_name)
- relation_from_flag (renamed to endpoint_from_flag)
- RelationBase.from_state (use endpoint_from_flag instead)
0.5.0¶
- Add flag triggers (#121)
- Add integration test to Travis to build and deploy a reactive charm (#120)
- Only execute matching hooks in restricted context. (#119)
- Rename “state” to “flag” and deprecate “state” name (#112)
- Allow pluggable alternatives to RelationBase (#111)
- Deprecations:
- State
- StateList
- set_state (renamed to set_flag)
- remove_state (renamed to clear_flag)
- toggle_state (renamed to toggle_flag)
- is_state (renamed to is_flag_set)
- all_states (renamed to all_flags)
- any_states (renamed to any_flags)
- get_states (renamed to get_flags)
- get_state
- only_once
- relation_from_state (renamed to relation_from_flag)
0.4.7¶
- Move docs to ReadTheDocs because PythonHosted is deprecated
- Fix cold loading of relation instances (#106) | https://charmsreactive.readthedocs.io/en/latest/changelog.html | 2018-11-13T01:25:26 | CC-MAIN-2018-47 | 1542039741176.4 | [] | charmsreactive.readthedocs.io |
For Sharedils several “roles”).
- Upon the necessity, you can painlessly remove everything except the required data (i.e. leaving just a storage being included actually data storing, such a structure can be also efficiently utilized in case you need to share some common configuration files, that are to be used by nodes on different layers and/or environments.
Herewith, your Jelastic Shared quick access to your data from any point with the Jelastic-hosted NFS server.
By using this option, you can even build your own intercloud sharing solution and/or operate with the same data from different Jelastic installations - find out the required NFS server configurations for such an implementation within the linked doc. | https://docs.jelastic.com/dedicated-storage | 2018-11-13T01:13:50 | CC-MAIN-2018-47 | 1542039741176.4 | [array(['https://download.jelastic.com/index.php/apps/files_sharing/publicpreview?file=%2F%2F1.png&x=1920&a=true&t=94fc300e8d99905eccecee8128009447&scalingup=0',
'storing with dedicated storage container'], dtype=object)
array(['https://download.jelastic.com/index.php/apps/files_sharing/publicpreview?file=%2F%2F2.png&x=1920&a=true&t=94fc300e8d99905eccecee8128009447&scalingup=0',
'data export to the worldwide'], dtype=object) ] | docs.jelastic.com |
About VDMX
.
Getting started with VDMX
In this manual you will find detailed information about the workflow and user interface of VDMX.
Installing VDMX
- Download the VDMX Demo disk image
- Open the disk image and double click on the installer
- Follow the instructions in the installer
- Look in the ‘Extras’ folder for other bundled software that you may want to install
Additional Sample Media Downloads
Along with the application and included extras you may want to download some of the available Sample Media movie files and interactive generators to use with VDMX.
Using Templates in VDMX
The fastest way to get started with using VDMX is by using some of the pre-made interfaces that can be loaded from the Templates menu. See the Templates section for more details and a list of examples.
Learning the VDMX Interface
Continue reading this manual, starting with the first section of this covering the VDMX Inspectors. | https://docs.vidvox.net/ | 2018-11-13T01:37:58 | CC-MAIN-2018-47 | 1542039741176.4 | [array(['images/vdmx/VDMX5-b8.jpg', 'VDMX5 b8'], dtype=object)] | docs.vidvox.net |
This page introduces V-Ray camera topics and generally how to access them. Please click on the appropriate topic(s) below for the full documentation and proper usage information.
Overview
V-Ray makes full use of the regular Rhino camera, but can also add certain attributes to add more advanced functionality to a scene camera beyond the regular Rhino camera controls and attributes. These additional camera functions and controls are accessed in under the Camera rollout in the V-Ray Asset Editor.
Camera Settings
These options include VR and Stereo modes, tools to control the camera like a real camera with Exposure Value (EV) and White Balance in the Basic Camera Settings, and Aperture (F Number), shutter speed (1/s), Film Sensitivity (ISO) in the Advanced Camera Settings other settings; turning the camera into a dome camera; and setting up a stereoscopic camera for rendering left/right pairs of images. You can also change the camera type and settings for Depth of Field (DoF), and Motion Blur functions for the scene.
For details on these options, see Camera Settings.
Caustics are not supported in VR Camera and Stereo Camera modes.
Additional V-Ray Camera Tools
Additional tools are installed in the bin folder with the V-Ray for Rhino installation that are useful for camera work in V-Ray for Rhino. For more details on these tools, please see the Standalone Tools section of this site. | https://docs.chaosgroup.com/display/VNFR/Cameras | 2019-07-15T20:26:46 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.chaosgroup.com |
IoT Developer Kit Environment Setup
All Losant IoT Developer Kits use an ESP8266 based WiFi development board. Before you can flash the required firmware to these devices, you must setup your environment with the proper development tools. The below instructions will walk you through installing all necessary 3rd party tools. These instructions only need to be followed once, and the same environment can be used for all Losant IoT developer kits.
Install Arduino IDE
Download and install v1.8.7 of the Arduino IDE by following the instructions at:
If you already have the Arduino IDE installed, it is strongly recommend that you download and install 1.8.7. There have been issues with previous versions not working correctly.
Install USB Drivers
The microcontroller boards requires the USB to UART driver to be installed in order to program it. Download and install the driver for your platform by following the instructions at:
On a Mac, the above link downloads a disk image. Double-click the file to mount it, open the disk image, then double-click the .pkg file to install the driver.
Configure Arduino IDE
Launch the Arduino IDE. Linux users typically have to open the IDE under sudo for correct permissions. In order for the board to show up as a board in Arduino, you must add the following to the “Additional Boards Manager URLs” field in the preferences. On a Mac, the preferences can be found at
Arduino -> Preferences.
Restart the Arduino IDE.
Open the Board Manager at
Tools -> Board -> Boards Manager. Change the Type field to
Contributed and enter
esp8266 in the Search field. Select the ESP8266 entry in the list, change the version to 2.4.2, and click the
Install button.
Restart the Arduino IDE again. We can now configure the Arduino IDE to use the board we just installed.
Open the Arduino IDE, select the Tools menu, and change the Board to
Generic ESP8266 Module.
After selecting the board, additional options will appear in the Tools menu. Ensure the other options are set to the following; the bolded options typically need changed from their defaults.
- Flash Mode = QIO
- Flash Frequency = 40MHz
- CPU Frequency = 80MHz
- Flash Size = 4M 1M SPIFFS
- Reset Method = nodemcu
- Upload Speed = 115200
Install Libraries
The following workshops require a few dependencies to be installed. The first two libraries can be installed using Arduino’s Library Manager. Open the manager from the
Sketch -> Include Library -> Manage Libraries menu.
The first required library is PubSubClient. Type that in the filter field, select the entry in the list and install the latest version.
The next library is ArduinoJson. Repeat the the same process again, and install the latest version of ArduinoJson.
The last library is the Losant Arduino MQTT Client. We’ll include this library as a zip file. Download the zip from the following URL and save it somewhere convenience on your computer. You’ll use this file in the next step.
Next, add the zip you just downloaded to your libraries using the
Sketch -> Include Library -> Add .ZIP Library menu.
This will open a file browser. Simply select the zip you just downloaded.
Configure Device USB Port
If you don’t have a microcontroller yet, you can skip this step and return to it when you have the device.
Remove the microcontroller from the foam (if needed), and connect it to your computer with the supplied USB cable.
Use the
Tools -> Port menu to select the port your device is connected to. This will change depending on your operating system. On a Mac, it’s always named
SLAB_USBtoUART. On Windows it will be named
Com and then a number, for example
Com3.
If you don’t see an entry under the port menu, double check the following:
- Did you install the USB driver?
- Did you connect the microcontroller to your computer with the supplied USB cable?
Your environment is now properly setup to begin programming your dev kits with their needed firmware. Please continue to the instructions for your specific kit. | https://docs.losant.com/getting-started/losant-iot-dev-kits/environment-setup/ | 2019-07-15T21:14:36 | CC-MAIN-2019-30 | 1563195524111.50 | [array(['/images/getting-started/losant-iot-dev-kits/environment-setup/uart-driver-windows.png',
'Windows Download Windows Driver Download'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/environment-setup/uart-driver-mac.png',
'Mac Driver Download Mac Driver Download'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/environment-setup/mac-driver-disk-image.png',
'Mac Driver Disk Image Mac Driver Disk Image'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/environment-setup/arduino-preferences.png',
'Arduino Preferences Arduino Preferences'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/environment-setup/install-board.png',
'Install Board Install Board'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/environment-setup/select-board.png',
'Select Board Select Board'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/environment-setup/tools-menu.png',
'Tools Menu Tools Menu'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/environment-setup/manage-libraries.png',
'Manage Libraries Manage Libraries'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/environment-setup/install-pubsubclient.png',
'Install PubSubClient Install PubSubClient'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/environment-setup/install-arduinojson.png',
'Install ArduinoJson Install ArduinoJson'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/environment-setup/add-zip-library-menu.png',
'Add Zip Library Add Zip Library'], dtype=object)
array(['/images/getting-started/losant-iot-dev-kits/environment-setup/select-port.png',
'Select Port Select Port'], dtype=object) ] | docs.losant.com |
Compute gateway networking includes a compute network with one or more segments and the DNS, DHCP, and firewall configurations that manage network traffic for workload VMs. It can also include a layer 2 VPN and extended network that provides a single broadcast domain that spans your on-premises network and you SDDC workload network. | https://docs.vmware.com/en/VMware-Cloud-on-AWS/services/com.vmware.vmc-aws.networking-security/GUID-B92F0840-ED79-41A5-BFE1-860115CD8EFE.html | 2019-07-15T20:39:44 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.vmware.com |
node_to_dict¶
sunpy.util.xml.
node_to_dict(node)[source] [edit on github]¶
Scans through the children of the node and makes a dictionary from the content.
Three cases are differentiated:
If the node contains no other nodes, it is a text-node and
{nodeName: text}is merged into the dictionary.
If the node has the attribute
methodset to
true, then it’s children will be appended to a list and this list is merged to the dictionary in the form:
{nodeName:list}.
Else, will call itself recursively on the nodes children (merging
{nodeName: node_to_dict()}to the dictionary).
- Parameters
node (
xml.etree.ElementTree.Element) – A XML element node.
- Returns
dict– The XML element node as a dictionary. | http://docs.sunpy.org/en/stable/api/sunpy.util.xml.node_to_dict.html | 2019-07-15T20:32:15 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.sunpy.org |
What’s New
The NetScaler SD-WAN release version 10.0 introduces the following new features and enhancements:
Application SupportApplication Support
HTML 5 Receiver and HDX Adaptive Transport Support
Support for classification of HTML5 receiver, and Adaptive Transport HDX traffic is added. When classified, these applications can be used in application rules, and to view application statistics.
Custom Application Reporting
You can create custom applications and enable reporting for them. Reporting statistics for the user created custom applications can be viewed and filtered in NetScaler SD-WAN Center.
ConfigurationConfiguration
Partial Software Upgrade Using Local Change Management
NetScaler SD-WAN 10.0 allows the network administrator to upgrade the software on the sites in the network selectively, without needing to upgrade all sites simultaneously. A specific use-case for this feature is an administrator who wants to test the new software on few branch sites before installing it on all sites in the network.
Ability to Create a GEO MCN by Cloning the MCN and Geo RCN by Cloning the RCN
You can configure a site as the secondary / geo MCN to support MCN redundancy. The secondary / geo MCN continuously monitors the health of the primary MCN. When the primary MCN fails, the secondary / geo MCN assumes the role of the MCN. In NetScaler SD-WAN 10.0, you can create a secondary MCN easily by cloning the primary MCN.
Multi-region deployment uses Regional Control Nodes (RCN) to control the client sites in a geographical or administrative region. You can also clone the RCN to create a secondary / geo RCN.
ScalabilityScalability
Ability to scale NetScaler SD-WAN deployment for 2,500 sites.
MonitoringMonitoring
Improved Application Bandwidth Usage by using Path Mapping
Path mapping and bandwidth usage enhancements are implemented. Based on the incoming traffic bandwidth demand, the traffic is processed in load balanced transmission mode, duplicate transmit mode, or persistent path transmit mode. You can monitor the path information for traffic flows in Monitoring > Flows, under Paths column. You can also hover your mouse cursor over any flow to view the DPI application name.
License ManagementLicense Management
- In the new centralized license model, an administrator can manage licensing from a central licensing server without having to access the appliance in the network. You can configure the IP address of a remote server as the licensing server and select the configuration to the appliances in the network.
- NetScaler SD-WAN Center can be configured as the licensing server.
RoutingRouting
Support for BGP Soft Reconfiguration
Support for BGP route refresh as per RFC 2918 to assist non-disruptive route-policy changes is added.
Support for Enterprise/MSP to Scale 64 K Route table for all appliances:
The number of routes supported on all appliances is scaled from 16000 to 64000.
Capability to Create Summary Routes for Sites
Route summarization reduces the number of routes that a router must maintain. A summary route is a single route that is used to represent multiple routes. In NetScaler SD-WAN 10.0, you can configure a summary route by using Local and Discard service types. The summary route is advertised to the peer SD-WAN appliances as the only route that encompasses all subnets falling as part of the summary route instead of sharing all subnet routes.
Capability to provide preference for which Data Center a site uses based on Virtual Path Route Cost
An administrator can decide which data center can be the most preferred by influencing route costs based on the new Virtual Path Route Cost feature.
- Virtual path route cost: You can configure Virtual Path route cost for individual virtual paths that are added to the route cost when a route is learnt from a remote site. With the introduction of VP route cost, the WAN to WAN forwarding cost is deprecated, because VP route cost helps influence the decision henceforth.
- OSPF route cost: You can now import OSPF route cost (type1 metric) by enabling “Copy OSPF Route Cost” in import filters. OSPF Route cost is considered in route selection instead of SD-WAN cost. Cost up to 65534 instead of 15 is supported, but it is advisable to accommodate for appropriate virtual path route cost that is added when route is learnt from a remote site.
- BGP - Copy VP Route cost to MED: You can now copy.
Capability to create Import/Export Route Policy Templates for Large-scale Deployments
You can now create multiple import or export filter templates by using various filter rules and associate the template at each site. The user created site level import/export filter rules take more precedence. The template rules follow the user created rules when associated to the site in Route learning section of Connections.
Using CLI to Access Routing Functionality
You can view additional information related to dynamic routing and the protocol status. Type the following command and syntax to access routing daemon and view the list of commands.
dynamic_routing?
This is a restricted CLI access for debugging routes.
Support added for Virtual Router Redundancy Protocol
VRRP provides device redundancy to eliminate the single point of failure inherent in the static default-routed environment. VRRP ensures a high availability default path without configuring dynamic routing or router discovery protocols on every end-host. NetScaler SD-WAN 10.0 supports VRRP version 2 and version 3 to inter-operate by using any third party routers. VRRP cannot be used between two NetScaler SD-WAN appliances. It can be used between NetScaler SD-WAN appliance and the peer routers that are standard VRRP RFC compliant routers.
Application Based Traffic Steering
You can create application routes using application objects. This application routes aids in steering the traffic based on DPI or IP infrastructure using various SD-WAN services, such as Virtual Path, Internet, Intranet, Local, GRE, or IPsec.
Support added for Multicast IGMP/MLD Proxy
By using static multicast group, network administrators can control the source and destination of the multicast traffic. In NetScaler SD-WAN 10.0, users can statically configure multicast groups and enable IGMP Proxy for updating the upstream code networks by using all the sources in the downstream networks of the edge
NetScaler SD-WAN CenterNetScaler SD-WAN Center
A hierarchical tiered network architecture is introduced to enable higher scale, and delegation of regional administration in NetScaler SD-WAN 10.0. NetScaler SD-WAN Center supports multi-region mode deployment, RCN discovery, and SD-WAN collector configuration and software upgrade.
- The NetScaler SD-WAN center 10.0 dashboard includes a multi-region summary dashboard. This dashboard provides a graphical overview of the network health at the various regions.
- You can view the network maps in either the tile view or the schematic view.
NetScaler SD-WAN Center License Server
You can now configure NetScaler SD-WAN Center to act as the remote license server for centralized license management.
PlatformsPlatforms
SD-WAN VPX-SE/VPXL-SE Platforms in VMWare ESXi – 10G NIC Support
Support added for 10G Virtual Ethernet interfaces on the SD-WAN VPX-SE/VPXL-SE appliances in the VMware ESXi deployment. You can enable this feature by changing the driver for the virtual NIC from E1000 when deploying and configuring SD-WAN VPX-SE/VPXL-SE platforms using VMware ESXi.
NetScaler SD-WAN Standard Edition – 1 Gbps support on Hyper-V Platform
Support for 1Gbps throughput for SD-WAN Standard Edition platforms deployed on Hyper-V is added.
Virtual Ethernet Ports per VPX-SE/VPXL-SE Platforms
Support for more ports for VPX-SE HA deployments between servers is added. This support would enable customers to map high availability interfaces one-to-one to real ports to avoid any hypervisor misconfiguration that would separate the virtual appliances and cause both virtual appliance to become active.
Maximum Network Interfaces Supported by SD-WAN VPX-SE appliances
In release 10.0, the number of maximum network interfaces supported by the SD-WAN VPX-SE platforms is 8 unlike in previous release versions in which the number of network interfaces supported was only 4.
NetScaler SD-WAN 2100 Enterprise Edition Appliance
- A new NetScaler SD-WAN 2100 EE edition is introduced.
REST APIREST API
- Enhancement to current appliance configuration for REST APIs to include Site, WAN Link, Virtual Path, Firewall, Quality of Service, DPI, iPerf, and Management IP.
- Support for local user and group management REST APIs.
- Capability to update the NetScaler SD-WAN Appliance OS and License files using REST APIs.
- Enhance current monitoring REST APIs to include WAN Link, Virtual Path, Firewall, Quality of Service, Applications, and Flows.
- Introduced REST APIs for NetScaler SD-WAN Center. | https://docs.citrix.com/en-us/netscaler-sd-wan/10/whats-new.html | 2019-07-15T21:15:09 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.citrix.com |
If is caused because your DNS records do not allow LetsEncrypt.org to generate the SSL certificate for your white label domain or subdomain.
You can fix this issue by adding the following DNS record to your domain:
your.whitelabel.domain. CAA 0 issue “letsencrypt.org”
(replace your.whitelabel.domain with your actual white label domain name configured on our platform)
This would allow LetsEncrypt.org to issue a SSL certificate for that subdomain.
You can read more about CAA DNS records here:
If you’re unsure about how to add this record, you should be able to ask your domain registrar or hosting provider to assist (depending where your DNS management is at).
Once this DNS record has been added and DNS changes have propagated, you can try accessing your White Label reports again, and SSL should be working. | https://docs.hetrixtools.com/ssl-error-accessing-white-label-report/ | 2019-07-15T20:19:40 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.hetrixtools.com |
Create clusters with custom images
Once you have registered your image catalog, you can use your custom image(s) when creating a cluster.
You can do this either with the web UI or CLI.
Select a custom image in Cloudbreak web UI
Perform these steps in the advanced General Configuration section of the create cluster wizard.
Steps
- Ambari and HDP/HDF repositories, or you can customize to point to specific versions of Ambari and HDP/HDF that you want to use for the cluster.
Select a custom image in the CLI
To use the custom image when creating a cluster via CLI, perform these steps.
Steps
- Obtain the image ID. For example:
cb imagecatalog images aws --imagecatalog custom-catalog [ { "Date": "2017-10-13", "Description": "Cloudbreak official base image", "Version": "2.5.1.0", "ImageID": "44b140a4-bd0b-457d-b174-e988bee3ca47" }, { "Date": "2017-11-16", "Description": "Official Cloudbreak image", "Version": "2.5.1.0", "ImageID": "3c7598a4-ebd6-4a02-5638-882f5c7f7add" } ]
- When preparing a CLI JSON template for your cluster, set the “ImageCatalog” parameter to the image catalog that you would like to use, and set the “ImageId” parameter to the uuid of the image from that catalog that you would like to use. For example:
... "name": "aszegedi-cli-ci", "network": { "subnetCIDR": "10.0.0.0/16" }, "orchestrator": { "type": "SALT" }, "parameters": { "instanceProfileStrategy": "CREATE" }, "region": "eu-west-1", "stackAuthentication": { "publicKeyId": "seq-master" }, "userDefinedTags": { "owner": "aszegedi" }, "imageCatalog": "custom-catalog", "imageId": "3c7598a4-ebd6-4a02-5638-882f5c7f7add" } | https://docs.hortonworks.com/HDPDocuments/Cloudbreak/Cloudbreak-2.9.1/advanced-cluster-options/content/cb_select-a-custom-image-when-creating-a-cluster.html | 2019-07-15T21:05:54 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.hortonworks.com |
a build platform use automatic graphics API choice.
By default each platform is using "automatic" graphics API detection and picks the best available one. However it is possible to change that to explicitly limit the graphics APIs used, see SetGraphicsAPIs.
See Also: GetUseDefaultGraphicsAPIs, GetGraphicsAPIs, SetGraphicsAPIs.
Did you find this page useful? Please give it a rating: | https://docs.unity3d.com/ScriptReference/PlayerSettings.SetUseDefaultGraphicsAPIs.html | 2019-07-15T20:09:55 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.unity3d.com |
This section is a simple overview of material described in greater detail in the Apache Spark documentation here and here.
ModesModes
Spark on Mesos supports two modes of operation: coarse-grained mode and fine-grained mode. Coarse-grained mode provides lower latency, whereas fine-grained mode provides higher utilization. You can find nore information here.
Coarse-grained modeCoarse-grained mode
With the Coarse-grained mode, each Spark executor is represented by a single Mesos task. As a result, executors have a constant size throughout their lifetime.
- Executor memory:
spark.executor.memory
- Executor CPUs:
spark.executor.cores, or all the cores in the offer.
- Number of Executors:
spark.cores.max/
spark.executor.cores. Executors are brought up until
spark.cores.maxis reached. Executors survive for duration of the job.
- Executors per agent: Multiple
Quota for drivers and executorsQuota for drivers and executors
Setting Mesos Quota for the drivers prevents the Dispatcher from consuming too many resources and assists queueing behavior.
To control the number of drivers the Spark service runs concurrently, you should set a quota for the drivers. The quota guarantees that the Spark Dispatcher has resources available to launch drivers and limits the total impact on the cluster due to drivers.
Optionally, you can set a quota for the drivers to consume to ensure that drivers are not starved of resources by other frameworks and make sure they do not consume too much of the cluster. For more information, see coarse-grained mode above.
Best practices for setting driversBest practices for setting drivers
The quota for the drivers allows the operator of the cluster to ensure that only a given number of drivers are concurrently running. As additional drivers are submitted, they are queued by the Spark Dispatcher.
Use the following guidelines to achieve best results:
- Set the quota conservatively, but be aware that the setting affects the number of jobs that can run concurrently.
- Decide how much of your cluster’s resources to allocate to running drivers. Allocated resources are only be used for the Spark drivers, meaning that you can decide roughly how many concurrent jobs you would like to have running at a time. As additional jobs are submitted, they are queued and run with first-in-first-out semantics.
- For the most predictable behavior, enforce uniform driver resource requirements and a particular quota size for the Dispatcher. For example, if each driver consumes 1.0 cpu and it is desirable to run up to 5 Spark jobs concurrently, you should create a quota that specifies 5 CPUs.
Setting quota for the driversSetting quota for the drivers
SSH to the Mesos master and set the quota for a role (
dispatcherin this example):
cat dispatcher-quota.json { "role": "dispatcher", "guarantee": [ { "name": "cpus", "type": "SCALAR", "scalar": { "value": 5.0 } }, { "name": "mem", "type": "SCALAR", "scalar": { "value": 5120.0 } } ] } curl -d @dispatcher-quota.json -X POST http://<master>:5050/quota
Install the Spark service with the following options (at a minimum):
cat options.json { "service": { "role": "dispatcher" } } dcos package install spark --options=options.json
Best practices for the executorsBest practices for the executors
It is recommended to allocate a quota for Spark job executors. Allocating quota for the Spark executors provides:
- A guarantee that Spark jobs receive the requested amount of resources.
- Additional assurance that even if misconfigured (for example, with a driver with
spark.cores.maxunset), Spark jobs do not consume resources that impact other tenants on the cluster.
The drawback to allocating quota to the executors is that quota resources cannot be used by other frameworks in the cluster.
Setting quota for the executorsSetting quota for the executors
Quota can be allocated for Spark executors in the same way it is allocated for Spark dispatchers. If you want to run 100 executors concurrently, each with 1.0 CPU and 4096 MB of memory, you would do the following:
cat executor-quota.json { "role": "executor", "guarantee": [ { "name": "cpus", "type": "SCALAR", "scalar": { "value": 100.0 } }, { "name": "mem", "type": "SCALAR", "scalar": { "value": 409600.0 } } ] } curl -d @executor-quota.json -X POST http://<master>:5050/quota
When Spark jobs are submitted, they must indicate the role for which the quota has been set to consume resources from this quota. For example:
dcos spark run --verbose --name=spark --submit-args="\ --driver-cores=1 \ --driver-memory=1024M \ --conf spark.cores.max=8 \ --conf spark.mesos.role=executor \ --class org.apache.spark.examples.SparkPi \ 3000"
Permissions when using quota with strict modePermissions when using quota with strict mode
Strict mode clusters (see security modes) require extra permissions to be set before you can use quota. Follow the instructions in installing and add the additional permissions for the roles you intend to use, as detailed below.
Using the example above, you would set permissions as follows:
First set the quota for the Dispatcher’s role (
dispatcher):
cat dispatcher-quota.json { "role": "dispatcher", "guarantee": [ { "name": "cpus", "type": "SCALAR", "scalar": { "value": 5.0 } }, { "name": "mem", "type": "SCALAR", "scalar": { "value": 5120.0 } } ] }
If you have downloaded the CA certificate,
dcos-ca.crtto your local machine from the
https://<dcos_url>/ca/dcos-ca.crtendpoint, set the quota from your local machine:
curl -X POST --cacert dcos-ca.crt -H "Authorization: token=$(dcos config show core.dcos_acs_token)" $(dcos config show core.dcos_url)/mesos/quota -d @dispatcher-quota.json -H 'Content-Type: application/json'
Optionally, set the quota for the executors using the same settings as above:
cat executor-quota.json { "role": "executor", "guarantee": [ { "name": "cpus", "type": "SCALAR", "scalar": { "value": 100.0 } }, { "name": "mem", "type": "SCALAR", "scalar": { "value": 409600.0 } } ] }
If you have not already done so, set the quota from your local machine. For example, assuming you have
dcos-ca.crtlocally:
curl -X POST --cacert dcos-ca.crt -H "Authorization: token=$(dcos config show core.dcos_acs_token)" $(dcos config show core.dcos_url)/mesos/quota -d @executor-quota.json -H 'Content-Type: application/json'
Install Spark with these minimal configuration settings:
{ "service": { "service_account": "spark-principal", "role": "dispatcher", "user": "root", "service_account_secret": "spark/spark-secret" } }
Now you are ready to run a Spark job using the principal you set and the roles:
dcos spark run --verbose --submit-args=" \ --conf spark.mesos.principal=spark-principal \ --conf spark.mesos.role=executor \ --conf spark.mesos.containerizer=mesos \ --class org.apache.spark.examples.SparkPi 100"
Setting
spark.cores.max
To improve Spark job execution reliability, set the maximum number of cores consumed by any given job. This avoids any particular Spark job from consuming too many resources in a cluster. It is highly recommended that each Spark job be submitted with a limitation on the maximum number of cores (CPUs) it can consume. This is especially important for long-running and streaming Spark jobs.
dcos spark run --verbose --name=spark --submit-args="\ --driver-cores=1 \ --driver-memory=1024M \ --conf spark.cores.max=8 \ #<< Very important! --class org.apache.spark.examples.SparkPi \ 3000"
When running multiple concurrent Spark jobs, consider setting
spark.cores.max between
<total_executor_quota>/<max_concurrent_jobs> and
<total_executor_quota>, depending on your workload characteristics and goals.
Fine-grained mode (deprecated)Fine-grained mode (deprecated)
In “fine-grained” mode, each Spark task is represented by a single Mesos task. When a Spark task finishes, the resources represented by its Mesos task are relinquished. Fine-grained mode enables finer-grained resource allocation at the cost of task startup latency.
- Executor memory:
spark.executor.memory
- Executor CPUs: Increases and decreases as tasks start and terminate.
- Number of Executors: Increases and decreases as tasks start and terminate.
- Executors per agent: At most 1
PropertiesProperties
The following is a description of the most common Spark scheduling properties on Mesos. For a full list, see the Spark configuration page and the Spark on Mesos configuration page. | https://docs.mesosphere.com/services/spark/2.8.0-2.4.0/job-scheduling/ | 2019-07-15T20:38:17 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.mesosphere.com |
Serverless
(New in version 0.7.3)
It is recommended to use an integration for your particular serverless environment if available, as those are easier to use and capture more useful information.
If you use a serverless provider not directly supported by the SDK, you can use this generic integration.
Apply the
serverless_function decorator to each function that might throw errors:
import sentry_sdk from sentry_sdk.integrations.serverless import serverless_function sentry_sdk.init(dsn="___PUBLIC_DSN___") @serverless_function def my_function(...): ...
Behavior
Exceptions raising from those functions will be reported to Sentry (and reraised).
Each call of a decorated function will block and wait for current events to be sent before returning.
When there are no events to be sent, this will not add a delay. However, if there are errors, this will delay the return of your serverless function until the events are sent. This is necessary as serverless environments typically reserve the right to kill the runtime/VM when they consider it unused.
The maximum amount of time to block overall is set by the
shutdown_timeoutclient option.
You can disable this aspect by decorating with
@serverless_function(flush=False)instead. | https://docs.sentry.io/platforms/python/serverless/ | 2019-07-15T20:46:36 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.sentry.io |
Pro 2.5.19+Insightly CRM API Integration
This documentation page assumes you have read over the CRM Integration Overview page. If you have not yet read it, please do so now. We also assume that you have a Insightly account already. If you currently have Freeform Lite, you can purchase an upgrade to Freeform Pro.
Includes support for the following:
Setup InstructionsSetup Instructions
- Create & get API Key from Insightly:
- Setup Integration on your site:
- Go to the CRM section in Freeform Integrations area (Freeform > Integrations > CRM)
- Click the New Integration at the top right.
- Select Insightly from the Service Provider select dropdown.
- Enter a name and handle for the integration.
- Paste the Insightly. | https://docs.solspace.com/craft/freeform/v2/api-integrations/crm/insightly.html | 2019-07-15T19:53:35 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.solspace.com |
All WSO2 products come with Axis2 capabilities, which allows you to deploy services as archive files. An Axis2 Service is deployed in your product in the form of an Axis2 archive file (.aar), which has all the service-related resources, service classes, and third-party libraries bundled together .
Once you have developed the Axis2 service, you can upload the .aar file to a running WSO2 AS instance as shown below.
- Log in to the management console and select Main -> Services -> Add -> AAR Service.
-.
If the file is uploaded successfully, a message appears prompting you to refresh the page. Click OK.
If the service is faulty, a "Faulty Service Groups" link will appear. You can click the link to view the errors.
Refresh the Deployed Services page in the management console to view the newly added service.
The file name of the archive is always used as the service name unless you have a different name attributed to the service file. For example, if the name of the archive file is Test.aar, then the name of the service will be Test. | https://docs.wso2.com/pages/viewpage.action?pageId=47517951 | 2019-07-15T20:02:11 | CC-MAIN-2019-30 | 1563195524111.50 | [] | docs.wso2.com |
Django internals¶
Documentation for people hacking on Django itself. This is the place to go if you’d like to help improve Django, learn or learn about how Django works “under the hood”.
Warning
Elsewhere in the Django documentation, coverage of a feature is a sort of a contract: once an API is in the official documentation, we consider it “stable” and don’t change it without a good reason. APIs covered here, however, are considered “internal-only”: we reserve the right to change these internals if we must.
- Contributing to Django
- Mailing lists
- Django committers
- Django’s security policies
- Django’s release process
- Django Deprecation Timeline
- The Django source code repository
- How is Django Formed? | https://docs.djangoproject.com/en/1.6/internals/ | 2015-05-22T16:03:14 | CC-MAIN-2015-22 | 1432207925696.30 | [] | docs.djangoproject.com |
Difference between revisions of "Upgrading your template index file"
From Joomla! Documentation
Latest revision as of 23:39, 25 June 2013
This Namespace has been archived - Please Do Not Edit or Create Pages in this namespace. Pages contain information for a Joomla! version which is no longer supported. It exists only as a historical reference, will not be improved and its content may be incomplete.
Upgrading your index.php file
- Replace _VALID_MOS with _JEXEC
- Replace $mosConfig_absolute_path with $this->baseUrl
- Replace $mosConfig_live_site with JURI::base()
- Replace fixed strings with translatable strings. For example, replace echo 'Hello' with echo JText::_( 'Hello' )
- Replace calls to mosGetParam with calls to JRequest::getVar. For example, replace $id = mosGetParam( $_REQUEST, 'id', 0 ); with $id = JRequest::getVar( 'id', 0 );
- Replace mosShowHead(); with <jdoc:include
- Replace mosMainBody() with <jdoc:include
- Replace mosCount.) | https://docs.joomla.org/index.php?title=J1.5:Upgrading_your_template_index_file&diff=101077&oldid=11837 | 2015-05-22T17:12:35 | CC-MAIN-2015-22 | 1432207925696.30 | [] | docs.joomla.org |
Difference between revisions of "Where can you download the template used on the site?"
From Joomla! Documentation
Latest revision as of 16:24, 1 September. | https://docs.joomla.org/index.php?title=Where_can_you_download_the_template_used_on_the_www.joomla.org_site%3F&diff=73649&oldid=11353 | 2015-05-22T16:38:16 | CC-MAIN-2015-22 | 1432207925696.30 | [] | docs.joomla.org |
User Guide
Home > Support > BlackBerry Manuals & Help > BlackBerry Manuals > BlackBerry Smartphones > BlackBerry Q10 > User Guide BlackBerry Q10 Smartphone - 10.2
Settings and options
Related information
Next topic: Connections
Previous topic: My device isn't recognized when I connect it to my computer
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/55621/mba1337192208154.jsp | 2015-05-22T16:19:21 | CC-MAIN-2015-22 | 1432207925696.30 | [] | docs.blackberry.com |
Quick Reference Guide
Local Navigation
Add a theme to a .zip file to upload to BlackBerry App World
The BlackBerry® Theme Builder packages themes for different devices into a .zip file so that you can upload the themes to the BlackBerry App World™ storefront. The .zip file contains the .cod file, the name of the bundle, the minimum BlackBerry® Device Software for the bundle, and the list of supported devices for the bundle.
After you finish:
- On the File menu, click Export.
- In the Export Type section, select the Publish for App World (ZIP) option.
- In the Theme Name field, type a name for the theme.
- In the Zip File field, specify the location where you want to store the .zip file and do one of the following:
- In the Handheld drop-down list, click the device that you created the theme for.
- In the Target OS drop-down list, click the version of the BlackBerry Device Software that you created the theme for.
- If you want to support the theme in languages other than English, select the Include international characters check box. If you select this option, the BlackBerry Theme Builder exports all of the international characters, such as accented characters and symbols, that are available for each font in the theme. The BlackBerry Theme Builder includes only the international characters that exist in the original font set. It does not export international character sets, such as Chinese, Arabic, or Cyrillic. If a font set specified in the theme does not contain a specific character, the BlackBerry device uses a system font to display the character.
- Click OK.
In BlackBerry App World, you must create an application in the vendor portal and set the category to Themes. For more information about adding applications to BlackBerry App World, see the administration guide for the BlackBerry App World storefront Vendor portal.
Before you submit an application for review, you can change any information about the application. You can choose to save a draft of the application instead of submitting the application right away. You can continue to add new theme bundles to the release while it is in draft mode.
Next topic: Test your theme on a device
Previous topic: Create a theme
Was this information helpful? Send us your comments. | http://docs.blackberry.com/fr-fr/developers/deliverables/21106/ExportAppWorld_1269321_11.jsp | 2015-05-22T16:28:14 | CC-MAIN-2015-22 | 1432207925696.30 | [] | docs.blackberry.com |
Generated Object
A JTabbedPane
Value Argument
The tabbedPane() node accepts no value argument.
Attributes
In addition to JComponent attributes, tabbedPane supports these additional attributes:
- model <SingleSelectionModel> The model that tracks the tab selection. not normally set by the end user.
- tabPlacement <int> One of JTabbedPane.TOP, JTabbedPane.BOTTOM, JTabbedPane.LEFT, or JTabbedPane.RIGHT, indicates where the tabs should be palced.
- tabLayoutPolicy <int> One of JTabbedPane.WRAP_TAB_LAYOUT or JTabbedPane.SCROLL_TAB_LAYOUT. Determines the behavior of the tabs when there are too many for one line. Either multiple lines, or a single line with scroll controls (respectively)
- selectedIndex <int> The tab index to be selected initially. Do not combine with selectedComponent
- selectedComponent <Component> The tab representing the component to be selected initally. Do not combine with selectedIndex.
Content
All immediate children of the JTabbedPane that are Components are added to the tabbed pane as tabs. In addition, these children also can have the following attributes, which apply to the tab representiong those components
- title <String> The text title on the tab. If this is missing the name property of the component is used (which defaults to an empty string)
- tabIcon <Icon> The icon to be used on the tab, usually to the left of the text.
- tabDisabledIcon <Icon> The icon to be displayed when the tab is disabled. If tabIcon: is specified but tabDisabledIcon isn't then a disable icon will be automatically generated base on the Look and Feel.
- tabToolTip <String> The toolTip to be displayed when the mouse is hovering over the tab. (note that this is different than the component in the tab, use toolTip: for the component itself)
- tabBackground <Color> The background color of the tab. If this isn't specified a Look and Feel dependent value will be provided. The Look and Feel may ignore this (Windows Vista does)
- tabForeground <Color> The foreground text color of the tab. If this isn't specified a Look and Feel dependent value will be provided. The Look and Feel may ignore this (Windows Vista does)
- tabEnabled <booelan> A flag to indicate whether or not this tab should be considered activated and accessible.
- tabMnemonic <int or String or char> The mnemonic of the tab to be used for keyboard navigation. Do not combine with tabDisplayedMnemonicIndex unless the characters match up. Character must be in title/upper case
- tabDispalyedMnemonicIndex <int> The character index in the title to use for the mnemonic. Useful when a more prominent character should be used, like "Stop Operation", a mnemonic of O would ordinarily underline the 'o' in Stop. | http://docs.codehaus.org/pages/viewpage.action?pageId=24576004 | 2015-05-22T16:26:24 | CC-MAIN-2015-22 | 1432207925696.30 | [] | docs.codehaus.org |
Moving sensitive files outside the web root.
while under review, please see this forum topic
WARNING: Do not attempt this procedure unless, after reading several times, you understand it. This is not for beginners, ensure you have a back up of your site before attempting. its own directory outside of public_html to contain its configuration.php file.
2. Move configuration.php to the design2-files directory and rename it!! Do not use the Joomla web administrator interface global configuration button to edit the global configuration.
If you need to change configuration settings, do so manually by downloading the relocated joomla.conf file, making the needed edits and uploading it back.. | https://docs.joomla.org/index.php?title=Moving_sensitive_files_outside_the_web_root&oldid=30998 | 2015-05-22T16:37:14 | CC-MAIN-2015-22 | 1432207925696.30 | [] | docs.joomla.org |
Difference between revisions of "Installing Joomla on Debian Linux"
From Joomla! Documentation
Revision as of 09:17, 12 November 2012
Contents
- 1 Preface
- 2 Installing Joomla!
- 3 XAMPP
- 4 LaMp
- 5 BitNami Joomla! stack installation sets the correct Ownership of the files and permissions.
- Using the CHOWN command will cause Ownership problems with xampp.
- Using nautilus to manipulate folders/files on localhost will cause Ownership problems with xam :. first enter into mysql system with:
mysql -u root -p
Then at the 'mysql>' prompt enter the following.. | https://docs.joomla.org/index.php?title=Installing_Joomla_on_Debian_Linux&diff=77522&oldid=61823 | 2015-05-22T16:21:03 | CC-MAIN-2015-22 | 1432207925696.30 | [array(['/images/c/c8/Compat_icon_1_5.png', 'Joomla 1.5'], dtype=object)
array(['/images/d/da/Compat_icon_1_6.png', 'Joomla 1.6'], dtype=object)
array(['/images/8/87/Compat_icon_1_7.png', 'Joomla 1.7'], dtype=object)] | docs.joomla.org |
Difference between revisions of "Using Github on Ubuntu"
From Joomla! Documentation
Latest revision as of 03:20, 5 September 2013
If you encounter problems using GitHub on an Ubuntu system these hints may help you.
Typical symptoms include Eclipse failing with unknown exceptions and this sort of thing on the command line:
error: RPC failed; result=56, HTTP code = 200 fatal: The remote end hung up unexpectedly fatal: early EOF fatal: index-pack failed
Upgrade to the latest version of Git
GitHub oftens seems to use a version of git that is higher than the version distributed in the current LTS version of Ubuntu and this can cause problems if the git protocol has changed. At the time of writing the current Ubuntu LTS is 12.04 which ships with git version 1.7.8, but GitHub requires 1.7.10 minimum. (See [1] for the latest version requirements). To upgrade git you need to add git's package repository to the package manager then upgrade. This used to be easy until Unity came along, so now you have to open a terminal window and enter the following commands:-
sudo add-apt-repository ppa:git-core/ppa sudo apt-get update sudo apt-get upgrade
(you will be prompted for your password when you enter the first line).
Use the git: protocol specifier
When cloning a repository on GitHub you would normally copy the repository name given which has either an https: or an ssh: protocol specifier. For some reason these don't work and the solution is to manually change the protocol to git:. For example, the repository URI for the Joomla CMS should be, so try using git://github.com/joomla/joomla-cms.git instead. | https://docs.joomla.org/index.php?title=Using_Github_on_Ubuntu&oldid=103198&diff=prev | 2015-05-22T16:52:32 | CC-MAIN-2015-22 | 1432207925696.30 | [] | docs.joomla.org |
Installation and Configuration Guide
Local Navigation
Specify the heap memory for Eclipse
If you work with large or numerous projects, or if you use the profiler application when you debug a BlackBerry device application, you should start Eclipse with more than the default amount of heap memory or permanent generation memory. Memory between 512Mb and 768Mb is recommended.
Next topic: Turn on application preprocessing
Previous topic: Specify the default BlackBerry JRE
Was this information helpful? Send us your comments. | http://docs.blackberry.com/zh-cn/developers/deliverables/23665/Specify_the_heap_memory_for_eclipse_655407_11.jsp | 2015-05-22T16:24:03 | CC-MAIN-2015-22 | 1432207925696.30 | [] | docs.blackberry.com |
Changes related to "CSS Class Names Frontend in Version 1.6"
← CSS Class Names Frontend in Version 1.6
This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold.
No changes during the given period matching these criteria. | https://docs.joomla.org/Special:RecentChangesLinked/CSS_Class_Names_Frontend_in_Version_1.6 | 2015-05-22T17:20:10 | CC-MAIN-2015-22 | 1432207925696.30 | [] | docs.joomla.org |
Difference between revisions of "Components Contacts Contacts"
From Joomla! Documentation
Revision as of 07:44, 9 July 2013
Contents
- 1 Overview
- 2 How to access
- 3 Description
- 4 Screenshot
- 5 Column Headers
- 6 List Filters
- 7 Toolbar
- 8 Options
- 9 Toolbar Links
- 10 Quick Tips
- 11 Related Information:
File:Help30-New-Edit-Publish-Unpublish-Archive-Checkin-Trash-Options-Help-toolbar.png>: | https://docs.joomla.org/index.php?title=Help33:Components_Contacts_Contacts&diff=101484&oldid=101483 | 2015-05-22T16:47:59 | CC-MAIN-2015-22 | 1432207925696.30 | [] | docs.joomla.org |
Difference between revisions of "Joomla! Extension Directory FAQs"
From Joomla! Documentation
Revision as of 09:33, 25 November 2012
Contents
- 1 Extensions' submission, approval and rejection
- 2 Reviews
- 3 Categorization
-
-.
Community Leadership Team Liaison
JED Team Manager
JED Editors
VEL/JSST/J!People Liaison
Technical Advisory
Contacting the team
The editors can be reached by submitting a support ticket on the JED Help Desk. Support is no longer provided via email. | https://docs.joomla.org/index.php?title=Joomla!_Extension_Directory_FAQs&diff=77740&oldid=27192 | 2015-05-22T16:34:23 | CC-MAIN-2015-22 | 1432207925696.30 | [] | docs.joomla.org |
Information for "Mrs.siam" Basic information Display titleUser:Mrs.siam Default sort keyMrs.siam Page length (in bytes)184 Page ID22543rs.siam (Talk | contribs) Date of page creation15:22, 29 September 2011 Latest editorMrs.siam (Talk | contribs) Date of latest edit15:22, 29 September 2011 Total number of edits1 Total number of distinct authors1 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Retrieved from ‘’ | https://docs.joomla.org/index.php?title=User:Mrs.siam&action=info | 2015-05-22T17:07:22 | CC-MAIN-2015-22 | 1432207925696.30 | [] | docs.joomla.org |
. must match the package declaration. For example, the following class:
must be located in the following directory: [mySourceDirectory]/com/mycompany/mypackage/MyClass.java. Otherwise you would get an error like this
Version 1.4 (105 issues)
Version 1.3 (43 issues)
Version 1.2 (16 issues)
Version 1.1 (15 issues) | http://docs.codehaus.org/pages/viewpage.action?pageId=233053517 | 2015-05-22T16:05:54 | CC-MAIN-2015-22 | 1432207925696.30 | [array(['/s/en_GB/5510/701ab0bfc8a95d65a5559a923f8ed8badd272d36.15/_/images/icons/wait.gif',
None], dtype=object)
array(['/s/en_GB/5510/701ab0bfc8a95d65a5559a923f8ed8badd272d36.15/_/images/icons/wait.gif',
None], dtype=object)
array(['/s/en_GB/5510/701ab0bfc8a95d65a5559a923f8ed8badd272d36.15/_/images/icons/wait.gif',
None], dtype=object)
array(['/s/en_GB/5510/701ab0bfc8a95d65a5559a923f8ed8badd272d36.15/_/images/icons/wait.gif',
None], dtype=object) ] | docs.codehaus.org |
2.4.1 (2010-07-23)
Overview
- Fixed a security issue where logged-in CMS authors were allowed to rename files with harmful extensions in the "Files & Images" section
- Improved installer security by disallowing re-installation when a configuration file is already present.
- Installing in "live mode" instead of "dev mode" by default, and avoid setting certain domains as "dev mode" by default. This fixes an issue where attackers were able to force a site into "dev mode" by spoofing the domain name on certain server configurations.
- Fixed password encryption when saving members through the "Add Member" dialog in the "Security" admin. The saving process was disregarding password encyrption and saving them as plaintext (issue was introduced in 2.4.0)
- Fixed potential information disclosure on misconfigured servers by disallowing direct execution of *.php files in "sapphire", "cms" and "mysite" folders. If PHP was configured to show errors on screen (development setting), attackers could find out server paths and other environment information.
- Allow CMS authors to set their own localized date and time formats, independently from the defaults set through their interface language.
- More useable date picker (jQuery UI) for date form fields (both in the CMS and in website forms)
- Better URL "transliteration" of special characters like Umlauts or Macrons (Example title: "Brötchen für alle!", URL in 2.4.0: "brtchen-fr-alle", URL in 2.4.1: "broetchen-fuer-alle")
- Better batch editing of comments in the admin interface (e.g. marking multiple comments as "spam")
- More sophisticated access control for decorators on page types (tri-state permissions checks: allow, deny, ignore).
Upgrading
See API Changes.
Security: File->setName() and File->Filename handling
Setting properties on File and Image are not reflected on the filesystem until write() is called. This was a necessary change to fix a security vulnerability around File->setName() and file extension validation. This vulnerability requires a user to be logged-in to the CMS (see #5693).
This means that CMS users with access to "Files & Images" can no longer rename uploaded files to invalid extensions in 2.4.1. In SilverStripe 2.3.8, this restriction only applies when AssetAdmin::$apply_restrictions_to_admin is set to TRUE.
Security: Installation in "live mode" by default
SilverStripe used to allow setting the environment type ("dev mode", "test mode" or "live mode") from within the installer, through Director::set_dev_servers(), Director::set_test_servers() and Director::set_live_servers().
On webservers with direct IP to domain mapping (e.g. no VirtualHost directives in Apache), it is possible to spoof domain information in HTTP requests. This can lead to "live" environments being set to "dev" mode, allowing administrative actions like dev/build without access control. Note: The CMS is still secured through login in "dev mode".
We recommend setting environment types through a _ss_environment.php file instead:
<?php define('SS_ENVIRONMENT_TYPE', 'dev'); // ...
To put a "live" or "test" environment into "dev mode" temporarily (when logged in as an administrator), you can append ?isDev=1 to any SilverStripe URL. This should give you more information than the common "Website Error" that is shown when the website is in "live mode".
IMPORTANT: If you have an existing installation, we advise to remove any Director::set_dev_servers() directives from your mysite/_config.php.
Security: Disallow direct execution of *.php files
The only PHP file that should be executable through the webserver is sapphire/main.php, our main bootstrapper which kicks of URL routing. All other PHP files in SilverStripe core and modules are included by this bootstrapper, and don't need direct access through a URL.
On misconfigured webservers, accessing these files directly through URL can lead to information disclosure through PHP error messages. The production configuration recommended by php.net will fix this issue:
display_errors = 0
For additional protection, we now include .htaccess files in all SilverStripe core folders disallowing access to *.php files. Note: This only applies to webservers that understand the .htaccess format, mainly Apache.
''Important'': Consider copying mysite/.htaccess to any other SilverStripe modules and folders you might have created in your own project.
Security: New members might be saved without password encryption
Fixed password encryption when saving members through the "Add Member" dialog in the "Security" admin. The saving process was disregarding password encyrption and saving them as plaintext (#5772). The issue was introduced in 2.4.0 - if you have created any new members through "Add Member" since then (not the inline member table), please re-encrypt all existing passwords using this task:
Date/Time format handling in CMS
Classes like DateField, TimeField and DatetimeField are now aware of member-specific formats which can be set in admin/myprofile (linked on the lower right footer in the CMS interface). See i18n for more details.
Example: Setting German date formats in mysite/_config.php:
i18n::set_locale('de_DE'); i18n::set_date_format('dd.MM.YYYY'); i18n::set_time_format('HH:mm');
Please note that these form fields use ISO date format, not PHP's built-in date().
To set the locale and date/time formats for all existing members, use the following SQL (adjust to your preferred formats):
UPDATE `Member` SET `Locale` = 'de_DE', `DateFormat` = 'dd.MM.YYYY', `TimeFormat` = 'HH:mm';
Changed permission checks for decorators on DataObject->can*()
Access checks in the SiteTree class can have their access checks extended, for example to influence SiteTree->canEdit(). In 2.4.0, it was only possible to explicitly deny an action by returning FALSE, returning TRUE wouldn't have any effect. The new behaviour has three states:
- FALSE: Disallow this permission, regardless of what other decorators say
- TRUE: Allow this permission, as long as no other decorators return false
- NULL: Don't affect the outcome
To clarify: Leaving existing decorators unchanged might mean that you allow actions that were previously denied (See r104669).
// In mysite/_config.php :::php Object::add_extension('SiteTree', 'MyDecorator'); // 2.4.0 :::php class MyDecorator extends DataObjectDecorator { function canEdit($member) { if(Permission::checkMember($member, 'MYPERMISSION')) { return true; } else { return false; } } } // 2.4.1 :::php class MyDecorator extends DataObjectDecorator { function canEdit($member) { if(Permission::checkMember($member, 'MYPERMISSION')) { return null; // Means the permission check will be ignored, instead of forced to TRUE } else { return false; } } }
Removed image editor sourcecode
This feature was disabled for a while, and has now been removed from the source tree as well. Please please use thirdparty modules instead, e.g. "silverstripe-pixlr" (r104987).
URL Transliteration
Non-ASCII characters like macrons or umlauts URLs are now transliterated. This means that special characters are replaced with their ASCII equivalents rather than just removed. This does not affect existing URLs, but will impact existing pages when their title is changed.
Title: "Brötchen für alle!" URL in 2.4.0: "brtchen-fr-alle" URL in 2.4.1: "broetchen-fuer-alle"
Removed Classes
- AutocompleteTextField
Changelog
Features and Enhancements
- [rev:108024] Show a warning inside the the CMS if you've neglected to delete install.php
- [rev:108012] added getter to get array back out of an !ArrayData instance. MINOR: updated docblocks in !ArrayData
- [rev:107877] Added Latvian (Latvia) translation to sapphire (thanks Kristaps and Andris!)
- [rev:107875] Added Latvian (Latvia) translation to cms (thanks Kristaps and Andris!)
- [rev:107867] Allowing custom messages and permission codes in !BasicAuth::protect_entire_site()
- [rev:107867] Making $permissionCode argument optional for !BasicAuth::requireLogin(). If not set the logic only checks for a valid account (but no group memberships)
- [rev:107867] Using SS_HTTPResponse_Exception instead of header()/die() in !BasicAuth::requireLogin() to make it more testable
- [rev:107810] Added class to time icon in !TimeField so it can be styled
- [rev:107443] html2raw now properly replace strong tag with asterix #5494
- [rev:107438] Using jQuery UI datepicker in !DateField and !DatetimeField instead of outdated DHTML calendar.js (fixes #5397)
- [rev:107438] Abstracted optional !DateField->setConfig('showcalendar') logic to !DateField_View_JQuery
- [rev:107434] allow adding a new a field to !ArrayData
- [rev:107429] Added documentation and changed static names
- [rev:107426] Added static to set regeneration of default pages (ticket #5633)
- [rev:107415] Added Security::$force_database_is_ready to mock database_is_ready() state
- [rev:107415] Added permission check exception in !TaskRunner and !DatabaseAdmin if !SapphireTest::is_running_test() returns TRUE (necessary for !DevelopmentAdminTest)
- [rev:107380] Use array_combine() instead of custom logic for !ArrayLib::valuekey() (thanks paradigmincarnate!)
- [rev:107365] Member_!DatetimeOptionsetField toggle text is now translatable
- [rev:107334] #5352 Translatable entities for help text in Member_!DatetimeOptionsetField::getFormattingHelpText()
- [rev:107327] #5352 CMS now uses the user's preferred date and time formatting in !DateField and !TimeField
- [rev:107326] #5352 Decouple date display from i18n locales, users now have access to change their date and time formats in Member::getCMSFields() using Member_!DatetimeOptionsetField field
- [rev:107094] abstracted protocol detection out to Director::protocol() #5450
- [rev:107091] in referencing a file in combine_files() it should fall back to standard requirement tags if combining has been disabled eg dev mode
- [rev:107088] throw user error when not passing correctly formatted array rather than simply passing
- [rev:107086] added setDisabled() to set !DropdownField::$disabled
- [rev:106877] Added !TestRunner::$coverage_filter_dirs to exclude certain directories from PHPUnit test coverage reports
- [rev:106705] Calling Image->deleteFormattedImages() in Image->onBeforeWrite() (#5423)
- [rev:106200] added prefix and suffix support to !ContextSummary
- [rev:106194] Prevent image search queries all images in the site initially when the page is loaded
- [rev:106178] Enable switch between legacy image search and new version
- [rev:106118] added setRows() and setColumns() to customize the size of the textarea field outside of the controller
- [rev:105890] Added method for $this->request->latestParam() backwards compatibility with Director::urlParam()
- [rev:105732] Ability to hide form by className or for the whole !ModelAdmin
- [rev:105712] Added !MySQLDatabaseConfigurationHelper::getDatabaseVersion() which abstracts the version number away from the version check the installer requires
- [rev:105275] Preserve sort options in pagination links in !TableListField
- [rev:105271] 'Select all' and 'Select none' checkboxes for !CommentTableField for easier batch handling of comments, improved its styling in !CommentAdmin
- [rev:105269] Showing 20 comments in tabular view for !CommentAdmin (and making the setting configurable via !CommentAdmin::set_comments_per_page())
- [rev:105268] Abbreviating comment text display in !CommentAdmin to first 150 characters
- [rev:105266] Allowing batch checkbox selection of !TableListField rows with !TableListField->Markable and !TableListField->addSelectOptions()
- [rev:105126] Added CSSContentParser->getByXpath()
- [rev:105028] Added variable for the server configuration file so the config-form can display it for the installation
- [rev:104968] Added !PageComment->canView()/canEdit()/canDelete(), and using these permissions in !PageCommentInterface. Caution: canCreate() actions are still determined by !PageCommentInterface::$comments_require_login/$comments_require_permission
- [rev:104935] added Month function for consistency
- [rev:104827] added plugins to i18n to support modules that provide custom translations.
- [rev:104707] Installer now supports requireDatabaseVersion() on each database configuration helper implementation, e.g. !MySQLDatabaseConfigurationHelper. If it's not defined, the test is skipped.
- [rev:104706] Added !MySQLDatabaseConfigurationHelper::requireDatabaseVersion() to check whether the connected instance is using version 5.0+
- [rev:104671] Macrons, umlauts, etc, are now transliterated when inserted into URLS. API CHANGE: Added Transliterator class, which uses iconv() or strtr() to convert characters with diacritical marks to their ASCII equivalents. API CHANGE: Added Extension hook updateURLSegment for !SiteeTree.
- [rev:104515] initial commit
- [rev:104232] Add 'Given I load the fixture file "app/tests/xyz.yml"' step to salad
- [rev:104231] Add dev/tests/sessionloadyml to load a yml fixture into an existing test session
- [rev:104162] Added cs_CZ javascript translations (#5540, thanks Pike)
API Changes
- [rev:107439] Using !FieldHolder() instead of Field() for subfields in !DatetimeField->!FieldHolder(), in order to get configuraton settings for javascript !DateField
- [rev:107273] Don't reflect changes in File and Folder property setters on filesystem before write() is called, to ensure that validate() applies in all cases. This fixes a problem where File->setName() would circumvent restrictions in File::$allowed_extensions (fixes #5693)
- [rev:107273] Removed File->resetFilename(), use File->updateFilesystem() to update the filesystem, and File->getRelativePath() to just update the "Filename" property without any filesystem changes (emulating the old $renamePhysicalFile method argument in resetFilename())
- [rev:107273] Removed File->autosetFilename(), please set the "Filename" property via File->getRelativePath()
- [rev:107268] Deprecated File->getLinkedURL()
- [rev:107054] Deprecated !AutocompleteTextField, use third-party solutions
- [rev:106217] moved Group::addToGroupByName to $member->addToGroupByCode.
- [rev:105756] refactored methods in session to use coding conventions
- [rev:104987] Removed !ImageEditor functionality, please use thirdparty modules, e.g. "silverstripe-pixlr" ()
- [rev:104923] Added interface method !DatabaseConfigurationHelper::requireDatabaseVersion(), all database helpers that implement !DatabaseConfigurationHelper must now have this method, which as of now is MySQL, PostgreSQL, SQL Server and SQLite
- [rev:104673] Added !RsyncMultiHostPublisher::set_excluded_folders().
- [rev:104669] Moved site tree permission extension to a 3-state system (true, false, null, where null means "no effect"))
- [rev:108032] Fixed CLI installation.
- [rev:108031] Don't set any dev servers by default, host-based dev-server selection is unreliable.
- [rev:108030] Don't allow reinstalling without first making the user manually delete mysite/_config.php
- [rev:108029] Don't allow direct access to PHP files in mysite module.
- [rev:108028] Don't allow direct access to PHP files in cms module.
- [rev:108027] Don't have any host-based dev servers set by default.
- [rev:108026] Don't allow reinstalling without first making the user manually delete mysite/_config.php
- [rev:108023] Don't allow direct access to PHP files in sapphire module, except for main.php and static-main.php
- [rev:108001] #5833 Duplicate IDs when two similar date formats in Member_!DatetimeOptionsetField containing different delimiters (e.g / and .) replaced to an empty string
- [rev:107940] tests now pass when the locale is set to something other than 'en_US' in the mysite's _config.php file
- [rev:107831] dev/build always reporting index change because of a whitespace in the index column names
- [rev:107812] Styling fixes for !DateField/!TimeField/!DatetimeField in the CMS
- [rev:107811] Added a clearing div after the date and time fields, not the best way of doing it but the only way as the overflow css trick for clearing fields doesn't work with the time dropdown
- [rev:107789] Fixed !DateField->validate() with keyed, but empty array values
- [rev:107786] Using actual date format settings in !DateField/!TimeField->validate() messages
- [rev:107785] Limit 'showcalendar' javascript option to !DateField instances (rather than applying to all available)
- [rev:107585] fixed inclusion of environment file when document root is the web root
- [rev:107539] Case insensitive extension checks in File::validate() (fixes #5781, thanks simon_w)
- [rev:107537] Remove dummy entry created by Versioned if record is first written to Live stage (fixes #5596, thanks muzdowski)
- [rev:107532] Fixed Member->!PasswordEncryption defaults when writing new Member without setting a password. Fixes critical issue with !MemberTableField saving in admin/security, where new members are stored with a cleartext password by default instead of using the default SHA1 (see #5772)
- [rev:107441] Allowing !DatetimeField->saveInto() to save a partial array notation with missing 'time' value
- [rev:107428] Added quotes for postgres
- [rev:107423] Only highlight strings more than 2 characters long. #4949
- [rev:107417] Reverted 107414, wrong patch
- [rev:107415] Allowing dev/build in "live" mode when Security::database_is_ready() returns FALSE (typically happens when an existing !SilverStripe project is upgraded and database columns in Member/Permission/Group have been added) (fixes #4957)
- [rev:107414] TableListField headings i18n translation (ticket #5742)
- [rev:107390] Added Locale hidden field to HTMLEditorField->!LinkForm() in order to show correct context in "page on the site" dropdown (fixes #5743)
- [rev:107369] Fixed spelling error of $databaseConfig in cli-script.php causing database configuration to not load (thanks aimcom!)
- [rev:107116] Undo commit to wrong place
- [rev:107115] Undo incorrect commit
- [rev:107095] check the $removeAll var before removing cache files. PATCH via ajshort (#5672)
- [rev:107090] prevented HTTPRequest->shift() throwing notices when shifting multiple elements. APICHANGE: SS_HTTPRequest->shift($multiple) no longer returns an array of size $multiple spaced with nulls, it returns an array up to the size of $multiple.
- [rev:107089] fixed notice level errors getting through
- [rev:106867] Making status description in Debug::friendlyError() compatible to HTTP 1.1 spec (removing any markup and newlines)
- [rev:106777] Re-enabling theme in !ErrorPage->doPublish() (it's usually disabled in the publication context through !LeftAndMain->init())
- [rev:106755] Stricter checking that a relation exists on !ComplexTableField::saveComplexTableField()
- [rev:106671] Fixed !ImageField->!EditFileForm() to list subclasses of Image in tree dropdown (fixes #5708, thanks keeny)
- [rev:106666] Prevent !DateField->performReadonlyTransformation() from segfaulting on PHP 5.2 due to broken __toString() casting (fixes #5713, thanks charden)
- [rev:106360] re-enable broken link notification using !BackLinkTracking() (this was broken since r101127
- [rev:106351] Apply AJShort's patch to fix !SiteConfig (trac 5671)
- [rev:106225] Checking for the same combined filename in Requirements::combine_files() to avoid irrelevant error messages
- [rev:106205] updated tests for Text
- [rev:106183] fix query error when image search doesn't use legacy search
- [rev:106154] if running in cli do not output html tags when rebuilding the db
- [rev:106122] Fixed caching of homepage.
- [rev:106121] Open help in a new tab.
- [rev:106120] Replaced Versioned's unique index definition with an array syntax.
- [rev:106096] Setting 'ID' field on CMSMain->!RootForm() so it can work with formfields that require it (fixes #5671, thanks ajshort)
- [rev:106086] image search was not honouring the selected folder, so could only search in root folder
- [rev:106082] Fixed !SiteTree::!IsModifiedOnStage() for an edge-case that was identified when deleteFromStage() stopped manipulating the current record.
- [rev:106080] Don't let deleteFromStage() kill the ID of the original record.
- [rev:106079] Add a unique index to !SiteTree_versions.RecordID+Version. Fix saving methods to support this.
- [rev:106078] Throw an exception if you try an delete an unsaved or already-deleted record
- [rev:106071] MySQLDatabaseConfigurationHelper::getVersion() will fallback to trying to get the version using a query if mysql_get_server_info() returns nothing
- [rev:105907] fixed phpunit directive
- [rev:105903] reverted revision 105890 to fix build
- [rev:105889] invalid use of @covers annotation
- [rev:105876] TableListField_Item::!SelectOptionClasses() can not use it parent protected variable.
- [rev:105875] rollback r105858 which introducesa bug
- [rev:105872] updated select options classes to work with the dataobjectset returned by selectoptions rather than the array previously
- [rev:105868] fixed select all link using incorrect function
- [rev:105858] TableListField_Item::!SelectOptionClasses() can use it parent protected variable.
- [rev:105833] fixed incorrect include path
- [rev:105732] validate file in import from CSV form
- [rev:105726] If database version can't be determined, just use the database adapter class
- [rev:105711] Install now supports sending database version if available from the helper
- [rev:105705] ss2stat URL not generated correctly (has NULL values)
- [rev:105668] Moved !SiteTree->ParentID property to Hierarchy extension (fixes #5638)
- [rev:105667] More specific regex in Requirements->includeInHTML() to avoid duplicating information by matching HTML5-style
tags instead of (fixes #5640)
- [rev:105665] Can't set width or height on !MemberTableField popup (fixes #5625, thanks smurkas)
- [rev:105514] if moderation on comments is enabled then redirect the user back down to the comment section to view the message rather than trying to direct to selector which doesnt exist
- [rev:105505] avoid adding loading class to TinyMCE add link, image, flash buttons
- [rev:105468] #5349: Use TEMP_FOLDER for Zend's cache temp dir.
- [rev:105337] get_title_sql has string concat hardcoded as ||, fixed for MSSQL which uses +, fix for #5613
- [rev:105278] Stricter object type checks in !ViewableData->hasValue() and !ViewableData->XMLval(). Broke in cases when SS_HTTPResponse is returned which doesn't extend from Object, hence doesn't have an exist() method (fixes #5524, thanks hamish)
- [rev:105264] addFieldToTab segfaulting under PHP 5.2
- [rev:105225] force dateformat to en_NZ if showcalendar is enabled as calendar is compatibile with en_NZ only
- [rev:105030] Fixed correct input ID in install.js due to change in r105029
- [rev:105029] Fixed inconsistent styling of reinstall actions at the bottom of the installer, and if using IIS, warn that this will overwrite the web.config file, not .htaccess
- [rev:104995] Fixed i18nTextCollector when used with i18nEntityProvider - class manifest is now stored lowercase, which means i18n::get_owner_module() didnt work reliably
- [rev:104972] TestSession::submitForm throws proper error if form not found
- [rev:104968] Requiring CMSACCESS!CommentAdmin instead of ADMIN permissions in !PageCommentInterface and !CommentAdmin administrative actions
- [rev:104962] Fixed bug in basicauth failover to session member.
- [rev:104962] Don't use session member for test site protection feature.
- [rev:104847] catch case of plugin not returning translations for the locale
- [rev:104793] Installer now checks the database version AFTER it has determined a connection can be established, which some databases require first
- [rev:104793] Database version check failures are now a warning, so a user can install at their own risk
- [rev:104745] after reset password, the site redirect to non-exisit page (SC #1)
- [rev:104720] Fixed installation problem where version error didn't show
- [rev:104679] Make URLs lowercase
- [rev:104678] Fixed Translatable::canEdit() to suit new permission customisation scheme
- [rev:104675] Prevent !DataDifferencer from creating empty
<ins />and
<del />takes that confuse the browser.
- [rev:104672] Make !RsyncMultiHostPublisher protected; give default value.
- [rev:104670] Director::test() shouldn't break if $_SESSION isn't set.
- [rev:104666] Removed references to php5 binary in Makefile
- [rev:104608] check if a request is present before using it to prevent undefined errors
- [rev:104581] Generate stage/live links using Controller::join_links() instead of string concatenation.
- [rev:104580] Fixed Controller::join_links() handling of fragment identifiers
- [rev:104552] when using custom Member title, the join was failing - it had wrong parameters. Now changed to correctly handle the ansi sql join for all Member columns.
- [rev:104533] Fix !ModelAdmin Import hang (ticket 5569)
- [rev:104468] When finding an old page in the 404 handler, favour existing subpages over historical ones.
- [rev:104463] Fix legacy URL redirection for pre-nestedurls URLs, after it has been enabled.
- [rev:104436] Removed erroneous default config for unused templates module.
- [rev:104403] Wrong HTML syntax in !LeftAndMain.ss (fixes #5552, thanks simon_w)
Minor changes
- [rev:108246] Removed unncessary end PHP tag from cms/_config.php
-
- [rev:108049] Added warning about Director::set_dev_servers()
- [rev:108048] Documentation in CSVBulkLoader
- [rev:108025] Added test for #5662 (calling delete twice)
- [rev:108002] Fixed incorrect word "colon" with "dot"
- [rev:107878] Updated translations
- [rev:107876] Updated translations
- [rev:107838] Reverted r107831
- [rev:107789] Fixed !DateField/!TimeField validation message translation (wrong sprintf() nesting)
- [rev:107787] Fixed !TimeField validation _t() entity name
- [rev:107784] Disabled 'showcalendar' option on CMSMain->!SiteTreeFilterDateField() - it causes the CMS to load jQuery UI javascript just for this (rarely used field). To be re-enabled once we work with jQuery UI on a broader scale.
- [rev:107726] Moved class-specific documentation from doc.silverstripe.org back into class-level PHPDoc
- [rev:107725] Moved class-specific documentation from doc.silverstripe.org back into class-level PHPDoc
- [rev:107586] removed whitespace
- [rev:107525] Removed debug code in !MemberTableField
- [rev:107442] Fixed !DatetimeField display in cms
- [rev:107442] Removed obsolete .calendardate styles from cms_right.css
- [rev:107440] Using Google CDN for jQuery dependencies in !FileIFrameField
- [rev:107437] Better error handling in i18n::get_language_name()
- [rev:107430] Fixed Documentation
- [rev:107415] Using Object::create() in !DevelopmentAdmin to make objects mockable
- [rev:107400] Documentation in !DataObjectSet
- [rev:107394] Changed "no_NO" locale for Norwegian into the more commonly used "nb_NO" in i18n class, meaning translations from translate.silverstripe.com can actually be selected now (fixes #5746)
- [rev:107366] Tweaking of installer text to avoid misleading information about "exists" when there's actually an error
- [rev:107307] Reverted r107305
- [rev:107305] Code formatting fix for setting Member locale in !LeftAndMain::init()
- [rev:107276] Checking that Folder::findOrMake() can create an assets/assets/ folder
- [rev:107275] Using Filesystem::makeFolder() instead of mkdir() in Folder for file operations
- [rev:107274] Better presentation of extension error message in File and !UploadValidator
- [rev:107273] Added unit tests to !FileTest and !FolderTest (some of them copied from !FileTest, to test Folder behaviour separately)
- [rev:107272] Changed !ImageTest to use fixture files located in assets/ folder, the filesystem API doesn't support Folder objects with "sapphire/..." paths, which leads to inconsistent results
- [rev:107271] Making !FileTest->setUp()/tearDown() more resilient against in-test file/folder renames
- [rev:107270] More identifiable file naming in !FileTest
- [rev:107269] Using File::get_file_extension() instead of substr() magic in File->setName()
- [rev:107269] Using exceptions instead of user_error() in File->setName()
- [rev:107268] Avoiding duplication by using existing getFullPath() in File->getAbsoluteURL()
- [rev:107267] Made File::get_file_extension() more readable, and added unit test
- [rev:107266] Removed File->setField(), doesn't have any overloaded functionality
- [rev:107265] Documentation in File and Folder class
- [rev:107214] updated generator tag URL
- [rev:107175] force exclusive connection
- [rev:107104] Added initial docs
- [rev:107030] return false rather than error out in case SS_Query:: is not a resource
- [rev:106938] mysql_fetch_row() expects resource, this will fatal if query was e.g. UPDATE when iterating a result because !MySQLQuery::nextRecord() is used by Iterator::valid() and !MySQLQuery:: is bool in this case
- [rev:106876] Making $Email available in Security_passwordsent.ss template (fixes #5737)
- [rev:106805] Added !FileTest->testValidateExtension() (related to #5693)
- [rev:106804] Documentation
- [rev:106777] Reverted r88633, it breaks
tag in static HTML for !ErrorPage->doPublish()
- [rev:106694] Removed trailing slash in BackURL, fixed error message sentence structure in !PageCommentInterface.ss (fixes #5520)
- [rev:106687] Fixed hardcoded error message in !PasswordValidator (fixes #5734)
- [rev:106687] Added !PasswordValidatorTest
- [rev:106568] Provide a default message for FIELDISREQUIRED
- [rev:106313] Correct typo in comments
- [rev:106248] Made CMSMainTest more resilient against database ID changes (Postgres doesn't have auto-increment resets across tests at the moment)
- [rev:106190] Fixed memory limit setting in !SapphireTest (regression from r106128)
- [rev:106187] Better checking of safe_mode in !MemoryLimitTest
- [rev:106180] Add comments for !ThumbnailStripField
- [rev:106156] Don't run memory limit tests in safe mode,
- [rev:106128] Preserve memory_limit between tests (for better PHP5.1 behaviour)
- [rev:106119] Added test for Database::hasTable().
- [rev:106090] Fixed test that required a separate Page table.
- [rev:106083] Removed db/build legacy wording in !DevelopmentAdmin (fixes #5676)
- [rev:106081] Added test for #5657
- [rev:105985] add text/plain to the list of accepted mime types
- [rev:105912] Better error handling in Form::__construct() (fixes #5649)
- [rev:105732] Clear DB checkbox unchecked by default
- [rev:105517] Installer should not repeat "Could not determine your database version" twice in slightly varied words
- [rev:105516] Show better message if couldn't find MySQL version in !MySQLDatabaseConfigurationHelper
- [rev:105305] More solid markup testing in !TableListFieldTest through xpath
- [rev:105297] Fixed !TableListFieldTest->testSelectOptionsRendering()
- [rev:105282] Using ASSETS_DIR and THEMES_DIR constant in Image, !ManifestBuilder, Requirements, File (fixes #5619)
- [rev:105281] Using ASSETS_DIR constant in !StaticPublisher (fixes #5619)
- [rev:105277] Translations
- [rev:105276] Translations
- [rev:105274] Reverted r105264, breaks !CompositeFieldTest, !FieldSetTest, !TranslatableTest
- [rev:105273] Updated !TableListField sublcass template to work with new !TableListField->!SelectOptions() setting
- [rev:105272] Fixed _t() call in !PageCommentInterface.ss
- [rev:105270] missing slash / from Requirements::css() parameter
- [rev:105267] Removed jquery.livequery as a Requirement from !LeftAndMain.php, its only necessary in !SecurityAdmin for !MemberImportForm.js now.
- [rev:105198] Fixed fixture location for !DbDatetimeTest
- [rev:105196] Added !DbDatetimeTest cases to sapphire (these were previously in the sqlite3 module, but they actually test core Database functionality)
- [rev:105188] Documentation
- [rev:105139] increased height of the todo text field in the cms
- [rev:105027] Checking for headers_sent() before setting cookies in Versioned::choose_site_stage() to avoid problems with URL parameters like showqueries=1 and !ContentController calling choose_site_stage() (fixes #5557)
- [rev:105011] Documentation
- [rev:105009] Documentation
- [rev:105005] Documentation
- [rev:104996] Documentation
- [rev:104993] Language master file
- [rev:104992] Removed duplicated code in i18nTextCollector, more defensive checks for get_owner_module()
- [rev:104980] Added translations for !BrokenLinksReport, !ReportAdminForm.ss, !AssetTableField.ss (fixes #5527, thanks Martimiz)
- [rev:104978] Allowing translation of "save" button in !SiteConfig->getCMSActions()
- [rev:104970] Translations in !PageCommentInterface.ss (fixes #5598, thanks Pike)
- [rev:104924] Reverted r104923, as current database releases of mssql and sqlite3 modules don't support this yet
- [rev:104883] Fixed hidden mbstring reliance in !SiteTree->generateURLSegment() (broken in r104679)
- [rev:104835] Save and restore lang state in test
- [rev:104798] Fixed !SiteTreeTest and !SiteTreePermissionsTest to work alongside subsites module (!SiteTreeSubsites changes the canEdit() behaviour)
- [rev:104796] Fixed !SiteConfigTest to work alongsite subsites module (!SiteTreeSubsites changes the canEdit() behaviour)
- [rev:104795] Documentation
- [rev:104769] Documentation
- [rev:104767] Documentation
- [rev:104733] fixed umlauts
- [rev:104711] Added !DirectorTest->testURLParam() and !DirectorTest->testURLParams()
- [rev:104710] Installing screen now has a page title called "Installing !SilverStripe..." instead of "PHP 5 is required"
- [rev:104709] Removed double returns in installer (redundant code)
- [rev:104708] Renamed checkdatabase method to checkDatabase to be consistent
- [rev:104705] Show install MySQL version at 5.0+ as 4.1 does not work properly with !SilverStripe
- [rev:104704] Tweaks to positioning of help text in installer
- [rev:104682] fixed api doc
- [rev:104636] added illustrator formats to the allowed extensions.
- [rev:104610] Documentation
- [rev:104598] Fixed wrong _t() notation in !ChangePasswordForm (broken in r103226 and r104596)
- [rev:104596] Making strings in !ContentControllerSearchExtension translatable
- [rev:104594] Defensive coding in !MigrateSiteTreeLinkingTask
- [rev:104490] Removed !ForumAdmin.js which shouldn't belong in the CMS module
- [rev:104483] Documentation
- [rev:104404] Documentation
- [rev:104402] Documentation
- [rev:104158] Documentation migrated from doc.ss.org
- [rev:104157] Migrated various API-style documentation from doc.ss.org
Other
- [rev:105057] MINOT Translation in !SiteTree (#5603, thanks Pike)
- [rev:104674] ENHANCMENT: !RsyncMultiHostPublisher also rsyncs sapphire/static-main.php.
- [rev:104668] Sake fix: look for php binary before php5, to prevent errors on CentOS and Cygwin.
- [rev:104667] Added explicit bash handler to sake
- [rev:104442] Multi-use redemption page created
./sscreatechangelog --version 2.4.1 --branch branches/2.4 --stopbranch tags/2.4.0 | http://docs.silverstripe.org/en/changelogs/2.4.1/ | 2015-05-22T16:03:24 | CC-MAIN-2015-22 | 1432207925696.30 | [] | docs.silverstripe.org |
Difference between revisions of "Replacing the logo image in the Milky Way template"
From Joomla! Documentation
Revision as of 09:59, 4 October 2009
Contents
Replacing the logo image in the Milkyway template
Since all templates have their own structure of css-Files, this chunk is special to the Milkyway template. Have a look below or at category:templates for other templates.
Upload your logo image
The new logo image should have the same size as the original joomla image.
- login into administration area
- choose Media from the Site menu
- upload your image into the opening main directory for images
- the file name will be renamed to non capital letters
- Memorize the exact spelling of your logo file name on the server!
Activate the new logo
- login into administration area
- choose Templates from the Extensions menu within the top menu bar
- click rhuk_milkyway for editing the Milkyway template (it should have been marked with a star)
- click edit CSS within the top icon bar
- mark template.css and click edit (again in the top area)
- Now you should act careful and avoid unintended changes:
- change the defined url for background
- replace line
background: url(../images/mw_joomla_logo.png) no-repeat;
- by a line like this
background: url(../../../images/your_logo.png) no-repeat;
- ATTENTION:
- mind the different numbers of directory levels, each marked by ../
- remember the exact file name of your logo
See also! | https://docs.joomla.org/index.php?title=J1.5:Replacing_the_logo_image_in_the_Milky_Way_template&diff=15837&oldid=14698 | 2015-05-22T16:09:04 | CC-MAIN-2015-22 | 1432207925696.30 | [] | docs.joomla.org |
JUpdate:: getStackLocation
From Joomla! Documentation
Revision as of 21::_getStackLocation
Description
Gets the reference to the current direct parent.
Description:JUpdate:: getStackLocation [Edit Descripton]
protected function _getStackLocation ()
- Returns object
- Defined on line 47 of libraries/joomla/updater/update.php
See also
JUpdate::_getStackLocation source code on BitBucket
Class JUpdate
Subpackage Updater
- Other versions of JUpdate::_getStackLocation
SeeAlso:JUpdate:: getStackLocation [Edit See Also]
User contributed notes
<CodeExamplesForm /> | https://docs.joomla.org/index.php?title=API17:JUpdate::_getStackLocation&direction=next&oldid=57961 | 2015-05-22T16:05:43 | CC-MAIN-2015-22 | 1432207925696.30 | [] | docs.joomla.org |
Menus Menu Item Finder Search
From Joomla! Documentation
Revision as of 10:59, 5 January 2013 by JoomlaWikiBot (Talk | contribs).> | https://docs.joomla.org/index.php?title=Help33:Menus_Menu_Item_Finder_Search&oldid=79698 | 2015-05-22T17:04:13 | CC-MAIN-2015-22 | 1432207925696.30 | [] | docs.joomla.org |
JURI/base
From Joomla! Documentation
Revision as of 05:25, 17 September 2008 by Chris Davenport (Talk | contribs)
A static method that returns the base URI of the Joomla site. If Joomla has been installed in the web server's document root then this method will return "/" for the path.
Syntax
string base( $pathonly )
where: | https://docs.joomla.org/index.php?title=JURI/base&oldid=10776 | 2015-05-22T16:48:24 | CC-MAIN-2015-22 | 1432207925696.30 | [] | docs.joomla.org |
Difference between revisions of "Uninstalling an extension"
From Joomla! Documentation
Revision as of 15:33, 27 March 2009
If you wish to uninstall an extension on your Joomla site, then follow these simple steps:
1) Select "Extensions" and then "Install / Uninstall" from the dropdown menu 2) Select the type of extension you wish to uninstall. You will have the choice between Components, Modules, Plugins, Languages and Templates. 3) Find the extension you wish to uninstall and check the checkbox to the left of the extension title. 4) In the upper-right corner of the screen, press "Uninstall". | https://docs.joomla.org/index.php?title=Uninstalling_an_extension&diff=13770&oldid=7939 | 2015-05-22T17:14:49 | CC-MAIN-2015-22 | 1432207925696.30 | [] | docs.joomla.org |
Return the Blackman window.
The Blackman window is a taper formed by using the the first three terms of a summation of cosines. It was designed to have close to the minimal leakage possible. It is close to optimal, only slightly worse than a Kaiser window.
Notes
The Blackman window is defined | http://docs.scipy.org/doc/numpy-1.4.x/reference/generated/numpy.blackman.html | 2015-05-22T16:06:13 | CC-MAIN-2015-22 | 1432207925696.30 | [] | docs.scipy.org |
Tutorial: How to develop a simple game¶
To help you getting started with creating your own games this section gives step by step instructions for how to implement a so-called Ultimatum Game.
The Ultimatum Game¶
In an Ultimatum Game the participants play in groups of two. One of them takes the role of the proposer. He gets an initial endowment and decides upon how to divide it between him and the other participant – the responder. The responder then decides whether he accepts the proposal. If he does so, the final payoffs of the participants are according to the proposal. If the responder does not accept the proposal both participants receive nothing as final payoff.
Building the game¶
To create a new game click on “new game” in the top bar in the overview mode. Type in a game name of your choice and change the availability to “private”. Thus other users cannot see “your” Ultimatum Game. Click “save”. ClassEx will automatically switch into editing mode and you can start building the game.
Assignment & Matching¶
Go to the tab “assignment & matching”. Select “role & group” in the drop-down menu “assignment” and select “2” in the drop-down menu “number of roles”. This means the participants in this game are selected into groups of two with every group consisting of one participant of each role. Leaving “partner” as the matching mode means the groups stay together over several rounds of the game.
Stage 1: Proposer’s decision¶
Go on to the tab “stage 1”. In this first stage of the game the instructions for all participants should be on the lecturer’s screen (all participants can see it) and the proposers should decide how much they want to transfer to the responder. The stage should be started by the lecturer pressing a start button. In the first stage we don’t have to think about the responder whose decisions will be designed in stage 2.
First type in a name for the stage, e. g. “UltimatumProposer”. “Late arrival” should be “possible”. That means, participants can still log in after the stage has been started by the lecturer.
Note
After writing the name of the stage into the empty field and clicking into another field you might have noticed that the page refreshed itself. Doing this it automatically saves every change you make. If you pause designing your game you can just click into another field and make sure everything is saved. Later you can get back to where you left via the overview mode. In the default screen of the overview mode the general folder will be opened. There you find your created game and can open it by clicking on “edit”.
The start button is already implemented by default in the first stage in the lecturer field on the right side. To add the instructions click on “add new element” in the lecturer field and select “text box”. Click on the little symbol “paste element” on the right side above the start button field to insert the instructions above the start button.
Note
If you want to move an element, you can do this by clicking on the scissors symbol in the element field you want to move. Doing this you cut out the element. You can insert it again in the spot you like by clicking on “paste element”.
Change the selection in the dropdown menu in the text box from “display always” to “before start”. Now insert the game instructions that should be visible for all participants. An exemple for those instructions could be: “You play with another participant here in the lecture hall. One player in each group gets 10 € at the start of the game. This person is called “the proposer” and decides how to divide those 10 € between you. Then the other person, called “the responder”, can decide to accept or reject the offer. If the responder accepts the offer, both will get this payoff. If the responder rejects, both player will get nothing.”
Note
If you want to have further options for text-editing you can press on the small notepad-icon on the left side of each text field. You can leave again from there by pressing “x” in the top lefthand corner of the box.
Now we start editing the participant’s field on the left side. An input element with one input field and some places to insert text is implemented by default. Change the “Type of input field” to “Numeric input field”. Every participant in the role of a proposer will be able to decide upon the amount he wants to keep for himself by entering it into this field. The field “variable names” next to the “Type of input field” defines the variable name. This is the internal name of the variable and will not be visible for the participant. Type in e. g. “keep”. Only the proposers should have an input field in this stage while the responder has to wait for the offer. Therefore change the field “for all roles” to “only role 1”. In the text field you can insert a description of the input value being visible for the participant. Type in e. g. “For me:”. Below the text field you can edit some more settings of the input field. The “Minimum” should be “0” because the responder cannot keep a negative amount. The “Maximum” should be equal to the initial endowment because the proposer cannot keep more than this for himself.
Note
At this point it could be helpful to add fields calculating the amount of money a proposer sends to the responder. These fields could dynamicly display these information to the proposer. When the proposer would enter a “5” in his “send”-field, classEx could display “The amount you send to the responder is ‘5’”. Although this calculation is easy for any participant, it ensures they are always aware of the game mechanism and do not make a calculation mistake. You will learn e.g. how to insert such calculation fields in the chapters on creating own games following this tutorial.
The initial endowment could be a parameter you maybe will want to change in the future. You should only need to change one value to do so. Therefore you should define the initial endowment as a general parameter of the game. To do so you need to add a “program code (subjects)” element (participants field -> add new element -> program code (subjects)). Click on the little paste-symbol above the input field to insert the program field. In the program field you now can define the initial endowment as a variable. Type in “$endow = 10;” for an initial endowment of 10.
Now you can use this variable to define the upper threshold of the amount the proposer can keep (if you deviate from “10” remember to also adopt your instructions in the lecturer text box). Type “$endow;” in to the “Maximum” field of your Numeric Input Field. Type in “0” into “decimal place” to not allow for decimal numbers and define the “unit” e. g. as “€”.
For clarification you should add a more general explanation of the stage for the proposers that is displayed above the input element. Click on “add new element” in the participants field and select “text box”. Click on paste between the “program code (subject)” and the input element. Again change the field “for all roles” to “only role 1”. Then insert the instructions, e. g. “You decide how to divide $endow; € between you and participant 2. Participant 2 decides, if he accepts or rejects. If he rejects, both of you get nothing. If participant 2 accepts, payoffs will be according to your proposal.”
Note
What have we done by now? We are done with assignment & matching and the first stage. So after logging in participants are assigned to groups and roles. The instructions get displayed to both the proposer and the responder. We have a start button and everything prepared for the proposer to participate in the game. In the next two steps we will model the decision of the responder, displaying the results and ending the game.
Stage 2: Responder’s decision¶
In the second stage the responders are informed about the proposals and they decide whether to accept or to reject.
Also the second stage is already provided by default. Type in a name for stage 2 (e. g. “UltimatumResponder”). “Late arrival” should be “not possible” in this stage, because partners are already matched and newcomers cannot be integrated once the first stage has been played. The first thing we do is to inform the responder about the proposal. To do so you need a “program code (subjects)” field (-> add new element -> program code (subjects)). Change “for all roles” to “only role 2”. Type in the following code:
$keep = $findVariablePartner("keep",$round); $receive=$endow-$keep;
The first line defines a variable “keep” and assigns to it the value of the participant’s matching partner’s “keep”-variable. The second line calculates how much the receiver gets and assigns the value to a variable “receive”. Now you can use both new variables to inform the responder about the proposal made to him. Therefore we need to create a new text box in the participants field below the program code field (-> add new element -> text box -> paste element). Change “for all roles” to “only role 2” in the text box and type in the following instructions:
Participant 1 has decided to split $endow; as follows: $keep; for participant 1 and $receive; for you. You can accept the proposal or reject it. If you reject it, both get nothing.
Now you need an input element via which the responder can accept or reject the proposal. Insert an input element beneath the text box and insert a “new input field” within the input element. As the responder can only decide between “Accept” and “Reject” we change the type of input field to “Buttons (Single Choice)”. Set the variable name to e. g. “accepted” and define the input field as visible for “only role 2”. Write a text into the text box that should appear above the “accept” and “reject” button (e. g. “Your decision”). To insert these buttons type “2” into the text field next to “add new possible answer” and click on the little plus left of it. Insert “Accept” and “Reject” into the new text fields. The values assigned to the decision buttons are very important. Choose the value “1” for the accept button and the value “0” for the reject button.
The second stage should start for a responder automatically as soon as “his” proposer has sent a proposal. Therefore delete the “results” field in the lecturer field by clicking on the rubbish bin icon in the top right corner of the field. Then insert an “automatic start” via “add new element”. Change the mode to “wait for others”. To display how many proposers and responders have already made their decisions on the lecturer’s screen, set the counter to “display” and the count to “by role”.
Stage 3: Results¶
When the responders have accepted or rejected the proposals you can display the results in a third stage. Add a new stage and name it e. g. “Results”. “Late arrival” again is “Not possible”. The two fields next to the “late arrival” field define how often stages get repeated and where to jump after finishing this stage. Using this you can define the number of rounds you want to play. Choose “back to stage 1” and e. g. “2x” (for repeating the the stages two times).
For both participants the payoff depends on whether the responder accepted the proposal or not. You have to distinguish these two cases. To do so you use two program code (subjects) fields in the participant field. Insert them above the default text box. You need one for “only role 1” and one for “only role 2”.
The program for role 1 is:
$accepted=$findVariablePartner("accepted"); $payoff=$keep*$accepted; if($accepted==0) { $text="Participant 2 has rejected your proposal."; } else { $text="Participant 2 has accepted your proposal."; }
The program for role 2 is:
$payoff=$receive*$accepted; if($accepted==0) { $text="You have rejected the proposal."; } else { $text="You have accepted the proposal."; }
Afterwards insert two text boxes in the participants field. Again one for role 1 and one for role 2. In these text boxes you inform the participants about their final payoff. For role 1 the text could be:
You proposed to keep $keep; € from the initial endowment $endow; €. $text; Your payoff is $payoff; €.
For role 2 the text could be:
Participant 1 has proposed to split $endow; as follows: $keep; € for him and $receive; € for you. $text; Your payoff is $payoff; €.
In the lecturer field you can show the results. Delete the start button that is implemented in a new stage by default. Then add a results matrix element. Change “decision role 1” from “stage 2 # 1” to “stage 1 # 1”. Change “count” to “by role” and “display results” to “by round”.
Testing the game¶
Congratulations! You just finished designing your first own game!
To test the game, change into lecture mode. If you already started another game, your self-designed game won’t open automatically. In this case you can just “close” the running game by clicking on “select game” in the top bar and picking your “own” game. You can test the game on your own PC without other devices by clicking on “new test participant” in the top bar of the lecture mode. This opens a participant screen in a new tab. You will see the game just as your participants will see it when actually playing the game. You can open as many screens as you want, where each screen represents a participant. After opening enough test participant screens click “Start” in the lecturer screen. Then you can go through the game with all test participants. | https://classex-doc.readthedocs.io/en/latest/Tutorial.html | 2021-10-16T11:13:53 | CC-MAIN-2021-43 | 1634323584567.81 | [] | classex-doc.readthedocs.io |
Date: Wed, 19 Jul 2017 19:03:39 +0200 From: fml <[email protected]> To: Valeri Galtsev <[email protected]> Cc: [email protected] Subject: Re: Can I use FreeBSD as a desktop system? Message-ID: <[email protected]> In-Reply-To: <[email protected]>>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help. Thanks, f.
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=252643+0+/usr/local/www/mailindex/archive/2017/freebsd-questions/20170723.freebsd-questions | 2021-10-16T13:15:27 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.freebsd.org |
This registration method does not support personal accounts (gmail.com, live.com, hotmail.com) with ARM. You must authenticate with a corporate email, connected with your Azure AD Tenant. To use a personal account, connect to Azure Resource Manager using your own Active Directory Application.
Overview
This page walks you through the steps to connect your Azure Resource Manager account with RightScale for management purposes using a RightScale-owned Service Principal. This approach is simple and puts the responsibility for key rotation with RightScale, but it also limits the control that you have on the privileges granted to the Service Principal. An alternative method to connecting for management purposes is to connect to Azure Resource Manager using your own Active Directory Application.
If you are part of the Azure CSP program and wish to connect your partner data to Optima for cost reporting purposes, see Connect Azure CSP to Optima for Cost Reporting.
If you wish to connect your Azure Enterprise Agreement to Optima for cost reporting purposes, see Connect Azure Enterprise Agreement to Optima for Cost Reporting.
Prerequisites
- You must have a Azure Resource Manager subscription to register with RightScale
- You must be
adminor
enterprise_manageron the RightScale account
- For the initial creation of the
RightScaleService Principal, the Azure AD user being used to register with RightScale must be a Member of the Active Directory Tenant containing the subscription (not a Guest) and have the rights to create Enterprise Applications. By default, Members of the tenant have this ability, but that right can be revoked.
- The Azure AD user being used to register the Subscription with RightScale must have the Owner role on the Subscription. If you temporarily add permissions to a user to complete registration, you may revoke those permissions after the subscription is registered, as RightScale will only use the
RightScaleService Principal for authentication.
Alternatively, you can create connect to Azure Resource Manager using your own Active Directory Application.
Steps. | https://docs.rightscale.com/clouds/azure_resource_manager/getting_started/register.html | 2021-10-16T12:51:49 | CC-MAIN-2021-43 | 1634323584567.81 | [array(['/img/arm_multiple_tenants_no_subscription_available.jpg',
'arm_multiple_tenants_no_subscription_available.jpg'], dtype=object)
array(['/img/arm-view-all-arm-regions.png',
'arm-view-all-arm-regions.png'], dtype=object)
array(['/img/arm-ad-application-arm-portal.png',
'arm-ad-application-arm-portal.png'], dtype=object)
array(['/img/arm-subscription-users2.png', 'arm-subscription-users2.png'],
dtype=object) ] | docs.rightscale.com |
Because the entire purpose of transform groups is to pass UDTs between the client and the database platform transparently, this section describes some of the important points relative to how client software deals with UDTs. To be accurate, UDTs are not passed to the client. They are transformed into a predefined type and data having that predefined type is passed to the client.
This information is by no means comprehensive. To get the full picture of how various client utilities deal with UDTs, consult the appropriate Teradata Tools and Utilities user documentation.
The first thing to understand is that the Teradata Tools and Utilities are not UDT-aware. Client software does not see UDT values, but values having a predefined data type representing a UDT on the Teradata platform. How a platform UDT is transformed to a client predefined type is entirely at the discretion of the transform developer.
- For distinct UDTs, you must specify the underlying predefined (intrinsic) data. The database handles all conversions to and from the UDT by means of transform groups. The Teradata Tools and Utilities do not know that a value will eventually reside in, or that it came from, a distinct UDT.
- For structured UDTs, you have two options:
- Specify the client representation of the UDT as a single field with a single predefined data type.For example, suppose you have a structured UDT called circle that is constructed from 3 attributes, each having the predefined FLOAT data type. The attributes are as follows:
- The x coordinate of the center of the circle.
- The y coordinate of the center of the circle.
- The radius of the circle.
Pursuing the first option, you would combine the attributes into a single field and represent the structured UDT on the client as circle BYTE(24).
The database handles the conversion to and from the external and internal UDT representations by means of user-defined transforms.
- Specify the client representation of the UDT as 3 separate fields.Again using the circle example, you would specify 3 separate fields, each having the predefined FLOAT data type, for example:
- x_coord FLOAT
- y_coord FLOAT
- radius FLOAT
You would then construct the structured circle UDT on the Teradata platform side using the NEW expression in the VALUES clause of the INSERT, or the SET clause of the UPDATE. For more information, see Teradata Vantage™ - SQL Data Manipulation Language, B035-1146.
For example, for circle UDT, the syntax is the following:
NEW circle(:x,:y,:r)
See the appropriate Teradata Tools and Utilities documents for details. | https://docs.teradata.com/r/6p0kNi~2vaaSHEbbM_bP1g/O2QliIcy8DaKmeackVZ0EA | 2021-10-16T12:51:58 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.teradata.com |
Capture Debug Information
You can use this feature to run Debug on Discovery, Poller, SNMP, Alerts. This output information could be helpful for you in troubleshooting a device or when requesting help.
This feature can be found by going to the device that you are
troubleshooting in the webui, clicking on the settings icon menu on
far right and selecting Capture.
Discovery
Discovery will run and output debug information.
Poller
Poller will run and output debug information.
SNMP
SNMP will run SNMP Bulk Walk on the device and output the information.
Alerts
Alerts Capture is handy when you are creating alerts and need to see if your alert rule matches.
| https://docs.librenms.org/Support/Device-Troubleshooting/ | 2021-10-16T12:30:17 | CC-MAIN-2021-43 | 1634323584567.81 | [array(['/img/capture-debug-icon.png', 'Capture-Debug-Icon'], dtype=object)
array(['/img/device-troubleshooting.png', 'device-troubleshooting'],
dtype=object) ] | docs.librenms.org |
Investigate counter metrics
Counter metrics are one of the most common metric types. A counter metric has a value that always increases when it changes, except when it is reset to zero on restart. In other words, it increases monotonically.
You use counter metrics to count things. Automobile odometers provide a simple example of a counter metric. Odometers indicate the number of miles that a car has been driven. Odometer values never go down, except when they are reset to zero.
Counter metrics tend to count events. For example, most networking metrics involve event counts, whether you are talking about website visits, network interface errors, packets sent or received, or disk operations.
Periodic and accumulating counters
There are two types of counter metrics: periodic counters and accumulating counters. The following table describes these metric types, lists the metric protocols that they are associated with, and lists the key SPL that you use to query them.
Sum up periodic counters
Because of the way that periodic counters are reset to zero each time the metrics client sends them to the Splunk platform, they are reported as a series of independent measurements. To see how these measurements work as a counter, you run a
mstats,
stats, or
tstats search that aggregates them with the
sum(x) function. Alternatively you could run a
timechart search that aggregates them with one of the
per_*(x) functions.
Get the count rate for an accumulating counter
People who track accumulating counter metrics often find the count rate over time to be a more interesting measurement than the count over time. The count rate tells you when metric activity is speeding up or slowing down, and that can be significant information for some metrics.
The manner in which you determine counter rates depends mostly on the version of your Splunk platform implementation. If you are using 7.0.x or 7.1.x, you use
streamstats in conjunction with
latest(x) and
eval to return the rate of an accumulating counter. If your Splunk platform implementation is version 7.2.x or higher, you use
mstats with the
rate(x) function to get the counter rate.
The two methods of getting the counter rate return slightly different results. This happens because they compare different sets of count values.
When constructing SPL for a counter rate search, make sure that you do not mix counter metrics. If you need to report on multiple counter metrics, use the
BY clause to separate them. You should also set
name=indexerpipe processor=index_thruput to keep the focus on one specific counter metric.
Use streamstats, latest(x), and eval to return counter rate
Use
streamstats, the
latest(x) function, and
eval if your Splunk platform version is 7.0.x or 7.1.x, or if you have a scenario for which the
rate(x) function is inappropriate. You might stick to
streamstats if you can't count on having two metric data points per timespan, for example.
When you use this method, be sure to set
current=f to force the search to use the latest value from the previous timespan.
Here is an example of a counter rate search that uses
streamstats,
latest(x), and
eval for its calculations:
| mstats latest(pipeline.cumulative_hits) as curr_hits where index=_metrics
name=indexerpipe processor=index_thruput span=1s
| streamstats current=f latest(curr_hits) as prev_hits
| eval delta_hits=curr_hits-prev_hits
| where NOT (delta_hits < 0)
| timechart sum(delta_hits) as sum_hits span=1h
| addinfo | eval bucket_span=info_max_time - _time
| eval bucket_span=if(bucket_span > 3600, 3600, bucket_span)
| eval rate_hits=sum_hits/bucket_span
| fields - sum_hits, bucket_span, info_max_time, info_min_time, info_search_time, info_sid
And here is an example of the line chart returned by this search.
Walkthrough
Here is a step-by-step walkthrough of that example search.
- Use a combination of
mstats,
streamstats, and
evalto get the delta count on each second.
| mstats latest(pipeline.cumulative_hits) as curr_hits where index=_metrics name=indexerpipe processor=index_thruput span=1s | streamstats current=f latest(curr_hits) as prev_hits | eval delta_hits=curr_hits-prev_hits | where NOT (delta_hits < 0)
Note that
streamstatsuses
current=f. This forces the search to use the latest value from the previous timespan.
- Calculate the sum of the delta counts for each hour.
| timechart sum(delta_hits) as sum_hits span=1h
- Calculate the time span of the bucket. It should be 1h, unless it is the last bucket, in which case it can be less than 1h.
| addinfo | eval bucket_span=info_max_time - _time | eval bucket_span=if(bucket_span > 3600, 3600, bucket_span)
- Lastly, calculate the rate with the following function
rate = delta_count/time_range.
| eval rate_hits=sum_hits/bucket_span | fields - sum_hits, bucket_span, info_max_time, info_min_time, info_search_time, info_sid
Use mstats with the rate(x) function to return counter rate
Use
mstats in conjunction with the
rate(x) function to determine counter rates if you are using Splunk platfom version 7.2.x or higher.
To get a proper rate measurement with
mstats and
rate(x) you need to have at least two counter events per time span in your search. The Splunk platform uses the difference between those two values to determine the actual rate. If you cannot guarantee that there will be two metric data points per timespan you might instead use the
streamstats method.
The
rate(x) function uses the following calculation to derive its value:
(
latest(<counter_field>) -
earliest(<counter_field>)) / (
latest_time(<counter_field>) -
earliest_time(<counter_field>))
See Time functions in the Search Reference for more information about these functions.
Here is an example of a counter rate search that uses
mstats and
rate(x) to get counter rates.
| mstats rate(pipeline.cumulative_hits) as rate_hits where index=_metrics name=indexerpipe processor=index_thruput span=1h
And here is an example of the line chart returned by this search..
This documentation applies to the following versions of Splunk® Enterprise: 8.1.2, 8.1.3, 8.1.4, 8.1.5, 8.1.6, 8.2.0, 8.2.1, 8.2.2, 8.1.1, 8.1.0
Feedback submitted, thanks! | https://docs.splunk.com/Documentation/Splunk/8.1.4/Metrics/CounterMetrics | 2021-10-16T13:05:00 | CC-MAIN-2021-43 | 1634323584567.81 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
Date: Sat, 25 Jan 2014 19:52:57 +0000 From: Frank Leonhardt <[email protected]> To: [email protected] Subject: Re: Why was nslookup removed from FreeBSD 10? Message-ID: <[email protected]> In-Reply-To: <[email protected]> References: <[email protected]> <[email protected]>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help
On 25/01/2014 19:37, Mark Tinka wrote: > On Saturday, January 25, 2014 09:13:08 PM Frank Leonhardt > wrote: > >> Unbelievable, but true - someone somewhere thought that >> removing nslookup from the base system was the way to >> go. >> >> Why? Can anyone shed any light on how this decision was >> made? > If you read: > > > > Under the "2.3. Userland Changes" section, you will notice: > > "BIND has been removed from the base system. > unbound(8), which is maintained by NLnet Labs, has > been imported to support local DNS resolution > functionality with DNSSEC. Note that it is not a > replacement of BIND and the latest versions of BIND > is still available in the Ports Collection. With > this change, nslookup and dig are no longer a part > of the base system. Users should instead use > host(1) and drill(1) Alternatively, nslookup and > dig can be obtained by installing dns/bind-tools > port. [r255949]" > > So install /usr/ports/dns/bind-tools and you're a happy guy. > > As to the philosophy of it all, no point arguing. Fait > accompli. > > Mark. As you and Waitman both pointed out, nslookup IS part of BIND, yet as I said in the diatribe following the question in my post, so is "host" and that's still there. Also Windoze has nslookup but doesn't include BIND. I agree there's no point arguing unless you know the rational behind what appears an arbitrary decision; hence my question. Was this simply an oversight or is there a thought-out reason for it that one can take issue with? IIRC, nslookup was present in 4.3BSD, and I'm pretty sure it existed before that. (That's BSD, not FreeBSD). Its relied on in scripts. The reason for dropping it from the base system must be pretty spectacular. FreeBSD 10.0 might be better known as FreeBSD Vista, at this rate. Regards, Frank.
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=870133+0+/usr/local/www/mailindex/archive/2014/freebsd-questions/20140126.freebsd-questions | 2021-10-16T12:44:08 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.freebsd.org |
Environments
The CodeIgniter framework allows you to set an ENVIRONMENT constant in your bootstrap index.php file. This constant can be used to pull in different configuration information. However, it can become a hassle to remember to change that value when you switch environments. FUEL has added the ability to set up an array of server host values that correspond to different environments and will automatically set the ENVIRONMENT constant value. To set your environments, edit the /fuel/application/config/environments.php file. If there is no matching environment it is setup to default to the development environment. The default can be changed in the index.php bootstrap file.
The /fuel/application/config/environments.php file by default includes the following development environments:
$environments = array( 'development' => array('localhost*', '192.*', '*.dev'), );
You can use regular expression or asterisks as a wildcards. | https://docs.getfuelcms.com/general/environments | 2021-10-16T12:45:30 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.getfuelcms.com |
Tester Module Documentation
This Tester documentation is for version 1.0.
Overview
The Tester module allows you to easily create tests and run them in the FUEL admin. To create a test, add a test folder within your applications folder. Tester will read that folder to create it's list of tests you are able to run. It will even scan other modules for test directories to include in it's list of tests to run. You can also run your tests via the command line in a terminal prompt:
php index.php tester/run fuel Fuel_cache_test.php
If you are on a Mac and having trouble where the script is outputting nothing, you may need to make sure you are calling the right php binary. For exmaple, you may need to use something like /Applications/MAMP/bin/php/php5.3.6/bin/php. Here is a thread that talks about it more: Hopefully it saves you some time too!
Some other important features to point out:
- If you have SQL that you want to include in your test, add it to a tests/sql folder and you can call it in your test class's setup method (see below)
- Test classes should always end with the suffix _test (e.g. my_app_test.php)
- Test class methods should always begin with the prefix test_
- Test database information can be set in the config/tester.php file
- The constant TESTING is created when running a test so you can use this in your application for test specific code
Tester Configuration
The following configuration parameters can be found in the modules/tester/config/tester.php configuration file. It is recommended that you copy the config file and place it in your fuel/application/config directory which will override the defaults and make it easier for future updates.
You must use a database user that has the ability to create databases since a separate database is created for testing.
Example
<?php if (!defined('BASEPATH')) exit('No direct script access allowed'); class My_site_test extends Tester_base { public function __construct() { parent::__construct(); } public function setup() { $this->load_sql('test_generic_schema.sql'); // load a basic MY_Model to test require_once('test_custom_records_model.php'); } public function test_find_by_key() { $test_custom_records_model = new Test_custom_records_model(); // test find_by_key $record = $test_custom_records_model->find_by_key(1); $test = $record->full_name; $expected = 'Darth Vader'; $this->run($test, $expected, 'find_by_key custom record object property test'); // test get_full_name() method version $test = $record->get_full_name(); $this->run($test, $expected, 'find_one custom record object method test'); } public function test_goto_page() { // $post['test']= 'test'; $home = $this->load_page('home', $post); $test = pq("#content")->size(); $expected = 1; $this->run($test, $expected, 'Test for content node'); $test = pq("#logo")->size(); $expected = 1; $this->run($test, $expected, 'Test for logo node'); } } | https://docs.getfuelcms.com/modules/tester | 2021-10-16T12:56:34 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.getfuelcms.com |
API Overview
The Agora RTM SDK provides a stable messaging mechanism for you to implement real-time messaging scenarios.
This page lists the core APIs of the Agora RTM SDK. Unless otherwise specified, most of the core APIs should only be called after the loginByToken method call succeeds and after you receive the
AgoraRtmLoginErrorOkerror code.
Following are the core functionalities that the Agora RTM SDK provides.
- Login and logout
- Sending a peer-to-peer message
- Querying the online status of specified users
- Subscribing to or unsubscribing from the online status of specified users
- User attribute operations
- Channel attribute operations
- Retrieving channel member count of specified channels
- Uploading or downloading multimedia files
- Joining and leaving a Channel
- Sending a channel message
- Retrieving a member list of the current channel
- Call invitation
- Renewing the current RTM Token
- Log file settings and version check
- Customized method
- Geofencing and cloud proxy
Login and logout
The connection state between the SDK and the Agora RTM system is a core concept for you to understand before developing an RTM app. For more information, see:
- iOS: Manage Connection States.
- macOS: Manage Connection States.
Sending a peer-to-peer message
Querying the online status of the specified users
Subscribing to or unsubscribing from the online status of specified users
User attribute operations
Channel attribute operations
Retrieving channel member count of specified channels
Uploading or downloading multimedia files
Joining or leaving a channel
Channel message
Retrieving a member list of the channel
Call invitation management
- API calling sequence for canceling a sent call invitation:
- API calling sequence for accepting a received call invitation:
- API calling sequence for declining a received call invitation:
Renew the Token
Logfile settings and version Check
- Logfile-related operations can be done after creating and initializing the AgoraRtmKit instance and before calling the loginByToken method.
-
getSDKVersionis a static method. You can call it before creating and initializing an AgoraRtmKit instance.
Customized method
Region settings
Status Codes
AgoraRtmConnectionState
Connection states between the SDK and the Agora RTM system.
AgoraRtmConnectionChangeReason
Reasons for a connection state change.
AgoraRtmMessageType
Message types.
AgoraRtmPeerSubscriptionOptions
Subscription types.
AgoraPeerOnlineState
The online states of a peer.
AgoraRtmLocalInvitationState
RETURNED TO THE CALLER. States of an outgoing call invitation.
AgoraRtmRemoteInvitationState
RETURNED TO THE CALLEE. States of an incoming call invitation.
AgoraRtmLogFilter
Log Filter types.
Error Codes
AgoraRtmChannelMemberCountErrorCode
Error codes related to retrieving the channel member count of specified channels.
AgoraRtmGetMembersErrorCode
Error codes related to retrieving a channel member list.
AgoraRtmInvitationApiCallErrorCode
Error codes for the call invitation methods.
AgoraRtmJoinChannelErrorCode
Error codes related to joining a channel.
AgoraRtmLeaveChannelErrorCode
Error codes related to leaving a channel.
AgoraRtmLoginErrorCode
Error codes related to login.
AgoraRtmLogoutErrorCode
Error codes related to logout.
AgoraRtmPeerSubscriptionStatusErrorCode
Error codes related to subscribing to or unsubscribing from the status of specified peers.
AgoraRtmProcessAttributeErrorCode
Error codes related to the attrubute operations.
AgoraRtmQueryPeersBySubscriptionOptionErrorCode
Error codes related to getting a list of the peers by suscription option type.
AgoraRtmQueryPeersOnlineErrorCode
Error codes related to querying the online status of the specified peers.
AgoraRtmLocalInvitationErrorCode
RETURNED TO THE CALLER. Error codes of an outgoing call invitation.
AgoraRtmRemoteInvitationErrorCode
RETURNED TO THE CALLEE. Error codes of an incoming call invitation.
AgoraRtmRenewTokenErrorCode
Error codes related to renewing the token.
AgoraRtmSendChannelMessageErrorCode
Error codes related to sending a channel message.
AgoraRtmSendPeerMessageErrorCode
Error codes related to sending a peer-to-peer message. | https://docs.agora.io/en/Real-time-Messaging/API%20Reference/RTM_oc/v1.4.0/docs/headers/API-Overview.html | 2021-10-16T11:25:08 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.agora.io |
Dynamic Monitoring Mode and Network Visibility
Like Server Visibility Agents, Network Agents support Dynamic Monitoring Mode (DMM).). See Network Visibility Metrics
By default, all Network Agents run in KPI mode. The recommended workflow is to:
- Run all Network Agents in KPI mode.
- When you notice a performance issue on a specific node or network link, increase the metric level on the associated Network Agents to Diagnostic, and collect KPI metrics for the connections.
- Based on the connection KPIs, identify the connections with performance issues.
- To troubleshoot an individual connection, increase the metric level on the associated Network Agents to Advanced Diagnostic, and collect advanced metrics for the connection.
- When the issue is resolved, reset the Agents back to KPI mode.
Change the DMM on a Network Agent
- Click the gear icon () in the top-right corner of the Controller page, select AppDynamics Agents, and go to the Network Visibility Agents table.
- Select the Agents of interest, right-click, and select. Network Visibility can detect these TCP performance issues:
- many short-lived connections. TCP is most efficient when long, stable connections are used.
- Some TCP sessions have unusually high round-trip times (RTTs). When TCP is performing well, RTTs are stable and determined by the network path between two nodes.
The Network Agent does not collect any connection metrics in KPI mode (the default setting). To diagnose a node or network path that is monitored by an Agent, you can change the Dynamic Monitoring Mode on a Network Agent.
Example Workflow
After you complete the initial setup, you can set the Dynamic Monitoring Mode on individual Network Agents, as needed. This is an example workflow:
- noticed on the network link.
- Reconfigures TCP on the two nodes, and monitors the connection. The Nagle and latency spikes no longer occur.
- Resets DMM on the TA-N1 and TB-N3 Agents back to KPI. | https://docs.appdynamics.com/21.7/en/infrastructure-visibility/network-visibility/network-visibility-concepts/dynamic-monitoring-mode-and-network-visibility | 2021-10-16T12:35:20 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.appdynamics.com |
Date: Sat, 16 Oct 2021 05:57:30 -0700 (PDT) Message-ID: <211366594.3025.1634389050763@clx-confluence.c.mariadb-clx-confluence.internal> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_3024_1061409825.1634389050763" ------=_Part_3024_1061409825.1634389050763 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
nanny ensures that all jobs neede= d for the successful function of Xpand are running.
All jobs for which nanny is res= ponsible are designed to run indefinitely.
If any of nanny's monitored jobs stop for any reason, nanny will immediately attempt to restart it.=
nanny allows stopping and star= ting of the jobs it controls via the clx utility or = the nanny port (2424) .
The following jobs are kept running by nanny:
To run the nanny command of the This is a list of all the nanny commands =
available.
This is a list of all the nanny commands = available. | https://docs.clustrix.com/exportword?pageId=10912459 | 2021-10-16T12:57:30 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.clustrix.com |
Updates, Upgrades & Rollbacks
Installing updates with Kinoite is easy and fast (much faster than other operating systems). It also has a special rollback feature, in case anything goes wrong.
Updating Kinoite
OS updates in Kinoite 32 to Fedora 33) can be
completed using the Software application. Alternatively, Kinoite Kinoite 33, the command is:
$ rpm-ostree rebase fedora:fedora/35. | https://docs.fedoraproject.org/en-US/fedora-kinoite/updates-upgrades-rollbacks/ | 2021-10-16T12:38:52 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.fedoraproject.org |
Date: Wed, 11 Sep 1996 16:13:29 +0200 From: doomlike <[email protected]> To: [email protected] Subject: games ? Message-ID: <[email protected]>
Next in thread | Raw E-Mail | Index | Archive | Help
I'm a 25 old french student, having a PC with win 95 and a modem. Not Unix station. Can I use your software to play action 3d games like doom or quake through the internet with other persons ? Thanks for your answer . Doomlike.
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=443570+0+/usr/local/www/mailindex/archive/1996/freebsd-questions/19960908.freebsd-questions | 2021-10-16T12:21:21 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.freebsd.org |
Setting up a schedule action integration
You can set up a scheduler to start a new item in a process using time-based triggers.
After starting an integration, select Schedule Action.
Actions can be scheduled Daily, Weekly, Monthly, or Yearly.
Daily triggers
By default, this scheduler will trigger daily at the time that you enter.
If you click Show advanced options, you will have additional settings.
- You can select how often the scheduler runs based on the number of days.
- You can select when the scheduler should end:
- Never: The trigger will never end.
- On: The trigger will end at a scheduled calendar date.
- After n occurrences: The trigger will end after a specified number of runs.
Weekly triggers
On the weekly scheduler, select the days of the week you want the scheduler to run, and also the time.
When you click Show advanced options, you’ll be able to define:
- how often the scheduler runs
- when the scheduler should end
Monthly triggers
Select the day of the month you want the scheduler to run and also the time.
If you select 29, 30, or 31 the scheduler will only run on months that have that many days. Select Last day of the month to make sure it happens every month.
When you click Show advanced options, you’ll be able to define:
- the frequency
- when the action should end
Yearly triggers
Select the month, date, and time you want the scheduler to run.
Starting a new item
After setting up the scheduler, click the Add button (
) to start a new item in your process. On the left side you will see a dropdown of possible fields in the process you want to initiate. On the right side, you can enter relevant values to map to the selected field.
When the trigger fires, this integration will take the data you have mapped and create and submit a new item into the workflow. | https://docs.kissflow.com/article/t727groljo-setting-up-a-schedule-action-integration | 2021-10-16T11:16:25 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.kissflow.com |
If you have specified FALLBACK in creating the table, either explicitly or by default, the system automatically maintains a duplicate copy of the data in the table. This fallback copy is then used if the primary copy becomes unavailable.
Fallback is very important when a system needs to reconstruct data from fallback copies when a single-bit read error occurs when it attempts to read the primary copy of the data. When a hardware read error occurs in this case, the file system reads the fallback copy of the rows and reconstructs a memory-resident image of them on their home AMP. Without this feature, the file system fault isolation logic would abort the transaction and, depending on the error, possibly mark the table as being down. See SET DOWN and RESET DOWN Options.
-.
To avoid the overhead of this substitution, you must rebuild the primary copy of the data manually from the fallback copy using the Table Rebuild utility. For information about Table Rebuild, see Teradata Vantage™ - Database Utilities, B035-1102.
To enable the file system to detect all hardware read errors for tables, set CHECKSUM to ON.
To create a fallback copy of a new table, specify the FALLBACK option in your CREATE TABLE statement. If there is to be no fallback copy of a new table in a database or user space for which FALLBACK is in effect, specify NO FALLBACK as part of the CREATE TABLE statement defining the table. Alternatively, if you want to accept the database or user default for fallback, do not specify the option for the table. | https://docs.teradata.com/r/6p0kNi~2vaaSHEbbM_bP1g/vbTxcrRKK4R6MT1W2PKDEQ | 2021-10-16T12:13:05 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.teradata.com |
# to use the
npx @shopware-pwa/clicommand instead to have always latest version of the package.
* - discord.
# Issue: The API Client's timeout is too low. I'm getting HTTP 408 errors.
- Edit shopware-pwa.config.js file
- Add entry:
shopwareApiClient: { timeout: 10000; // 10 seconds of axios timeout setting }
# Issue: There is no language available in language switcher
TIP
Learn the details how to set the new language properly.
- Check if there are entries in
.shopware-pwa/sw-plugins/domains.json.
- If not, run
yarn shopware-pwa domainscommand.
# Issue: There's no "thank you page" on Apple devices
- The issue is described broadly here (opens new window).
- The possible solution in your project may look like in the PR (opens new window):
- copy
api-client.jsplugin from (
%PROJECT_DIR%/node_modules/@shopware-pwa/nuxt-module/plugins/api-client.js), do the changes from mentioned PR and put it in your project's
src/pluginsfolder (you will lose the compatibility on further upgrades)
- adjust every
handlePaymentusages to pass the context token parameter as a part of success/failure URL. | https://shopware-pwa-docs.vuestorefront.io/landing/resources/troubleshooting.html | 2021-10-16T11:00:16 | CC-MAIN-2021-43 | 1634323584567.81 | [array(['/assets/img/shopware-pwa-components-override.8994f490.gif',
'overriding theme components'], dtype=object) ] | shopware-pwa-docs.vuestorefront.io |
Should you develop a module?
Developing Ansible modules is easy, but often it is not necessary. Before you start writing a new module, ask:
Does a similar module already exist?
An existing module may cover the functionality you want. Ansible collections include thousands of modules. Search our list of included collections or Ansible Galaxy to see if an existing module does what you need.
Should you use or develop an action plugin instead of a module?
An action plugin may be the best way to get the functionality you want. Action plugins run on the control node instead of on the managed node, and their functionality is available to all modules. For more information about developing plugins, read the developing plugins page.
Should you use a role instead of a module?
A combination of existing modules may cover the functionality you want. You can write a role for this type of use case. Check out the roles documentation.
Should you create a collection instead of a single module?
The functionality you want may be too large for a single module. If you want to connect Ansible to a new cloud provider, database, or network platform, you may need to develop a new collection.
Each module should have a concise and well defined functionality. Basically, follow the UNIX philosophy of doing one thing well.
A module should not require that a user know all the underlying options of an API/tool to be used. For instance, if the legal values for a required module parameter cannot be documented, that’s a sign that the module would be rejected.
Modules should typically encompass much of the logic for interacting with a resource. A lightweight wrapper around an API that does not contain much logic would likely cause users to offload too much logic into a playbook, and for this reason the module would be rejected. Instead try creating multiple modules for interacting with smaller individual pieces of the API.
If your use case isn’t covered by an existing module, an action plugin, or a role, and you don’t need to create multiple modules, then you’re ready to start developing a new module. Choose from the topics below for next steps:
I want to get started on a new module.
I want to review tips and conventions for developing good modules.
I want to write a Windows module.
I want an overview of Ansible’s architecture.
I want to document my module.
I want to contribute my module back to Ansible Core.
I want to add unit and integration tests to my module.
I want to add Python 3 support to my module.
I want to write multiple modules.
See also
- Collection Index
Browse existing collections, modules, and plugins
- Mailing List
Development mailing list
- irc.libera.chat
#ansible IRC chat channel | https://docs.ansible.com/ansible/latest/dev_guide/developing_modules.html | 2021-10-16T11:20:06 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.ansible.com |
About the Outer & Inner Contours of the Animated Matte Generator
T-COMP2-002-009
An animated matte generator has two separate contours: The inner contour and the outer contour. It automatically generates both contours based on the source drawing it is connected to, so both contours start out having the exact same shape and position, and their points start out being locked together. Therefore, until you separate the points of the outer and inner contours, your Animated Matte Generator will behave as if it had a single contour.
You can define the Animated Matte Generator's inner and outer contours separately by disabling one of them, then manipulating the points in the other one. Each point that is animated on one of the contours while the other one is disabled will become independent from the other contour.
There are two cases where you need an inner and an outer contour:
- To create a feathered effect. When the Animated Matte Generator's Output Type is set to Feathered, it generates a gradient going from the inner contour to the outer contour, allowing you to create light and shadow effects that match very precise shapes.
- To create an animation based on both contours. By setting the Animated Matte Generator's Output Type to Interpolate Between Contours, you can make your matte drawing morph from the outer contour's shape to the inner contour's shape, and vice versa, by animating the Animated Matte Generator's Interpolation Factor.
The Animated Matte Generator keeps track of which points correspond with each other across the inner and outer contours. You can enable the Point Id
option in the Animated Matte Generator view to display the identification number of each matte point in the Camera view. Points from the inner contour that match points from the outer contour will have the same identification number.
| https://docs.toonboom.com/help/harmony-20/premium/effects/animated-matte-generator/about-outer-inner-contour.html | 2021-10-16T12:49:09 | CC-MAIN-2021-43 | 1634323584567.81 | [array(['../../Resources/Images/HAR/Stage/Effects/aniamted-matte-generator-double-contour-example.png',
None], dtype=object)
array(['../../Resources/Images/HAR/Stage/Effects/animated-matte-generator-pont-id-example.png',
None], dtype=object) ] | docs.toonboom.com |
The "Javascript Service Hosting" feature facilitates a number of built-in objects to provide access to various data sources from your service. You can refer to more detail on each host object from the WSO2 Mashup Server documentation as follows.
- APP: A collection of objects facilitating publishing using the Atom Publishing Protocol (APP) :
- Email: An object that allows emails to be sent from mashups :
- IM: An object that allows Instant Messages to be sent from mashups :
- Feed: A generic collection of objects allowing the consuming and persisting of Atom/RSS feeds :
- File: An object that allows files to be read and written to the .resources folder :
- Request: A global object that provides the ability for users to get information regarding the request it received :
- Session: A global object that aids in a persisting state for the duration of a session :
- Scraper: An object that allows Web pages to be converted into XML for data extraction and other manipulations :
- System: A global object providing methods for useful tasks such as including JavaScript files :
- WSRequest: An object for invoking a Web service without a stub, or necessarily even a WSDL :
Overview
Content Tools
Activity | https://docs.wso2.com/display/AS520/Hosted+Objects | 2021-10-16T11:29:15 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.wso2.com |
Cloud Foundry CLI Integration
The Cloud Foundry (CF) CLI (Command Line Interface) is the official command line client for Cloud Foundry server. You can use the CF CLI to manage apps, service instances, orgs, spaces, and users in your environment.
Install the CF CLI Plugin
To download the CF CLI plugin, do the following steps:
- Open the Distribution site.
- Download the CF CLI plugin with .xldp extension at XLDEPLOYSERVER_HOME/plugins/ directory.
- Restart the Deploy
Features
- Create an Infrastructure
- Creating an Environment
Create and Deploy an Application
- Deploy a Space
- Deploy or Push an App
- Deploy a Route
- Deploy a Service
- Controls tasks
Create an Infrastructure
- Create an overthere host in Infrastructure where CF CLI is installed.
- Create a new cf client (
cf.Client) under the overthere host.
- Specify the properties related to the CF CLI client like username, password, apiEndpoint, CLI version, binary home, tls, etc.
- Under the
cf.Clientcontainer, create a new Org (organisation (
cf.Org)).
- Specify the properties of the Org like
orgNameand
CIname.
Note: You can create multiple Org container CIs if you want to map other Orgs in different Deploy’s environments.
- Specify the Org that you want to use in the
orgToUseproperty under the
cf.Client
- This
cf.Orgwill be the container for deploying Space and Space will be the container for deploying other deployable like Pushed App, Routes, and Services.
- Create an environment of the
cf.Org
Create an Environment
To create an environment, do the following steps:
Note: You can create environment in each Org.
Create an Environments using Org and specify the properties shown in below image.
Note:
cf.Org will be the container for
cf.SpaceSpec which deploys
cf.Space container and
cf.Space will be the container for other the deplorables.
Create and Deploy an Application
The following are the four deploy tasks that can be performed:
cf.SpaceSpec: Deploys a Space to an Org (Organisation) environment (that has cf.Org container).
cf.PushSpec: Deploys or push an App to a Space environment (env that has cf.Space container deployed by cf.SpaceSpec deployable).
cf.RouteSpec: Deploys a Route to a Space environment.
cf.ServiceSpec: Deploy a Service to a Space environment.
Note: If you have multiple spaces in a single Org, it is advised to create a separate Environment which has a container reference to a single Space.
Deploy a Space
To deploy a space, do the following steps:
- Create a deployment package and specify the deployable as
SpaceSpec.
- Specify the properties that you want for Space as shown in below image.
Note: You must have to provide already available space name in Default Space Name property as your initial default space while login into cf CLI.
- Select the desired environment containing Org and deploy to it.
- After creating a Space, you can verify that the Space and Org are bound to the same environment. This is important as all other deployable are deployed to
cf.Spacecontainer.
Deploy or Push an App
To deploy or push an App, do the following steps:
- Create a deployment package and specify the deployable as
PushSpec.
- Specify the properties that you want for Push.
- The properties of Pushing an App in Deploy are similar as mentioned in and.
Note: Only some properties in
PushSpec are specific to v7 of CF CLI.
- The push artifacts such as zip file of project (
-poption), manifest file (
-foption) and variables files (
--var-fileoption) is also supported for
PushSpecas children artifacts as types-
cf.AppZip,
cf.ManifestFile, and
cf.VarFile.
- AppZip is a zip containing the apps with the corresponding manifest file in the root of a zip file. The file can be a war, jar, zip, or any artifact file.
- To deploy the war file to CF, create a
PushSpecwith the App name, buildback- and memory- 1G. Then deploy it to a Space environment.
- ManifestFile is the manifest to deploy the artifact in CF (Instead of manifest the Push options in PushSpec can also be given as the option in Manifest and Push Options are the same).
Note: Please do not specify the app name in the
manifest.yml file as it is being supplied from the PushSpec’s property appName. Make sure not to supply different options (specifying the same options in
manifest.yml and Push Options of PushSpec).
- VarFile is a file that container the entries for variable substitution for ManifestFile.
- The maximum of AppZip and ManifestFile artifacts are limited to 1, where as you can supply as multiple VarFile artifacts.
- For rolling deployment you have to select the strategy as
rollingwhile creating deployment package.
- If you want to push app with sidecar process, then you have to put information related to sidecar in
manifest.ymlfile.
- After the creation of
cf.Appdeployable, you can deploy its package to the environments containing Space infrastructure container.
- You can also update this deployment and un-deploy.
- You cannot change the appName of Pushed App as it uniquely identifies the name of an App during the update process.
How to verify the CF CLI command on XLD before executing the task?
To verify the CF CLI command before executing, do the following steps:
- After you select the deployable for deployment of app click on Preview.
- Double click on any step, will show the command that will be run on CF CLI.
Deploy a Route
To deploy a Route, do the following steps:
- Create a deployment package with deployable
cf.RouteSpec.
- Specify the properties of RouteSpec.
- Specify a Valid Route Domain. There are special properties that correspond to routeType http or tcp.
- After the creation of deployable, you can deploy its package to the Space Environment.
- Deploy, Update and Un-deployment of a Route is supported.
Deploy a Service
To deploy a service, do the following steps:
- Add service to your targeted Organization. Refer Managing Services.
- Create a deployment package with deployable
cf.ServiceSpec.
- Specify the properties of ServiceSpec.
- Specify service to use, plan, and service instance. You can also use options like specify broker, configuration (JSON string), and Service Tags.
- There are upgrade-specific options under the category update options also which need to be given only in case of updating a service.
- You cannot change service instance or service while updating the service as these are the properties that identify a service uniquely.
- After the creation of deployable, you can deploy its package to the Space Environment.
- Deploy, Update and Un-deployment of a Service is supported.
Controls Tasks
There are a number of control tasks that you can perform using cloudfoundry-cli-integration.
On cf.Org deployed
The following control task that can be performed
cf.Org deployed:
List all Organizations:
This control task will list all the organizations in the current CF servers. Supply the space name as a parameter to this control task to support login.
On cf.App deployed:
The following are the four control tasks that can be performed
cf.App deployed:
Restart the App:
Note” Restarts the running App in Cloud Foundry Space. There are params you can supply if the deployment is like strategy and no wait. The strategy should be specified if a deployment is rolling update.
Stop the App
Note: Stop the App:
Start the App
Note: Starts the stopped App in Cloud Foundry Space.
Scale the App:
Note: Scales the currently running App on the ground of instances, disk limit, and memory limit.
On cf.Route deployed
The following control task that can be performed
cf.Route deployed:
Map Route:
Note: Maps the current route(deployed) to an instance of an App. Supply the App name in the control task parameter and this control task will map the deployed route to an App.
On cf.Service deployed
The following are the four control tasks that can be performed
cf.Service deployed:
bindService:
Note:Bind this service instance (deployed) to an app. Supply parameters- App Name, bindingName, and conf (configuration in JSON string) to this control task.
unbindService:
Note:Unbind this service instance (deployed) from an app. Supply parameters- App Name to this control task.
bindRouteService:
Note:Bind this service instance (deployed) to an HTTP route. Supply parameters- domain, hostname, path, and conf (configuration in JSON string) to this control task.
unbindRouteService:
Note:unbind this service instance (deployed) to an HTTP route. Supply parameters- domain, hostname, and path to this control task. | https://docs.xebialabs.com/v.10.1/deploy/concept/deploy-cf-cli-plugin/ | 2021-10-16T12:22:37 | CC-MAIN-2021-43 | 1634323584567.81 | [array(['/static/deploy-cf-cli-create-an-environment-1-b436fdf3f68d08cc598f4e59b7428a24.png',
'image'], dtype=object)
array(['/static/deploy-cf-cli-create-and-deploy-an-application-1-3b5a990611a6564e467e23c4ea0b33a9.png',
'image'], dtype=object)
array(['/static/deploy-cf-cli-deploy-a-space-1-b3eb343af41ee10788a1283c7432e8d9.png',
'image'], dtype=object)
array(['/static/deploy-cf-cli-on-cf.app-deployed-1-7c21f5cc605fbea36815a44f102a63b0.png',
'image'], dtype=object)
array(['/static/deploy-cf-cli-on-cf.app-deployed-2-36982940be028cdd0d3e987b044e31ab.png',
'image'], dtype=object)
array(['/static/deploy-cf-cli-on-cf.Route-deployed-1-1f9c215019815f7b4b4256fe7444d879.png',
'image'], dtype=object) ] | docs.xebialabs.com |
Both the FROM user part (caller ID) and the dialed number from the TO header can be modified by the use of a simple and yet flexible mechanism described below.
The modifications can take place at different points of the request processing in both billing and routing processes which run independently from one another. This allows you to change, for example, only the dialed number before it is passed to the billing function while leaving the dialed number unchanged for routing process.
For billing you can modify the parameters when a call comes in and before it is matched with a rate from the tariff. This option is configurable in the client account Tariff rules field.
In the routing process the dialed number can be modified when the call arrives and before it is compared with the routing plan. This setting is in the client account’s Dialing rules field. Next, the dialed number can be changed in the routing plan, before passing it to the destination. And the last point is at the destination definition. You can modify the dialed number before it is send out from the softswitch to the destination endpoint.
The modifying functions are responsible for the following tasks:
- Adding characters at the beginning of the string
- Removing matching characters from the beginning of the string
- Adding characters at the end of the string
- Validating length of the string
- Replace the string sent from a client with a fixed string
The above operations can be carried sequentially in a predefined order. In the VSM you can enter a modifying function’s notation directly into the relevant input field – Routing rules or Tariff rules or you can click on the extended menu button next to the field and use the dialog window.
The dialog window allows to set following operations:
- Forward from client - is equivalent to an empty prefix field and it will forward the prefix exactly how it was received from the client.
- Always send – the modification function notation is !123 where 123 is the number you wish to send to the destination. That means the entire prefix received from the client will always be substituted with the defined value.
- Change - will create a rule as follows: "X->Y|Z" where X is the string entered in the Prefix field, Y is the string in the To field and Z is the Suffix. This means that if the prefix received from the client starts with X then replace it with Y and add Z at the end of the new string;
- Prefix - the value defined in this field will be replaced with the value set in the To field. If the prefix does not match the beginning of the dialed number, no action will be taken. To remove the first characters from the number, the To field should be empty so the prefix will be replaced with nothing - in other words, will be removed;
- To - the string which will be added at the beginning of the dialed number. When you leave the Prefix field blank the value of the To field will always be added to the dialed number.
- Add suffix - it will add the defined string at the end of the dialed number
- Required number length - allows to define a required string length, the calls with the dialed number shorter than that will be declined. The same option is available for the Caller ID (FROM) where you can set its minimal length. | http://docs.voipswitch.com/display/doc/1.5+Header+strings+modifications | 2017-03-23T08:09:52 | CC-MAIN-2017-13 | 1490218186841.66 | [array(['/download/attachments/32802113/clients41.jpg?version=1&modificationDate=1441102335000&api=v2',
None], dtype=object)
array(['/download/attachments/32802113/clients42.jpg?version=1&modificationDate=1441102373000&api=v2',
None], dtype=object) ] | docs.voipswitch.com |
Category:Virtual Servers
From WebarchDocs
Information about Webarchitects and Ecodissident Virtual Servers.
Pages in category ‘Virtual Servers’
The following 19 pages are in this category, out of 19 total.
C
- Can I start with a small VPS (virtual private server) and later change the settings/increase the RAM?
- Can I use SSH to manage various sites? Say, for updating Drupal deployments using Drush?
- Can you advise on ISPConfig, and alternatives to it?
- Could we implement a SwissSign SSL certificate with a virtual server? | https://docs.webarch.net/wiki/Category:Virtual_Servers | 2017-03-23T08:07:43 | CC-MAIN-2017-13 | 1490218186841.66 | [] | docs.webarch.net |
Recent Development (as of Jan. 2007).:
Outstanding Issues
| http://docs.codehaus.org/pages/viewpage.action?pageId=9241650 | 2014-04-16T07:56:11 | CC-MAIN-2014-15 | 1397609521558.37 | [array(['/s/en_GB-1988229788/4727/ffd10e10ff7bc0a1d7e29b4d2225707dd7f03d0b.15/_/images/icons/wait.gif',
None], dtype=object) ] | docs.codehaus.org |
Bugs often get fixed in master before release branches.
When a bug is fixed in the master branch it might be desirable or necessary to backport the fix to a stable branch.
This page is intended to help organize support (and prioritization) for backporting bug fixes of importance to the community.
GlusterFs 3.6
Requested Backports for 3.6.0
The tracker bug for 3.6.0 :
Please add 'glusterfs-3.6.0' in the 'Blocks' field of bugs to propose inclusion in GlusterFS 3.6.0.
GlusterFs 3.5
Requested Backports for 3.5.3
Current list of bugs planned for inclusion.
- File a new bug for backporting a patch to 3.5.3: [... new glusterfs-3.5.3 backport request]
GlusterFs 3.4
Requested Backports for 3.4.6
The tracker bug for 3.4.6 :
Please add 'glusterfs-3.4.6' in the 'Blocks' field of bugs to propose inclusion in GlusterFS 3.4.6.
Requested Backports for 3.4.4 - "self-heal process can sometimes create directories instead of symlinks for the root gfid file in .glusterfs" - "structure needs cleaning" message appear when accessing files. - glusterfs mount crash after remove brick, detach peer and termination
Requested Backports for 3.4.3 - "self-heal process can sometimes create directories instead of symlinks for the root gfid file in .glusterfs" - "structure needs cleaning" message appear when accessing files. - large NFS writes to Gluster slow down then stop - glusterfs mount crash after remove brick, detach peer and termination
Requested Backports for 3.3.3
Enable fusermount by default, make nightly autobuilding work
Requested Backports for 3.4.2
Please enter bugzilla ID or patch URL here:
1) Until RDMA handling is improved, we should output a warning when using RDMA volumes -
2) Unable to shrink volumes without dataloss -
3) cluster/dht: Allow non-local clients to function with nufa volumes. -
Requested Backports for 3.4.1
Please enter bugzilla ID or patch URL here. - "quota context not set in inode" - "NFS crash bug"
A note for whoever reviews this list: These are the fixes for issues that have caused actual service disruption in our production installation and thus are absolutely required for us (-- Lubomir Rintel): - "Setting ACL entries fails with glusterfs-3.4.0" - "fd leaks observed while running dbench with "open-behind" volume option set to "on" on a replicate volume"
These are issues that we've stumbled upon during the git log review and that seemed scary enough for us to cherry-pick them to avoid risk, despite not being actually hit. Hope that helps deciding whether it's worthwhile cherry-picking them (-- Lubomir Rintel): "CLI crash upon executing "gluster peer status" command" "quick-read and open-behind xlator: Make options (volume_options ) structure NULL terminated." "nfs-root-squash: rename creates a file on a file residing inside a sticky bit set directory" "DHT : files are stored on directory which doesn't have hash range(hash layout)" "statedump crashes in ioc_inode_dump" "cli crashes when setting diagnostics.client-log-level is set to trace" "glusterfsd crashes on smallfile benchmark", "tests: call 'cleanup' at the end of each test",, backport of 983975, "glusterfs-api.pc.in contains an rpath",, backport of 1002220 "glusterd.service (systemd), ensure glusterd starts before any local gluster mounts",, backport of 1004795 meta, check that glusterfs.spec.in has all relevant updates - Glusterd would not store all the volumes when a global options were set leading to peer rejection
Requested Backports
- Please backport gfapi: Closed the logfile fd and initialize to NULL in glfs_fini into release-3.5
- Done
- Please backport cluster/dht: Make sure loc has gfid into release-3.4
- Please backport Bug 887098 into release-3.3 (FyreFoX) - Done
- Please backport Bug 856341 into release-3.2 and release-3.3 (the-me o/b/o Debian) - Done for release-3.3
- Please backport Bug 895656 into release-3.2 and release-3.3 (semiosis, x4rlos) - Done for release-3.3
- Please backport Bug 918437 into release-3.3 (tjstansell) - Done
- Please backport into Bug 884597 release-3.3 (nocko) - Done
Unaddressed bugs
- Bug 838784
- Bug 893778
- Bug 913699; possibly related to Bug 884597 | https://docs.gluster.org/en/v3/Developer-guide/Backport-Wishlist/ | 2018-12-10T02:51:14 | CC-MAIN-2018-51 | 1544376823236.2 | [] | docs.gluster.org |
Date : July 6, 2010
To : The Commission
(Meeting of July 8, 2010)
From : Gretchen Dumas
Public Utilities Counsel IV
Subject: Filing of Comments in Response to FCC's Notice of Inquiry on the Framework for Broadband Internet Service --
GN Docket No. 10-127
RECOMMENDATION: The Commission should submit comments in response to the Federal Communications Commission's ("FCC") Notice of Inquiry ("NOI") on the Framework for Broadband Internet Service. 1 California's interest in this proceeding derives from its statutory and constitutional role as a state consumer protection agency. Opening Comments are due July 15, 2010.
BACKGROUND: The FCC has initiated several rulemakings to implement the National Broadband Plan, released on March 16, 2010. Further, in its recent Open Internet Notice of Proposed Rulemaking ("NPRM),2 the FCC noted that it has considered the issue of Internet openness in many contexts and proceedings, including "a unanimous policy statement, a notice of inquiry on broadband industry practices, public comment on several petitions for rulemaking, [and] conditions associated with significant communications industry mergers."3
The FCC historically has relied for its jurisdictional authority over broadband services on Title I of the Telecommunications Act ("Title I"). However, on April 6, 2010, the Court of Appeals for the D.C. Circuit overturned the FCC's reliance on Title I in connection with its 2008 Comcast Order.4 Specifically, the Court held that the "Commission has failed to make [the requisite] showing" that enforcement of the policies in its Internet Policy Statement was "reasonably ancillary to the ... effective performance of its statutorily mandated responsibilities."5 And in particular, the Court found that the FCC had failed to "link the cited policies to express delegations of regulatory authority."6
Given the legal nexus between the Court's holding on the 2008 Comcast Order and some elements of its broadband agenda, the FCC issued this NOI on June 17, 2010 requesting comments on the scope of the legal framework governing the FCC's jurisdiction over Broadband Internet Service. In the NOI, the FCC set forth two legal frameworks for comment. First is Option I, the continued use of Title I, which was significantly limited by the Comcast decision. Regarding this option, in recent comments on the Open Internet NPRM, the CPUC stated "[a]fter reviewing all of the comments, relevant case law, including the recently-decided Comcast decision . . . and applicable FCC regulations relevant to this seminal jurisdictional question," the CPUC "agrees with the Court in Comcast that the FCC's reliance on Title I as a source of jurisdictional authority for Broadband Internet Service is not securely linked to an express delegation of regulatory authority."7
The alternative approach the FCC proposes is to regulate "Broadband Internet Service" under Title II of the Telecommunications Act. The FCC then offers the parties two options - Option 2 and Option 3 - for how Title II regulation of Broadband Internet Service could be achieved. Under Option 2, the FCC would forbear from regulation on a case-by-case basis. Option 3 would have the FCC assert its jurisdiction over Broadband Internet Service but forbear as a general matter from rate regulation and a number of traditional regulatory requirements. In its recent comments the Open Internet NPRM, the CPUC addressed these options globally by stating that "[i]f the [FCC] were to assert its jurisdiction under Title II, it should do so in a very limited manner, so as to ensure continued growth and development of both technology and content."8 The CPUC then suggested that the FCC could forbear from imposing many aspects of traditional common carrier regulation on Internet access providers. Section 160(a) of the 1934 Communications Act ("Act"), as amended, expressly authorizes the Commission to forbear from "applying any regulation or any provision of this chapter to a telecommunications carrier or telecommunications service, or class of telecommunications carriers or telecommunications services, in any or some of its or their geographic markets..."9 To do so, the FCC must make specified determinations as set forth in Sections 160(a)(1), (2), and (3) of the Act. The FCC has made such determinations on a number of occasions in other contexts and pertaining to other types of telecommunications services and service providers, and the FCC would be within its rights to make such a determination relative to Broadband Internet Service providers.
Thus the CPUC has already expressed support for the FCC's proposed use of Title II authority to both regulate Broadband Internet Service and its forbearance from rate regulation and other aspects of that historical regulatory regime.
1) what are the parts of the transport system that allow an end user to connect to the internet; in other words, what does the FCC envision would be included in its definition of Broadband Internet Service;
2) what sections of the Telecommunications Act should the FCC continue to enforce; and
3) what should be the States' role in the regulatory scheme envisioned by the FCC?
The answers to these overarching legal and policy questions will help define what other policy issues need to be addressed and will impact how the various issues raised in the NOI matters will be resolved.
A. DEFINITION OF "BROADBAND INTERNET SERVICE"
In the NOI, the FCC states that "the term, `Broadband Internet Services,' refers to the bundle of services that facilities-based providers sell to end users in the retail market. This bundle allows the end users to connect to the Internet."12
The definition of the term "Broadband Internet Services" that emerges from the NOI will affect directly the manner in which the FCC and the states exercise jurisdiction in an IP- enabled world. Thus, it is essential that this term be clearly defined now. And yet, this definition should be flexible enough to cover unforeseen technological in both the short- and long-term. Specifically, the FCC's definition should focus on ensuring that consumers are able to connect to the Internet regardless of the technology employed for access, and that providers cannot unreasonably limit customer access to content enabled by that connection.
Finally, the NOI states that Voice over Internet Protocol ("VOIP") is not within its scope. However, the answer to the question of how Broadband Internet Services are defined will directly affect how the FCC ultimately decides VoIP should be regulated.
B. FORBEARANCE - SPECIFIC ISSUES
The NOI raises many questions about how the FCC and the states will regulate telecommunications carriers in an IP-enabled world. Therefore, the following points are not meant to be a complete list of the issues related to the various sections of the Act that need to be addressed. Rather, this list addresses the most important of these issues that staff has identified to date.
1. Section 254 - Universal Service; Section 255 - Disabilities13
Currently Section 254 (b)(2) of the Act requires the FCC to base policies for the preservation and advancement of universal service on the principle, among others, that "[a]ccess to advanced telecommunications and information services should be provided in all regions of the Nation."14 As the CPUC has stated in numerous prior comments to the FCC,15 many states have their own universal service programs and many of these programs provide support for services beyond what the current federal universal service programs provide. For example, California has a Deaf and Disabled Telecommunications Program that provides qualified disabled individuals with equipment on a loan basis to enable their access to the Public Switched Telephone Network ("PSTN").
If the FCC were to forbear from enforcing Section 254, such forbearance could conceivably have the effect of preempting state jurisdiction over IP services, and take away state jurisdiction to establish and continue already existing state universal service programs to support Broadband deployment. On the other hand, if the FCC and the states continue to have jurisdiction over Universal Service and Disability programs that promote Broadband deployment, the FCC should acknowledge that both it and the states have jurisdiction to set forth the best method for determining contributions from IP services to both federal and state funds and the best ways to collect funding for state and federal USF mechanisms for "Broadband Internet Services."
2. Privacy - Section 222
We support the FCC's proposal not to grant forbearance from Sec. 222 of the Communications Act. This section requires telecommunications carriers to protect the confidentiality of customer information obtained by virtue of the carrier's provision of a telecommunications service. Section 222 and related FCC regulations mandate how the carrier may use, disclose and permit access to customer proprietary information, to ensure that the carrier may utilize the information as necessary to provide service but at the same time ensure maximum privacy protection of the customer's information. Given California's Constitutional right to privacy, it is important that California support the FCC's position on this key privacy section of the Act.
3. Interconnection - Section 251
In comments in the FCC's IP Network proceeding, the CPUC stated that "the entrance of IP-enabled voice and data providers into the communications market implicates many issues pertaining to interconnection. Changes to the current interconnection rules are necessary to ensure continued interconnection and a level playing field among all facilities-based providers." 16
Moreover, in those Comments, the CPUC raised a number of questions relating to interconnection which cannot be effectively addressed if the FCC should forbear from applying Section 251 to the provision of Broadband Internet Services. "Some of the questions that the FCC should consider going forward are the following:
Should some or all of the general duties required of telecommunications carriers by Section 251(a) and (c) of the Act be expanded to include all facilities based providers of IP-based services? Should any or all facilities-based IP enabled providers be required to provide resale and unbundled elements similar to LEC requirements under Section 251(b)? Should the Sec. 251(f) (1) exemption for certain rural carriers be eliminated? Should this be a matter to be determined at the state level as currently provided? Should States retain their role in the interconnection regime established in Sections 251 and 252 of the Act? Should IP service providers seeking interconnection in a state be subject to the arbitration authority of States to resolve their interconnection disputes or should such disputes migrate to the types of arrangements that characterize an IP communications world (e.g., peering agreements, etc.) and not be subject to State arbitration under the Act?"17
4. Emergency Services
i. Loss of Separately Powered PSTN Communications Network
In its IP Network comments, the CPUC also raised a concern regarding the transition to an IP-based world - specifically that the loss of a separately powered communications system could impede a customer's access to emergency services. Some of the issues the CPUC raised in that proceeding were as follows: "Should there be a requirement that IP-voice providers provide back-up power at the customer premise? How would such a requirement be enforced? Alternatively, would education of customers be adequate to address this issue? Who should be required to educate the customer? Should the states be allowed to require back-up power if there is no federal mandate, and be allowed to set the duration for back-up power to meet each State's individual, unique circumstances? Should there also be comparable back-up power requirements on the facility provider side -- so that not only the end-user customer is assured of back-up power but so too the service and application providers using the foundational broadband facility?"18 Clearly, the FCC and the states must address these important questions. Thus, as it pursues its "Third Way" option, the FCC should not use forbearance on any sections related to these issues in connection with Broadband Internet Service.
ii. E-911
In its IP Network comments, the CPUC stated that "[t] he California designated entity to implement 9-1-1 has a central role in ensuring that all residents have reliable and free access to 911. Currently, the California PUC has jurisdiction to regulate rates of 9-1-1 data base intrastate access services." If state jurisdiction over IP is pre-empted because the FCC chose to forbear from statutes that bear on this state authority, then the following question arises: "Who should establish and enforce E 9-1-1 reliability standards for IP-based service providers? Issues the FCC should consider when deciding to forbear are: Should states continue to have the authority to require tariffs and establish rate levels for the transport, switching and delivery of E9-1-1 voice and data to the Public Switched Answering Points (PSAPs) in an all-IP-based world? In the event that states do not have jurisdiction over IP services or providers, should states continue to regulate the rates, access and use of 9-1-1 data bases that contain confidential, unpublished information?"19 Given the importance of emergency services, the FCC should not forbear from regulation in this area in connection with Broadband Internet Service.
5. Service Quality and Consumer Protection
Service quality and consumer protection is obviously an important matter for customers of communications providers. Currently, states have jurisdiction over the quality of voice service provided by LECs, and the "terms and conditions" of wireless service. Given the FCC vision that the "Third Way" would be similar to the current regulation of wireless service, it is important that the FCC and the States have authority over "terms and conditions" of service for Broadband Internet Service.
6. Sections 201 and 202
The FCC is correct that Sections 201-202 are important tools for the FCC to use to provide the FCC with direct statutory authority to protect consumers and promote fair play.20 Section 201 prohibits just and unreasonable charges and Section 202 prohibits unreasonable discrimination. Since the FCC is modeling the "Third Way" on its regulation of the wireless industry, it should be noted here that the FCC rejected the wireless industries' forbearance request with regard to these sections. Rather, the FCC "found that in a competitive market those provisions are critical to protecting consumers."21
8. Sections 208 and 209
In ¶¶ 77 and 78 of the NOI, the FCC seeks comment on whether or not it should forbear from Sections 208 and 209 of the Act and the associated enforcement regime. These sections deal with the FCC's authority to hear complaints and impose fines. Because these sections serve as an important tool for the FCC to use as it battles against unlawful practices, these sections should be retained as part of FCC active oversight over Broadband Internet Service.
C. ROLE OF THE STATES
In the comments above, in relation to the discussion of specific sections of the Act, the current role and the necessity for an ongoing role for the CPUC is addressed. As a more general comment, while "cooperative federalism" has been an important concept in refining the working relationship between the federal and state governments, there are areas of regulation that the states should either share with the FCC or retain because of the longstanding expertise of the states in a given area. A good example of this is enforcing compliance with consumer protection laws.
Finally, given the importance and urgency of the need to implement the National Broadband Plan, it is important that the FCC build a complete legal and evidentiary record to confirm the agency's oversight authority under Title II. The FCC's Option 3 proposal to invoke Title II, also known as the "Third Way." corresponds most closely with the CPUC's prior positions. The Third Way offers the quickest path to resolution of jurisdictional authority and will allow the FCC and state regulators to move forward more quickly to effectuate the National Broadband Agenda. In this regard, however, the CPUC should caution that the FCC must support its proposed jurisdictional move with evidence that a modified policy is needed as a result of fast and ubiquitous changes that have occurred in the broadband market.
For example, in 2002, when the FCC determined in its Cable Modem Declaratory Ruling22 that Broadband Internet Service should be categorized as an information service, it did so premised on the assumption that competition for broadband services would increased significantly in the years ahead. However, eight years later, in 2010, the Broadband market still is still controlled by the carrier that controls the local loop to almost all current and potential Broadband customers. In most parts of the country, Broadband service is provided by the Incumbent Local Exchange Carriers (ILEC) and/or the relevant Cable operator. Indeed, in 78% of the nation's geographic area, customers can choose between only one cable operator and one ILEC, while in 13% of the nation, only one provider offers Broadband service. This duopoly control of local Broadband access makes some regulation of ILEC Broadband Internet Services (and ultimately both downstream and upstream markets) essential to the universal deployment of Broadband Internet Services.
Since the FCC determined in 2002 that Broadband Internet Service was to be regulated as an information service, potential competitors have been unable to gain access to the advanced network components needed to provide competing services. Further, experience has shown that, absent competitive pressures, incumbent LECs will not rush to invest in and offer new broadband technologies to underserved areas.23
Assigned staff: Gretchen Dumas - Legal Division (3-1210)
Roxanne Scott - Communications Division (3-5263)
Bill Johnston - Communications Division (3-2124)
1 Notice of Inquiry, In the Matter of the Framework for Broadband Internet Service, GN Docket 10 127, rel. June 17, 2010.
2 In the Matter of Preserving the Open Internet, GN Docket No. 09-191, WC 07-52.
3 Id. at ¶ 2.
4 Comcast v. FCC, D.C. Circuit Appeal 08-1291 (available at) ("Slip Opinion"), vacating the Commission's Memorandum Opinion and Order in Formal Complaint of Free Press and Public Knowledge Against Comcast Corporation for Secretly Degrading Peer-to-Peer Application, 23 FCC Rcd. 13028 (2008), in which the FCC enforced its 2005 Internet Policy Statement.
5 Slip Opinion, at 3, quoting Am. Library Ass'n v. FCC, 406 F3d 689, 692 (D.C. Cir. 2005).
6 Slip Opinion, at 24.
7 See, CPUC Comments in GN No. 09-191, WC 07-52, at page 12 and 13.
8 See, CPUC Comments in GN No. 09-191, WC 07-52, at page 13.
9 Id.
10 The FCC's Broadband Report is available on line at FCC.GOV.
11 See, NOI, at ¶ 1.
12 NOI, at ¶ 1.
13 Section 214(e) of the Act provides the framework for determining which carriers are eligible to participate in universal support programs, and Section 251(a) (2) of the Act directs telecommunications carriers not to install network features, functions, or capabilities that do not comply with the guidelines and standards established pursuant to Section 251(a) (2) and Section 225, which establishes the telecommunications relay service programs.
14 47 U.S.C. 254(b) (2).
15 See, e.g., In the matter of Comment Sought on Transition from Circuit-Switched Network to all-IP Network - NBP Public Notice #25, GN Docket Nos. 09-47, 09-51, 09-137, CPUC's December 18, 2009 Comments at pages 4-5.
16 In the matter of Comment Sought on Transition from Circuit-Switched Network to all-IP Network - NBP Public Notice #25, GN Docket Nos. 09-47, 09-51, 09-137, CPUC's December 18, 2009 Comments at pages 4-5.
17 Id., at pages 7-8.
18 Id., at pages 8-9.
19 Id., at pages 9-10.
20 NOI, at ¶ 76.
21 See PCIA Forbearance Order, 13 FCC Rcd at 16865, para. 15.B. "[S]ections 201 and 202 lie at the heart of consumer protection under the Act. Congress recognized that the core nature of sections 201 and 202 when it excluded them from the scope of the Commission's forbearance authority under section 332(C) (1) (A), 16868, para. 23."
22 17 FCC Rcd at 4804, (2002), ¶ 10.
23 See, The Transition to IP Telecom: Evolution, Not Revolution, Presentation to California Public Utilities Commission by Dr. Lee S. Selwyn, June 16, 2010, (available on CPUC website, at cpuc.ca.gov. | http://docs.cpuc.ca.gov/PUBLISHED/REPORT/120246.htm | 2017-01-16T19:15:38 | CC-MAIN-2017-04 | 1484560279248.16 | [] | docs.cpuc.ca.gov |
Foreword¶
Fortunate.
As I reflect upon the BFG web framework and this book by Chris to document it, I keep coming back to the same word. Certainly the conventional wisdom is clear: “Don’t we have too many web frameworks, paired with outdated books?” Yes we do, but to the contrary and for that very reason, we are fortunate to have this book and this framework.
Chris McDonough first came to work with us at Digital Creations almost a decade ago, just after there existed a Zope. We were all pioneers: the first open source application server, one of the first open source web companies to get serious investment, and entrants in nearly every book and article about the open source space. Zope wasn’t just a unique business model, though. It really was, as quoted at the time, one of the places where open source delivered fresh ideas in design and architecture.
Then a decade happened. Bubbles burst and the new new thing became the old new thing, many times in succession. All of us changed jobs, worked on a variety of endeavors, and big dreams yielded to small realities. Somehow, though the trajectory was unforeseen, we have orbited back to the same spot. Older, wiser, but with similar ideas and familiar faces. Back to dream again.
We are fortunate to have BFG. It really does carve out a unique spot in the Python web frameworks landscape. It permits the core good ideas from Zope, while not requiring them. Moreover, the reason you’ll love it has less to do with Zope and more to do with the old fashioned stuff:
- A superb commitment to outstanding and constantly updated documentation
- An equal commitment to quality: test coverage, performance, and documented compatibility
- Adult supervision and minimalism in the design, with “pay only for what you eat” simplicity
For those of us from the Zope world, BFG permits our still-unique ideas while teleporting us into the modern world of Python web programming. It is fascinating, liberating, and rejuvenating. We are able to cast off old sins and legitimately reclaim the title of best damn game in town. Quite a coup: whether you considered Zope but turned away, or became an adopter, you’ll find BFG the new new new thing.
We are also fortunate to have this book. We never had such a resource in Zope, even though we funded the writing of the first book a decade ago. In retrospect, the answer is obvious: a second group tried to retrofit a book onto code created by the first group. The true magic in BFG is that the top-notch documentation is written by the same person as the top-notch code, a person with equal passion and commitment to both. Rarely are we so fortunate.
Which brings us to the final point. We are fortunate to have Chris. I personally consider myself lucky to have worked with him and to be his friend this past decade. He has changed my thinking in numerous ways, fundamentally improving the way I view many things. He’s the best person I know in the world of open source, and I get to be in business with him. Fortunate indeed.
I very much hope you enjoy this book and get involved with BFG. We use it for applications as small as “hello world” demos up to scalable, re-usable, half-a-million-dollar projects. May you find BFG, and the book, to be a high-quality, honest, and durable framework choice for your work as well. | http://docs.pylonsproject.org/projects/pyramid/en/1.0-branch/foreword.html | 2017-01-16T19:11:24 | CC-MAIN-2017-04 | 1484560279248.16 | [] | docs.pylonsproject.org |
Setting Up a Development Environment¶
This.
Testing Neutron¶
Why Should You Care¶
There’s two ways to approach testing:
- Write unit tests because they’re required to get your patch merged. This typically involves mock heavy tests that assert that your code is as written.
- Putting as much thought in to your testing strategy as you do to the rest of your code. Use different layers of testing as appropriate to provide high quality coverage. Are you touching an agent? Test it against an actual system! Are you adding a new API? Test it for race conditions against a real database! Are you adding a new cross-cutting feature? Test that it does what it’s supposed to do when run on a real cloud!
Do you feel the need to verify your change manually? If so, the next few sections attempt to guide you through Neutron’s different test infrastructures to help you make intelligent decisions and best exploit Neutron’s test offerings.
Definitions¶
We will talk about three classes of tests: unit, functional and integration. Each respective category typically targets a larger scope of code. Other than that broad categorization, here are a few more characteristic:
- Unit tests - Should be able to run on your laptop, directly following a ‘git clone’ of the project. The underlying system must not be mutated, mocks can be used to achieve this. A unit test typically targets a function or class.
- Functional tests - Run against a pre-configured environment (tools/configure_for_func_testing.sh). Typically test a component such as an agent using no mocks.
- Integration tests - Run against a running cloud, often target the API level, but also ‘scenarios’ or ‘user stories’. You may find such tests under tests/tempest/api, tests/tempest/scenario, tests/fullstack, and in the Tempest and Rally projects.
Tests in the Neutron tree are typically organized by the testing infrastructure used, and not by the scope of the test. For example, many tests under the ‘unit’ directory invoke an API call and assert that the expected output was received. The scope of such a test is the entire Neutron server stack, and clearly not a specific function such as in a typical unit test.
Testing Frameworks¶
The different frameworks are listed below. The intent is to list the capabilities of each testing framework as to help the reader understand when should each tool be used. Remember that when adding code that touches many areas of Neutron, each area should be tested with the appropriate framework. Overlap between different test layers is often desirable and encouraged.
Unit Tests¶
Unit tests (neutron/tests.
At the start of each test run:
- RPC listeners are mocked away.
- The fake Oslo messaging driver is used.
At the end of each test run:
- Mocks are automatically reverted.
- The in-memory database is cleared of content, but its schema is maintained.
- The global Oslo configuration object is reset.
The unit testing framework can be used to effectively test database interaction, for example, distributed routers allocate a MAC address for every host running an OVS agent. One of DVR’s DB mixins implements a method that lists all host MAC addresses. Its test looks like this:
def test_get_dvr_mac_address_list(self): self._create_dvr_mac_entry('host_1', 'mac_1') self._create_dvr_mac_entry('host_2', 'mac_2') mac_list = self.mixin.get_dvr_mac_address_list(self.ctx) self.assertEqual(2, len(mac_list))
It inserts two new host MAC address, invokes the method under test and asserts its output. The test has many things going for it:
- It targets the method under test correctly, not taking on a larger scope than is necessary.
- It does not use mocks to assert that methods were called, it simply invokes the method and asserts its output (In this case, that the list method returns two records).
This is allowed by the fact that the method was built to be testable - The method has clear input and output with no side effects.
You can get oslo.db to generate a file-based sqlite database by setting OS_TEST_DBAPI_ADMIN_CONNECTION to a file based URL as described in this mailing list post. This file will be created but (confusingly) won’t be the actual file used for the database. To find the actual file, set a break point in your test method and inspect self.engine.url.
$ OS_TEST_DBAPI_ADMIN_CONNECTION=sqlite:///sqlite.db .tox/py27/bin/python -m \ testtools.run neutron.tests.unit... ... (Pdb) self.engine.url sqlite:////tmp/iwbgvhbshp.db
Now, you can inspect this file using sqlite3.
$ sqlite3 /tmp/iwbgvhbshp.db
Functional Tests¶. Note that when run at the gate, the functional tests compile OVS from source. Check out neutron/tests/contrib/gate_hook.sh. Other jobs presently use OVS from packages.
Let’s examine the benefits of the functional testing framework. Neutron offers a library called ‘ip_lib’ that wraps around the ‘ip’ binary. One of its methods is called ‘device_exists’ which accepts a device name and a namespace and returns True if the device exists in the given namespace. It’s easy building a test that targets the method directly, and such a test would be considered a ‘unit’ test. However, what framework should such a test use? A test using the unit tests framework could not mutate state on the system, and so could not actually create a device and assert that it now exists. Such a test would look roughly like this:
- It would mock ‘execute’, a method that executes shell commands against the system to return an IP device named ‘foo’.
- It would then assert that when ‘device_exists’ is called with ‘foo’, it returns True, but when called with a different device name it returns False.
- It would most likely assert that ‘execute’ was called using something like: ‘ip link show foo’.
The value of such a test is arguable. Remember that new tests are not free, they need to be maintained. Code is often refactored, reimplemented and optimized.
- There are other ways to find out if a device exists (Such as by looking at ‘/sys/class/net’), and in such a case the test would have to be updated.
- Methods are mocked using their name. When methods are renamed, moved or removed, their mocks must be updated. This slows down development for avoidable reasons.
- Most importantly, the test does not assert the behavior of the method. It merely asserts that the code is as written.
When adding a functional test for ‘device_exists’, several framework level methods were added. These methods may now be used by other tests as well. One such method creates a virtual device in a namespace, and ensures that both the namespace and the device are cleaned up at the end of the test run regardless of success or failure using the ‘addCleanup’ method. The test generates details for a temporary device, asserts that a device by that name does not exist, create that device, asserts that it now exists, deletes it, and asserts that it no longer exists. Such a test avoids all three issues mentioned above if it were written using the unit testing framework.
Functional tests are also used to target larger scope, such as agents. Many good examples exist: See the OVS, L3 and DHCP agents functional tests. Such tests target a top level agent method and assert that the system interaction that was supposed to be perform was indeed performed. For example, to test the DHCP agent’s top level method that accepts network attributes and configures dnsmasq for that network, the test:
- Instantiates an instance of the DHCP agent class (But does not start its process).
- Calls its top level function with prepared data.
- Creates a temporary namespace and device, and calls ‘dhclient’ from that namespace.
- Assert that the device successfully obtained the expected IP address.
Fullstack Tests¶
Why?¶
The idea behind “fullstack” testing is to fill a gap between unit + functional tests and Tempest. Tempest tests are expensive to run, and target black box API tests exclusively. Tempest requires an OpenStack deployment to be run against, which can be difficult to configure and setup. Full stack testing addresses these issues by taking care of the deployment itself, according to the topology that the test requires. Developers further benefit from full stack testing as it can sufficiently simulate a real environment and provide a rapidly reproducible way to verify code while you’re still writing it.
How?¶
Full stack tests set up their own Neutron processes (Server & agents). They assume a working Rabbit and MySQL server before the run starts. Instructions on how to run fullstack tests on a VM are available below.
Each test defines its own topology (What and how many servers and agents should be running).
Since the test runs on the machine itself, full stack testing enables “white box” testing. This means that you can, for example, create a router through the API and then assert that a namespace was created for it.
Full stack tests run in the Neutron tree with Neutron resources alone. You may use the Neutron API (The Neutron server is set to NOAUTH so that Keystone is out of the picture). VMs may be simulated with a container-like class: neutron.tests.fullstack.resources.machine.FakeFullstackMachine. An example of its usage may be found at: neutron/tests/fullstack/test_connectivity.py.
Full stack testing can simulate multi node testing by starting an agent multiple times. Specifically, each node would have its own copy of the OVS/LinuxBridge/DHCP/L3 agents, all configured with the same “host” value. Each OVS agent is connected to its own pair of br-int/br-ex, and those bridges are then interconnected. For LinuxBridge agent each agent is started in its own namespace, called “host-<some_random_value>”. Such namespaces are connected with OVS “central” bridge to each other.
Segmentation at the database layer is guaranteed by creating a database per test. The messaging layer achieves segmentation by utilizing a RabbitMQ feature called ‘vhosts’. In short, just like a MySQL server serve multiple databases, so can a RabbitMQ server serve multiple messaging domains. Exchanges and queues in one ‘vhost’ are segmented from those in another ‘vhost’.
Please note that if the change you would like to test using fullstack tests involves a change to python-neutronclient as well as neutron, then you should make sure your fullstack tests are in a separate third change that depends on the python-neutronclient change using the ‘Depends-On’ tag in the commit message. You will need to wait for the next release of python-neutronclient, and a minimum version bump for python-neutronclient in the global requirements, before your fullstack tests will work in the gate. This is because tox uses the version of python-neutronclient listed in the upper-constraints.txt file in the openstack/requirements repository.
When?¶
- You’d like to test the interaction between Neutron components (Server and agents) and have already tested each component in isolation via unit or functional tests. You should have many unit tests, fewer tests to test a component and even fewer to test their interaction. Edge cases should not be tested with full stack testing.
- You’d like to increase coverage by testing features that require multi node testing such as l2pop, L3 HA and DVR.
- You’d like to test agent restarts. We’ve found bugs in the OVS, DHCP and L3 agents and haven’t found an effective way to test these scenarios. Full stack testing can help here as the full stack infrastructure can restart an agent during the test.
Example¶
Neutron offers a Quality of Service API, initially offering bandwidth capping at the port level. In the reference implementation, it does this by utilizing an OVS feature. neutron.tests.fullstack.test_qos.TestQoSWithOvsAgent.test_qos_policy_rule_lifecycle is a positive example of how the fullstack testing infrastructure should be used. It creates a network, subnet, QoS policy & rule and a port utilizing that policy. It then asserts that the expected bandwidth limitation is present on the OVS bridge connected to that port. The test is a true integration test, in the sense that it invokes the API and then asserts that Neutron interacted with the hypervisor appropriately.
API Tests¶
API tests (neutron/tests/tempest.
The neutron/tests/tempest/api directory was copied from the Tempest project around the Kilo timeframe. At the time, there was an overlap of tests between the Tempest and Neutron repositories. This overlap was then eliminated by carving out a subset of resources that belong to Tempest, with the rest in Neutron.
API tests that belong to Tempest deal with a subset of Neutron’s resources:
- Port
- Network
- Subnet
- Security Group
- Router
- Floating IP
These resources were chosen for their ubiquity. They are found in most Neutron deployments regardless of plugin, and are directly involved in the networking and security of an instance. Together, they form the bare minimum needed by Neutron.
This is excluding extensions to these resources (For example: Extra DHCP options to subnets, or snat_gateway mode to routers) that are not mandatory in the majority of cases.
Tests for other resources should be contributed to the Neutron repository. Scenario tests should be similarly split up between Tempest and Neutron according to the API they’re targeting.
Scenario Tests¶
Scenario tests (neutron/tests/tempest/scenario), like API tests, use the Tempest test infrastructure and have the same requirements. Guidelines for writing a good scenario test may be found at the Tempest developer guide:
Scenario tests, like API tests, are split between the Tempest and Neutron repositories according to the Neutron API the test is targeting.
Rally Tests¶
Rally tests (rally-jobs/plugins) use the rally infrastructure to exercise a neutron deployment. Guidelines for writing a good rally test can be found in the rally plugin documentation. There are also some examples in tree; the process for adding rally plugins to neutron requires three steps: 1) write a plugin and place it under rally-jobs/plugins/. This is your rally scenario; 2) (optional) add a setup file under rally-jobs/extra/. This is any devstack configuration required to make sure your environment can successfully process your scenario requests; 3) edit neutron-neutron.yaml. This is your scenario ‘contract’ or SLA.
Development Process¶
It is expected that any new changes that are proposed for merge come with tests for that feature or code area. Any bugs fixes that are submitted must also have tests to prove that they stay fixed! In addition, before proposing for merge, all of the current tests should be passing.
Structure of the Unit Test Tree¶.
Note
At no time should the production code import anything from testing subtree (neutron.tests). There are distributions that split out neutron.tests modules in a separate package that is not installed by default, making any code that relies on presence of the modules to fail. For example, RDO is one of those distributions.
Running Tests¶
Before submitting a patch for review you should always ensure all tests pass; a tox run is triggered by the jenkins gate executed on gerrit for each patch pushed for review.:
PEP8 and Unit Tests¶
Functional Tests¶.
Fullstack Tests¶ /opt/stack/logs/dsvm-fullstack-logs (for example, a test named “test_example” will produce logs to /opt/stack/logs/dsvm-fullstack-logs/test_example/), so that will be a good place to look if your test is failing. Logging from the test infrastructure itself is placed in: /opt/stack/logs/dsvm-fullstack-logs/test_example.log. Fullstack test suite assumes 240.0.0.0/4 (Class E) range in root namespace of the test machine is available for its usage.
API & Scenario Tests¶
To run the api or scenario tests, deploy Tempest and Neutron with DevStack and then run the following command, from the tempest directory:
tox -e all-plugin
If you want to limit the amount of tests that you would like to run, you can do, for instance:
export DEVSTACK_GATE_TEMPEST_REGEX="<you-regex>" # e.g. "neutron" tox -e all-plugin $DEVSTACK_GATE_TEMPEST_REGEX
Running Individual Tests¶
For running individual test modules, cases or tests, you just need to pass the dot-separated path you want as an argument to it.
For example, the following would run only a single test or test case:
$ tox -e py27 neutron.tests.unit.test_manager $ tox -e py27 neutron.tests.unit.test_manager.NeutronManagerTestCase $ tox -e py27 neutron.tests.unit.test_manager.NeutronManagerTestCase.test_service_plugin_is_loaded
- If you want to pass other arguments to ostestr, you can do the following::
- $ tox -e -epy27 – –regex neutron.tests.unit.test_manager –serial
Coverage¶
Neutron has a fast growing code base and there are plenty of areas that need better coverage.
To get a grasp of the areas where tests are needed, you can check current unit tests coverage by running:
$ tox -ecover
Since the coverage command can only show unit test coverage, a coverage document is maintained that shows test coverage per area of code in: doc/source/devref/testing_coverage.rst. You could also rely on Zuul logs, that are generated post-merge (not every project builds coverage results). To access them, do the following:
- Go to:<first-2-digits-of-sha1>/<sha1>/post/neutron-coverage/.
- Spec is a work in progress to provide a better landing page.
Debugging¶
By default, calls to pdb.set_trace() will be ignored when tests are run. For pdb statements to work, invoke tox as follows:
$. | http://docs.openstack.org/developer/neutron/devref/development.environment.html | 2017-01-16T19:18:06 | CC-MAIN-2017-04 | 1484560279248.16 | [array(['../_images/fullstack_multinode_simulation.png',
'../_images/fullstack_multinode_simulation.png'], dtype=object)] | docs.openstack.org |
Creating a Custom 404 Error Page
This tutorial will show you how to create a custom 404 error page for use in your Joomla web-site.
Contents you are using Joomla 1.6, 1.7, 2.5, or 3.x please use this detection code:
if (($this->error->getCode()) == '404') { header('Location: /index.php?option=com_content&view=article&id=75'); exit; }
Replace the location information (index.php?option..) with the URL from the menu item you created.
If you are using Joomla 1.5 and below please use this detection code instead:
if (($this->error->code) == '404') { header('Location: /index.php?option=com_content&view=article&id=75'); exit; } | https://docs.joomla.org/Creating_a_Custom_404_Error_Page | 2017-01-16T19:18:31 | CC-MAIN-2017-04 | 1484560279248.16 | [] | docs.joomla.org |
This feature allows any logged in user to subscribe to the following email notifications:
- Changes in issues assigned to me or reported by me
- New alerts
- New false positives
- New issues
for all projects or a specific list of projects.
To subscribe to notifications, go to UserName > My profile and tick the events"
See Notifications - Administration to administer this notification mechanism. | http://docs.codehaus.org/pages/viewpage.action?pageId=239371147 | 2014-10-20T18:09:08 | CC-MAIN-2014-42 | 1413507443062.21 | [] | docs.codehaus.org |
Configure the connection to an incoming email server so the instance can receive email messages from users.
You can enable features through which people can post content to the application by sending an email to a particular address.
With incoming email monitoring, you support posting replies to discussions via email. With this feature enabled, the application retrieves email that lands in a mailbox you specify and uses the email's contents to post a discussion reply.
Replies posted via this feature must be emails that are replies to notification email. Although sent as text in email, the reply content will appear in the community as if the recipient had posted it with a web browser. This way, users can post when they're unable to log in to the community but are able to read content through their notifications.
Notification emails sent from the application will include a token in the subject line. The token is needed for the application to correlate the incoming email with its reply thread. Users should take care not to alter the token.
Note that this feature supports only discussion replies -- posts of new content and replies to other kinds of content aren't supported. To support those features, use the advanced incoming email feature.
When you configure this feature, start with settings for your incoming email server.
Select Enable Incoming Email Monitoring to turn on the reply-by-email feature for discussions.
With the advanced incoming email monitor, you can set up the application to support posting most kinds of content via email. This includes replies to content as well as new content. This feature overrides the incoming email monitoring feature. When you enable advanced incoming email monitoring, incoming email monitoring is unavailable. The advanced feature includes the functionality of the basic feature (although it works differently).
When posting content via email, a person uses one of a number of email addresses that are specifically designed for posting a particular kind of content in a particular place (such as a space or social group).
With this feature enabled, the application receives email directly, rather than checking for messages dropped in a particular mailbox. Because of this, configuring this feature requires setting up email routing so that Jive receives emails containing content. Note that this might require your email server administrator to prepare the system.
You'll need to configure email servers and route requests on port 25 to the port on which the application is listening. Here are the details:
Example: Your community is deployed to community.example.com. It's unlikely that an MX record exists for the DNS A record community.example.com, so you most likely don't have to add or configure any DNS records. Mail transfer agents will first attempt to look up an MX record for community.example.com; if they don't find one, they'll use the "implicit" or "fallback" A record.
Example: Your community is deployed to example.com/community. DNS A record for 'example.com' probably already has a corresponding MX record that handles mail for mail for the employees of example.com. To work around this, you'll need to create a separate DNS A record such as 'community.example.com' which points to the same IP address as the server that Jive is deployed on, then (optionally) create an MX record that points to the A record 'community.example.com'. If you don't create an MX record, email transfer agents use the A record as a fallback.
iptables -t nat -I PREROUTING -p tcp --dport 25 -j REDIRECT --to-ports 2500 /sbin/iptables-save
After you've set up email routing to ensure that Jive will receive content email, you'll want to configure the application to handle email sent to it. You do that in the Admin Console. | http://docs.jivesoftware.com/jive/6.0/community_admin/topic/com.jivesoftware.help.sbs.online_6.0/admin/ConfiguringIncomingEmailMonitoring.html | 2014-10-20T17:56:02 | CC-MAIN-2014-42 | 1413507443062.21 | [] | docs.jivesoftware.com |
Upgrade from 0.1 to 0.2
If you upgrade from 0,1 to 0.2 you'll need to run a new analysis to see the widgets of authors activity and commits per author.
Description / Features
The plugin computes and feeds Sonar<<
Usage, Installation and Configuration
- Install the SCM Stats plugin through the Update Center or download it into the SONAR_HOME/extensions/plugins directory
- If you plan to use this plugin with non-maven projects, or SCM access is available only with username/password
- Set the SCM user / password (if needed) by setting the sonarc.scm.user.secured and sonar.scm.password.secured properties of SCM Activity plugin
- Launch a new quality analysis and the metrics will be fed
Grabbing stats for multiple periods
Since<<
Compatibility Matrix
Metrics Definitions
Future Work (Open Issues)
Plenty !!! Waiting for your ideas as well!
Open Issues (17 issues)
Change Log
Release 0.2 (10 issues)
| http://docs.codehaus.org/pages/viewpage.action?pageId=230398995 | 2014-10-20T18:13:46 | CC-MAIN-2014-42 | 1413507443062.21 | [array(['/download/attachments/229741975/scm-stats-commits-per-user.png?version=1&modificationDate=1349171731612&api=v2',
None], dtype=object)
array(['https://dl.dropbox.com/u/16516393/authors_activity.png', None],
dtype=object)
array(['/download/attachments/229741975/scm-stats-commits-clockhour.png?version=1&modificationDate=1347001896777&api=v2',
None], dtype=object)
array(['https://dl.dropbox.com/u/16516393/widget_set_period.png', None],
dtype=object)
array(['/s/en_GB/5510/701ab0bfc8a95d65a5559a923f8ed8badd272d36.15/_/images/icons/wait.gif',
None], dtype=object)
array(['/s/en_GB/5510/701ab0bfc8a95d65a5559a923f8ed8badd272d36.15/_/images/icons/wait.gif',
None], dtype=object)
array(['/s/en_GB/5510/701ab0bfc8a95d65a5559a923f8ed8badd272d36.15/_/images/icons/wait.gif',
None], dtype=object) ] | docs.codehaus.org |
User Guide
Local Navigation
Pending songs are not downloaded to my smartphone
If the songs in your Wi-Fi Music Sync list are pending and aren't being downloaded from your computer to your BlackBerry smartphone, try the following actions:
- Verify that your smartphone is within range of the Wi-Fi network that your computer is connected to.
- Verify that your smartphone has enough available storage. Try transferring media files that you have stored on your smartphone to a media card.
- Verify that the songs aren't being played on your computer or have not been deleted from your computer.
- Verify that your smartphone is charged. If the battery power level drops below 5 per cent, you cannot download your songs.
Related reference
Next topic: Legal notice
Previous topic: Some song details aren't appearing on my smartphone
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/42975/Computer_songs_not_added_to_device_1196176_11.jsp | 2014-10-20T18:14:55 | CC-MAIN-2014-42 | 1413507443062.21 | [] | docs.blackberry.com |
Description
Exposing structure
The current grid coverage API is based on the idea of a coverage reader containing a single coverage, which can be, according to the read parameters, subsetted, rescaled, eventually reprojected too. Some readers, like image mosaic, are actually based on a collection of internal raster data (referred to as "granules") which:
-.
The current image mosaic is not the only example of "structured" readers, NetCDF data sources, PostGIS rasters and other in database rasters are another example where variables and dimensions could be exposed with higher control and better details. This can be leveraged by callers to have higher flexibilty in exposing dimensions, as well as allowing callers to inspect the inner structure of a complex coverage and better tailor their requests (WCS-EO is an example that allows to expose the inner structure of a mosaic and allows the caller to work against the single granules composing it).
Allowing granule addition/removal
Very often these structured readers need to be modified in terms of the granules they contain by:
- adding new granules
- removing existing granules
A simple and typical case is keeping a moving time window of data, e.g., the last month of satellite observations for a certain atmospheric gas. In order to support these cases we propose interfaces mimicking the vector data source, GranuleSource and GranuleStore, that allow access and modification of the granules internals. A structured reader might be read only, in which case it can only return a GranuleSource, or be writable, in which case a GranuleStore is returned instead. FeatureSource/FeatureStore have not been used directly to keep the work needed to implement a structured reader to a minimum (their interface is significantly larger).
In order to allow each reader to acquire extra data the way it prefers a "harvest" operation has been added that works against the file system, and allows each reader to add data into its backend storage the preferred way: a file oriented tool like ImageMosaic will just add references to the files being harvested internally, a database oriented tool like PostGIS raster can instead read the files and copy them into its internal storage.
Implementors of the harvest operation will have to consider the case of harvesting from another structured data source, for example, a image mosaic could with NetCDF files, which have in turn their own internal structure, taking that into account and eventually building not only new granules in the respective existing coverages: for example, a mosaic could be made of NetCDF files having each three variables, NO2, O3 and BrO, each file contains granules for the three gases at different batches of times and elevation, and the mosaic exposes the same three coverages, but hiding the fact the bits and pieces are split among various source NetCDF files.
Implementation
Implementation wise the proposal will come with two implementations:
-.
Status
This proposal is under construction.
Voting has not started yet:
- Andrea Aime +1
- Ben Caradoc-Davies +0
- Christian Mueller +0
- Ian Turton +0
- Justin Deoliveira +1
- Jody Garnett +1
- Simone Giannecchini +1
Tasks
This section is used to make sure your proposal is complete (did you remember documentation?) and has enough paid or volunteer time lined up to be a success
Allow granule based readers to expose inner structure
StructuredGridCoverageReader for ImageMosaic
StructuredGridCoverageReader for NetCDF
- Documentation update
- (add StructuredGridCoverageReader example)
- (code example)
- NetCDF page with code example
API Changes. | http://docs.codehaus.org/pages/diffpagesbyversion.action?pageId=230400085&selectedPageVersions=13&selectedPageVersions=14 | 2014-10-20T18:31:25 | CC-MAIN-2014-42 | 1413507443062.21 | [] | docs.codehaus.org |
)
There are too many pages linking to this page. It needs a major rewrite or redirect to an appropriate article. JED tried to start a JED best practices on the forum which looks like it flopped. No one contributed to the post. JED has some pages on docs, so what we need is a landing page, explanations and pointing at what is and what isn't the best development practices for developers. Coding, security, how to get listed on the JED, licensing, etc... Tom Hutchison (talk) 08:44, 28 October 2013 (CDT)
This page needs to expand the info about the /media/ folder.
I'm adding some links here that provides information: | http://docs.joomla.org/index.php?title=Talk:Development_Best_Practices&oldid=104723 | 2014-10-20T18:01:15 | CC-MAIN-2014-42 | 1413507443062.21 | [] | docs.joomla.org |
The eWay (Shared) payment method is integrated into the WordPress Shopping Cart plugin and redirects your customers to a secure eWay payment page where they fill in their credit card details and finalize their payment with eWay before returning back to your merchant website where their transaction results will be shown.
Configure the Plugin
To configure your WordPress Shopping Cart plugin to work with eWay, go to the Checkout > Configuration section in your WordPress dashboard. Navigate to the ‘eWay (Shared)’ settings box where you will see the eWay configuration settings for the shared payment method. The following settings are available.
Title
Set this to a value that you want to display on your billing page to your customers. If you have more than one payment method active for your shop, multiple payment methods will be presented to your customers to choose from before they make a payment. By default, this value is ‘eWay Credit Card’.
Customer ID
This is your eWay customer/merchant ID which is provided to your by eWay. Fill your customer ID into this field to start accepting payments on a specific eWay merchant account. For testing purposes, you may use the customer ID 87654321 which is the default value provided with the plugin installation.
Invoice Description
The invoice description is passed through to eWay and they will print this on their invoice to the customer as the order description.
Testing eWay with the Plugin
To test the eWay payment gateway with the Shopping Cart plugin, you may use the customer ID 87654321 inside Checkout > Configuration in the ‘eWay (Shared)’ settings box. Additionally, when you get to the secure eWay payment page after leaving your merchant site, use the credit card number 4444333322221111 with any name and any date in the future. | http://docs.tribulant.com/wordpress-shopping-cart-plugin/1417 | 2014-10-20T17:54:24 | CC-MAIN-2014-42 | 1413507443062.21 | [] | docs.tribulant.com |
The.
For more information on BlackBerry Travel, see the BlackBerry Travel User Guide. | http://docs.blackberry.com/en/smartphone_users/deliverables/62002/amc1392149833413.html | 2014-10-20T18:36:28 | CC-MAIN-2014-42 | 1413507443062.21 | [] | docs.blackberry.com |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.